Hi, I am a MATLAB user. In MATLAB, we can easily specify whether a vector is n x 1 or 1 x n. The two are different because 1 x n * n x n is defined while n x 1 * n x n is undefined. We can also type size(vector) to tell whether a vector is 1 x n or n x 1. Moving to Python, I am getting confused. How does this work in Python, NumPy, and pandas? Consider the following example (Example 1):

# Example 1

```
In [141]: arr2d = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
In [142]: arr2d
Out[142]:
array([[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]])
In [143]: arr2d[:2, 2]
Out[143]: array([3, 6])
```

Why does the system display [3, 6], which looks like a 1 x 2 vector? Why doesn't it display [3; 6] vertically to indicate that it is a 2 x 1 vector? It seems that Python, NumPy, and pandas "prefer" 1 x n vectors. Is that just the way they prefer to display a vector? Internally, is it 1 x n as shown, or n x 1?
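For reference, here is what I found when inspecting the result with `.shape` (which I understand to be NumPy's counterpart of MATLAB's `size`); this is just Example 1 with shape checks added:

```python
import numpy as np

# The 2-D array from Example 1
arr2d = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

# .shape is NumPy's equivalent of MATLAB's size()
print(arr2d.shape)   # (3, 3) -- a true 2-D array

# Slicing out part of a column returns a 1-D array:
# its shape is (2,), a single axis -- neither (1, 2) nor (2, 1)
col = arr2d[:2, 2]
print(col.shape)     # (2,)
```

So the result does not report itself as either 1 x 2 or 2 x 1, which is exactly what I find confusing coming from MATLAB.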