Anyone doing a lot of linear algebra knows that the columns of a matrix are often more significant than the rows. In Gilbert Strang's MIT linear algebra course, we even have the notion of column space. And indeed, in both MATLAB and Fortran, the convention is column-first.

However, C++ and Python are row-first.

Isn’t this annoying? Below is code for the QR decomposition of a matrix:

```python
import numpy as np


def qr_factorization(X):
    """
    Compute the QR factorization of a square matrix using iterative Gram-Schmidt
    orthonormalization.

    Args:
        X (numpy.ndarray): A square matrix.

    Returns:
        Q (numpy.ndarray): An orthonormal matrix.
        R (numpy.ndarray): The upper triangular matrix.
    """
    X = X.T  # want first index to be the column index (for convenience)
    q, r = np.zeros(X.shape), np.zeros(X.shape)  # preallocate Q and R
    ## Pick first vector as the "pivot" and normalize it
    q[0] = X[0] / np.linalg.norm(X[0])
    ## Project rest of the vectors onto the pivot
    r[0] = X @ q[0]
    for i in range(1, X.shape[0]):
        ## Orthogonalize the vector by subtracting off the projections onto all previous vectors
        q[i] = X[i] - np.sum(np.dot(X[i], q[:i].T) * q[:i].T, axis=-1)
        ## Normalize the orthogonalized vector
        q[i] /= np.linalg.norm(q[i])
        ## Update the upper triangular matrix R
        r[i, i:] = X[i:] @ q[i]
    q = q.T  # because we wrote into rows instead of columns
    return q, r
```
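As a sanity check, any QR routine can be validated the same way (shown here with `np.linalg.qr`; the identical checks apply to the function above):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))

Q, R = np.linalg.qr(X)

# Q should be orthonormal: Q.T @ Q == I
assert np.allclose(Q.T @ Q, np.eye(4))
# R should be upper triangular
assert np.allclose(R, np.triu(R))
# The factorization should reconstruct X
assert np.allclose(Q @ R, X)
```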

I think you may be conflating two different things. In both MATLAB and Fortran, the indexing convention for a two-dimensional array is the same as in NumPy: row, then column. And all three match the prevailing mathematical convention^{[1]} - again, row index, then column index.

For example, given a MATLAB matrix A, the first column of A is given by A(:, 1). Given a NumPy two-dimensional array A, the first column of A is similarly given by A[:, 0]. So if what you’re after is syntactical convenience (which is how I read your post), I don’t think MATLAB is going to do significantly better than Python here.
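A quick illustration of that point (nothing here beyond standard NumPy indexing):

```python
import numpy as np

A = np.arange(12).reshape(3, 4)  # 3 rows, 4 columns

first_col = A[:, 0]  # column slice: row index first, then column
first_row = A[0, :]  # row slice

print(first_col)  # [0 4 8]
print(first_row)  # [0 1 2 3]
```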

What you may be thinking of is the fact that MATLAB and Fortran use column-major order for storing matrix entries, while NumPy defaults to row-major (but can also be persuaded to use column-major). This is in some sense an internal detail: it’s not going to make your QR-decomposition algorithm look any different in the code (the exact same Python + NumPy code will produce the exact same QR-decomposition regardless of whether the NumPy array X happens to be stored in row-major or column-major order), but it may make it perform differently. So you might find yourself caring about row-major versus column-major for performance reasons, but not for notation / syntax reasons.
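To make the "internal detail" point concrete: the same matrix stored in C (row-major) order and in Fortran (column-major) order indexes identically and yields the same factorization; only the memory-layout flags differ. A sketch:

```python
import numpy as np

X = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])       # row-major (C order) by default
X_f = np.asfortranarray(X)           # same values, column-major storage

print(X.flags['C_CONTIGUOUS'], X_f.flags['F_CONTIGUOUS'])  # True True

# Indexing and results are identical either way
assert X[1, 2] == X_f[1, 2]
Q, R = np.linalg.qr(X)
Q_f, R_f = np.linalg.qr(X_f)
assert np.allclose(Q, Q_f) and np.allclose(R, R_f)
```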

On performance: note that X = X.T in NumPy doesn’t actually change the memory layout of the elements of X: it doesn’t move the array data at all, but merely constructs new array metadata that tell NumPy to index the exact same data block in a different way.
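This is easy to verify directly:

```python
import numpy as np

X = np.arange(6, dtype=float).reshape(2, 3)
Xt = X.T

# The transpose shares the same data buffer; only strides/metadata change
assert np.shares_memory(X, Xt)
print(X.strides, Xt.strides)  # e.g. (24, 8) vs (8, 24)

# Writing through the view writes into the original array
Xt[0, 1] = 99.0
assert X[1, 0] == 99.0
```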

“prevailing” because I’m not actually aware of any mathematical sources that try to use column then row when indexing a matrix, but who knows: mathematicians are a funny bunch ↩︎

What is ‘annoying’ is the fact that for some purposes ‘by row’ is better, and for some purposes ‘by column’ is better, and either decision is a compromise. But this has been known, and the choice debated, for at least half a century. High-performance matrix routines are written differently depending on the storage order. More recently, the details of multilevel microprocessor caches, which vary from microprocessor to microprocessor, are also taken into account, as are the possibilities of parallel computation. NumPy is, and its predecessors were, built on top of the best work done in assembler, Fortran, and C, and maybe, either today or in the future, other languages.
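A rough way to see the storage-order effect is to reduce along the non-contiguous axis in each layout (timings are machine- and version-dependent; this is a sketch, not a benchmark):

```python
import numpy as np
import timeit

n = 2000
A_c = np.ones((n, n), order='C')  # row-major
A_f = np.asfortranarray(A_c)      # column-major copy of the same values

# Summing down columns walks memory contiguously only in Fortran order;
# summing across rows is contiguous only in C order.
t_cols_c = timeit.timeit(lambda: A_c.sum(axis=0), number=20)
t_cols_f = timeit.timeit(lambda: A_f.sum(axis=0), number=20)
print(f"column sums, C order: {t_cols_c:.4f}s  F order: {t_cols_f:.4f}s")

# The results are identical either way; only the speed can differ.
assert np.allclose(A_c.sum(axis=0), A_f.sum(axis=0))
```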