# Fastest way of contracting multidimensional arrays

Hello everyone,

I am trying to find the fastest way to contract multidimensional arrays.
Basically, a more efficient way to do this:

```python
import numpy as np

def naive_contraction(tensor, indices):
    result = 0.0
    for i in range(5):
        for j in range(5):
            for k in range(5):
                result += (
                    tensor[i, j, k]
                    * tensor[indices[0], indices[1], i]
                    * tensor[indices[2], indices[3], j]
                    * tensor[indices[4], indices[5], k]
                )
    return result
```

Then we simply perform the contraction like this:

```python
tensor = np.random.rand(5, 5, 5)
indices = [0, 1, 4, 3, 0, 1]
result = naive_contraction(tensor, indices)
```

Notice that the index values must be received as input. We don't want to store the full multidimensional array obtained with all possible combinations of indices, as it would be way too large (this is just a naive example to explain the situation).

It's my understanding that NumPy stores data in row-major order, so I put the "summation index" along the last axis of the multidimensional array wherever possible. How can I vectorize (and/or optimize) a tensor contraction like this in Python?
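For reference, the default layout is easy to confirm from an array's flags and strides (a quick sketch; the array here is purely illustrative):

```python
import numpy as np

# NumPy arrays are C-ordered (row-major) by default: the last axis has
# the smallest stride, so its elements are adjacent in memory.
t = np.random.rand(5, 5, 5)
print(t.flags['C_CONTIGUOUS'])  # True
print(t.strides)                # (200, 40, 8) for float64
```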

Thank you so much,
all the best.

Hey there!

If you're looking to optimize your tensor contraction, there are two functions in NumPy that can help: `numpy.tensordot` and `numpy.einsum`. Both replace the explicit Python loops with compiled, vectorized operations, which makes contractions like this much more efficient.

Here's an example of how you can implement the contraction using both functions:

```python
import numpy as np

def optimized_contraction(tensor, indices):
    # Each index pair selects a length-5 vector along the last axis
    v1 = tensor[indices[0], indices[1]]  # contracted against axis i
    v2 = tensor[indices[2], indices[3]]  # contracted against axis j
    v3 = tensor[indices[4], indices[5]]  # contracted against axis k

    # Using tensordot: contract one axis at a time
    t1 = np.tensordot(tensor, v1, axes=([0], [0]))  # shape (5, 5), axes (j, k)
    t2 = np.tensordot(t1, v2, axes=([0], [0]))      # shape (5,), axis k
    result = np.tensordot(t2, v3, axes=([0], [0]))  # scalar

    # Alternatively, using einsum: each vector carries one summation index
    result_einsum = np.einsum('ijk,i,j,k->', tensor, v1, v2, v3)

    return result, result_einsum

tensor = np.random.rand(5, 5, 5)
indices = [0, 1, 4, 3, 0, 1]
result, result_einsum = optimized_contraction(tensor, indices)

print("Result using tensordot:", result)
print("Result using einsum:", result_einsum)
```
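As a sanity check, both vectorized results should agree with your naive triple loop (a quick sketch reusing `naive_contraction` from your post):

```python
# Compare both vectorized results against the naive reference
naive = naive_contraction(tensor, indices)
print(np.allclose(result, naive))         # True
print(np.allclose(result_einsum, naive))  # True
```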

Either one will be far more efficient than the explicit triple loop. `tensordot` transposes and reshapes its operands so the contraction runs as an optimized matrix multiplication, while `einsum` is more flexible and can express more complex contractions in a single call, but it can be slower than `tensordot` in some cases. For a chain of operands like this one, passing `optimize=True` to `einsum` also lets NumPy choose a good pairwise contraction order.
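If you want to know which one wins for your actual array sizes, it's easy to time them directly (a minimal sketch using `timeit`; the repetition count is arbitrary):

```python
import timeit

import numpy as np

tensor = np.random.rand(5, 5, 5)
indices = [0, 1, 4, 3, 0, 1]
v1 = tensor[indices[0], indices[1]]
v2 = tensor[indices[2], indices[3]]
v3 = tensor[indices[4], indices[5]]

def via_tensordot():
    t1 = np.tensordot(tensor, v1, axes=([0], [0]))
    t2 = np.tensordot(t1, v2, axes=([0], [0]))
    return np.tensordot(t2, v3, axes=([0], [0]))

def via_einsum():
    return np.einsum('ijk,i,j,k->', tensor, v1, v2, v3, optimize=True)

# Timings will vary by machine and array size
print("tensordot:", timeit.timeit(via_tensordot, number=10_000))
print("einsum:   ", timeit.timeit(via_einsum, number=10_000))
```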

Hope this helps! Let me know if you have any further questions.
