Dimensionality Error in Encoder-Decoder LSTM Model (Attention)

I’m trying to implement an attention model on a numeric (non-NLP) dataset. I’ve set the shapes of my input and output arrays to (1335, 5, 5) and (1335, 3, 5) respectively, where 1335 is the total number of samples, the input has 5 features, and the output has 3 features to predict. Each input and output feature is itself a sequence of 5 values, hence the shapes above.

But I keep getting a dimensionality error even with a basic LSTM encoder-decoder model.
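For reference, here is a minimal sketch of a plain (no-attention) Keras encoder-decoder that fits the shapes described above. Note that Keras treats a 3-D input as (samples, timesteps, features), so this sketch assumes (1335, 5, 5) means 5 timesteps of 5 features and (1335, 3, 5) means 3 timesteps of 5 features; the layer sizes (64 units) and the random data are placeholders, not from the original code:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

# Placeholder data matching the stated shapes:
# 1335 samples, input (5 timesteps x 5 features), output (3 timesteps x 5 features).
X = np.random.rand(1335, 5, 5)
y = np.random.rand(1335, 3, 5)

model = Sequential([
    # Encoder: reads 5 timesteps of 5 features, returns a single context vector.
    LSTM(64, input_shape=(5, 5)),
    # Repeat the context once per output timestep (3 here) so the decoder
    # receives a sequence of the correct length.
    RepeatVector(3),
    # Decoder: emits one hidden state per output timestep.
    LSTM(64, return_sequences=True),
    # Project each decoder step down to the 5 output features.
    TimeDistributed(Dense(5)),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=1, batch_size=32, verbose=0)

print(model.predict(X[:2], verbose=0).shape)  # (2, 3, 5)
```

If your model raises a shape error with data like this, the mismatch is usually between the `RepeatVector` length (or decoder output length) and the second axis of `y`, or between the final `Dense` width and the last axis of `y`.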