Hi everyone,
I’m running into memory problems when executing my Python script. I’m stacking images that are cropped in one step and adjusted in another. The code works, but the memory usage is much higher than expected: after stacking, the array has shape 771 × 1000 × 1500 with dtype uint8, yet the process uses about 15 GB. According to sys.getsizeof(), the array itself should only be around 1 GB. Before the loop I preallocate an empty array with the shape of the final array.
In the adjustment step there is another memory bump of about 1.6 GB, which matches the expected size of the new array (shape 1166 × 1000 × 1500, dtype uint8). The memory footprint doesn’t grow any further inside the adjustment loop itself.
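For reference, the sizes I expect follow directly from the shapes. A quick sanity check (nbytes is the raw buffer size of a NumPy array; np.empty here only reserves the memory):

import sys
import numpy as np

stacked = np.empty((771, 1000, 1500), dtype=np.uint8)    # array after stacking
adjusted = np.empty((1166, 1000, 1500), dtype=np.uint8)  # array after adjusting

print(stacked.nbytes / 1e9)          # ~1.16 GB of raw data
print(sys.getsizeof(stacked) / 1e9)  # nearly identical: the buffer plus a small header
print(adjusted.nbytes / 1e9)         # ~1.75 GB, i.e. about 1.6 GiB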
Do you have any idea why the first loop takes so much more memory than the array it produces? I’m asking because I plan to run this on a Raspberry Pi, where I’m limited to 8 GB of RAM.
Here are my loops:
# module-level imports used by the methods below
import cv2
import numpy as np
import tqdm

def construct_3d_matrix(self):
    """
    Crop and scale each layer to compensate for its distance to the camera,
    concatenate all layers along the y-axis, and rescale each concatenated
    slice (x/y plane) to equalize the pixel size.
    :return: shape of the processed matrix
    """
    self._get_roi()
    # Preallocate the full stack once; each cropped layer is written in place.
    concatenated_layers = np.empty(shape=[self.num_layers, self.roi['height'], self.roi['width']],
                                   dtype=np.uint8)
    for i, layer in enumerate(tqdm.tqdm(self.layers)):
        img = layer.img
        # Widen the crop window with layer number so every layer covers the same physical area.
        scaled_x_left = int(self.roi['x'] * (1 - self.config.delta_m * layer.layer_num / 2))
        scaled_x_right = int((self.roi['x'] + self.roi['height']) * (1 + self.config.delta_m * layer.layer_num / 2))
        scaled_z_below = int(self.roi['z'] * (1 - self.config.delta_m * layer.layer_num / 2))
        scaled_z_above = int((self.roi['z'] + self.roi['width']) * (1 + self.config.delta_m * layer.layer_num / 2))
        orig_crop = img[scaled_x_left:scaled_x_right, scaled_z_below:scaled_z_above]
        scaled_crop = cv2.resize(orig_crop, dsize=(self.roi['width'], self.roi['height']),
                                 interpolation=cv2.INTER_LANCZOS4)
        # scaled_crop = np.flipud(color.rgb2gray(scaled_crop * ((1 + self.config.delta_m * layer.layer_num) ** 2)))
        # Take a single channel and brighten it by the squared magnification factor.
        scaled_crop = np.flipud(scaled_crop[:, :, 0] * ((1 + self.config.delta_m * layer.layer_num) ** 2))
        concatenated_layers[i] = scaled_crop
    # Maximum over the fully populated stack, used below to stretch the contrast.
    max_intensity = np.max(concatenated_layers)
    self._adjust_resolution(concatenated_layers, max_intensity)
    return self.processed_images.shape

def _adjust_resolution(self, concatenated_layers, max_intensity):
    self.processed_images = np.empty(shape=[int(self.num_layers * self.config.scale),
                                            self.roi['height'], self.roi['width']],
                                     dtype=np.uint8)
    if max_intensity < 255:
        for z in tqdm.tqdm(range(self.processed_images.shape[2])):
            # Resize one x/y slice at a time; cv2.resize expects dsize as (width, height).
            layer = concatenated_layers[:, :, z]
            self.processed_images[:, :, z] = cv2.resize(layer,
                                                        dsize=(layer.shape[1], int(layer.shape[0] * self.config.scale)),
                                                        interpolation=cv2.INTER_LANCZOS4)
            # Stretch the intensities to the full uint8 range.
            self.processed_images[:, :, z] = (self.processed_images[:, :, z] / max_intensity * 255).astype(np.uint8)
    else:
        for z in tqdm.tqdm(range(self.processed_images.shape[2])):
            layer = concatenated_layers[:, :, z]
            self.processed_images[:, :, z] = cv2.resize(layer,
                                                        dsize=(layer.shape[1], int(layer.shape[0] * self.config.scale)),
                                                        interpolation=cv2.INTER_LANCZOS4)
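In case someone wants to reproduce my numbers, this is a minimal sketch of how I check the resident memory around the call (assuming the third-party psutil package is installed; matrix_builder is a stand-in for my actual class instance):

import os
import psutil

def rss_gb():
    # Resident set size of this process as reported by the OS, in GB.
    return psutil.Process(os.getpid()).memory_info().rss / 1e9

print(f"before stacking: {rss_gb():.2f} GB")
matrix_builder.construct_3d_matrix()  # stand-in for my actual instance
print(f"after stacking:  {rss_gb():.2f} GB")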