I obviously don’t understand how numpy works with images.
I have this image:

I am using this code to get the location of all the green pixels in Image A:
import numpy as np
import PIL.Image
import PIL.ImageDraw

# Load image
pngImage = PIL.Image.open('Test.png').convert('RGB')
# Convert to NumPy array
npPixels = np.array(pngImage)
# Create a drawing context on the image
draw = PIL.ImageDraw.Draw(pngImage)
# Set fill color for the points
point_fill_color = "white"
# Define the target color (RGB)
colorR = 0.0
colorG = 255.0
colorB = 0.0
colorPos = np.zeros(97, dtype=object)
for col in range(97):
    if colorR > 255:
        colorR = 255
    if colorG < 131:
        colorG = 131
    color = [colorR, colorG, colorB]  # 97.0, 131.0, 0.0
    # Find coordinates of all pixels matching the color
    coordinates = np.argwhere(np.all(npPixels == color, axis=-1))
    # The axis=-1 parameter ensures that the comparison is done across the color
    # channels (RGB), and np.all ensures that all three channels match the target color.
    colorPos[col] = coordinates
    print(len(coordinates))
    if len(coordinates) > 0:
        for i in range(len(coordinates)):
            # Output the coordinates
            print(coordinates[i])
            xx, yy = coordinates[i]
            # pyautogui.moveTo(xx, yy)
            draw.point((xx, yy), fill=point_fill_color)
    colorR += 1.0
    colorG -= 1.0
# Save image
pngImage.save('coords.png')
# Display the image
pngImage.show()
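As a side note, here is a minimal, self-contained sketch of the matching step on a tiny synthetic array (the data is made up for illustration), showing exactly what np.argwhere returns:

```python
import numpy as np

# A 2x3 "image": one target-coloured pixel at row 1, column 2
tiny = np.zeros((2, 3, 3), dtype=np.uint8)
tiny[1, 2] = [97, 131, 0]

# axis=-1 compares across the RGB channels; np.all requires all three to match
matches = np.argwhere(np.all(tiny == [97, 131, 0], axis=-1))
print(matches)  # [[1 2]]
```

The returned pairs are array indices, with the row index first.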
However, numpy rotates and flips the image, then gets the coordinates, which results in the coordinates being in the wrong place (the white pixels): Image B
To try to correct this, I flip and rotate the image first, on the assumption that numpy will then flip and rotate it back into the correct position.
# The ImageOps.flip() method flips the image vertically, and ImageOps.mirror()
# mirrors it horizontally.
editedImg = PIL.ImageOps.mirror(pngImage)
# For rotating images in PIL, the primary method is Image.rotate(), which
# rotates an image by a specified angle in degrees, counter-clockwise by default.
# To rotate clockwise, a negative angle can be used.
# The expand parameter can be set to True to enlarge the output image to fit the
# entire rotated image, preventing cropping.
pngImage = editedImg.rotate(90, expand=True)
# editedImg = PIL.ImageOps.flip(imgCopy)
# editedImg = PIL.ImageOps.mirror(pngImage)
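For what it's worth, those two operations have direct array equivalents, sketched here on a toy 2x3 array standing in for the image:

```python
import numpy as np

arr = np.arange(6).reshape(2, 3)  # [[0, 1, 2], [3, 4, 5]]

# Horizontal mirror (what ImageOps.mirror does): reverse the column axis
mirrored = arr[:, ::-1]
print(mirrored)  # [[2 1 0]
                 #  [5 4 3]]

# 90-degree counter-clockwise rotation (roughly what
# Image.rotate(90, expand=True) does to the pixel grid)
rotated = np.rot90(arr)
print(rotated)   # [[2 5]
                 #  [1 4]
                 #  [0 3]]
```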
However, numpy leaves the image as it is… as though it needs the image in that position: Image C
How can I get numpy to get the coordinates with the image in its upright position and facing in the right direction? Image A
@MadPoet Regarding your question, "Am I right in thinking one needs the 'coordinates' to be integers? And the output is in strings?" You are correct. You need to convert each string to an integer with int(), e.g. int(coords).
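A minimal sketch of that conversion (the coords string here is made up for illustration):

```python
# A string of coordinates, e.g. parsed from printed output
coords = "123 456"

# Convert each part to an integer before using it as a coordinate
xx, yy = (int(part) for part in coords.split())
print(xx, yy)  # 123 456
```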
I did not reply in that post, in respect of the guidelines:
This topic has been solved. Only reply here if:
- You have additional details
- The solution doesn't work for you

If you have an unrelated issue, please start a new topic instead.
@MadPoet I should have mentioned that I abandoned using getPixel, because it's 50 times slower than iterating over an image pixel by pixel in C++. It moves like a snail.
So, I would recommend you use another method.
This method is very fast; the only problem with it is the one I am having here.
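To illustrate the speed gap on synthetic data (no PIL involved; a plain Python per-pixel loop stands in for the getpixel approach):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 200x200 "image" with one guaranteed match planted
img = rng.integers(0, 256, size=(200, 200, 3), dtype=np.uint8)
target = [97, 131, 0]
img[5, 7] = target

# Slow: visit every pixel from Python, one at a time
slow_hits = [(r, c)
             for r in range(img.shape[0])
             for c in range(img.shape[1])
             if list(img[r, c]) == target]

# Fast: one vectorized comparison over the whole array
fast_hits = np.argwhere(np.all(img == target, axis=-1))

# Both report the same pixels; the vectorized version does the
# per-pixel work in compiled code instead of the Python interpreter.
print(len(slow_hits) == len(fast_hits))  # True
```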
