Sure, if there is an object with a distinct hue or value (brightness). For hue you would probably need wraparound / range thresholds, and another color space (e.g. Lab via rgb2lab) might work better for that.
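For example, a red-ish hue straddles the hue wraparound point, so one option is to threshold two ranges and OR the masks. A minimal sketch, assuming OpenCV and a BGR input image (the hue bounds and the saturation/value floors below are just placeholders):

    import cv2

    def hue_mask(bgr, lo_deg, hi_deg, s_min=50, v_min=50):
        """Binary mask of pixels whose hue lies in [lo_deg, hi_deg] degrees (0-360),
        handling wraparound across 0/360. OpenCV stores hue as 0-179."""
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        lo, hi = int(lo_deg / 2), int(hi_deg / 2)   # degrees -> OpenCV hue units
        if lo <= hi:
            # normal range, e.g. green
            return cv2.inRange(hsv, (lo, s_min, v_min), (hi, 255, 255))
        # wraparound range, e.g. red: [lo..179] OR [0..hi]
        m1 = cv2.inRange(hsv, (lo, s_min, v_min), (179, 255, 255))
        m2 = cv2.inRange(hsv, (0, s_min, v_min), (hi, 255, 255))
        return cv2.bitwise_or(m1, m2)

    # e.g. a "red" range spanning the wraparound, 340..20 degrees:
    # mask = hue_mask(cv2.imread("image.jpg"), 340, 20)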
Possible, sure, but typically it is less useful and harder to define meaningful criteria there than in more intuitive / physical spaces like brightness.
These are distances in pixels, between 1 and however large your image is. If you know the physical size of a pixel you could compute the threshold in pixels from a user-specified distance in meters, or use a percentage of the image size. The "1", however, is mainly there to deal with pixel noise and is not really tied to the image size / resolution.
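For illustration only (the pixel size, distance, and the 1% figure below are made-up placeholders), such a conversion could look like:

    # placeholder values; in practice h, w come from image.shape[:2]
    h, w = 3000, 4000                 # assumed image dimensions in pixels
    meters_per_pixel = 0.002          # assumed known from camera / quadrat calibration
    min_dist_m = 0.05                 # user-specified distance in meters

    # threshold in pixels from a physical distance...
    min_dist_px = max(1, round(min_dist_m / meters_per_pixel))

    # ...or simply as a percentage of the image size
    min_dist_px = max(1, round(0.01 * max(h, w)))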
Error:
    Exception in Tkinter callback
    Traceback (most recent call last):
      File "C:\Users\...\Python310\lib\tkinter\__init__.py", line 1921, in __call__
        return self.func(*args)
      File "C:\Users\...\temp.py", line 146, in entw1
        if mask.count_nonzero() < mask_temp.count_nonzero():
    AttributeError: 'numpy.ndarray' object has no attribute 'count_nonzero'
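(For what it's worth, count_nonzero is a module-level NumPy function rather than an ndarray method, so a likely fix, assuming mask and mask_temp are plain NumPy arrays, would be:)

    import numpy as np

    # mask / mask_temp stand in for the masks from the original script
    mask = np.zeros((4, 4), dtype=np.uint8)
    mask_temp = np.ones((4, 4), dtype=np.uint8)

    # ndarray has no count_nonzero method; use the module-level function instead
    if np.count_nonzero(mask) < np.count_nonzero(mask_temp):
        mask = mask_temp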
Also, without using such approaches, the output for images in which the object touches the image border is empty (or a completely black image). For instance, see the image below, in which the white ribbon tied to the quadrat reaches the image edge:
Do you think that I (as a beginner) could use the advanced approach, i.e. machine-learning convolutional networks? For instance, how should I prepare the training sets? Should they include the raw images on one hand and the cropped areas on the other? How large should the training sets be? And is it a good idea to use AI to write such code and models?
Thanks.