Segmentation of thermal image based on temperature values

Are you wanting to subsample (crop, so to speak) the images by temperature? For example, by “binning” the image’s pixel values and then pulling those bins out of the image?

Here’s an example of drawing segmentation lines on an image with scikit-image: Comparison of segmentation and superpixel algorithms.

@mlgtechuser I don’t want to crop. I want to do regular segmentation. The difference is that regular segmentation is based on the colors of pixels, whereas I want to base my segmentation on temperature values instead of colors. So regions with similar temperatures around each other will get classified as one segment, and so on.

I went through the link you sent. It is doing the regular image segmentation using different algorithms. I want to do the same, but using pixel temperature instead of pixel color.

I do thermal analysis now and then for electrical equipment or process machinery with heaters, and every thermal image I’ve ever taken uses color to represent temperature in a standard format like PNG or JPG (note: my thermal camera is a pretty basic FLIR that only cost $1,400).

What encoding do your thermal image files use? This one is a jpg of bottles coming out of a blow molder with too much heat in the necks, causing them to droop.

[Attached thermal image: IR_0248]

I see.

My thermal images are grayscale images with radiometric metadata. I am using the metadata to calculate the temperature of each pixel of the thermal image in degrees Celsius. Which I am able to do. I am now interested in using these temperature values for segmentation.

It is a JPEG too

Update: I just realized that when I use cv2.imwrite to write an image using the temperature matrix, I get the original image, just with lower brightness.

cv2.imwrite("Desktop", myTemperatureMatrix)

Those colors are just from the pseudocolor rendering, which makes it easier for humans to see the temperature differences. The original image is not a real color image.

Normal color images are tri-chromatic (they have three color components), while normal thermal images are monochromatic.
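For example, a quick way to see the difference with OpenCV (the file names here are just hypothetical):

```python
import cv2

color = cv2.imread("photo.jpg")                              # loads 3 channels (BGR)
thermal = cv2.imread("thermal.jpg", cv2.IMREAD_GRAYSCALE)    # loads a single channel

print(color.shape)    # e.g. (480, 640, 3) -> tri-chromatic
print(thermal.shape)  # e.g. (480, 640)    -> monochromatic
```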

Don’t the common segmentation algorithms also work on monochromatic images? BTW, I know nothing about the algorithms.

…except the ones that are trichromatic, like the example I gave. [Edit:] Perhaps you meant “raw” thermal images? (I was surprised several years ago when someone with a very expensive digital camera sent me a photo as a “raw” file that literally had an extension of ‘raw’, as in ‘sunset.raw’.)

This is simply a case of “what’s your input data?” The OP might have wanted to analyze or segment the pseudocolor. Ya just gotta ask, right? :smiley:

What’s the pixel value scale (0-255, 0.0-1.0, 0-65535…?)

when I use cv2.imwrite to write an image using the temperature matrix, I get the original image, just with lower brightness.

Check to see if this is because the value scale shifts after you export. Might be helpful to know.

Good to know you’re using OpenCV. That’s what I’m most familiar with, by far.

The pixel value scale is 0-255.

Yes, it does. When I created the image, the values were temperature values (Celsius), including negative values (ranging from -20 to 50). When I check the values of the saved image, the values start from 0 and are single-digit values.

Will performing segmentation on the image I save from the temperature value matrix segment the image based on temperature values?

My first thought is that you won’t want to just do an imwrite() because this risks losing data due to truncation. I’ll see if I can nail down what imwrite() does with negative values; truncation seems to explain the loss of brightness you described. That isn’t a big deal, though, since the image manipulation can all be done by the program and then saved afterward.

If there’s no Watershed or SLIC algorithm that will handle your raw temperature data, Numpy can efficiently scale and apply an offset to your image since it’s just an array (thanks to OpenCV handling images as numpy arrays). Numpy can also be used to transform the data into a form that imwrite() just saves without modifying.

So that brings us back to your question about whether you can feed your image array straight to Watershed or SLIC. I’ll check again now that I have better understanding about your input data.

Is the range of all of your existing data -20 to 50?

More importantly, is this the range you expect to always work with?

[Edit: ]

cv2.imwrite("Desktop", myTemperatureMatrix)

I just noticed that the filename is “Desktop” with no file extension. If there’s no file extension to tell imwrite() what encoding to use, it defaults to 8-bit unsigned values. From the OpenCV docs:

16-bit unsigned (CV_16U) images can be saved in the case of PNG, JPEG 2000, and TIFF formats

32-bit float (CV_32F) images can be saved in TIFF

If the image format is not supported, the image will be converted to 8-bit unsigned (CV_8U) and saved that way.
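So, for example, a quick round-trip check shows the float values surviving in a TIFF (the file name and data here are hypothetical):

```python
import cv2
import numpy as np

# A stand-in temperature matrix in degrees Celsius.
temps = np.random.uniform(-20.0, 50.0, size=(480, 640)).astype(np.float32)

# TIFF supports 32-bit float, so the values should round-trip unchanged.
cv2.imwrite("temps.tiff", temps)
readback = cv2.imread("temps.tiff", cv2.IMREAD_UNCHANGED)
print(readback.dtype, readback.min(), readback.max())   # float32, ~-20, ~+50
```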

I see. Thanks for the information. I’ll read more about imwrite().

Yes, the range of all my existing data is from -20 to 50. But the range can sometimes vary slightly from image to image.

I just tried both ways. I am getting the same result both times (with and without extension).

OpenCV’s Watershed function might not be what you’re looking for. I tried to feed your image to it and got an error saying that the input image must be 3-channel, so it won’t process grayscale images. NOTE that the binning approach mentioned below will handle this since the bins can be assigned color. This brings us back full circle to the pseudocolor (false color) practice that Václav mentioned at the beginning of this topic.

OpenCV’s Watershed also requires that the cell regions be identified before running the segmentation. This tutorial does just that and this document states it:

Before passing the image to the function, you have to roughly outline the desired regions in the image markers
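I haven’t confirmed this against thermal data, but a minimal sketch of that marker-based workflow (the threshold values are arbitrary placeholders, and the file path is hypothetical) looks something like:

```python
import cv2
import numpy as np

# Grayscale thermal image (8-bit).
gray = cv2.imread("thermal.png", cv2.IMREAD_GRAYSCALE)

# Rough "sure foreground" / "sure background" masks from placeholder thresholds.
_, sure_fg = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
_, sure_bg = cv2.threshold(gray, 50, 255, cv2.THRESH_BINARY)
unknown = cv2.subtract(sure_bg, sure_fg)        # band between the two thresholds

# Label the foreground blobs; 0 must mean "unknown" when calling watershed().
_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1
markers[unknown == 255] = 0

# watershed() wants an 8-bit 3-channel image, so replicate the gray channel.
bgr = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
markers = cv2.watershed(bgr, markers)           # region boundaries come back as -1
```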

SLIC appears to be simpler to use but this is unconfirmed (for me, at least) at the moment.

This OpenCV doc has some technical information about SLIC but no implementation examples. I’ve never done image segmentation so will run some tests and report back.

Okay, I added cv2.ximgproc.createSuperpixelSLIC() into my Image Hobnigator™ [1] and, after a deep dive through documentation that only an engineer could love, produced the following superpixel results (in reverse sequence).

How close is this to what you’re looking for? [2]


  1. A commercial vision system that I developed, but this isn’t its actual name. ↩︎

  2. Other than this not being thermal data, that is. I just used a reference image that I have handy. Since this isn’t StackOverflow 2017, I assume no one will rashly post anything about this obvious departure from what’s sought in this topic, but it’s probably worth acknowledging just to put everyone on the same page. [End of disclaimer.] ↩︎

@mlgtechuser Thank you for your hard work. I am grateful. The segmentation results look good and seem like what I want to achieve with the grayscale thermal images I have. However, my question remains: how can I achieve segmentation based on temperature values?
Basically, areas with similar temperatures need to be classified as a single segment if they are near each other (given that I have the pixel-wise temperature matrix for a grayscale thermal image).

You’re very welcome, BigTree. Hit the Like button and support us on Patreon. :cowboy_hat_face:

In exchange, you can tell us what the value of superpixels is. That is, what applications are they useful for? I’ve plugged it into my computer vision toolkit but am not at all sure what this new toy is good for.

As for your images, if you can post a raw example, I’ll feed it to the function and see if it processes as-is. If it works, I’ll post the core code for the function instance and contour generation.

If SuperpixelSLIC gets indigestion instead, then we just have to scale the image array from -20 to +50 → 0-255. (This is just an offset shift and then multiplication by the ratio of the two ranges.) This is pretty simple and you can handle it if you like, or I’ll do it. Just let me know. Reversing the scaling back to -20 to +50 is equally simple.
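A minimal sketch of that scaling, assuming a float temperature matrix and a fixed -20 to +50 range, would be something like:

```python
import numpy as np

t_min, t_max = -20.0, 50.0

def temps_to_uint8(temps):
    """Shift by the offset, scale by the range ratio, and clip to 0-255."""
    scaled = (temps - t_min) / (t_max - t_min) * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)

def uint8_to_temps(img8):
    """Reverse the scaling back to approximate degrees Celsius."""
    return img8.astype(np.float32) / 255.0 * (t_max - t_min) + t_min
```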

[Edit: ] Come to think of it, we should scale the image and compare to unscaled to see if there’s any difference. SuperPixelSLIC could just silently discard the negative values or something and lose data.

Now, what did I just accomplish, exactly…? :smile:


I ran your posted image above while I’m waiting for the unmodified raw values. BTW, the posted jpg’s pixel values are  np.min() -> 0  and  np.max() -> 253, so the saved file contains the full image intensity depth, at least within the 0-255 range. imwrite() seems to have scaled it to the full range available in the jpg format, but I still suspect that the negative values were truncated, since the brightness changed (most likely the contrast). Having the raw -20 to +50 image will allow us to reveal the full story.

Anyway, here are the results using the image posted above. You can see that it’s necessary to adjust the region size and “ruler” value to granulate the contours enough to reveal the spaces between the panels, or whatever those are. My working theory is that the slot edges tend to get overlooked if the region size is larger than the slot width. This is probably a safe hypothesis. Click the images to view full size.
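For reference, the core of what I’m running looks roughly like this (cv2.ximgproc comes from the opencv-contrib-python package; the file name and parameter values here are placeholders, not my actual settings):

```python
import cv2

img = cv2.imread("thermal.jpg")            # e.g. the posted jpg

# Larger region_size -> fewer, bigger superpixels; ruler trades off
# color similarity against spatial compactness.
slic = cv2.ximgproc.createSuperpixelSLIC(
    img, algorithm=cv2.ximgproc.SLIC, region_size=40, ruler=10.0)
slic.iterate(10)

labels = slic.getLabels()                  # per-pixel superpixel IDs
mask = slic.getLabelContourMask()          # 255 on superpixel boundaries
outlined = img.copy()
outlined[mask == 255] = (0, 0, 255)        # draw the contours in red
cv2.imwrite("superpixels.png", outlined)
```

Switching `algorithm` to cv2.ximgproc.SLICO or cv2.ximgproc.MSLIC selects the variants discussed below.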

My understanding is that the segmentation will do this clustering natively (or at least some of the segmentation algorithms will [1]). If not, you might get some benefit by ‘binning’ the values to aggregate the similar temperature bands before passing it through SLIC. Image processing is very trial-and-error with a lot of educated guessing. Some call it a black art and I guess that’s “not inaccurate”, though often it’s just a matter of having the right tuning tools combined with being patiently methodical.

@bigtree, I upgraded the function to invoke all three SLIC algorithms (SLIC, SLICO, MSLIC). SLICO only uses a regionSize parameter and will be the easiest to tune, though the mosaic is more organic (like reptile skin; not a rectilinear grid like SLIC). MSLIC also produces an organic pattern but uses that “ruler” parameter to produce results that appear comparable to SLICO, with more fiddling required.

Of course, there is some ‘optimal’ number of superpixels, depending on whatever subsequent analysis and processing needs to be done. That subsequent processing will probably govern the choice of ideal algorithm.

How the slots between panels are treated is a major difference between them all, and how the black regions in the slots are treated also varies widely.

SLICO ALGORITHM - Composite and Mask


  1. I’ve been studying up on segmentation and there is a wealth of algorithms to choose from ↩︎

@mlgtechuser
I understand what you’re saying. The results you have shared look good to me. The only thing is that I would like to have far fewer superpixels.

Does this mean, that you would like the image as an attachment?

I’m here to help (and also to promote successful applications of Python), so if you send me a raw image to work with we can continue until you achieve the end result you need. We’re only at the proof-of-concept stage and can produce fewer superpixels with some parameter tuning. We need to use the full data in order to see what temperature levels are produced in your regions of interest. Then we can see if the combination of a given algorithm and its parameter tuning suits your needs. My thought is that I can do the heavy lifting on the OpenCV library function side so you can focus on the Python code for functions to load and display the images. We can discuss these details in a PM and just post general content here in the forum.

This superpixel work also helps me with professional development, so I’m happy to make time for it. If you’d prefer to continue via PM so you can share and discuss more freely than otherwise might make sense in a public forum, you’re more than welcome to do that.

I found a fairly recent paper (2015) on superpixels that contains a nice survey of the more popular algorithms. They’re very different and some tend to aggregate clusters more than others. Some are “blobby” and tend to make uniform, rounded cells; some make sharp, erratic ones. These are the ones available in OpenCV:

  • SLIC - Simple Linear Iterative Clustering
    • three different versions
  • LSC - Linear Spectral Clustering
  • SEEDS - Superpixels Extracted via Energy-Driven Sampling

I also found a GitHub repository with a collection of additional superpixel algorithms. The three (or five) listed above have a good chance of being adequate, possibly even perfect, and the GitHub algorithms provide additional options. Those additional options might be complicated by Python integration and licensing issues, though (I know one of them is not usable for commercial projects), so we should work with the standard OpenCV library first (the three algorithms above).

The GitHub repository’s owner has also published a short but broad survey of superpixel algorithms at ResearchGate. This paper has a very nice thumbnail gallery of many superpixel algorithms.


Hi, would you mind sharing a Git link for your way of calculating the temperature matrix? I am also trying, but struggling a bit.

How many would you like? Python is extremely good for computation, which is why it’s used so often for data science and computational research. We can perform many, many calculations and manipulations! For example, I realized that your application may not need superpixels at all. I need more information about what your desired end result is, though. If we can clearly define the problem you’re solving, then the solution is usually equally clear.

  • What amount of detail do you need to keep? In other words, is it okay to lose small areas of temperature information (pixels) in order to produce the much smaller number of superpixels that you would like?
  • How should the gaps between panels come out? Is it okay if the gaps are blended in with the panel areas or would you prefer to have one or two long, skinny superpixels there?

I also need some clean input data. If you will post (or send me) a raw thermal image to work with, I will be able to evaluate the outputs we can achieve. I can produce hundreds of very different output results in a few minutes.

A simple approach is binning (grouping) of temperature ranges. This can be done by scaling down and then back up with integer math. Here’s an example: [1]

from random import randint

rawdata = [randint(0, 19) for i in range(20)]
print(rawdata)
compressed = [item // 5 for item in rawdata]  # integer-divide to group values into 5-wide bins
restored = [item * 5 for item in compressed]  # scale back up to the bin floors
print(restored)

SAMPLE OUTPUT

[6, 2, 12, 8, 10, 2, 18, 15, 1, 8, 16, 15, 14, 7, 3, 15, 12, 7, 10, 5]
[5, 0, 10, 5, 10, 0, 15, 15, 0, 5, 15, 15, 10, 5, 0, 15, 10, 5, 10, 5]
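Applied to a 2-D temperature array instead of a list, the same idea might look like this (the data here is just a random stand-in, with 5-degree bands):

```python
import numpy as np

temps = np.random.uniform(-20.0, 50.0, size=(480, 640))   # hypothetical matrix

bin_width = 5.0                                            # degrees C per band
binned = np.floor(temps / bin_width) * bin_width           # snap down to band floors
print(np.unique(binned))                                   # the temperature bands present
```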

  1. Note to everyone: I had to add ‘python’ to the first set of backticks to properly render the // and the text after it. ↩︎

Hello Alexander, and welcome to Python Foundation’s Discussion forum!

As you can see from my post above, we are still working out the best calculation to use for this particular temperature data.

If you start a new topic, I and others will see if we can help you. Here are some guidelines for starting a topic about code help:

NOTE: The purpose of Help here is to help people write code, not to write programs for them. :cowboy_hat_face: If you just need some ideas on how to start and structure a problem, that’s 100% okay. We can give you some general ideas and you can start coding from there. Any code suggested will probably be ‘pseudocode’ containing some useful Python instructions.

  1. So share the part of your code that fully shows what you would like help with, and…
  2. Paste some input data
  3. Paste enough output data to show the undesired result and its context
  4. Show the output you expect
  5. IMPORTANT: paste the complete text of any error messages you get
    …and fence the error text in backticks like this:
<code goes here>

You can also use inline monospace by enclosing the text in single backticks: `use inline monospace`.

The Complete Question Checklist