I have an image source that provides 12 bits per pixel data with very subtle intensity differences. I’d like to create an improved visualization using pseudo coloring but haven’t succeeded so far. I know there is an example in the Foundation Package, but that seems to only work on 8 bits per pixel images. Am I missing something?
Welcome back @taserface!
I guess the :cvb: Foundation Package example you are referring to is the C# Multi Threaded Pseudo Color Example (found in %CVB%\Tutorial\Foundation\CSharp\CSPseudoColorMultiThreading), which shows how to use multi-threading to generate a pseudo color image. Let’s ignore the multi-threaded aspect here for the sake of this discussion. This example program will indeed not generate useful output when working on input images with more than 8 bits per pixel. The reason is that the lookup table levels array defined in line 38 of MainForm.cs
private double[] _LUTLevels = new double[7] { 0, 42, 86, 128, 170, 221, 255 };
is not suitable for images with more than 8 bits per pixel (the levels only span the intensity range of an image with 8 bits per pixel). If you adapt these levels to the images you are working on, you should see better results.
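For example (a sketch, assuming your 12 bit data really uses the full 0…4095 range - see the next paragraph for why you often do not want that), the 8 bit levels scaled by 4095/255 would become:
// Sketch: the original 8 bit levels scaled by 4095/255 to cover the full
// 12 bit intensity range (0...4095); adjust to your actual data.
private double[] _LUTLevels = new double[7] { 0, 674, 1381, 2056, 2730, 3549, 4095 };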
Remember that adapting the levels to the images you are working on does not necessarily mean covering the entire value range of the pixel data type! It often makes a lot more sense to restrict the pseudo coloring to the intensity range you are actually interested in. You might also want to have a look at the C# HDR Display Example (located in %CVB%\Tutorial\Foundation\CSharp\CSHDRDisplay). This example uses an input image with a value range of roughly 0 to 32768 (I say “roughly” because there are actually a few pixels that exceed this value range, but they are meaningless in content and negligible in number).
If you have the example program generate a pseudo color image that covers the entire range [0…32767] in one image, the result looks like this with the color scheme this tutorial uses:
This uses only about a quarter of the available colors:
Why is that? Simply because the vast majority of the pixels in this image are in the upper 25% of the value range, and therefore the pseudo color representation does not really reveal more detail than mapping the range [0…32767] to the interval [0…255]:
(using the display mode HBSM_Global)
When restricting the range to be mapped to e.g. 26000 to 32767, the result differs vastly:
This is where pseudo coloring starts making sense.
By tweaking the range further you can also use pseudo coloring to mask out any irrelevant parts of the image - like the stretches between the waffle grid:
(pseudo coloring for the interval [26000…29000])
Thanks for the description, @illusive. It does make some things clearer for me, but it still does not add up. Looking at the source code, both programs seem to use a totally different approach, and I still don’t see what I need to do to create the pseudo color image:
- How do I get the range of pixel values that I am interested in?
- What do I need to code for the pseudo color image?
- Which of the two examples you mentioned is better?
Ok, let’s go through this systematically step by step.
The generation of a pseudo color image is effectively the task of mapping an intensity range from the input image (which is usually monochrome and not unlikely to have more than 8 bits per pixel) to an RGB image with 24 bits per pixel (or, from the :cvb: perspective, 3x8 bits per pixel).
For the range of intensities to be mapped there are two possibilities: You can either have the operator enter fixed values (as is the case with the C# HDR Display Example) or you can try to determine a useful range by analyzing the histogram of the input image. For the latter, the LightMeter tool (part of the :cvb: Foundation Package) will be helpful. You could for example go through the histogram (LMGetSingleHistogramEntry), starting at the median value, and determine the gray value range into which e.g. 50% of the pixels fall.
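If you go the histogram route, the logic could look like the following sketch. It assumes you have already filled a plain long[] histogram (one count per gray value, e.g. via LMGetSingleHistogramEntry in a loop); the FindIntensityRange name and the 50% fraction are illustrative, not part of any example:
// Sketch: find the smallest gray value interval around the median that
// contains the requested fraction (e.g. 0.5) of all pixels; 'histogram'
// holds one count per gray value and must already be filled.
private static void FindIntensityRange(long[] histogram, double fraction,
                                       out int minValue, out int maxValue)
{
  long total = 0;
  foreach (long count in histogram)
    total += count;
  // locate the median gray value
  long acc = 0;
  int median = 0;
  for (int i = 0; i < histogram.Length; i++)
  {
    acc += histogram[i];
    if (acc >= total / 2) { median = i; break; }
  }
  // grow the interval around the median until it covers the requested fraction
  minValue = maxValue = median;
  long covered = histogram[median];
  while (covered < total * fraction && (minValue > 0 || maxValue < histogram.Length - 1))
  {
    long left = minValue > 0 ? histogram[minValue - 1] : -1;
    long right = maxValue < histogram.Length - 1 ? histogram[maxValue + 1] : -1;
    if (left >= right)
      covered += histogram[--minValue];
    else
      covered += histogram[++maxValue];
  }
}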
Once you have determined the minimum and maximum intensity of the range to be mapped, the next step is to come up with a mapping scheme from the monochrome intensities to the RGB values of the pseudo color output image. In the two demos I mentioned in my previous post, three schemes are implemented:
- C# Multi Threaded Pseudo Color Example uses two mapping schemes that divide the input intensity interval (in the case of the demo [0…255]) into 6 equally sized segments. The input values in these segments are then mapped to RGB values roughly according to the following graphs:
  - Case 1 produces this output palette:
  - Case 2 produces this output palette:
- C# HDR Display Example divides the input range into just four segments and assigns the RGB output values according to the following scheme, which produces this output palette:
Which palette you choose is mostly a matter of taste. The first two have been used a lot in thermography applications and are therefore likely to satisfy user expectations in that field, but from a functional point of view the three suggestions are fairly similar. They are also similar in that they easily lend themselves to an implementation using lookup tables, because inside the 6 (resp. 4) segments the mapping is strictly linear - which is why the C# Multi Threaded Pseudo Color Example uses the :cvb: Foundation Package function ApplyLUTLinear to generate the three planes of the output image and then concatenates them using CreateConcatenatedImage. The relevant code snippets are:
// note: substitute these level values for
// the ones that apply to your intensity range!
private double[] _LUTLevels = new double[7] { 0, 42, 86, 128, 170, 221, 255 };
// LUT 1: Black, purple, orange, white
private double[][] _LUT1 = new double[][]
{
new double[] { 0, 140, 255, 255, 255, 255, 255 }, // Red plane
new double[] { 0, 0, 70, 125, 230, 240, 255 }, // Green plane
new double[] { 0, 170, 80, 0, 50, 128, 255 } // Blue plane
};
// LUT 2: Black, blue, green, yellow, orange, red, white
private double[][] _LUT2 = new double[][]
{
new double[] { 0, 0, 0, 205, 255, 255, 255 }, // Red plane
new double[] { 0, 85, 135, 205, 185, 35, 255 }, // Green plane
new double[] { 0, 185, 205, 0, 45, 45, 255 } // Blue plane
};
...
private SharedImg ApplyLUTSequential(double[][] LUT, Image.IMG image)
{
// Array of image planes to be concatenated later
SharedImg[] planes = new SharedImg[3];
try
{
// Apply 3 different LUTs to the first plane of the image to create a result image
for (int i = 0; i < planes.Count(); i++)
{
// Applying LUT to the first plane
if (Cvb.Foundation.ApplyLUTLinear(image, 0, _LUTLevels.Count(), _LUTLevels, LUT[i], out planes[i]) < 0)
return null;
}
// Merge planes to create a pseudocolor image
SharedImg imgOut;
if (!Image.CreateConcatenatedImage(planes, true, out imgOut))
return null;
return imgOut;
}
finally
{
// Clean up
foreach (var plane in planes)
if (plane != null)
plane.Dispose();
}
}
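Called from your application, the function above can be used like this (a sketch only; inputImage stands for whatever Image.IMG handle your application already holds, e.g. a grabbed or loaded image):
// Usage sketch for the classic API function above; disposing the result
// once it is no longer needed is up to the caller.
SharedImg pseudoColor = ApplyLUTSequential(_LUT2, inputImage);
if (pseudoColor == null)
  return; // ApplyLUTLinear or CreateConcatenatedImage failed
try
{
  // hand pseudoColor over to your display or subsequent processing here
}
finally
{
  pseudoColor.Dispose();
}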
The C# HDR Display Example takes a different approach in that it calculates the pseudo color output pixel by pixel (functions HotColdColorMap and CreateHeatmapImage in ProcessingFunctions.cs). This has the advantage that no license for the :cvb: Foundation Package will be needed, but using the ApplyLUTLinear approach will be faster. The arrays to use for the hot/cold map with ApplyLUTLinear would have to look like this:
// note: substitute these level values for
// the ones that apply to your intensity range!
private double[] _LUTLevels = new double[5] { 0, 64, 128, 192, 255 };
// Hot/cold LUT
private double[][] _LUThotcold = new double[][]
{
new double[] { 0, 0, 0, 255, 255 }, // Red plane
new double[] { 0, 255, 255, 255, 0 }, // Green plane
new double[] { 255, 255, 0, 0, 0 } // Blue plane
};
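To adapt the level array to an arbitrary intensity range (e.g. the 12 bit range from the original question, or the 26000…32767 window shown above) you only need equally spaced levels between your minimum and maximum. A small hypothetical helper (the MakeLUTLevels name is mine, not part of either example) could compute them:
// Hypothetical helper: compute equally spaced LUT levels for an arbitrary
// intensity range of interest.
private static double[] MakeLUTLevels(double minValue, double maxValue, int numLevels)
{
  var levels = new double[numLevels];
  for (int i = 0; i < numLevels; i++)
    levels[i] = minValue + (maxValue - minValue) * i / (numLevels - 1);
  return levels;
}
// e.g. MakeLUTLevels(0, 4095, 7) for LUT 1/LUT 2 on a full range 12 bit image,
// or MakeLUTLevels(26000, 32767, 5) for the hot/cold LUT on the HDR example image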
And that’s about it. The fact that the C# Multi Threaded Pseudo Color Example uses multi-threading to generate the result image is a nice additional bonus, but it is not strictly necessary in most cases because the LUT functions are quite fast. And I hope this also answers your question about which demo is better: neither of them really, it’s mostly a matter of taste.
that did it for me - thanks a lot!
Is the example still valid? I can’t find a description of SharedImg and ApplyLUTLinear in the CVB.Net API Reference.
Hi @pshemas,
you’re right, there is no SharedImg in CVB.Net (SharedImg was a semi-successful attempt at letting the CLR take care of lifetime management rather than having to work with ReleaseObject all the time; it did indeed work, but more often than not the garbage collector would kick in too late, and the function semantics in the classic API made it in most cases impossible or at least cumbersome to take advantage of the using keyword and the IDisposable interface…). Instead, the CVB.Net API uses the type Stemmer.Cvb.Image (and its descendants) for essentially the same purpose, but now in a framework that accommodates CLR features more nicely than before.
The LUT functionality is now located in the static class Stemmer.Cvb.Foundation.Lut, and the above example rewritten for the CVB.Net API would look like this:
// LUT 1: Black, purple, orange, white
private LutLevel[][] LUT1 = { new LutLevel[]{ new LutLevel(0, 0),
new LutLevel(42, 140), new LutLevel(86, 255), new LutLevel(128, 255),
new LutLevel(170, 255), new LutLevel(221, 255), new LutLevel(255, 255) },
new LutLevel[]{ new LutLevel(0, 0),
new LutLevel(42, 0), new LutLevel(86, 70), new LutLevel(128, 125),
new LutLevel(170, 230), new LutLevel(221, 240), new LutLevel(255, 255) },
new LutLevel[]{ new LutLevel(0, 0),
new LutLevel(42, 170), new LutLevel(86, 80), new LutLevel(128, 0),
new LutLevel(170, 50), new LutLevel(221, 128), new LutLevel(255, 255) }};
// LUT 2: Black, blue, green, yellow, orange, red, white
private LutLevel[][] LUT2 = { new LutLevel[]{ new LutLevel(0, 0),
new LutLevel(42, 0), new LutLevel(86, 0), new LutLevel(128, 205),
new LutLevel(170, 255), new LutLevel(221, 255), new LutLevel(255, 255) },
new LutLevel[]{ new LutLevel(0, 0),
new LutLevel(42, 85), new LutLevel(86, 135), new LutLevel(128, 205),
new LutLevel(170, 185), new LutLevel(221, 35), new LutLevel(255, 255) },
new LutLevel[]{ new LutLevel(0, 0),
new LutLevel(42, 185), new LutLevel(86, 205), new LutLevel(128, 0),
new LutLevel(170, 45), new LutLevel(221, 45), new LutLevel(255, 255) }};
...
private static Image ApplyLUTSequential(LutLevel[][] LUT, Image image)
{
Image[] planes = new Image[LUT.Length];
for (int i = 0; i < LUT.Length; ++i)
planes[i] = Lut.Apply(image.Planes[0], LUT[i], LutInterpolation.Linear);
return Image.FromImages(MappingOption.LinkPixels, planes);
}
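Used e.g. like this from within the same class (a sketch; the file name is a placeholder, and Image.FromFile is the usual CVB.Net entry point for loading images from disk):
// Sketch: load a monochrome input image, build the pseudo color image and
// let 'using' handle the lifetime - exactly the IDisposable pattern that
// was cumbersome with the classic SharedImg.
using (Image input = Image.FromFile(@"C:\path\to\your\image.tif")) // placeholder path
using (Image pseudoColor = ApplyLUTSequential(LUT2, input))
{
  // hand pseudoColor over to a display control or further processing here
}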
The declaration of the lookup tables has become a bit more verbose than before (and having to declare 2x3 LUTs does not exactly make the code shorter) as the levels and the values are now linked into one type (LutLevel) - but the actual code has become shorter and more descriptive.
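If the verbosity bothers you, the nested arrays can also be built from plain level/value arrays like the ones in the classic snippet; a hypothetical helper (the MakeLut name is mine) might look like this:
// Hypothetical helper: build a LutLevel[][] table from a plain level array
// and one value array per output plane, keeping the numeric data as compact
// as in the classic example.
private static LutLevel[][] MakeLut(double[] levels, double[][] planeValues)
{
  var lut = new LutLevel[planeValues.Length][];
  for (int plane = 0; plane < planeValues.Length; plane++)
  {
    lut[plane] = new LutLevel[levels.Length];
    for (int i = 0; i < levels.Length; i++)
      lut[plane][i] = new LutLevel(levels[i], planeValues[plane][i]);
  }
  return lut;
}
// e.g. passing the _LUTLevels and _LUT1 arrays from the classic example
// reproduces LUT1 above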