I’m currently looking at the topics relating to converting from CVB to OpenCV, and wondering whether I actually need to do it or not. I’m not familiar with all the available functionality in CVB.
Currently I’m only using one piece of pure OpenCV functionality, MergeMertens, for fusing multiple different exposures of an image.
There are also other image manipulations that I’m performing:
‘deinterleaving’ lines from a Linea camera into separate images that can be fused
‘tiling’ several images into one bigger image
rescale/flip/rotate - I know I can do those easily in CVB.
CVB provides a CreateImageMap function that easily lets you separate parts of an image by specifying an area of interest (AOI).
To stitch two images of the same size together, you can use the CreatePanoramicImageMap function. This can be done either vertically or horizontally. Of course it is also possible to copy the images “by hand” into a target image and arrange them manually as you like.
Regarding the other tasks you mentioned, we’d better ask @parsd or @illusive.
In the old .NET API it can be found in iCVCImg.dll; in the new wrappers the function is called Map and lives in Stemmer.Cvb.dll. It can be called directly on the image object:
public MappedImage Map(
    Rect sourceRect,
    Size2D targetSize
)
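As a quick illustration (the AOI values and the image variable here are made up), an unscaled 1:1 mapping of the left half of an image, without copying any pixel data, could look like this:

// Hypothetical example: map the left half of 'image' into a new mapped view.
// Passing the AOI size as the target size gives an unscaled 1:1 mapping.
var aoi = new Rect(new Point2D(0, 0), new Size2D(image.Width / 2, image.Height));
MappedImage leftHalf = image.Map(aoi, new Size2D(image.Width / 2, image.Height));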
For me it is not clear what exactly is meant by deinterleaving (do you mean separating meta data from pixel data?) and how to tile (horizontally and/or vertically, or combined?).
With CVB we aim to make your life easier for acquisition (the Linea is a very special case), to provide some basics, and to offer some nice vertical solutions. We will not have the coverage that OpenCV gives you; thus we do not have a MergeMertens algorithm. But our aim is also to play nice with these libraries. For example, if you search the forum for OpenCV you can get some hints on how to use both together, like here:
I must admit I’m not too sure what the proper term is for what we’re doing.
We’re cycling 3 different exposures on each line of a Linea camera. Then I split the image into three images (by taking one row at a time from the source image), so that I can feed the three resulting images into MergeMertens. So: a special case on top of a special case.
(the illustration only shows 2 exposures, but you get the idea)
The ‘Horizontal Tile’ looks like this:
I’m currently trying to work out whether, for a straight 1:1 copy, I can get the OpenCV Mat to ‘point’ at the CVB Image data without getting tripped up by garbage collection. I think I read something about that here on the forum, but can’t find it again. Maybe it was C++ related? This obviously wouldn’t work for the special cases I just described, but for other images it would be nice to be able to ‘blit’ them directly.
It actually can if called repetitively, but only in either x or y direction - never in both simultaneously. The reason for this is that PanoramicMappedImage.Create and the like always create mapped images, meaning that the image that is returned does not actually possess pixel memory of its own but simply refers to the memory of the input images (thus reducing the memory footprint and removing the need to sort the pixel data by copying them). This is done by simply generating pixel offset tables (the VPATs that are occasionally mentioned in the documentation) that point to the right addresses. As a side effect, whenever one of the constituents is updated, the tiled image will of course also have updated pixel data (useful when working with drivers/cameras…).
While this system works nicely in that it reduces resource consumption, it is by design limited to benign situations (which luckily are very likely fulfilled by your average image source; a conceptual sketch follows after the list), namely:
The offset sequence in the VPATs of the input images must be linear (by default this is usually the case)
When merging two images, the offsets in the table that is shared between both input images must be identical (this is usually the case if both images have the same size, depth and data type and a linear VPAT layout)
If you were to tile four images in a 2x2 arrangement then it would generally not be possible to create the correct offset tables for the bottom right section of the tiling result. Therefore images may only be tiled in x direction or in y direction, but never in both.
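To make the shared-table condition a bit more tangible, here is a conceptual sketch (this is not CVB’s actual implementation, just the addressing idea): a VPAT-style view reads pixel (x, y) from basePtr + xTable[x] + yTable[y], so vertical tiling amounts to concatenating y tables while both inputs must agree on the x table:

// Conceptual sketch only - not the CVB implementation.
static long[] ConcatenateYTables(long[] topY, long[] bottomY, long bottomBaseOffset)
{
    // Vertical tiling: keep the (shared) x table and append the bottom
    // image's y offsets, rebased onto the top image's base pointer.
    var tiled = new long[topY.Length + bottomY.Length];
    topY.CopyTo(tiled, 0);
    for (var i = 0; i < bottomY.Length; ++i)
        tiled[topY.Length + i] = bottomY[i] + bottomBaseOffset;
    return tiled;
}

A 2x2 arrangement would need the bottom right tile to combine the right image’s x table with the bottom image’s y table at the same time, which a single pair of tables cannot express - hence the one-direction-at-a-time restriction.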
Of course the simple alternative to a function that tiles the images together would be to create a sufficiently big destination image, then copy the source images into their respective sections by means of the Image.CopyTo method.
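Sketched out, that could look like this (note: the Image constructor and the CopyTo overload used here are assumptions; check the Stemmer.Cvb documentation for the exact signatures):

// Sketch only: 2x2 tiling by copying. Assumes four equally sized mono-8
// inputs, an Image(width, height) constructor and a CopyTo overload that
// takes a destination AOI - both to be verified against the actual API.
static Image Tile2x2(Image tl, Image tr, Image bl, Image br)
{
    var dst = new Image(2 * tl.Width, 2 * tl.Height);
    tl.CopyTo(dst, new Rect(new Point2D(0, 0), new Size2D(tl.Width, tl.Height)));
    tr.CopyTo(dst, new Rect(new Point2D(tl.Width, 0), new Size2D(tr.Width, tr.Height)));
    bl.CopyTo(dst, new Rect(new Point2D(0, tl.Height), new Size2D(bl.Width, bl.Height)));
    br.CopyTo(dst, new Rect(new Point2D(tl.Width, tl.Height), new Size2D(br.Width, br.Height)));
    return dst;
}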
Well, that looks like a good case for the Map method. If you are not overly concerned about losing the last three lines, then this method will do the trick:
public static Image[] SeparateInterleavedChannels(Image src, int numChannels)
{
    if (numChannels < 2)
        throw new ArgumentException($"{nameof(numChannels)} calling for less than 2 channels makes no sense");
    if (src.Height < 2 * numChannels)
        throw new InvalidOperationException($"{nameof(src)} must be at least 2 * {nameof(numChannels)} pixels high");

    // Lines per deinterleaved image: deliberately one less than the rounded-up
    // quotient, dropping a few trailing lines (see the explanation below).
    var dstHeight = (src.Height + numChannels - 1) / numChannels - 1;
    var retVal = new Image[numChannels];
    for (var i = 0; i < numChannels; ++i)
    {
        // AOI starting at line i, spanning a whole multiple of numChannels lines...
        var aoiSrc = new Rect(new Point2D(0, i), new Size2D(src.Width, dstHeight * numChannels));
        // ...mapped to 1/numChannels of its height, i.e. every numChannels-th line.
        retVal[i] = src.Map(aoiSrc, new Size2D(src.Width, dstHeight));
    }
    return retVal;
}
Dropping the last bunch of lines (as happens in the calculation of dstHeight) is necessary because as soon as dstHeight * numChannels = aoiSrc.Height is violated, rounding will lead to destination image VPATs that are unsuitable for our purpose. As a quick example: for a 2048 lines high source and numChannels = 3, dstHeight works out to 682, so each AOI spans 682 * 3 = 2046 lines and the last lines of the source are dropped.
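Used on the three-exposure stream described above, the call is then simply:

// Split the interleaved Linea frame into one image per exposure; each of
// these can then be handed to OpenCV's MergeMertens.
Image[] exposures = SeparateInterleavedChannels(acquiredImage, 3);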
Garbage collection should not bite you here: although Stemmer.Cvb.Image is a reference counted object, its pixel buffers are not and are therefore not subject to relocation during the garbage collection process. All you need to do is make sure the image is not disposed/collected while OpenCV is still working on it (this would be a major concern when working with e.g. System.Drawing.Bitmap; there you’d need to pin the pixel buffer while unmanaged code is working on it).
My experience with OpenCV is limited, but I would guess that a Mat can be created with a user-specified y-stride - and if you set it to 3 * the line increment of your source image, I guess you’re where you wanted to be (of course this might render the whole mapped image discussion moot - unless you wanted to display them somewhere in your UI…).
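A sketch of that idea (assuming Emgu CV’s Mat(rows, cols, DepthType, channels, IntPtr data, int step) constructor and 8-bit mono pixel data in a linear buffer):

using Emgu.CV;
using Emgu.CV.CvEnum;
using Stemmer.Cvb;

// Sketch: view every numChannels-th line of the CVB image as its own Mat by
// multiplying the y-stride. No pixel data is copied, so the Mats are only
// valid while 'image' is alive and not disposed.
static Mat[] DeinterleaveAsMats(Image image, int numChannels)
{
    var access = image.Planes[0].GetLinearAccess();
    var yInc = (int)access.YInc;            // line increment in bytes
    var rows = image.Height / numChannels;  // lines per exposure
    var mats = new Mat[numChannels];
    for (var i = 0; i < numChannels; ++i)
    {
        // Mat i starts at line i and then steps numChannels lines at a time.
        mats[i] = new Mat(rows, image.Width, DepthType.Cv8U, 1,
                          access.BasePtr + i * yInc, numChannels * yInc);
    }
    return mats;
}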
Thanks for all this, I’ll give it a try.
Straight mapping turns out to be quite easy:
private static Mat ImageToMat(Image image)
{
    // Wrap the CVB image's pixel buffer in a Mat without copying. BasePtr is
    // the address of the first pixel, YInc the line increment in bytes;
    // assumes 8-bit pixel data in one contiguous, interleaved buffer.
    var access = image.Planes[0].GetLinearAccess();
    return new Mat(image.Height, image.Width, DepthType.Cv8U, image.Planes.Count,
                   access.BasePtr, (int)access.YInc);
}
The only problem is that OpenCV defaults to BGR, so the colors come out a bit funny! I guess I’ll have to try to persuade my user that grayscale is a good idea
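If grayscale doesn’t fly, a channel swap should also do the trick (CvtColor is standard OpenCV; the Emgu enum value here is from memory, so worth verifying):

using (var rgb = ImageToMat(image))
using (var bgr = new Mat())
{
    // Swap the channel order into a new Mat; 'bgr' owns its own memory, so it
    // also stays valid independently of the CVB image's lifetime.
    CvInvoke.CvtColor(rgb, bgr, ColorConversion.Rgb2Bgr);
    // ... feed 'bgr' into the rest of the OpenCV pipeline ...
}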
Yes, I’ve discovered that - I wanted to do an asynchronous call inside ForEachAsync, and under certain conditions got ObjectDisposedExceptions. I guess this means that if I do want to introduce a buffer at some stage, I need to clone the images (or Mats)
The ObjectDisposedException may be raised in this case because you use a StreamImage from the .Wait() call with the default acquisition mode. With that, each new .Wait() call will unlock (i.e. Dispose) the previous one. We use a ring buffer underneath, and either the acquisition engine owns the memory or you do.
The default is meant for synchronous acquisition and processing (a .Wait()/process loop). If you want to do asynchronous processing you should set stream.RingBuffer.LockMode to On. With that you can (and must) decide when you are finished with an image by disposing it. If you do that you need to copy less and become faster. Important: also increase the number of buffers in the ring buffer by the number of images you have “in flight”; you will lose frames otherwise.
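In code, that setup might look roughly like this (RingBufferLockMode and the ChangeCount call are from memory; double-check the names against your CVB version):

using Stemmer.Cvb;
using Stemmer.Cvb.Driver;

const int imagesInFlight = 4; // however many images you process concurrently

using (Device device = DeviceFactory.Open("GenICam.vin"))
{
    var stream = device.Stream;
    // You own each acquired image until you dispose it.
    stream.RingBuffer.LockMode = RingBufferLockMode.On;
    // Assumption: ChangeCount resizes the ring buffer; add one buffer per
    // image that is still "in flight" while acquisition continues.
    stream.RingBuffer.ChangeCount(3 + imagesInFlight, DeviceUpdateMode.UpdateDeviceImage);

    stream.Start();
    StreamImage image = stream.Wait();
    // ... hand 'image' to asynchronous processing; Dispose() it when done
    // to return the buffer to the acquisition engine.
}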
No, they are owned by the ring buffer which is owned by the stream which is owned by the device.
And as long as you have a reference to any image it won’t be collected. If you want to be safe also when you wrap newly created images in a Mat, I would propose deriving from Mat if that’s possible. The derived type just references its wrapped Image, like we do in MappedImage. Then you don’t need to think about this anymore and can work flexibly.
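Something along those lines (Emgu CV’s Mat is, as far as I know, not sealed; treat this as a sketch rather than a finished class):

using System;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Stemmer.Cvb;

// Sketch: a Mat that keeps the CVB image it wraps reachable. As long as the
// Mat is alive, so is the Image, so its pixel buffer cannot be freed.
public sealed class CvbMat : Mat
{
    private readonly Image _source; // strong reference to the wrapped image

    private CvbMat(Image image, IntPtr basePtr, int yInc)
        : base(image.Height, image.Width, DepthType.Cv8U, image.Planes.Count,
               basePtr, yInc)
    {
        _source = image;
    }

    public static CvbMat FromImage(Image image)
    {
        var access = image.Planes[0].GetLinearAccess();
        return new CvbMat(image, access.BasePtr, (int)access.YInc);
    }

    // Whether disposing the Mat should also dispose the Image depends on who
    // owns it (see the ring buffer discussion above).
}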