Hello, I am not sure which category to put this in, so please feel free to re-assign it.
I have a specific question, but I think it is generally applicable to CVB. I would like to be able to use any shape of ROI for processing. Right now, I am thinking about Blob, but it could be any processing.
As far as I can tell, it is only possible to use rectangular ROIs (also parallelograms) in any CVB tool. I started to look into whether it was possible to use masking to create something more complicated, but I have not succeeded.
Is there a way, or do I have to deal with the results in post-processing?
Hmm, I don't think that special ROI shapes do much in the way of speed increase. That being said, I don't think it's possible directly, although it is possible to check whether the blob locations lie within the mask.
Well, answering in a general way I must say: it depends…
In your specific case you could apply a mask. With FBlob for example you can split the binarize operation from the actual search (which is also how we did the color blob in our user meeting’s hands on session). You could create a mask image (0 to ignore/255 to include a pixel) and bitwise AND it with your binarized image. Such a mask image can be generated e.g. via thresholding functions or our destructive drawing functions. (The mask image can also be applied to the original image if you have a threshold that excludes the 0 pixel value.)
ROIs in general: here I want to differentiate two use cases for ROIs:
1. ignore certain areas, as in your specific case
2. process pixels in a certain order
Regarding 1.: often you can apply the mask technique as with blob. Some of our tools like Minos can use overlay bit images to ignore certain parts (e.g. for training). In these kinds of images the lowest bit indicates whether a pixel is ignored (set to 1) or used (set to 0). Here, too, you can use our drawing functions to set/clear this bit.
Regarding 2.: here we use functions that create new images, as this often results in better processing speed for the algorithms that do the actual processing. Also, such ROIs may be complicated (meaning slow) for random access. An example of such a function is
So, what did you have in mind regarding irregular shapes? And what kind of problems did you have?
Sorry for the long delay. What started as a Blob application seems to be a good Polimago application. I didn’t expect that - I am looking at variable dark stains and Polimago is remarkably good at finding them.
I can see how binarisation and logic operations achieve a mask in FBlob, but what about Polimago? Are you proposing a polar unwrap and then applying Polimago to the result? I imagine I should train Polimago on polar-unwrapped images so it learns the distortion?
I’ve not done polar unwrapping before - does that give me access to the original pixel co-ordinates? (That may not be needed, I am just wondering).
Thank you for the help.
@JonV Stains, polar unwrapping and Polimago - that was one of the applications you presented at the CVB user meeting, right?
Yes, this has been done before (and you are right - sometimes the machine learning solutions are not exotic applications, just difficult or awkward ones - and sometimes it is not obvious until you try).
Indeed, you should train on the images as they will be presented to Polimago, so after unwrapping. The polar unwrap functionality is available in Image Manager (there’s a nice VB Polar Transformation example in the tutorials for your reference). Depending on the application you might need to perform a search first to locate the centre of the polar unwrap - Polimago search could be a good choice for that too.
You don’t keep the co-ordinates; it is a new image with ‘normal’ co-ordinates.
I hope that is ok, let me know if you need more.
… and the conversion of coordinates is a matter of not very advanced math: every pixel (or, to be precise, the center of every pixel…) in the result image of a polar unwrap corresponds to an angle and a radius (in this case read “distance to the polar unwrap center”). If you still know the parameters rMin and rMax (and optionally alpha0 and alphaTotal) that have been used in the CreatePolarImageEx call that did the unwrap, then determining the angle and the radius is fairly straightforward:
#include <cmath> // for std::cos / std::sin
// imgUnwrapped = result image of the polar unwrap operation
// xu, yu = coordinates in imgUnwrapped for which to calculate x, y in the original image
// input parameters of the CreatePolarImageEx call that created imgUnwrapped:
// cx, cy, alpha0, alphaTotal, rMin, rMax
auto uMaxX = static_cast<double>(ImageWidth(imgUnwrapped) - 1);
auto uMaxY = static_cast<double>(ImageHeight(imgUnwrapped) - 1);
auto alpha = static_cast<double>(xu) / uMaxX * alphaTotal + alpha0;
auto r = static_cast<double>(yu) / uMaxY * (rMax - rMin) + rMin;
auto x = cx + r * std::cos(alpha);
auto y = cy + r * std::sin(alpha);
If you used CreatePolarImage instead of CreatePolarImageEx, alphaTotal simply amounts to 360°. In the object-oriented APIs for C#, C++ and Python (see here) these two functions correspond to the PolarTransform extension/static member method on the Image object (C# and C++) or the cvb.polar_transform function (Python).