Foundation FBlob

Hello Cvb!

We’ve been making use of the Foundation FBlob detector here and it is generally working well.

Occasionally, when presented with a very dusty image, the blob detector can take a long time to return. I see that there is an FBlobSetMaxExecTime(blob, timeoutInMilliseconds) routine. However, even after setting the timeout, the blob detector still occasionally takes hundreds of milliseconds to complete.

Is there any way to force execution to stop after, say, 20 ms? We’re not interested in the images where the blob detector takes too long, as they are always too dusty to be of use to us.

The code snippet is as follows…

    // Set the maximum execution time
    iReturn = Foundation.FBlobSetMaxExecTime(blob, 20);
    if (iReturn < 0)
    {
        ErrorHandler(null, "Exception in Foundation.FBlobSetMaxExecTime\r\n" + BlobErrorCode(iReturn));
        return false;
    }

    // Run the blob detection
    iReturn = Foundation.FBlobExec(blob);
    if (iReturn < 0)
    {
        ErrorHandler(null, $"Exception in Blob.Exec for X,Y={eGrab.PositionX},{eGrab.PositionY}\r\n" + BlobErrorCode(iReturn));
        return false;
    }

Thanks in advance for any suggestions,
Ian

Hi @ianw ,

Maybe this function is what you need: FBlobSetMaxExecTime

Cheers
Chris

Hi @ianw

There is no (more) reliable way to interrupt execution. What happens internally is that, while the blob tool goes through the image line by line, there are interruption points where it checks whether the preselected time limit has already been reached. However, to prevent this check from having too much influence on the processing time, it is not carried out very frequently.

I presume that what happens in your case is that the time spent between two checks exceeds the processing time limit that you have set (which in principle is possible if you have a large image and an extremely high number of very small blobs - the kind of situation that may occur if you have a very noisy/dusty image).

Is there a way of checking for “noisy image” situations before the blob processing (e.g. by looking at the histogram of the image or the histogram of a small part of the image)?
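To illustrate the idea, a minimal sketch of such a pre-check on a plain 8-bit grayscale buffer could look like the following. The buffer layout, the threshold values and the 5 % cut-off are assumptions that would need tuning for your images; none of this is part of the CVB API.

    // Sketch of a histogram-based "too dusty" pre-check.
    // All thresholds are placeholders that would have to be tuned.
    static bool LooksTooDusty(byte[] pixels,
                              int lowThreshold = 40,
                              int highThreshold = 200,
                              double maxMidFraction = 0.05)
    {
        var histogram = new int[256];
        foreach (var p in pixels)
            histogram[p]++;

        // Treat pixels between the assumed background and foreground bands
        // as potential dust/noise and count their share of the image.
        long mid = 0;
        for (int g = lowThreshold; g <= highThreshold; g++)
            mid += histogram[g];

        return (double)mid / pixels.Length > maxMidFraction;
    }

If that returns true for a frame, you could skip FBlobExec for it entirely (or run the check on a small sub-region only, to keep its cost well below your frame budget).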


Hi @Chris

FBlobSetMaxExecTime is what @ianw already uses, but the granularity with which that limit is enforced is fairly coarse, and 20 ms is probably below it in this case.

You are right, I actually only read the question without paying much attention to the given code snippet, my bad.
Thank you for the clarification however.


Actually the link is helpful, as the documentation states that the minimum time spent is the time required for binarization - something I did not remember when I wrote my reply, but it is of course true and adds to the other effect I described…

Hi ianw,

since you have a dusty image, you may be able to detect it by filtering with a mean filter and measuring the difference between the filtered and the original image. This may be a way to detect critical images before spending a long time processing them.
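A rough sketch of that idea on a plain grayscale buffer (requires using System; the width/height handling is simplified and the rejection threshold you compare the result against would be an assumption to tune):

    // 3x3 box (mean) filter; the mean absolute difference between the
    // filtered and the original image grows with high-frequency content
    // such as dust specks. Border pixels are skipped for simplicity.
    static double MeanAbsDiffToBoxFiltered(byte[] img, int width, int height)
    {
        long sumDiff = 0;
        for (int y = 1; y < height - 1; y++)
        {
            for (int x = 1; x < width - 1; x++)
            {
                int acc = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        acc += img[(y + dy) * width + (x + dx)];

                sumDiff += Math.Abs(img[y * width + x] - acc / 9);
            }
        }
        return (double)sumDiff / ((width - 2) * (height - 2));
    }

Frames where this value exceeds some empirically chosen limit could then be rejected before FBlobExec is called.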

Thanks for the discussion.

@Phil, I agree there is probably a statistical method to detect the problem images; I’ll see if we’ve got a spare 10ms available for that.

The other solution we’re considering is to monitor a time-averaged blob-detection time and notify the user to clean the dust off if the blob detector is taking too long: if blob detection takes longer than 100 ms per frame over, say, 30 frames, then there is too much dust.
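Something along these lines, perhaps (rough sketch, requires using System.Collections.Generic and System.Linq; the class and method names are made up, and the 30-frame window and 100 ms limit are just the numbers from above):

    // Keeps the last 30 blob execution times and flags "too much dust"
    // when their average exceeds 100 ms.
    class DustMonitor
    {
        private readonly Queue<double> _blobTimesMs = new Queue<double>();

        // Call once per frame with the measured FBlobExec duration
        // (e.g. taken from a Stopwatch around the call).
        public bool RecordAndCheck(double elapsedMs)
        {
            _blobTimesMs.Enqueue(elapsedMs);
            if (_blobTimesMs.Count > 30)
                _blobTimesMs.Dequeue();

            // Only judge once a full 30-frame window has been collected.
            return _blobTimesMs.Count == 30 && _blobTimesMs.Average() > 100.0;
        }
    }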

Maybe you could start the blob detection in its own task; then you are free to simply kill the task whenever you like.

Not the nicest way, but it might solve your problem.

@Chris I am not sure that @ianw will get down to the targeted 20 ms granularity with this approach, as threading operations (creating and aborting a thread) also tend to take a significant amount of time. This is often worked around with a thread pool - but if the plan is to abort long-runners, then I guess we are talking about create/abort.

But having the blob execution in a pool thread and just abandoning the result if it took too long to process might be an alternative if the computational resources allow for it.
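A minimal sketch of that run-and-abandon variant, assuming the same blob handle as in the snippet above and the 20 ms budget that was mentioned (requires using System and System.Threading.Tasks):

    // Run FBlobExec on a pool thread and stop waiting after ~20 ms.
    // The abandoned task keeps running until FBlobExec itself returns,
    // so the blob handle must not be reused or freed until it has finished.
    var blobTask = Task.Run(() => Foundation.FBlobExec(blob));

    if (!blobTask.Wait(TimeSpan.FromMilliseconds(20)))
    {
        // Too slow: treat the frame as too dusty and move on; the result
        // is simply ignored (FBlobExec offers no cooperative cancellation).
        return false;
    }

    int iReturn = blobTask.Result;
    // ...existing error handling for iReturn < 0 goes here...

The caveat in the comment is the important one: since the call cannot be cancelled, each abandoned frame still occupies a pool thread and the blob object until FBlobExec returns on its own.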
