Pixel Positions


How can I get the X,Y pixel positions at the ends of the drops, as shown in the picture below?

Please ignore the watermark; it won’t be present on our actual system.


Hi @Siva,

This is a bit hard to tell with only one image, but you could try the CVB ShapeFinder tool.
You can find an example in your local CVB installation under %cvb%Tutorial\ShapeFinder\CSharp\CSShapeFinder\bin\Release.



With adequate filtering (e.g. erosion), the image could also be pre-processed so that the slightly bigger end of the drop is isolated. :shushing_face:

Then the exact position can easily be measured using CVB Blob.
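To illustrate the idea, here is a minimal pure-Python sketch (not the CVB API) of how a 3×3 binary erosion removes a thin jet while the bulbous drop end survives. The tiny synthetic frame and the `erode` helper are assumptions for illustration only:

```python
def erode(img, k=3):
    """k x k binary erosion: a pixel stays 1 only if its whole neighbourhood is 1."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            if all(img[y + dy][x + dx]
                   for dy in range(-r, r + 1)
                   for dx in range(-r, r + 1)):
                out[y][x] = 1
    return out

# Tiny synthetic frame: a 1-pixel-wide jet ending in a 3x3 "drop".
frame = [[0] * 7 for _ in range(8)]
for y in range(5):
    frame[y][3] = 1          # thin jet
for y in range(5, 8):
    for x in range(2, 5):
        frame[y][x] = 1      # bulbous drop end

eroded = erode(frame)
# After erosion only the drop centre survives; the thin jet is gone.
survivors = [(x, y) for y, row in enumerate(eroded) for x, v in enumerate(row) if v]
print(survivors)  # [(3, 6)]
```

On the real frames a blob tool would then only see the isolated drop end, so its position can be read off directly.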



We jet different materials on our print head and not all of them end up with the same drop shape.

Tim: I tried ShapeFinder, but the accuracy is not good unless I manually select the region.

Moli: The Blob function did a better job, but it didn’t work across different shapes, again unless I manually select the region.

I am wondering whether there is a built-in function that implements contour-tracing algorithms such as “fast contour tracing” or similar, where the maximum Y pixel position (in our case) would give us what we need.
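As a side note, if only the maximum-Y foreground pixel is needed, a full contour trace may not be necessary: a bottom-up scan of the binarized frame finds it directly. A minimal pure-Python sketch (the `frame` and helper name are illustrative assumptions, not CVB functions):

```python
def lowest_foreground_pixel(img):
    """Return (x, y) of the bottom-most foreground pixel of a binarized frame.

    `img` is a list of rows with foreground pixels != 0. Scanning rows from
    the bottom up means we can stop at the first hit, so for a drop that sits
    well above the image border only a few rows are ever visited.
    """
    for y in range(len(img) - 1, -1, -1):
        for x, v in enumerate(img[y]):
            if v:
                return x, y
    return None  # no foreground found

frame = [
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
print(lowest_foreground_pixel(frame))  # (2, 3)
```

Because the scan touches only the rows below the drop tip, the per-frame cost is a small fraction of the 1456×1088 pixel count, which should leave headroom at 68 fps even in C#.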

Will there be a major performance impact on the CVB display function if we implement these algorithms in C# (in our current application) on live video (1456x1088 @ 68 fps)?


Hi Siva,

I need some more information to help you. Is the foreground (drop) always as well separated from the background as in the provided image? Do you need to consider physically possible shapes of the drop when determining its lowest coordinate?

Since you mention performance and live video, what are your requirements on processing speed?


Hi Phil,

Yes, the foreground (drop) is always separated from the background.
Yes, we need the lowest pixel position of the drop.

We provide our software to customers to install on their own machines. Unless we specify a minimum RAM and processing power, they install it on their ordinary work or personal computers.


Hi Siva,

Can you tell us a little bit more about why the blob detection didn’t work well enough?
If the drop (foreground) is always clearly separated from the background, it should be possible to segment it with global pre-processing filtering algorithms.

I could imagine solving your case with the following combination:

  1. binarization
  2. eroding & dilating
  3. blob detection
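The three steps above can be sketched end to end in plain Python (again not the CVB API; the threshold value, the synthetic frame, and the BFS-based blob labelling are illustrative assumptions):

```python
from collections import deque

def binarize(gray, thresh):
    """Step 1: global threshold -> binary image."""
    return [[1 if v > thresh else 0 for v in row] for row in gray]

def erode3(img):
    """Step 2: 3x3 binary erosion to strip thin structures and noise."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if all(img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y][x] = 1
    return out

def largest_blob(img):
    """Step 3: 4-connected labelling via BFS; returns the biggest blob's pixels."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for y in range(h):
        for x in range(w):
            if img[y][x] and not seen[y][x]:
                blob, q = [], deque([(x, y)])
                seen[y][x] = True
                while q:
                    cx, cy = q.popleft()
                    blob.append((cx, cy))
                    for nx, ny in ((cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)):
                        if 0 <= nx < w and 0 <= ny < h and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((nx, ny))
                if len(blob) > len(best):
                    best = blob
    return best

# Synthetic 8-bit frame: a bright 5x5 drop on a dark background.
gray = [[0] * 8 for _ in range(8)]
for y in range(2, 7):
    for x in range(2, 7):
        gray[y][x] = 200

binary = binarize(gray, 128)
blob = largest_blob(erode3(binary))
tip = max(blob, key=lambda p: p[1])  # lowest point of the drop = maximum Y
print(tip)
```

In production the labelling step would be handled by CVB Blob; the sketch just shows how the pipeline stages feed into each other.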


EDIT: Sorry, a dilation is actually useless (or even counterproductive) when the objective is only to measure the position of the drop. An erosion should be enough to isolate the drop. :slight_smile: