I need some guidance for the following use case:
I have a C6 sensor and an object on an X/Y stage.
I want to scan the moving object and, as output, produce a text file containing a 2D matrix representing the Z coordinates of the scanned object.
I'm using CVB 3rd Gen with multipart acquisition.
The hard points:
Timing: currently I acquire from the stream and use a delay to define the frequency at which profiles are acquired, like this:
try
{
    _stream = ((GenICamDevice)_device).GetStream<CompositeStream>(0);
    _isAcquisitionStarted = true;

    // Configure the ring buffer of the stream we actually acquire from
    var ringBuffer = _stream.RingBuffer;
    ringBuffer.LockMode = RingBufferLockMode.On;

    _stream.Start();
    for (int i = 0; i < numberOfImages; i++)
    {
        var returnedImages = ReturnImageFromStream(_stream);
        if (returnedImages != null)
        {
            _RangeImages[i] = returnedImages[0];
            _ReflectanceImages[i] = returnedImages[1];
            _ScatterImages[i] = returnedImages[2];
        }
        Thread.Sleep(delay);
    }
    _stream.TryAbort();
    _stream.Dispose();
}
The ReturnImageFromStream(_stream) method:
private List<Image> ReturnImageFromStream(CompositeStream stream)
{
    using (Composite composite = stream.Wait(out WaitStatus status))
    {
        if (status != WaitStatus.Ok)
            return null; // timeout or abort: no valid composite delivered

        using (MultiPartImage image = MultiPartImage.FromComposite(composite))
        using (NodeMapDictionary nodeMaps = NodeMapDictionary.FromComposite(composite))
        {
            var images = new List<Image>();
            foreach (var part in image.Parts)
            {
                if (part is Image partImage)
                    images.Add(partImage.Clone()); // deep copy so the part survives disposal
            }
            return images;
        }
    }
}
Is this a good approach, or is it better to use a software trigger to get the profiles at the desired delay?
Image extraction: am I extracting the images the right way?
I want to get the three channels: Range, Reflectance, Scatter.
How do I construct the "ZMap", which represents the height map for the profiles?
Is this done by constructing a point cloud?
I would not recommend using Thread.Sleep, as it is very imprecise. There are two time domains to distinguish: one is your PC, where timing depends on how long your procedures take; the other is the camera itself. A more convincing approach is to set a fixed line rate on the camera, giving a fixed time delay between two lines. This is quite precise. As I understand it, you are currently acquiring individual profiles instead of frames containing multiple profiles, is that correct? In most cases it is better to use frames and to time the delay between two profiles on the camera. On the receiving side you can then split the frame into the individual profiles in software. From the timing perspective, a fixed line rate on the camera makes you independent of random delays on your system.
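The splitting step mentioned above can be sketched as follows. This is a minimal illustration, not CVB API: it assumes the frame's pixel data has already been copied into a ushort[,] (for example from the range image part), with each row being one profile.

```csharp
using System.Collections.Generic;

static class FrameSplitter
{
    // Hypothetical sketch: a frame delivered by the camera contains several
    // profiles stacked as rows; split it into individual profiles.
    public static List<ushort[]> SplitFrameIntoProfiles(ushort[,] frame)
    {
        int rows = frame.GetLength(0);
        int cols = frame.GetLength(1);
        var profiles = new List<ushort[]>(rows);
        for (int y = 0; y < rows; y++)
        {
            var profile = new ushort[cols];
            for (int x = 0; x < cols; x++)
                profile[x] = frame[y, x];
            profiles.Add(profile);
        }
        return profiles;
    }
}
```

With a fixed line rate on the camera, the spacing between the rows of one frame is deterministic, which is exactly what the Thread.Sleep approach cannot guarantee.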
Using multipart acquisition to receive the three image parts is the correct approach. Currently you are building an array of images that is processed later; there is also the option to process each image directly after receiving it. That way the application is not as quickly limited by the available RAM.
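Since your end goal is a text file with a 2D matrix of Z values, one way to avoid buffering everything in RAM is to write each profile to the file as soon as it arrives. A minimal sketch, where AcquireNextProfile() is a placeholder for your own acquisition/extraction call:

```csharp
using System.IO;

// Hypothetical sketch: stream each received profile straight into the
// output text file instead of keeping all images in memory first.
using (var writer = new StreamWriter("zmap.txt"))
{
    for (int i = 0; i < numberOfImages; i++)
    {
        ushort[] profile = AcquireNextProfile(); // placeholder, not CVB API
        writer.WriteLine(string.Join("\t", profile)); // one matrix row per line
    }
}
```

Whether raw range values can be written directly or must first be converted to calibrated Z coordinates depends on the calibration step discussed below in the thread.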
There is a CVB tutorial that shows how to create a Z-map from a range map. First apply the intrinsic calibration to the range map to obtain a point cloud, then project the point cloud onto a 2D image to obtain the Z-map: (CVB installation folder) %CVB%\Tutorial\Foundation\Cvb.Net\Metric3D\bin\x64\Release
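The projection step itself (point cloud to Z-map) can be illustrated without library helpers. This is only a sketch of the principle, assuming the calibrated cloud is available as parallel X/Y/Z arrays; the Metric3D tutorial does the same with CVB's own classes:

```csharp
using System;

static class ZMapBuilder
{
    // Bin 3D points into a regular X/Y grid, keeping one Z value per cell
    // (here: the highest point). All names are for illustration only.
    public static double[,] ProjectToZMap(double[] xs, double[] ys, double[] zs,
                                          int rows, int cols,
                                          double xMin, double xMax,
                                          double yMin, double yMax)
    {
        var zMap = new double[rows, cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                zMap[r, c] = double.NaN;      // NaN marks empty cells

        for (int i = 0; i < zs.Length; i++)
        {
            int c = (int)((xs[i] - xMin) / (xMax - xMin) * (cols - 1));
            int r = (int)((ys[i] - yMin) / (yMax - yMin) * (rows - 1));
            if (r < 0 || r >= rows || c < 0 || c >= cols)
                continue;                      // point outside the grid
            if (double.IsNaN(zMap[r, c]) || zs[i] > zMap[r, c])
                zMap[r, c] = zs[i];
        }
        return zMap;
    }
}
```

The resulting double[,] can then be written row by row to produce the text-file matrix described in the question; empty cells may need a fill value or interpolation.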
Hi Simon, and thank you for the reply.
I agree regarding Thread.Sleep. Are there any examples of how to set the line rate?
I'm not familiar with how to use frames; can you give an example of that?
This topic is actually not CVB related and very camera specific. The scope of this forum is only CVB.
In case you need further assistance with C6-specific questions, I recommend contacting de.support@stemmer-imaging.com.
There is a document attached to the thread from the forum link I posted that covers the initial usage of a C6 camera: https://forum.commonvisionblox.com/t/switching-from-automation-technology-c5-to-c6-cameras-in-cvb/1868/9
To set the line rate, search the camera node map for "Acquisition Line Rate". You need to enable "Acquisition Line Rate Enable" first.
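In CVB .NET this can be done via the device node map. A minimal sketch; the node names below follow the GenICam SFNC ("AcquisitionLineRateEnable", "AcquisitionLineRate") and the line rate value is an example, so verify both against the C6's node map (e.g. in the GenICam Browser):

```csharp
using Stemmer.Cvb;
using Stemmer.Cvb.Driver;
using Stemmer.Cvb.GenApi;

// Sketch: enable and set a fixed acquisition line rate on the camera.
using (Device device = DeviceFactory.Open("GenICam.vin"))
{
    NodeMap nodeMap = device.NodeMaps[NodeMapNames.Device];

    // Enable the line rate control first, as described above.
    if (nodeMap["AcquisitionLineRateEnable"] is BooleanNode enable)
        enable.Value = true;

    // Then set the rate (profiles per second); example value only.
    // Depending on the camera this node may be a FloatNode or IntegerNode.
    if (nodeMap["AcquisitionLineRate"] is FloatNode lineRate)
        lineRate.Value = 200.0;
}
```

With this set, the camera delivers lines at a fixed rate and the Thread.Sleep in the acquisition loop becomes unnecessary.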