AT’s 3D cameras can deliver several kinds of image data to the host: range information, intensity information or laser line thickness information. These output channels (data channels, DCs) can be selected individually or in combination. Each enabled DC is written to a separate image row, so with multiple DCs enabled the camera delivers a multi-information image that must be split on the host side afterwards.
For detailed information regarding the data channel assignment DC0-DC2, please refer to the manufacturer’s sensor manual.
The following two functions demonstrate one way (C#) to split an image consisting of multiple DCs into its sub-images, depending on the bit depth.
Split 8Bit Image
private static bool Split8BitImage(Cvb.Image.IMG cameraImg, ref Cvb.Image.IMG[] singleImages)
{
  // Define variables
  int numSingleImages = singleImages.GetLength(0);
  IntPtr baseIn;
  int xIncIn, yIncIn;
  int imageWidth = Cvb.Image.ImageWidth(cameraImg);
  int imageHeight = Cvb.Image.ImageHeight(cameraImg);
  // The image height might not be a multiple of the number of requested images;
  // if so, trim it to the largest usable multiple
  if (imageHeight % numSingleImages != 0)
    imageHeight = (imageHeight / numSingleImages) * numSingleImages;
  // Get linear access to the base image
  Cvb.Utilities.GetLinearAccess(cameraImg, 0, out baseIn, out xIncIn, out yIncIn);
  // Init a pointer array holding the base address of every sub-image
  IntPtr[] baseOut = new IntPtr[numSingleImages];
  // Prepare variables
  int xIncOut, yIncOut;
  xIncOut = yIncOut = 0;
  // Create the sub-images and get linear access to them
  for (int i = 0; i < numSingleImages; i++)
  {
    // 8 bit images
    Cvb.Image.CreateGenericImageDT(1, imageWidth, imageHeight / numSingleImages, 8, out singleImages[i]);
    Cvb.Utilities.GetLinearAccess(singleImages[i], 0, out baseOut[i], out xIncOut, out yIncOut);
  }
  // Split the data: row y of the input belongs to sub-image y % numSingleImages
  unsafe
  {
    for (int y = 0; y < imageHeight; y++)
    {
      for (int x = 0; x < imageWidth; x++)
      {
        // grayVal is the address of the source pixel in the combined image
        // (the x/y increments are given in bytes, hence the byte* arithmetic)
        byte* grayVal = (byte*)baseIn + x * xIncIn + y * yIncIn;
        // pixel is the address of the target pixel in the matching sub-image
        byte* pixel = (byte*)baseOut[y % numSingleImages] + x * xIncOut + (y / numSingleImages) * yIncOut;
        // Copy the gray value
        *pixel = *grayVal;
      }
    }
  }
  return true;
}
Split 16Bit Image
private static bool Split16BitImage(Cvb.Image.IMG cameraImg, ref Cvb.Image.IMG[] singleImages)
{
  // Define variables
  int numSingleImages = singleImages.GetLength(0);
  IntPtr baseIn;
  int xIncIn, yIncIn;
  int imageWidth = Cvb.Image.ImageWidth(cameraImg);
  int imageHeight = Cvb.Image.ImageHeight(cameraImg);
  // The image height might not be a multiple of the number of requested images;
  // if so, trim it to the largest usable multiple
  if (imageHeight % numSingleImages != 0)
    imageHeight = (imageHeight / numSingleImages) * numSingleImages;
  // Get linear access to the base image
  Cvb.Utilities.GetLinearAccess(cameraImg, 0, out baseIn, out xIncIn, out yIncIn);
  // Init a pointer array holding the base address of every sub-image
  IntPtr[] baseOut = new IntPtr[numSingleImages];
  // Prepare variables
  int xIncOut, yIncOut;
  xIncOut = yIncOut = 0;
  // Create the sub-images and get linear access to them
  for (int i = 0; i < numSingleImages; i++)
  {
    // 16 bit images
    Cvb.Image.CreateGenericImageDT(1, imageWidth, imageHeight / numSingleImages, 16, out singleImages[i]);
    Cvb.Utilities.GetLinearAccess(singleImages[i], 0, out baseOut[i], out xIncOut, out yIncOut);
  }
  // Split the data: row y of the input belongs to sub-image y % numSingleImages
  unsafe
  {
    for (int y = 0; y < imageHeight; y++)
    {
      for (int x = 0; x < imageWidth; x++)
      {
        // grayVal is the address of the source pixel in the combined image
        // (the x/y increments are given in bytes, hence the byte* arithmetic)
        ushort* grayVal = (ushort*)((byte*)baseIn + x * xIncIn + y * yIncIn);
        // pixel is the address of the target pixel in the matching sub-image
        ushort* pixel = (ushort*)((byte*)baseOut[y % numSingleImages] + x * xIncOut + (y / numSingleImages) * yIncOut);
        // Copy the gray value
        *pixel = *grayVal;
      }
    }
  }
  return true;
}
Please note that these functions require prior knowledge of the selected number of DCs. This information can be determined by a GenICam query of the value of the specific features (EnableDC0-DC2), as sketched below.
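A minimal cvb.NET sketch of such a query, assuming the features are exposed as boolean nodes named EnableDC0 to EnableDC2 (verify the exact names and node types in your sensor’s nodemap):

using Stemmer.Cvb;
using Stemmer.Cvb.Driver;
using Stemmer.Cvb.GenApi;

Device device = DeviceFactory.Open("GenICam.vin");
NodeMap nodeMap = device.NodeMaps[NodeMapNames.Device];
int numDCs = 0;
foreach (var feature in new[] { "EnableDC0", "EnableDC1", "EnableDC2" })
{
  // count every data channel that is currently switched on
  if (nodeMap[feature] is BooleanNode dc && dc.Value)
    numDCs++;
}

The resulting numDCs is exactly the value the split functions above expect as the length of the singleImages array.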
CVB offers functionality to monitor connected GenICam devices in real time. This allows notifications as soon as cameras are disconnected from or reconnected to the host. It is very useful in automated processes where the communication with a device is temporarily lost, e.g. due to a power cycle, and needs to be re-established automatically.
Connection monitoring in CVB is realized via the INotify interface of the driver with its DEVICE_DISCONNECTED and DEVICE_RECONNECT events. Sample code for the C-API (C++, C#) can be found in the corresponding section of the CVB User Guide.
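For the object oriented cvb.NET API, a minimal sketch could look as follows; it assumes the Stemmer.Cvb Notify dictionary with its DeviceDisconnected/DeviceReconnect entries, so please verify the exact member names in the cvb.NET documentation:

using Stemmer.Cvb;
using Stemmer.Cvb.Driver;

Device device = DeviceFactory.Open("GenICam.vin");
// react to disconnect/reconnect notifications from the driver's INotify interface
device.Notify[NotifyDictionary.DeviceDisconnected].Event += (s, e) =>
  Console.WriteLine("Device disconnected");
device.Notify[NotifyDictionary.DeviceReconnect].Event += (s, e) =>
  Console.WriteLine("Device reconnected");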
GigE Vision Events are typically used to synchronize the host application with events happening in the device. A typical use case in machine vision applications is a host that waits to be notified in real time of the sensor’s exposure end so it can move the inspected part on a conveyor belt.
This post describes how to register a callback function to monitor an event of an AT 3D camera with CVB.
For AT cameras there are a number of different events available:
| Event Name | Event ID | Description |
| --- | --- | --- |
| AcquisitionStart | 36882 | Frame acquisition is started |
| AcquisitionEnd | 36883 | Frame acquisition is terminated |
| TransferStart | 36884 | Frame transfer from the camera is started |
| TransferEnd | 36885 | Frame transfer is terminated |
| AOITrackingOn | 36886 | The AOI tracking process is started and the laser line image is valid for AOI alignment |
| AOITrackingOff | 36887 | The AOI tracking process is stopped and the AOI position is not updated anymore |
| AOISearchFailed | 36888 | The AOI search failed to detect the laser line |
| AutoStarted | 36889 | Frame acquisition is initiated through AutoStart |
The number of events might increase with newer firmware versions. Please refer to the current sensor manual for a list of all supported events.
Implementation
As @parsd already pointed out in this posting, there are two ways to register events with CVB: either the preferred way via the GenApi with NRegisterUpdate(), or via the INotify interface. Which one to take depends on whether or not the camera supports event handling via the GenApi.
In general, the 3D cameras of AT support the standardized way of event handling. However, this requires a current firmware version running on the sensor that includes timestamps for each event. Please refer to the Firmware Release Notes to check whether your camera supports the "EventNotification via GenICam node access".
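As a rough cvb.NET sketch of the GenApi way (the feature names EventSelector, EventNotification and EventAcquisitionStart follow the GenICam SFNC; whether the AT firmware uses exactly these names is an assumption, so check the device nodemap):

NodeMap nodeMap = device.NodeMaps[NodeMapNames.Device];
// select the event on the device and switch its notification on
((EnumerationNode)nodeMap["EventSelector"]).Value = "AcquisitionStart";
((EnumerationNode)nodeMap["EventNotification"]).Value = "On";
// the callback fires whenever the event data node is updated by the device
nodeMap["EventAcquisitionStart"].Updated += (sender, args) =>
  Console.WriteLine("AcquisitionStart event received");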
For the object oriented CVB-APIs there is another (and even easier!) example to register an Event Callback (CVB++) to a node. It can be found in this post.
When working with AT cameras it is important to understand the influence of the camera’s subpixel parameter on measurement precision and calibration. Changing the number of subpixels can result in wrong metric values if the calibration file is not changed accordingly.
How to choose a matching number of subpixels
Range maps from AT cameras are transmitted with a fixed bit depth (8 bit or 16 bit). In most applications 16 bit is the recommended data format; however, the choice of format is a trade-off between height resolution and data rate.
The bit depth of a range map limits the amount of information that can be stored for each pixel (height value). Each height value is composed of the sensor row number and its subpixel position. The number of bits needed to unambiguously describe the sensor row as an integer value depends on the AOI height set in the camera (see the table below).
Once the AOI height is set, it determines the number of bits per pixel left for the subpixel information.
The parameter NumSubPixels has a valid range of 0 to 6 bits. This value determines the maximum possible resolution for the calculation of the laser line positions in the COG and FIR-Peak modes. More precisely, the resolution of these 3D modes is 1/(2^n) rows, where n is the number of subpixel bits.
| max. AOI height | max. subpixels (8 bit) | resolution (8 bit) | max. subpixels (16 bit) | resolution (16 bit) |
| --- | --- | --- | --- | --- |
| 3 | 6 | 0.015625 | 6 | 0.015625 |
| 7 | 5 | 0.03125 | 6 | 0.015625 |
| 15 | 4 | 0.0625 | 6 | 0.015625 |
| 31 | 3 | 0.125 | 6 | 0.015625 |
| 63 | 2 | 0.25 | 6 | 0.015625 |
| 127 | 1 | 0.5 | 6 | 0.015625 |
| 255 | 0 | 1 | 6 | 0.015625 |
| 511 | 0 | bit overflow | 6 | 0.015625 |
| 1023 | 0 | bit overflow | 6 | 0.015625 |
| 2047 | 0 | bit overflow | 5 | 0.03125 |
| 4095 | 0 | bit overflow | 4 | 0.0625 |
Conclusion: the smaller you can set your AOI height, the more subpixel bits you can use and therefore the more precisely the heights can theoretically be calculated. Using more rows than the remaining integer bits allow will lead to bit overflow and ambiguity. Note that the sum of the camera’s AOIHeight + AOIOffsetY must not exceed the maximum AOI height.
Calibration File
Knowledge of the number of used subpixels is required when applying a set of calibration parameters to a range map; for this reason it is included as a parameter in the calibration file. Hence, it is important that the number of subpixels set in the camera matches the number of subpixels set in the calibration file!
This parameter can be changed in the different data formats of the calibration files as follows:
It is important to understand that all 3D sensors from AT output uncalibrated 2D range maps with no relation to real-world units. However, calibrated metric 3D points can easily be obtained by applying a set of calibration parameters to such a range map.
When using an AT Compact Sensor (CS), these calibration parameters are calculated beforehand by the manufacturer and provided together with the camera. They are stored in the sensor’s memory and can be downloaded to a calibration file (.dat, .xml) with the manufacturer’s cxExplorer software (Device → Load/Save Calibration Metric…).
Alternatively, it is possible to download the parameters in the .dat format programmatically from the camera using CVB:
Load Calibration File from AT camera (cvb.NET)
// load driver
Device camera = DeviceFactory.Open("GenICam.vin");
NodeMap cameraNodeMap = camera.NodeMaps[NodeMapNames.Device];
// get calibration file from camera
var files = cameraNodeMap.GetAvailableFiles();
if (files.Contains("UserData"))
  cameraNodeMap.DownloadFile("CalibrationFile.dat", "UserData");
When using a modular AT camera setup, the user has to perform a calibration manually in order to obtain these parameters. In this case the CVB Metric Tool (part of the Foundation package) can be used. Please refer to the specific calibration post or the CVB documentation for further information.
Reconstructing Point Cloud
With the acquired range map and the matching calibration parameters it is possible to reconstruct the metrically correct 3D points. This can be done either with one of the existing GUI tutorials from CVB (e.g. VBCore3D) or programmatically in CVB:
Sample Code: Classic API (C#)
// load calibration file
Cvb.SharedCalibrator calibrator = null;
if (Cvb.Core3D.LoadCalibrator(fileName, out calibrator) < 0)
  MessageBox.Show("Error loading calibration file");
else
{
  // reconstruct point cloud
  Cvb.SharedComposite pointCloud = null;
  Cvb.Core3D.CreateCalibratedPointCloudFromRangeMap(rangeMap, calibrator, out pointCloud);
}
For better troubleshooting during the development of an application or while running live, it is recommended to add a few additional lines to your code.
Acquisition Health
When streaming with a GenICam compliant camera you can query statistics via the transport layer of Stemmer Imaging to monitor your acquisition health. This also applies to the AT GigE cameras.
With these statistics a user can easily detect when e.g. packets or images got lost during transmission. In a couple of posts @parsd described how to use them in cvb.NET: https://forum.commonvisionblox.com/t/getting-started-with-cvb-net/246/13?u=moli
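A small sketch of such a query in cvb.NET; the StreamInfo members used here are assumptions, see the linked post for the statistics your transport layer actually supports:

var stream = device.Stream;
stream.Start();
// ... acquire images ...
// query transport layer statistics to judge the acquisition health
var delivered = stream.Statistics[StreamInfo.NumBuffersDelivered];
var lost = stream.Statistics[StreamInfo.NumBuffersLost];
Console.WriteLine($"buffers delivered: {delivered}, lost: {lost}");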
Poll Specific Camera Parameters
Another way to monitor your device is to continuously check health data by polling the camera’s parameters.
The manufacturer recommends that the temperature does not exceed 65 °C during measurement. Furthermore, keep in mind that the dark current and noise performance of CMOS sensors degrades at higher temperatures.
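For example, the temperature could be polled like this in cvb.NET ("DeviceTemperature" is the GenICam SFNC feature name; whether the AT firmware exposes the temperature under this name is an assumption):

NodeMap nodeMap = device.NodeMaps[NodeMapNames.Device];
// poll the current sensor temperature and warn above the recommended limit
if (nodeMap["DeviceTemperature"] is FloatNode temperature && temperature.Value > 65.0)
  Console.WriteLine($"Warning: sensor temperature {temperature.Value:F1} °C exceeds 65 °C");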
The AT sensors feature a Chunk Data mode that provides additional information along with the acquired image data. The chunk data generated by the camera has the following format:
- ChunkImage
- 1…N x ChunkAcqInfo
- ChunkImageInfo
Depending on the camera mode (image or 3D), the chunk data block "ChunkAcqInfo" can be sent as follows:
- In image mode, the camera can send only one ChunkAcqInfo block per image frame.
- In 3D mode, the camera can send one ChunkAcqInfo block either per 3D frame ("OneChunkPerFrame") or per 3D profile ("OneChunkPerProfile").
The "ChunkImageInfo" is the last chunk data block sent by the camera and contains the following data:
- number of valid rows in ChunkImage
- number of valid ChunkAcqInfo blocks
- flags identifying the current frame as "Start" or "Stop" and the buffer status in AutoStart mode
The ChunkAcqInfo block consists of 32 bytes in total, containing the following data:
- 64 bit timestamp
- 32 bit frame counter
- 32 bit trigger coordinate
- 8 bit trigger status
- 32 bit I/O status
- 88 bit AOI information
The timestamp, frame counter, trigger coordinate, trigger status and I/O status are assigned at the start of every image integration.
When ChunkMode is disabled, the camera uses the regular GEV image protocol, which optionally supports the transfer of frames with variable height and payload.
When ChunkMode is enabled, the camera always sends the full payload, even if the ChunkImage or ChunkAcqInfo blocks contain only partially valid data. The number of valid ChunkImage rows and ChunkAcqInfo blocks can be read from ChunkImageInfo. For example, in Start/Stop mode with instant frame transmission the camera stops the frame acquisition as soon as the stop trigger occurs and transfers the complete contents of the internal image buffer; via the ChunkImageInfo data block it is possible to detect how many image rows and ChunkAcqInfo blocks in the payload buffer are valid. Note that the tag of a chunk has big endian byte order while its data has little endian byte order; an endian converter for the chunk data is not provided.
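The following cvb.NET example activates the chunk mode, acquires an image and reads the frame counter from the parsed ChunkAcqInfo block: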
using (var device = DeviceFactory.Open("GenICam.vin"))
{
  var nodemap = device.NodeMaps[NodeMapNames.Device];
  var chunkModeActive = nodemap["ChunkModeActive"] as BooleanNode;
  chunkModeActive.Value = true;
  var stream = device.Stream;
  stream.Start();
  try
  {
    stream.Wait();
    var deviceImage = device.DeviceImage;
    var chunks = GevChunkParser.Parse(deviceImage);
    var info = DereferenceAcqInfoOn(deviceImage, chunks.First(chunk => chunk.ID == ATC5AcqInfoChunk.ID));
    Console.WriteLine($"FrameCount: {info.FrameCount}");
  }
  finally
  {
    stream.Stop();
  }
}
With this helper method and struct:
private static unsafe ATC5AcqInfoChunk DereferenceAcqInfoOn(DeviceImage image, GevChunk chunk)
{
  Debug.Assert(chunk.ID == ATC5AcqInfoChunk.ID);
  Debug.Assert(chunk.Length >= sizeof(ATC5AcqInfoChunk));
  var chunkPtr = new IntPtr(image.GetBufferBasePtr().ToInt64() + chunk.Offset);
  return *(ATC5AcqInfoChunk*)chunkPtr;
}

[StructLayout(LayoutKind.Sequential, Pack = 1)]
struct ATC5AcqInfoChunk
{
  public const uint ID = 0x66669999;
  public uint TimeStampLow;
  public uint TimeStampHigh;
  public uint FrameCount;
  public int TriggerCoordinate;
  public byte TriggerStatus;
  public ushort DAC;
  public ushort ADC;
  public byte IntIdX;
  public byte AoiIdX;
  public ushort AoiYs;
  public ushort AoiDy;
  public ushort AoiXs;
  public ushort AoiThreshold;
  public byte AOIAlgorithm;
}
For this to work you need these three new classes @parsd put on github:
Hi,
there is an easier way to split the image: using CreateImageMap() with start points at line 0, line 1 and line 2, we can easily get the DC0, DC1 and DC2 images, like below:
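A hedged cvb.NET sketch of this idea (the identifiers img and numDC are illustrative; Map() is the cvb.NET counterpart of CreateImageMap() and creates a VPAT-mapped view without copying pixel data):

int numDC = 3; // number of enabled data channels
var subImages = new MappedImage[numDC];
for (int dc = 0; dc < numDC; dc++)
{
  // full-width rectangle starting at row dc; mapping it to img.Height / numDC
  // rows makes the VPAT select every numDC-th row for this channel
  var rect = new Rect(0, dc, img.Width - 1, img.Height - 1);
  subImages[dc] = img.Map(rect, new Size2D(img.Width, img.Height / numDC));
}

Keep in mind that such mapped images have non-linear VPATs, which matters for the pixel access questions discussed below.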
Extracting the information from TriggerStatus as a byte results in a decimal value between 0 and 255.
From this the information on the 6 status elements can already be read unambiguously; for easier interpretation the decimal number can be converted to binary.
In our example we receive the decimal byte value 193 from the TriggerStatus chunk value.
Decoded to binary, the value reads:
11000001
Each bit now describes the status of one of the elements. The logic is the following:
Out1 = high, Out0 = high, In1 = low, In0 = low, -, -, EncoderStatus = off/back, TriggerOverrun = true.
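A small C# sketch decoding these bits, using the assignment from the example above (bit 7 = Out1 down to bit 0 = TriggerOverrun; bits 2 and 3 are unused):

byte triggerStatus = 193; // binary 11000001, e.g. read from the ChunkAcqInfo block
bool triggerOverrun = (triggerStatus & 0x01) != 0; // bit 0
bool encoderForward = (triggerStatus & 0x02) != 0; // bit 1: EncoderStatus
bool in0High = (triggerStatus & 0x10) != 0; // bit 4
bool in1High = (triggerStatus & 0x20) != 0; // bit 5
bool out0High = (triggerStatus & 0x40) != 0; // bit 6
bool out1High = (triggerStatus & 0x80) != 0; // bit 7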
Hi support team,
when I use this method, how can I get the imgDC0, imgDC1 and imgDC2 pointers?
If I use
Dim imageDataDC1 As LinearAccessData = imgDC1.Planes(0).GetLinearAccess()
Dim basePtrDC1 As IntPtr = imageDataDC1.BasePtr
the error occurs at GetLinearAccess().
If you work with the LinearAccessData, keep the following in mind:
for Visual Basic .NET you need to use the helper functions provided in the System.Runtime.InteropServices.Marshal class for the increments.
When I use
Dim rect As New Rect(0, 1, img.Width - 1, img.Height - 1)
Image = img.Map(rect, Size)
Dim imageData As LinearAccessData = Image.Planes(0).GetLinearAccess() 'the error is at this line
Dim basePtr As IntPtr = imageData.BasePtr
it will show the error:
System.ArgumentException
HResult=0x80070057
Message=Operation Linear Access only supported on linear VPATs.
Source=Stemmer.Cvb
The problem is probably that pixels are skipped, so the mapped image is non-linear.
Can we use GetImageVPA() to get a pointer to the non-linearly mapped image?
Which function of CVB.Net can be used?
Hi @Sebastian,
could you give me an example of how to use GetVPATAccess()?
I am not familiar with this function and I cannot find a CVB sample that uses it.
Hi @silas,
thanks for your prompt answer.
How can I convert vpaAccessData to an “IntPtr” data type so that I can convert the CVB image format to other image formats?
Hey @Charlene!
I’ve come up with a possible workaround regarding the VPAT access: if you either copy or clone the image, you’ll be able to do linear access on the copied/cloned image and avoid VPAT access.
Note that VPAT access is one of the slower access modes, so linear access is preferred.
var copiedImage = image.Copy();
var clonedImage = image.Clone();
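Linear access on the copy then works as expected:

// the copied image owns contiguous memory, so its VPAT is linear again
LinearAccessData access = copiedImage.Planes[0].GetLinearAccess();
IntPtr basePtr = access.BasePtr;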