Working with 3D-Cameras of Automation Technology in CVB

NETWORK OPTIMIZATION

It is crucial to correctly set up a number of network parameters when using the high-performance 3D cameras of Automation Technology (and GigE Vision cameras in general) in order to guarantee the expected performance of your software. These parameters must be set on both sides: network card and camera.

Further information can be found in the GenICam standard itself or in the CVB GenICam User Guide.


CVB GenICam Driver Configuration

The required network efficiency parameters inside the camera are volatile and are lost when the camera is powered off. Hence, it is necessary to set the parameters after every camera reboot. This can be accomplished either manually via the GenICam Grid (e.g. in the GenICam Browser), programmatically with the GenApi, or automatically by the CVB GenICam driver.
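A minimal CVB.Net sketch of the programmatic route could look like this. Note that GevSCPSPacketSize and GevSCPD are the standard GigE Vision feature names for packet size and inter-packet delay; this is an assumption, so verify the names against your camera's nodemap.

using Stemmer.Cvb;
using Stemmer.Cvb.Driver;
using Stemmer.Cvb.GenApi;

// Re-apply the volatile streaming parameters after every camera boot
using (Device camera = DeviceFactory.Open("GenICam.vin"))
{
  NodeMap nodeMap = camera.NodeMaps[NodeMapNames.Device];

  if (nodeMap["GevSCPSPacketSize"] is IntegerNode packetSize)
    packetSize.Value = 8192;        // requires jumbo frames on NIC and switch
  if (nodeMap["GevSCPD"] is IntegerNode interPacketDelay)
    interPacketDelay.Value = 2000;  // one tick is 10 ns on AT cameras
}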

The GenICam driver can be configured to propagate important camera TL settings to the device at driver loading time. It is also able to determine these parameters automatically by querying the device; however, we recommend setting them statically in case of non-default configurations. Additionally, the driver has some more parameters which don't affect the camera operation.

To modify the respective values, open the file GenICam.ini, which can be found in the Drivers directory of your CVB installation (%CVB%\Drivers).

When using an AT 3D camera, we recommend setting the parameters as follows:

  • Color Mode = Raw: The raw color mode ensures that all information is transmitted from the camera and no auto-mapping to 8 bit is done on the images. Mandatory if users want to work on the original 16-bit range maps.
  • Packet Size = 8192: Using jumbo frames reduces overhead and increases the data rate. Note: the network adapter (and switch) must be set to jumbo packets as well!
  • Interpacket Delay = 2000: In case of corrupted frames, increasing the inter-packet delay is suggested; lowering the value might be necessary when the data rate is insufficient. The inter-packet delay is usually specified in GenICam timestamp counter ticks, whose duration commonly differs between camera models. For AT cameras one tick is 10 ns.

IMAGE ACQUISITION

All 3D cameras from Automation Technology follow the GenICam standard, which means that the image acquisition of the AT cameras in CVB is no different from that of any other GenICam device.

C-style API

Within the installation directory of CVB you can find a variety of example tutorials for the different programming languages that demonstrate the main concepts of CVB, including how to:

  • load driver and continuously acquire images (e.g. Image Manager → GrabConsole & SizeableDisplay)
  • run multiple cameras in parallel (e.g. Image Manager → CSharp → MultiCam)
  • set parameters of the camera (e.g. Hardware → GenICam → GenICamExample)

Please take a look at the corresponding section in the CVB User Guide.

Object Oriented APIs

New users are encouraged to use the object oriented APIs of CVB. Getting Started Guides include basic image acquisition concepts and can be found for CVB.Net and CVBpy in this forum. Tutorials for CVB++ can be found within the directory of current CVB installations.

However, please be aware that not all functionality of the C-API is covered by the new APIs yet! If you encounter missing tools, you may have to combine the object-oriented wrappers with CVB's classic C-API.


SPLIT DATA CHANNELS (DCs)

AT's 3D cameras can acquire and transmit multiple kinds of image data to the host: range information, intensity information, or laser line thickness information. All output channels can be selected individually and in combination. Each enabled DC is written to its own image row, resulting in interleaved multi-information images that must be split on the host side afterwards if multiple DCs are enabled.

For detailed information regarding the data channel assignment DC0-DC2, please take a look at the manufacturer's sensor manual.

The following two functions demonstrate one way (in C#) to split an image consisting of multiple DCs into its sub-images, depending on its bit depth.

Split 8Bit Image
private static bool Split8BitImage(Cvb.Image.IMG cameraImg, ref Cvb.Image.IMG[] singleImages)
{
   // Define variables
   int numSingleImages = singleImages.GetLength(0);
   IntPtr baseIn;
   int xIncIn, yIncIn;
   int imageWidth, imageHeight;

   imageWidth = Cvb.Image.ImageWidth(cameraImg);
   imageHeight = Cvb.Image.ImageHeight(cameraImg);

   // The image height might not be an exact multiple of the requested number
   // of sub-images; truncate to the largest usable multiple
   if (imageHeight % numSingleImages != 0)
     imageHeight = (imageHeight / numSingleImages) * numSingleImages;

   // Get linear access to the base image
   Cvb.Utilities.GetLinearAccess(cameraImg, 0, out baseIn, out xIncIn, out yIncIn);

   // Init a pointer array for every image
   IntPtr[] baseOut = new IntPtr[numSingleImages];

   // Prepare variables
   int xIncOut, yIncOut;
   xIncOut = yIncOut = 0;

   // Create the images and get access
   for (int i = 0; i < numSingleImages; i++)
   {
     // 8 bit images
     Cvb.Image.CreateGenericImageDT(1, imageWidth, imageHeight / numSingleImages, 8, out singleImages[i]);
     Cvb.Utilities.GetLinearAccess(singleImages[i], 0, out baseOut[i], out xIncOut, out yIncOut);
   }

   // split data
   unsafe
   {
     byte* grayval;  // byte*, not char*: a C# char is 16 bits wide
     byte* pixel;

     for (int y = 0; y < imageHeight; y++)
     {
       for (int x = 0; x < imageWidth; x++)
       {
        // 64-bit pointer arithmetic (casting IntPtr to int would truncate on x64)
        grayval = (byte*)((long)baseIn + x * xIncIn + y * yIncIn);
        // Assign correct address to pixel: row y belongs to sub-image y % numSingleImages
        pixel = (byte*)((long)baseOut[y % numSingleImages] + x * xIncOut + (y / numSingleImages) * yIncOut);
        // Write value of grayval to pixel address
        *pixel = *grayval;
       }
     }
   }
   return true;
}
Split 16Bit Image
private static bool Split16BitImage(Cvb.Image.IMG cameraImg, ref Cvb.Image.IMG[] singleImages)
{      
  // Define variables
  int numSingleImages = singleImages.GetLength(0);
  IntPtr baseIn;
  int xIncIn, yIncIn;
  int imageWidth, imageHeight;

  imageWidth = Cvb.Image.ImageWidth(cameraImg);
  imageHeight = Cvb.Image.ImageHeight(cameraImg);
  
  // The image height might not be an exact multiple of the requested number
  // of sub-images; truncate to the largest usable multiple
  if (imageHeight % numSingleImages != 0)
    imageHeight = (imageHeight / numSingleImages) * numSingleImages;

  // Get linear access to the base image
  Cvb.Utilities.GetLinearAccess(cameraImg, 0, out baseIn, out xIncIn, out yIncIn);

  // Init a pointer array for every image
  IntPtr[] baseOut = new IntPtr[numSingleImages];

  // Prepare variables
  int xIncOut, yIncOut;
  xIncOut = yIncOut = 0;

  // Create the images and get access
  for (int i = 0; i < numSingleImages; i++)
  {
	// For 16 bit images
	Cvb.Image.CreateGenericImageDT(1, imageWidth, imageHeight / numSingleImages, 16, out singleImages[i]);
	Cvb.Utilities.GetLinearAccess(singleImages[i], 0, out baseOut[i], out xIncOut, out yIncOut);
  }

  // split data
  unsafe
  {
	// For 16 bit images
	ushort* grayval;
	ushort* pixel;

	for (int y = 0; y < imageHeight; y++)
	{
	  for (int x = 0; x < imageWidth; x++)
	  {
		// grayval is the address of the pixel holding the value we want to access
		// 64-bit pointer arithmetic (casting IntPtr to int would truncate on x64)
		grayval = (ushort*)((long)baseIn + x * xIncIn + y * yIncIn);
		// Assign correct address to pixel: row y belongs to sub-image y % numSingleImages
		pixel = (ushort*)((long)baseOut[y % numSingleImages] + x * xIncOut + (y / numSingleImages) * yIncOut);
		// Write value of grayval to pixel address
		*pixel = *grayval;
	  }
	}
  }
  return true;
}

Please note that these functions require prior knowledge of the number of selected DCs. This can be determined by a GenICam query on the values of the corresponding features (EnableDC0-EnableDC2).
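For CVB.Net users, a minimal sketch of such a query could look like this (assuming an already opened Device named camera and the feature names EnableDC0 to EnableDC2 from the sensor manual):

// Count the enabled data channels before splitting
NodeMap nodeMap = camera.NodeMaps[NodeMapNames.Device];
int numSingleImages = 0;
foreach (string feature in new[] { "EnableDC0", "EnableDC1", "EnableDC2" })
{
  if (nodeMap[feature] is BooleanNode dc && dc.Value)
    numSingleImages++;
}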


CONNECTION MONITORING

CVB offers the functionality to monitor connected GenICam devices in real time. This allows notifications as soon as cameras get disconnected from or reconnected to the host. It can be very useful in automated processes if the communication with a device is temporarily lost (e.g. due to a power-off) and needs to be re-established automatically.

The connection monitoring of CVB is realized via the INotify interface of the driver, with its DEVICE_DISCONNECTED and DEVICE_RECONNECT events. Sample code is available for the C-API (C++, C#) and can be found in the corresponding section of the CVB User Guide.
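For CVB.Net, a rough sketch of such a subscription could look like the following; the NotifyDictionary identifiers are quoted from memory and are an assumption, the User Guide samples are authoritative:

// "device" is an opened Stemmer.Cvb Device
device.Notify[NotifyDictionary.DeviceDisconnected].Event += (sender, e) =>
  Console.WriteLine("Device disconnected - waiting for reconnect...");
device.Notify[NotifyDictionary.DeviceReconnect].Event += (sender, e) =>
  Console.WriteLine("Device reconnected");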


GIGE VISION EVENTS

GigE Vision Events are typically used to synchronize the host application with events happening in the device. A typical use case in machine vision applications is a host that waits to be notified in real time of the sensor's exposure end in order to move the inspected part on a conveyor belt.

This post describes how to register a callback function to monitor an event of an AT-3D Camera with CVB.

For AT cameras there are a number of different events available:

  • AcquisitionStart (ID 36882): frame acquisition is started
  • AcquisitionEnd (ID 36883): frame acquisition is terminated
  • TransferStart (ID 36884): frame transfer from the camera is started
  • TransferEnd (ID 36885): frame transfer is terminated
  • AOITrackingOn (ID 36886): the AOI tracking process is started and the laser line image is valid for AOI alignment
  • AOITrackingOff (ID 36887): the AOI tracking process is stopped and the AOI position is not updated anymore
  • AOISearchFailed (ID 36888): AOI search failed to detect the laser line
  • AutoStarted (ID 36889): frame acquisition is initiated through AutoStart

The number of events might increase with newer firmware versions. Please refer to the current sensor manual for all supported events.


Implementation

As @parsd already pointed out in this posting, there are two ways to register events with CVB: either via the GenApi with NRegisterUpdate() (the preferred way) or via the INotify interface. Which one to take depends on whether the camera supports event handling via the GenApi or not.

In general, the 3D cameras of AT support the standardized way of event handling. However, this requires a current firmware version running on the sensor that includes timestamps for each event. Please refer to the Firmware Release Notes to check whether your camera supports the "EventNotification via GenICam node access".
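As a hedged CVB.Net sketch of the GenApi route (the feature names EventSelector, EventNotification and EventAcquisitionStart follow the GenICam SFNC and are assumptions; verify them in your camera's nodemap):

// "device" is an opened Stemmer.Cvb Device
NodeMap nodeMap = device.NodeMaps[NodeMapNames.Device];

// Enable delivery of the AcquisitionStart event
((EnumerationNode)nodeMap["EventSelector"]).Value = "AcquisitionStart";
((EnumerationNode)nodeMap["EventNotification"]).Value = "On";

// The Updated callback fires whenever the camera delivers the event
nodeMap["EventAcquisitionStart"].Updated += (sender, e) =>
  Console.WriteLine("AcquisitionStart event received");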

For the object-oriented CVB APIs there is another (and even easier!) way to register an event callback (CVB++) on a node. It can be found in this post.


CORRECT USAGE OF SUBPIXELS

When working with AT cameras it is important to understand the influence of the camera's subpixel parameter on the measurement precision and its calibration. Changing the number of subpixels can result in wrong metric values if the calibration file isn't changed accordingly.

How to choose a matching number of subpixels

Range maps from AT are transmitted with a fixed bit depth (8 bit or 16 bit). In most applications 16 bit is the recommended data format; however, the choice of format is a trade-off between better height resolution and higher data rate.

The bit depth of a range map limits the amount of information that can be stored for each pixel (height), where each height value is described by the number of the row in the sensor and its subpixel position. The number of bits needed to unambiguously describe the sensor row as an integer value depends on the height of the AOI set in the camera (see table below).

Once the AOI height is set, it defines the number of bits per pixel left for the subpixel information.
The parameter NumSubPixels has a valid range of 0 to 6 bits. This value describes the maximum possible resolution for the calculation of the laser line positions in the COG and FIR-Peak modes; more precisely, the resolution of those 3D modes is 1/(2^n), where n is the number of subpixel bits. For example, an AOI height of 100 rows needs 7 bits for the row position (2^7 = 128), leaving 1 subpixel bit in an 8-bit range map but still allowing the full 6 subpixel bits in a 16-bit one.

max. AOI height | max. subpixels (8Bit) | resolution (8Bit) | max. subpixels (16Bit) | resolution (16Bit)
3    | 6 | 0.015625     | 6 | 0.015625
7    | 5 | 0.03125      | 6 | 0.015625
15   | 4 | 0.0625       | 6 | 0.015625
31   | 3 | 0.125        | 6 | 0.015625
63   | 2 | 0.25         | 6 | 0.015625
127  | 1 | 0.5          | 6 | 0.015625
255  | 0 | 1            | 6 | 0.015625
511  | 0 | bit overflow | 6 | 0.015625
1023 | 0 | bit overflow | 6 | 0.015625
2047 | 0 | bit overflow | 5 | 0.03125
4095 | 0 | bit overflow | 4 | 0.0625

Conclusion: the smaller you can set your AOI height, the more subpixels you can use and the more precisely you can theoretically calculate the heights. Using more rows than allowed leads to bit overflow and ambiguity. Note that the sum AOIHeight + AOIOffsetY set in the camera must not exceed the maximum AOI height.
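To illustrate the scaling: with n subpixel bits a raw range-map value encodes (sensor row · 2^n + subpixel position), so dividing by 2^n recovers the fractional row. The helper below is hypothetical and not part of any CVB API:

// Convert a raw 16-bit range-map value to a fractional sensor row
static double RawValueToSensorRow(ushort rawValue, int numSubPixelBits)
{
  return rawValue / (double)(1 << numSubPixelBits);
}

// Example: 6 subpixel bits -> resolution 1/64 = 0.015625 rows
double row = RawValueToSensorRow(12345, 6);  // 192.890625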

Calibration File

Knowledge of the number of subpixels used is required when applying a set of calibration parameters to a range map. For this reason it is included as a parameter in the calibration file. Hence, it is important that the number of subpixels set in the camera matches the number of subpixels set in the calibration file!

This parameter can be changed in the different data formats of the calibration files as follows:

  • .xml: <rangeScale>0.015625</rangeScale>
  • .txt: 10th parameter

POINT CLOUD RECONSTRUCTION

It is important to understand that all 3D sensors from AT output uncalibrated 2D range maps with no relation to real-world units. However, it is possible to obtain calibrated metric 3D points easily by applying a set of calibration parameters to this range map.

Getting the calibration parameters

When using an AT Compact Sensor (CS), these calibration parameters are calculated beforehand by the manufacturer and provided together with the camera. They are stored in the sensor's memory and can be downloaded as a calibration file (.dat, .xml) with the manufacturer's cxExplorer software (Device → Load/Save Calibration Metric…).

Alternatively, it is possible to download the parameters in the .dat format programmatically from the camera using CVB:

Load Calibration File from AT camera (CVB.Net)
// load driver
Device camera = DeviceFactory.Open("GenICam.vin");
NodeMap cameraNodeMap = camera.NodeMaps[NodeMapNames.Device];

// get calibration file from camera
var files = cameraNodeMap.GetAvailableFiles();
if (files.Contains("UserData"))
  cameraNodeMap.DownloadFile("CalibrationFile.dat", "UserData");

When using a modular AT camera setup, the user has to perform a calibration manually in order to obtain these parameters. In this case the CVB Metric Tool (part of the Foundation package) can be used. Please refer to the specific calibration post or the CVB documentation for further information.

Reconstructing Point Cloud

With the acquired range map and the matching calibration parameters it is possible to reconstruct metrically correct 3D points. This can be done either with one of the existing GUI tutorials from CVB (e.g. VBCore3D) or programmatically in CVB:

Sample Code: Classic API (C#)
// load calibration file
Cvb.SharedCalibrator calibrator = null;
if (Cvb.Core3D.LoadCalibrator(fileName, out calibrator) < 0)
    MessageBox.Show("Error loading calibration file");
else
{
    // reconstruct point cloud
    Cvb.SharedComposite pointCloud = null;
    Cvb.Core3D.CreateCalibratedPointCloudFromRangeMap(rangeMap, calibrator, out pointCloud);
}
Sample Code: Object-Oriented Wrapper (CVBpy)
# load calibration file
calibrator = cvb.Calibrator3D(fileName)

# reconstruct point cloud
pointCloud = cvb.PointCloudFactory.create(rangeMap, calibrator, cvb.PointCloudFlags.Float)

HEALTH STATISTICS

For better troubleshooting during the development of an application or in live operation, it is recommended to add a few additional lines to your code.

Acquisition Health

When streaming with a GenICam compliant camera you can query statistics via the Stemmer Imaging transport layer to monitor your acquisition health. This also applies to the AT GigE cameras.
With these statistics a user can easily detect when e.g. packets or images got lost during transmission. In a couple of posts @parsd described how to use it in CVB.Net:
https://forum.commonvisionblox.com/t/getting-started-with-cvb-net/246/13?u=moli
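A rough CVB.Net sketch of such a query might look as follows; the StreamInfo member names are assumptions from memory, please check the linked post for the exact identifiers:

// "device" is an opened Stemmer.Cvb Device with a running stream
var stream = device.Stream;
Console.WriteLine("Buffers acquired: " + stream.Statistics[StreamInfo.NumBuffersAcquired]);
Console.WriteLine("Buffers lost: " + stream.Statistics[StreamInfo.NumBuffersLost]);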

Poll Specific Camera Parameters

Another way to monitor your device is to continuously check health data by polling the camera's parameters.
The manufacturer recommends that the temperature does not exceed 65 °C during measurement. Furthermore, keep in mind that dark current and noise performance of CMOS sensors degrade at higher temperatures.

Parsd described in this thread how to control the nodemap parameters of a camera via the GenApi.
https://forum.commonvisionblox.com/t/getting-started-with-cvb-net/246/7?u=moli
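For example, a hedged CVB.Net sketch polling the temperature (DeviceTemperature is the GenICam SFNC feature name and an assumption here; check your camera's nodemap for the actual node):

// Poll the sensor temperature via the GenApi
NodeMap nodeMap = device.NodeMaps[NodeMapNames.Device];
if (nodeMap["DeviceTemperature"] is FloatNode temperature)
  Console.WriteLine($"Sensor temperature: {temperature.Value} °C");  // should stay below 65 °C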


USING CHUNK DATA MODE

The AT sensors feature a Chunk Data mode for providing additional information along with the acquired image data. The ChunkData generated by the camera has the following format:

  • ChunkImage
  • 1…N x ChunkAcqInfo
  • ChunkImageInfo

Depending on camera mode (image or 3D), the ChunkData block ("ChunkAcqInfo") can be sent as follows:

  • In image mode, the camera can send only one ChunkAcqInfo block per image frame.
  • In 3D mode, the camera can send one ChunkAcqInfo block either per 3D frame ("OneChunkPerFrame") or per 3D profile ("OneChunkPerProfile").

The "ChunkImageInfo" is the last ChunkData sent by the camera and contains the following data:

  • Number of valid rows in ChunkImage
  • Number of valid ChunkAcqInfo blocks
  • Flags identifying the current frame as "Start" or "Stop" and the buffer status in AutoStart mode

The ChunkAcqInfo block consists of 32 bytes in total, containing the following data:

  • 64 bit timestamp
  • 32 bit frame counter
  • 32 bit trigger coordinate
  • 8 bit Trigger status
  • 32 bit I/O Status
  • 72 bit AOI information

The data of timestamp, frame counter, trigger coordinate, trigger status and I/O status are assigned at the start of every image integration.

When ChunkMode is disabled, the camera uses the "regular" GEV image protocol, in which the optional transfer of frames with variable height and payload is supported. When ChunkMode is enabled, the camera always sends the full payload, even if the ChunkImage or ChunkAcqInfo blocks contain only partially valid data. The number of valid ChunkImage rows and ChunkAcqInfo blocks can be read from ChunkImageInfo. For example, in Start/Stop mode with instant frame transmission, the camera stops the frame acquisition as soon as the stop trigger occurs and transfers the complete contents of the internal image buffer; using the ChunkImageInfo data block, it is possible to detect how many image rows and ChunkAcqInfo blocks in the payload buffer are valid.

Note on byte order: the tag of ChunkData has big-endian byte order, while the data of ChunkData has little-endian byte order. An endian converter for ChunkData is not supported.

Payload Layout

(Figure: payload layout diagram, not reproduced here.)

Code example

All credit goes to @parsd for this post.

using (var device = DeviceFactory.Open("GenICam.vin"))
{
  var nodemap = device.NodeMaps[NodeMapNames.Device];
  var chunkModeActive = nodemap["ChunkModeActive"] as BooleanNode;
  chunkModeActive.Value = true;

  var stream = device.Stream;
  stream.Start();
  try
  {
    stream.Wait();
    var deviceImage = device.DeviceImage;

    var chunks = GevChunkParser.Parse(deviceImage);
    var info = DereferenceAcqInfoOn(deviceImage, chunks.First(chunk => chunk.ID == ATC5AcqInfoChunk.ID));

    Console.WriteLine($"FrameCount: {info.FrameCount}");
  }
  finally
  {
    stream.Stop();
  }
}

With this helper method and struct:

private static unsafe ATC5AcqInfoChunk DereferenceAcqInfoOn(DeviceImage image, GevChunk chunk)
{
  Debug.Assert(chunk.ID == ATC5AcqInfoChunk.ID);
  Debug.Assert(chunk.Length >= sizeof(ATC5AcqInfoChunk));

  var chunkPtr = new IntPtr(image.GetBufferBasePtr().ToInt64() + chunk.Offset);
  return *(ATC5AcqInfoChunk*)chunkPtr;
}

[StructLayout(LayoutKind.Sequential, Pack = 1)]
struct ATC5AcqInfoChunk
{
  public const uint ID = 0x66669999;

  public uint TimeStampLow;
  public uint TimeStampHigh;
  public uint FrameCount;
  public int TriggerCoordinate;
  public byte TriggerStatus;
  public ushort DAC;
  public ushort ADC;
  public byte IntIdX;
  public byte AoiIdX;
  public ushort AoiYs;
  public ushort AoiDy;
  public ushort AoiXs;
  public ushort AoiThreshold;
  public byte AOIAlgorithm;
}

For this to work you need the three new classes @parsd put on GitHub.


Hi,
there is an easier way to split the image: using CreateImageMap() with start points at line 0, line 1 and line 2, we can easily get the DC0, DC1 and DC2 images, like below:

numSingleImages = 3;
Cvb.Image.CreateImageMap(imgIn, 0, 0, Cvb.Image.ImageWidth(imgIn) - 1, Cvb.Image.ImageHeight(imgIn) - 1, Cvb.Image.ImageWidth(imgIn), Cvb.Image.ImageHeight(imgIn) / numSingleImages, out imgDC0);
Cvb.Image.CreateImageMap(imgIn, 0, 1, Cvb.Image.ImageWidth(imgIn) - 1, Cvb.Image.ImageHeight(imgIn) - 1, Cvb.Image.ImageWidth(imgIn), Cvb.Image.ImageHeight(imgIn) / numSingleImages, out imgDC1);
Cvb.Image.CreateImageMap(imgIn, 0, 2, Cvb.Image.ImageWidth(imgIn) - 1, Cvb.Image.ImageHeight(imgIn) - 1, Cvb.Image.ImageWidth(imgIn), Cvb.Image.ImageHeight(imgIn) / numSingleImages, out imgDC2);

Reading TriggerStatus from Chunk Data

Referring to the previous post from @Arno on Reading Chunk Data: reading the "TriggerStatus" is a bit tricky.

TriggerStatus is defined as:

public byte TriggerStatus;

While TriggerStatus is an 8-bit value, it contains the information of 6 elements:

#define CHUNKACQINFO_TRIGGERSTATUS_BIT_TRIGGER_OVERRUN 
#define CHUNKACQINFO_TRIGGERSTATUS_BIT_RESOLVER_CNT_UP 
#define CHUNKACQINFO_TRIGGERSTATUS_BIT_IN0 
#define CHUNKACQINFO_TRIGGERSTATUS_BIT_IN1 
#define CHUNKACQINFO_TRIGGERSTATUS_BIT_OUT0 
#define CHUNKACQINFO_TRIGGERSTATUS_BIT_OUT1 

Extracting the information from TriggerStatus as a byte will result in a decimal value between 0 and 255.
From this, the information on the 6 status elements can already be read unambiguously. For interpretation it might be easier to convert the decimal number to a binary number.

In our example we receive the decimal byte value 193 from the TriggerStatus chunk value.
Decoded to binary, the value reads:
11000001
Each bit describes the status of one of the 6 elements. From the most significant to the least significant bit, this decodes to:
Out1 = high, Out0 = high, In1 = low, In0 = low, -, -, EncoderStatus = off/back, TriggerOverrun = true.

The EncoderStatus is described by the following:

  1. If no Encoder is used: EncoderStatus = 0
  2. If an Encoder is used:
    • Encoder Status = 0: Encoder is moving backward.
    • Encoder Status = 1: Encoder is moving forward.
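Putting this together, a small C# sketch decoding the byte could look like this; the bit positions are derived from the 193 example above and should be verified against the sensor manual:

byte triggerStatus = 193;  // binary 11000001, as in the example above

bool triggerOverrun = (triggerStatus & (1 << 0)) != 0;  // true
bool encoderForward = (triggerStatus & (1 << 1)) != 0;  // false -> off/back
bool in0  = (triggerStatus & (1 << 4)) != 0;            // low
bool in1  = (triggerStatus & (1 << 5)) != 0;            // low
bool out0 = (triggerStatus & (1 << 6)) != 0;            // high
bool out1 = (triggerStatus & (1 << 7)) != 0;            // high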

Hi, please help me: I want to save the chunk image (3D mode) in binary format to a file as four-byte float values. Can you give me an example?

wugx followed up his own question in the following thread:
Hi,help me, I want to save the chunk image(3D mode) binary format to file by float four byte code, Can you give me a example? - Programming Questions - Common Vision Blox User Forum

Hi support team,
When I use this method, how can I get the imgDC0, imgDC1 and imgDC2 pointers?
If using
Dim imageDataDC1 As LinearAccessData = imgDC1.Planes(0).GetLinearAccess()
Dim basePtrDC1 As IntPtr = imageDataDC1.BasePtr
the error will occur at GetLinearAccess().

Thank you for your support.

Hi @Charlene,

What kind of error do you get? If it is an exception, the documentation might give you a hint:

  • FormatException: if the plane's pixels are not accessible linearly
  • System.ObjectDisposedException: if Parent has already been disposed
  • InvalidOperationException: if the size of T and DataType.BytesPerPixel are unequal

If you get the LinearAccessData, keep the following in mind:
for Visual Basic .NET you need to use the helper functions provided in the System.Runtime.InteropServices.Marshal class for the increments.

Hi @Sebastian

When I use
Dim rect As New Rect(0, 1, img.Width - 1, img.Height - 1)
Image = img.Map(rect, Size)
Dim imageData As LinearAccessData = Image.Planes(0).GetLinearAccess() 'the error is at this line
Dim basePtr As IntPtr = imageData.BasePtr
it will show the error:
System.ArgumentException
HResult=0x80070057
Message=Operation Linear Access only supported on linear VPATs.
Source=Stemmer.Cvb

The problem is probably that pixels are skipped, so the access is non-linear.
Can we use GetImageVPA() to get the pointer of a non-linearly mapped image?
Which function of CVB.Net can be used?

Thank you for your reply.

@Charlene
In the ImagePlane you can use the GetVPATAccess() function to get VPAT access to the plane's pixels.