Switching from Automation Technology C5 to C6 cameras in CVB

Since the release of the Automation Technology C6 series, some changes are required in the image acquisition method of your CVB application. The AT C6 features MultiPart and thus regularly requires the 3rd generation image acquisition stack of CVB.
The C5 cameras from Automation Technology used to transmit the acquired Range, Intensity and Scatter data in an interlaced format. In this format a transmitted image would look like this, line by line:

Intensity Line 0
Scatter Line 0
Range Line 0
Intensity Line 1
Scatter Line 1
Range Line 1
Intensity Line 2

While this was a fine way of transmitting associated data, it was limited to one AOI and all data required the same bit depth.

Using MultiPart on the C6, the same acquisition would look like this, while still transmitting all data in one buffer and not requiring any postprocessing to extract the images:

Range Image 16 bpp
Intensity Image 10 bpp
Scatter Image 8 bpp

When exchanging a C5 with a C6 in an existing system, a complete change of the image acquisition stack would therefore be required, and probably an update of the SDK version as well, bringing in multiple uncertainties.
Luckily the C6 offers a “backdoor” that allows you to keep using GenICam 1.0 for the image acquisition, which also enables a relatively smooth transition when an exchange of a C5 for a C6 is required in an existing system.

For that, the C6 offers the NodeMap parameter Cust::GevSCCFGMultiPart, which allows disabling the MultiPart functionality and using a GenICam 1.0 acquisition mode instead.
In this case the camera transmits the Range, Intensity and Scatter data of one AOI in such a way that the Range data is transmitted in the so-called image buffer, while Intensity and Scatter are transmitted as image chunk data and can be read from memory, with both keeping their own bit depth.
So exchanging a C5 with a C6 camera only requires replacing the image extraction method, while the image acquisition method stays identical.
Here is an example of how to change the extraction code for Range, Intensity and Scatter.
First, the known way of extracting the three images from a C5 image in .NET:

public void ExtractInputImage()
    {
      //Data are transmitted in one single image of the size width * ProfilesPerFrame * 3 * 16 bpp. All three images are 16 bpp.
      RangeImage = new Image(InputImage.Width, NumberOfProfiles, 1, DataTypes.Int16BppUnsigned);
      IntensityImage = new Image(InputImage.Width, NumberOfProfiles, 1, DataTypes.Int16BppUnsigned);
      ScatterImage = new Image(InputImage.Width, NumberOfProfiles, 1, DataTypes.Int16BppUnsigned);

      LinearAccessData<UInt16> imageDataInputImage = InputImage.Planes[0].GetLinearAccess<UInt16>();
      LinearAccessData<UInt16> imageDataRangeImage = RangeImage.Planes[0].GetLinearAccess<UInt16>();
      LinearAccessData<UInt16> imageDataIntensityImage = IntensityImage.Planes[0].GetLinearAccess<UInt16>();
      LinearAccessData<UInt16> imageDataScatterImage = ScatterImage.Planes[0].GetLinearAccess<UInt16>();
      for (int y = 0; y < NumberOfProfiles; y++)
      {
        for (int x = 0; x < InputImage.Width; x++)
        {
          //Iterate through input image by stepping every third line for each image type.
          imageDataIntensityImage[x, y] = imageDataInputImage[x, y*3];
          imageDataScatterImage[x, y] = imageDataInputImage[x, (y*3)+1];
          imageDataRangeImage[x, y] = imageDataInputImage[x, (y*3)+2];
        }          
      }
    }

Now reaching the same goal with a C6. Remember to deactivate MultiPart first:

NodeMap nmp = device.NodeMaps[NodeMapNames.Device]; //device nodemap of the opened C6
BooleanNode Multipart = nmp["Cust::GevSCCFGMultiPart"] as BooleanNode;
Multipart.Value = false;

Now extract the Range image from the raw input image and extract the Intensity and Scatter images from the attached chunk data:

public void ExtractIntensityAndScatterFromImageChunk()
    {
      //Data types of the images vary between Range, Intensity and Scatter.
      RangeImage = InputImage; //Range image is transmitted in the image buffer of the data message.
      IntensityImage = new Image(InputImage.Width, InputImage.Height, 1, DataTypes.Int10BppUnsigned);
      ScatterImage = new Image(InputImage.Width, InputImage.Height, 1, DataTypes.Int8BppUnsigned);
      LinearAccessData<UInt16> imageDataIntensityImage = IntensityImage.Planes[0].GetLinearAccess<UInt16>();
      LinearAccessData<byte> imageDataScatterImage = ScatterImage.Planes[0].GetLinearAccess<byte>();

      //Define the size of ChunkData 
      long ImageDataSize = InputImage.Width * InputImage.Height * 2;
      long ChunkSize = PayloadSize - ImageDataSize;

      unsafe
      {
        IntPtr ptrBaseIMG = InputImage.Planes[0].GetLinearAccess().BasePtr;//Get Basepointer of InputImage.
        UInt16* ptrBaseIntensityImage = ((UInt16*)(ptrBaseIMG + (int)ImageDataSize))+4;//8 Byte Offset between Range Image and Intensity Image        
        
        byte* ptrBaseScatterImageOffset = ((byte*)(ptrBaseIMG + 2*(int)ImageDataSize))+16;//8 Byte Offset between Intensity Image and Scatter Image (+ Offset Range-Intensity)
        
        //Iterate over memory starting at the base pointer address, extracting the Intensity and Scatter images.
        for(int x = 0; x < InputImage.Width; x++)
        {
          for(int y = 0; y < InputImage.Height; y++)
          {
            UInt16* ptrCurrIntensityImage = (UInt16*)(ptrBaseIntensityImage + (x+(InputImage.Width * y)));
            imageDataIntensityImage[x, y] = *ptrCurrIntensityImage;
            byte* ptrCurrScatterImage = (byte*)(ptrBaseScatterImageOffset + (x+(InputImage.Width * y)));
            imageDataScatterImage[x, y] = *ptrCurrScatterImage;
          }
        }
      }     
    }
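
The PayloadSize used above is the total number of bytes transferred per frame (image buffer plus chunk data). One way to obtain it is reading the standard GenICam feature from the device nodemap; a minimal sketch, assuming the nmp nodemap handle from above:

//Std::PayloadSize is the standard SFNC feature for the total transfer size per frame.
IntegerNode payloadSizeNode = nmp["Std::PayloadSize"] as IntegerNode;
long PayloadSize = payloadSizeNode.Value;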

For further information on the usage of C5 cameras please refer to this forum post.
In the following posts, topics related to the usage of C6 cameras in CVB will be added.


Performing image acquisition with an Automation Technology C6 camera in CVB

While using the Chunk Image Mode on the C6 to acquire composite images without MultiPart works fine for existing imaging systems, when creating a new application with C6 cameras the preferred way to acquire images is MultiPart.

The C6 platform has multiple advantages compared to its predecessor, the C5 series.
The C6 allows acquiring images from 4 AOIs (regions) in parallel, while in each region it is possible to extract up to 4 laser line peaks, allowing the full collection of data for (semi-)transparent surfaces. The maximum number of parts in a composite is nine.
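
How many regions are active (and how many peaks each region extracts) is configured on the camera nodemap rather than in CVB. Below is a hypothetical sketch using SFNC-style RegionSelector/RegionMode nodes; the actual feature names should be verified in the C6 nodemap (e.g. with the CVB GenICam Browser), and device is assumed to be a camera opened as in the example further down:

//Hypothetical SFNC-style region setup - verify the actual node names in the C6 nodemap.
NodeMap deviceNodeMap = device.NodeMaps[NodeMapNames.Device];
EnumerationNode regionSelector = deviceNodeMap["Std::RegionSelector"] as EnumerationNode;
EnumerationNode regionMode = deviceNodeMap["Std::RegionMode"] as EnumerationNode;

regionSelector.Value = "Region1"; //select a second AOI in addition to Region0
regionMode.Value = "On";          //enable the selected region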

The following example in CVB.NET shows how to acquire images with a C6 using GenICam 3.0 MultiPart image acquisition.

using System;
using Stemmer.Cvb;
using Stemmer.Cvb.GenApi;
using Stemmer.Cvb.Utilities;
using Stemmer.Cvb.Driver;

namespace CVB_Multipart_ATC6
{
  class Program
  {
    static void Main(string[] args)
    {
      Console.WriteLine("Start Program.");
      DiscoveryInformationList discoveryList = DeviceFactory.Discover(DiscoverFlags.IgnoreVins);
      Device device = DeviceFactory.Open(discoveryList[0], AcquisitionStack.GenTL);
      Console.WriteLine((device.NodeMaps[NodeMapNames.TLDevice]["Std::DeviceModelName"] as StringNode).Value + " opened.");
      
      //Multipart Acquisition
      CompositeStream stream = (((GenICamDevice)device).GetStream<CompositeStream>(0));      
      stream.Start();
      for (int i = 0; i < 10; i++)
      {
        using (Composite composite = stream.Wait(out WaitStatus status))
        { 
          using(MultiPartImage image = MultiPartImage.FromComposite(composite))
          {
            using (NodeMapDictionary nodeMaps = NodeMapDictionary.FromComposite(composite))
            {
              int index = 0;
              foreach (var part in image.Parts)
              {
                index++;
                switch (part)
                {
                  case Image partImage:
                    partImage.Save("Multipartimage_"+ i + "_" + index + ".tif");
                    Console.WriteLine("Composite " + i + " Part " + index + " is an Image");
                    break;
                  case PfncBuffer buffer:
                    Console.WriteLine("Composite " + i + " Part " + index + " is an PFNCBuffer");
                    break;
                  default:
                    break; 
                }
              }
            }
          } 
        }
      }
      stream.Stop(); 
      Console.WriteLine("Program finished");
      Console.ReadLine();
    }
  }
}

In the case of the C6, all composite parts are images. There are other devices that, for example, contain a point cloud inside the composite. In that case the part extraction could look like this:

using (PointCloud pointcloud = PointCloud.FromComposite(composite))
{
  //Do something
}

Using Subpixels with Automation Technology C6 cameras

The usage of subpixels with cameras from Automation Technology is crucial to achieve a high sampling rate in the Z coordinate and to work with high-resolution pointclouds in CVB.
Using subpixeling on C6 cameras is a bit different compared to the earlier C5 model from Automation Technology. A description of how to use subpixels with a C5 can be found here.

For the C6, the number of used subpixels is no longer limited to 6, and it additionally offers the option to use a customized subpixel resolution instead of being limited to the fixed values that were offered for the C5. This option also brings the advantage of being able to save bandwidth by reducing the bit depth of the rangemap.

General considerations:
Subpixels allow subsampling of the line position on the camera sensor when using line extraction algorithms like COG or FIR-Peak. The possible precision for the calculation of the line position is limited to 1/(2^subpixels), which is 1/64 in the case of 6 subpixels.
The allowed number of subpixels depends on the AOI height and the bit depth of the image. The AOI height determines the number of bits required to describe the full line range from which the laser line is extracted on the sensor. Using an AOI height of 256 already requires 8 bits to represent a line projection detected in lines 129 to 256 of the sensor. Reaching subpixel precision for the calculation of the line position now requires the usage of subpixels. As we already use 8 bits for the description of the full line, we need a 12 bpp or 16 bpp image format to use subpixels. In the case of 16 bpp we are now able to use 8 subpixels with a C6, while with a C5 the number of subpixels would be limited to 6.

Here are some calculations on the achievable subpixel precision using a C6:

The maximum number of available subpixels is described as follows:
max. subpixels = Bpp - log2(AOI height)
(where Bpp is the bit depth of the rangemap).
Following that rule, the theoretically usable subpixel precisions depending on AOI height and bit depth are listed in the table below (a small code sketch reproducing these values follows the table). Note that the theoretical precision is not equal to the achievable accuracy, especially when using high subpixel values.

| max. AOI height | max. subpixels (8 bit) | resolution (8 bit, Range 256) | max. subpixels (12 bit) | resolution (12 bit, Range 4096) | max. subpixels (16 bit) | resolution (16 bit, Range 65536) |
|---|---|---|---|---|---|---|
| 4 | 6 | 0.015625 | 10 | 0.000977 | 14 | 0.000061 |
| 8 | 5 | 0.03125 | 9 | 0.001953 | 13 | 0.000122 |
| 16 | 4 | 0.0625 | 8 | 0.003906 | 12 | 0.000244 |
| 32 | 3 | 0.125 | 7 | 0.007813 | 11 | 0.000488 |
| 64 | 2 | 0.25 | 6 | 0.015625 | 10 | 0.000977 |
| 128 | 1 | 0.5 | 5 | 0.03125 | 9 | 0.001953 |
| 256 | 0 | 1 | 4 | 0.0625 | 8 | 0.003906 |
| 512 | 0 | bit overflow | 3 | 0.125 | 7 | 0.007813 |
| 1024 | 0 | bit overflow | 2 | 0.25 | 6 | 0.015625 |
| 2048 | 0 | bit overflow | 1 | 0.5 | 5 | 0.03125 |
| 4096 | 0 | bit overflow | 0 | 1 | 4 | 0.0625 |
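
For cross-checking, here is a small standalone C# sketch (no CVB required, Math.Log2 needs .NET Core 3.0 or later) that reproduces the table values from the formula above:

using System;

class SubpixelTable
{
  //max. subpixels = Bpp - log2(AOI height); a negative result means the AOI height
  //alone already exceeds the bit depth (bit overflow).
  static int MaxSubpixels(int aoiHeight, int bpp)
  {
    int bitsForFullLine = (int)Math.Ceiling(Math.Log2(aoiHeight));
    return bpp - bitsForFullLine;
  }

  //Theoretical resolution in sensor rows: 1 / 2^subpixels.
  static double Resolution(int subpixels) => 1.0 / Math.Pow(2, subpixels);

  static void Main()
  {
    foreach (int aoiHeight in new[] { 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096 })
      foreach (int bpp in new[] { 8, 12, 16 })
      {
        int subpixels = MaxSubpixels(aoiHeight, bpp);
        string resolution = subpixels < 0 ? "bit overflow" : Resolution(subpixels).ToString();
        Console.WriteLine($"AOI height {aoiHeight}, {bpp} bpp: max. subpixels {Math.Max(subpixels, 0)}, resolution {resolution}");
      }
  }
}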

Based on these limits, the value for the subpixel precision can be set manually on the camera nodemap and will be used in the sensor calibration information when applying the calibration in CVB.

Note that when using subpixeling with the AT C6, to avoid bit overflow, in addition to reducing the AOI height the “Coordinate Mode” should be set to “Region” when using a Y offset for the AOI.

A further explanation using an example (all numbers, unless specified otherwise, in sensor rows):
6 subpixels, 16 bit pixel format
AOI height: 1024
AOI Y Offset = 512
Line Position in AOI: 700 (of 1024)

Case 1: Coordinate Scale = Sensor
Absolute line position on the sensor = AOI Y Offset + Line Position in AOI = 512 + 700 = 1212
Using 6 subpixels results in a rangemap value of 77568, which is higher than the 16 bit maximum of 65536. This leads to bit overflow and a resulting rangemap value of 12032, which corresponds to line 188 on the sensor.
→ This will lead to a wrongly calibrated pointcloud.

Case 2: Coordinate Scale = Region
Resulting rangemap value for sensor line 1212:
Absolute position on the sensor - AOI Y Offset = 1212 - 512 = 700.
Using 6 subpixels → 700 x 64 = rangemap value 44800 (< 65536).
→ Correctly calibrated pointcloud when using the calibration file with the specified parameter Coordinate Scale = Region.
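
The same arithmetic as a short C# sketch; masking the value to 16 bit in Case 1 models the bit overflow described above:

int subpixels = 6;
int aoiYOffset = 512;
int lineInAoi = 700;
int absoluteSensorLine = aoiYOffset + lineInAoi;                   //1212, absolute line on the sensor

//Case 1: Coordinate Scale = Sensor - the value exceeds 16 bit and wraps around.
int sensorScaleValue = (absoluteSensorLine << subpixels) & 0xFFFF; //1212 * 64 = 77568 -> wraps to 12032

//Case 2: Coordinate Scale = Region - the value stays within 16 bit.
int regionScaleValue = lineInAoi << subpixels;                     //700 * 64 = 44800 (< 65536)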
