Switching from Automation Technology C5 to C6 cameras in CVB

Since the release of the Automation Technology C6 series, some changes are required in the image acquisition method of your CVB application. The AT C6 features MultiPart and thus regularly requires the 3rd generation image acquisition stack of CVB.
The C5 cameras from Automation Technology used to transmit the acquired Range, Intensity and Scatter data in an interlaced format. In this format a transmitted image would look like this, line by line:

Intensity Line 0
Scatter Line 0
Range Line 0
Intensity Line 1
Scatter Line 1
Range Line 1
Intensity Line 2

While this was a perfectly fine way of transmitting associated data, it was limited to one AOI and all data had to share the same bit depth.

Using the same acquisition with MultiPart on the C6, the format would look like this, still transmitting all data in one buffer and not requiring any postprocessing to extract the images:

Range Image 16 bpp
Intensity Image 10 bpp
Scatter Image 8 bpp

When exchanging a C5 for a C6 in an existing system, a complete change of the image acquisition stack would normally be required, and probably an update of the SDK version as well, bringing in multiple uncertainties.
Luckily the C6 offers a “backdoor” that allows you to keep using GenICam 1.0 for the image acquisition, which enables a relatively smooth transition when a C5 has to be exchanged for a C6 in an existing system.

For that purpose the C6 offers the NodeMap parameter Cust::GevSCCFGMultiPart, which allows you to disable the MultiPart functionality and use a GenICam 1.0 acquisition mode.
When MultiPart is disabled, the camera transmits Range, Intensity and Scatter data of one AOI in such a way that the Range data is transmitted in the so-called image buffer, while Intensity and Scatter are transmitted as image chunk data and can be read from memory, each keeping its own bit depth.
So exchanging a C5 for a C6 camera only requires exchanging the image extraction method after the (identical) image acquisition.
Here is an example of how to change the extraction code for Range, Intensity and Scatter.
First the known way of extracting the three images from a C5 image in .NET:

public void ExtractInputImage()
    {
      //Data is transmitted in one single image of the size width * ProfilesPerFrame * 3 at 16 bpp. All three images are in 16 bpp.
      RangeImage = new Image(InputImage.Width, NumberOfProfiles, 1, DataTypes.Int16BppUnsigned);
      IntensityImage = new Image(InputImage.Width, NumberOfProfiles, 1, DataTypes.Int16BppUnsigned);
      ScatterImage = new Image(InputImage.Width, NumberOfProfiles, 1, DataTypes.Int16BppUnsigned);

      LinearAccessData<UInt16> imageDataInputImage = InputImage.Planes[0].GetLinearAccess<UInt16>();
      LinearAccessData<UInt16> imageDataRangeImage = RangeImage.Planes[0].GetLinearAccess<UInt16>();
      LinearAccessData<UInt16> imageDataIntensityImage = IntensityImage.Planes[0].GetLinearAccess<UInt16>();
      LinearAccessData<UInt16> imageDataScatterImage = ScatterImage.Planes[0].GetLinearAccess<UInt16>();
      for (int y = 0; y < NumberOfProfiles; y++)
      {
        for (int x = 0; x < InputImage.Width; x++)
        {
          //Iterate through input image by stepping every third line for each image type.
          imageDataIntensityImage[x, y] = imageDataInputImage[x, y*3];
          imageDataScatterImage[x, y] = imageDataInputImage[x, (y*3)+1];
          imageDataRangeImage[x, y] = imageDataInputImage[x, (y*3)+2];
        }          
      }
    }

Now reaching the same goal with a C6. Remember to deactivate MultiPart first:

BooleanNode multiPart = nmp["Cust::GevSCCFGMultiPart"] as BooleanNode;
multiPart.Value = false;
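
The chunk extraction code below relies on the total payload size. As a minimal sketch, this value might be read from the device nodemap like this (the node name "Std::PayloadSize" is the SFNC standard name and is an assumption here; verify it on your device):

```csharp
// Sketch: reading the payload size used later to compute the chunk size.
// "Std::PayloadSize" is assumed from the GenICam SFNC - check your nodemap.
NodeMap nm = device.NodeMaps[NodeMapNames.Device];
IntegerNode payloadNode = nm["Std::PayloadSize"] as IntegerNode;
long PayloadSize = payloadNode.Value;
```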

Now extract the Range map from the raw input image, and extract the Intensity and Scatter images from the attached chunk:

public void ExtractIntensityAndScatterFromImageChunk()
    {
      //Data types of the images vary between Range, Intensity and Scatter.
      RangeImage = InputImage; //The Range image is transmitted in the image buffer of the data message.
      IntensityImage = new Image(InputImage.Width, InputImage.Height, 1, DataTypes.Int10BppUnsigned);
      ScatterImage = new Image(InputImage.Width, InputImage.Height, 1, DataTypes.Int8BppUnsigned);
      LinearAccessData<UInt16> imageDataIntensityImage = IntensityImage.Planes[0].GetLinearAccess<UInt16>();
      LinearAccessData<byte> imageDataScatterImage = ScatterImage.Planes[0].GetLinearAccess<byte>();

      //Define the size of ChunkData 
      long ImageDataSize = InputImage.Width * InputImage.Height * 2;
      long ChunkSize = PayloadSize - ImageDataSize;

      unsafe
      {
        IntPtr ptrBaseIMG = InputImage.Planes[0].GetLinearAccess().BasePtr;//Get Basepointer of InputImage.
        UInt16* ptrBaseIntensityImage = ((UInt16*)(ptrBaseIMG + (int)ImageDataSize))+4;//8 Byte Offset between Range Image and Intensity Image        
        
        byte* ptrBaseScatterImageOffset = ((byte*)(ptrBaseIMG + 2*(int)ImageDataSize))+16;//8 Byte Offset between Intensity Image and Scatter Image (+ Offset Range-Intensity)
        
        //Iterate over memory starting at the base pointer address, extracting the Intensity and Scatter image.
        for(int x = 0; x < InputImage.Width; x++)
        {
          for(int y = 0; y < InputImage.Height; y++)
          {
            UInt16* ptrCurrIntensityImage = (UInt16*)(ptrBaseIntensityImage + (x+(InputImage.Width * y)));
            imageDataIntensityImage[x, y] = *ptrCurrIntensityImage;
            byte* ptrCurrScatterImage = (byte*)(ptrBaseScatterImageOffset + (x+(InputImage.Width * y)));
            imageDataScatterImage[x, y] = *ptrCurrScatterImage;
          }
        }
      }     
    }

For further information on the usage of C5 cameras please refer to this forum post.
In the following posts, topics related to the usage of C6 cameras in CVB will be added.


Performing image acquisition with an Automation Technology C6 camera in CVB.

While using the Chunk Image Mode on the C6 to acquire composite images without MultiPart works fine for existing imaging systems, when creating a new application with C6 cameras the preferred way to acquire images is to use MultiPart.

The C6 platform has multiple advantages compared to its predecessor, the C5 series.
The C6 allows acquiring images from 4 AOIs (regions) in parallel, while in each region it is possible to extract up to 4 laser line peaks, allowing the full collection of data for (semi-)transparent surfaces. The maximum number of parts in a composite is nine.
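
As a sketch, configuring such an additional region via the nodemap could look like this (the node names follow GenICam SFNC conventions and are assumptions; verify them on your C6, e.g. in the GenICam Browser):

```csharp
// Hypothetical sketch: enabling a second region (AOI) on a C6 nodemap.
// All node names here are assumptions based on the GenICam SFNC.
NodeMap nm = device.NodeMaps[NodeMapNames.Device];
(nm["Std::RegionSelector"] as EnumerationNode).Value = "Region1";
(nm["Std::RegionMode"] as EnumerationNode).Value = "On";
(nm["Std::Height"] as IntegerNode).Value = 256;
(nm["Std::OffsetY"] as IntegerNode).Value = 512;
```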

The following example in CVB.NET shows how to acquire images with a C6 using GenICam 3.0 MultiPart image acquisition.

using System;
using Stemmer.Cvb;
using Stemmer.Cvb.GenApi;
using Stemmer.Cvb.Utilities;
using Stemmer.Cvb.Driver;

namespace CVB_Multipart_ATC6
{
  class Program
  {
    static void Main(string[] args)
    {
      Console.WriteLine("Start Program.");
      DiscoveryInformationList discoveryList = DeviceFactory.Discover(DiscoverFlags.IgnoreVins);
      Device device = DeviceFactory.Open(discoveryList[0], AcquisitionStack.GenTL);
      Console.WriteLine((device.NodeMaps[NodeMapNames.TLDevice]["Std::DeviceModelName"] as StringNode).Value + " opened.");
      
      //Multipart Acquisition
      CompositeStream stream = (((GenICamDevice)device).GetStream<CompositeStream>(0));      
      stream.Start();
      for (int i = 0; i < 10; i++)
      {
        using (Composite composite = stream.Wait(out WaitStatus status))
        { 
          using(MultiPartImage image = MultiPartImage.FromComposite(composite))
          {
            using (NodeMapDictionary nodeMaps = NodeMapDictionary.FromComposite(composite))
            {
              int index = 0;
              foreach (var part in image.Parts)
              {
                index++;
                switch (part)
                {
                  case Image partImage:
                    partImage.Save("Multipartimage_"+ i + "_" + index + ".tif");
                    Console.WriteLine("Composite " + i + " Part " + index + " is an Image");
                    break;
                  case PfncBuffer buffer:
                    Console.WriteLine("Composite " + i + " Part " + index + " is a PFNCBuffer");
                    break;
                  default:
                    break; 
                }
              }
            }
          } 
        }
      }
      stream.Stop(); 
      Console.WriteLine("Program finished");
      Console.ReadLine();
    }
  }
}

In the case of the C6, all composite parts are images. Other devices may, for example, contain a point cloud inside the composite. In that case the part extraction could look like this:

using (PointCloud pointcloud = PointCloud.FromComposite(composite))
          {
            //Do something
          }

Using Subpixels with Automation Technology C6 cameras

The usage of subpixels with cameras from Automation Technology is crucial to achieve high sampling rates in the Z coordinate and to work with high-resolution point clouds in CVB.
Using subpixeling on C6 cameras is a bit different compared to the earlier C5 model from Automation Technology. A description of how to use subpixels with a C5 can be found here.

For the C6 the number of used subpixels is no longer limited to 6, and it additionally offers the option to use your own customized subpixel resolution instead of being limited to the fixed values that were offered for the C5. This option also brings the advantage of being able to save bandwidth by reducing the bit depth of the range map.

General considerations:
Subpixels allow subsampling of a line position on the camera sensor when using line extraction algorithms like CoG or FIR-Peak. The possible precision for the calculation of the line position is limited to 1/(2^subpixels), which is 1/64 in the case of 6 subpixels.
The allowed number of subpixels depends on the AOI height and the bit depth of the image. The AOI height determines the bits required to describe the full line range from which the laser line is extracted on the sensor. Using an AOI height of 256 already requires 8 bits to represent a line projection detected in line 129 to 256 of the sensor. Reaching subpixel precision for the calculation of the line position now requires the usage of subpixels. As 8 bits are already used for the description of the full line, a 12 bpp or 16 bpp image format is needed to use subpixels. In the case of 16 bpp it is now possible to use 8 subpixels with a C6, while with a C5 the number of subpixels would be limited to 6.

Here are some calculations on the achievable subpixel precision using a C6:

The maximum number of available subpixels is described as follows:

max. subpixels = Bpp - log2(AOI height)

(where Bpp is the bit depth of the range map).
Following that rule, the theoretically usable subpixel counts and precisions depending on AOI height and bit depth are listed below. Note that the theoretical precision is not equal to the achievable accuracy, especially for high subpixel values.

| max. AOI height | max. subpixels (8 Bit) | resolution (8 Bit, Range 256) | max. subpixels (12 Bit, Range 4096) | resolution (12 Bit, Range 4096) | max. subpixels (16 Bit, Range 65536) | resolution (16 Bit, Range 65536) |
|---|---|---|---|---|---|---|
| 4 | 6 | 0.015625 | 10 | 0.000977 | 14 | 0.000061 |
| 8 | 5 | 0.03125 | 9 | 0.001953 | 13 | 0.000122 |
| 16 | 4 | 0.0625 | 8 | 0.003906 | 12 | 0.000244 |
| 32 | 3 | 0.125 | 7 | 0.007813 | 11 | 0.000488 |
| 64 | 2 | 0.25 | 6 | 0.015625 | 10 | 0.000977 |
| 128 | 1 | 0.5 | 5 | 0.03125 | 9 | 0.001953 |
| 256 | 0 | 1 | 4 | 0.0625 | 8 | 0.003906 |
| 512 | 0 | bit overflow | 3 | 0.125 | 7 | 0.007813 |
| 1024 | 0 | bit overflow | 2 | 0.25 | 6 | 0.015625 |
| 2048 | 0 | bit overflow | 1 | 0.5 | 5 | 0.03125 |
| 4096 | 0 | bit overflow | 0 | 1 | 4 | 0.0625 |
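
The rule above can be expressed as a small helper (a sketch of the arithmetic only; the camera itself enforces the actual limit):

```csharp
using System;

// Sketch: maximum usable subpixel count and resulting theoretical resolution,
// following max. subpixels = Bpp - log2(AOI height) from the table above.
static int MaxSubpixels(int bpp, int aoiHeight)
{
    int bitsForHeight = (int)Math.Ceiling(Math.Log(aoiHeight, 2));
    return Math.Max(0, bpp - bitsForHeight);
}

static double Resolution(int subpixels) => 1.0 / (1 << subpixels);

// MaxSubpixels(16, 256) -> 8; MaxSubpixels(8, 4) -> 6; Resolution(6) -> 0.015625
```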

Based on these limits, the value for the subpixel precision can be set manually in the camera nodemap; it will then be used in the sensor calibration information when applying the calibration in CVB.

Note that when using subpixeling with the AT C6, to avoid bit overflow, next to reducing the AOI height the “Coordinate Mode” should additionally be set to “Region” when using a Y offset for the AOI.

A further explanation of this, using an example (all numbers not further specified are in sensor rows):
6 subpixels, 16 bit pixel format
AOI height: 1024
AOI Y offset: 512
Line position in AOI: 700 (of 1024)

Case 1: Coordinate Scale = Sensor
Absolute line position on the sensor = AOI Y offset + line position in AOI = 512 + 700 = 1212.
Using 6 subpixels results in a range map value of 77568, which is higher than the 16 bit maximum value of 65536. This leads to a bit overflow and a resulting range map value of 12032, which would correspond to line 188 on the sensor.
→ This will lead to a wrongly calibrated point cloud.

Case 2: Coordinate Scale = Region
Resulting range map value for sensor line 1212:
Absolute position on the sensor - AOI Y offset = 1212 - 512 = 700.
Using 6 subpixels → 700 x 64 = range map value 44800 (< 65536).
→ Correctly calibrated point cloud when using the calibration file with the specified parameter Coordinate Scale = Region.
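
The two cases can be reproduced with a few lines (a sketch of the arithmetic only, not camera code):

```csharp
// Sketch: 16 bit range map values for the example above (6 subpixels).
int subpixels = 6;
int aoiYOffset = 512;
int linePosInAoi = 700;

// Coordinate Scale = Sensor: the value wraps around in 16 bit -> wrong result.
int sensorValue = ((aoiYOffset + linePosInAoi) << subpixels) & 0xFFFF; // 12032

// Coordinate Scale = Region: the value fits into 16 bit.
int regionValue = linePosInAoi << subpixels; // 44800
```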


Hi,
I have a question about FIRPeak and CoG use with the C6.

With the C5, some specific data was needed to set up FIRPeak, for example (DC0, DC1 and DC2), or the resolver part (I use the bidirectional feature, for instance).

I cannot find these with the C6 using the GenICam Browser. Are those features still relevant now? Or do I need to manage that in another way?

Dear @AxelDosSantosDoAlto,

The usage of CoG or FIRPeak is more or less the same between C5 and C6, yet the nodemap changed a lot for the C6 to meet the GenICam 3.0 standard.
DC0, DC1 and DC2 have been renamed to Intensity, Scatter and Range and can be found in the Scan3dExtraction section. In this section you can choose a Scan3DExtractionMethod (FIRPeak, CoG, Threshold or Max).
The Encoder section changed as well. There you have to choose the EncoderOutputMode “Motion” to use it in bidirectional mode.
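
As a sketch, selecting these settings in CVB.NET could look like this (the node names are taken from the sections mentioned above and are assumptions; verify them on your device):

```csharp
// Hypothetical sketch: choosing the line extraction algorithm and the
// bidirectional encoder mode on a C6. Node names are assumptions based on
// the Scan3dExtraction and Encoder sections described above.
NodeMap nm = device.NodeMaps[NodeMapNames.Device];
(nm["Std::Scan3DExtractionMethod"] as EnumerationNode).Value = "FIRPeak";
(nm["Std::EncoderOutputMode"] as EnumerationNode).Value = "Motion";
```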
There is a document that describes feature organization changes between C5 and C6 that can be found here:
AppNote_C5_C6_Comparison_en.pdf (768.6 KB)

If you have further questions on this topic, please open a different thread in the forum to keep this thread clean for direct information on the C5 to C6 switch, or contact the support team via support@stemmer-imaging.com.


Getting started with Automation Technology C6 cameras in CVB?
Have a look here for a detailed documentation.

@Simon the link doesn’t seem to work any more

Thanks for the hint @illusive
Here is the link for both parts of the document on the forum server now.
SI_ApplicationNote_AT_C6-Series_IntegrationGuide_EN_part2.pdf (1.4 MB)
SI_ApplicationNote_AT_C6-Series_IntegrationGuide_EN_part1.pdf (1.1 MB)

Using AT C5 and C6 in parallel in a single CVB application using the 3rd generation acquisition stack

To work with C5 and C6 in parallel in CVB, some adjustments need to be made to the acquisition code.
First there must be a check to classify the cameras by model type, so that the required image acquisition strategy for C5 or C6 can be applied later.
Using the 3rd generation acquisition stack, most of the code can be shared between both camera types:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Stemmer.Cvb;
using Stemmer.Cvb.Driver;
using Stemmer.Cvb.GenApi;
using static Stemmer.Cvb.Driver.AccessToken;

namespace CVB_C5_C6_Acquisition
{
  internal class Program
  {
    public struct ATdevice
    {
      public GenICamDevice device;
      public string deviceType;
      

      public ATdevice(GenICamDevice _device, string _deviceType)
      {
        device = _device;
        deviceType = _deviceType;
      }

    }
    static void Main(string[] args)
    {
      Console.WriteLine("Acquire multiple images from either C5 or C6 cameras.");

      Console.WriteLine("Discovering devices:");
      DiscoveryInformationList discovList = DeviceFactory.Discover();
      List<ATdevice> devices = new List<ATdevice>();
      Console.WriteLine("Discovered Devices: " + discovList.Count);
      
      //Open all Filter Driver devices
      for (int i = 0; i < discovList.Count; i++)
      {
        if (discovList[i].AccessToken.Contains("Filter Driver"))
        {          
          discovList[i].SetParameter("PacketSize", "8192");
          discovList[i].SetParameter("NumBuffer", "50");
          discovList[i].SetParameter("PixelFormat", "0");
          discovList[i].SetParameter("AttachChunk", "1");

          //Check for C5 or C6
          string deviceType = "";
          if (discovList[i].AccessToken.Contains("C6"))
          {
            deviceType = "C6";
          }
          else if (discovList[i].AccessToken.Contains("C5") || discovList[i].AccessToken.Contains("CX"))
          {
            deviceType = "C5";
          }
          else
          {
            deviceType = "Other";
          }
          //Open devices and add them to devices list.
          devices.Add(new ATdevice((DeviceFactory.Open(discovList[i].AccessToken, AcquisitionStack.GenTL) as GenICamDevice), deviceType));
          discovList[i].TryGetProperty(DiscoveryProperties.DeviceUsername, out string deviceUsername);
          Console.WriteLine("Opened " + deviceUsername + " as: " + deviceType);
        }
      }

      //Start Streams
      List<CompositeStream> streams = new List<CompositeStream>();
      for(int i = 0; i < devices.Count; i++)
      {
        streams.Add(devices[i].device.GetStream<CompositeStream>(0));
        streams[i].Start();
      }

      //Acquire 10 Images for each camera
      for(int i = 0; i < 10; i++)
      {
        //Step through all opened devices
        for(int d = 0;d < devices.Count; d++)
        {
          using (var composite = streams[d].Wait(out WaitStatus status))
          {
            using (var nodeMaps = NodeMapDictionary.FromComposite(composite))
            {
              List<Image> outImagelist = new List<Image>();

              //Distinguish between C5 and C6 input images.

              //Using C6 camera
              if (devices[d].deviceType.Equals("C6"))
              {                
                for (int j = 0; j < composite.Count; j++)
                {
                  var part = composite[j];
                  switch (part)
                  {
                    case Stemmer.Cvb.Image partImage:
                      outImagelist.Add(partImage);
                      break;

                    case Stemmer.Cvb.Driver.PfncBuffer buffer: break;
                    default: break; // and more
                  }
                  
                }
              }
              
              //Using C5 camera
              else if (devices[d].deviceType.Equals("C5"))
              {
                Int64 realImgHeight = (devices[d].device.NodeMaps[NodeMapNames.Device]["Cust::ProfilesPerFrame"] as IntegerNode).Value;
                Image image = composite[0] as Image;
                // do something with the composite and the node map
                int countImages = (int)(image.Height / realImgHeight);
                outImagelist = ExtractInputImageC5(image, countImages);
              }              
            }
          }
        }
        Console.WriteLine("Captured: " + i);
      }
      //Abort Streams     
      for (int i = 0; i < devices.Count; i++)
      {        
        streams[i].Abort();
      }
      Console.WriteLine("Finished");
      Console.ReadLine();
    }


    public static List<Image> ExtractInputImageC5(Image InputImage, int countImages)
    {
      //Extract Range, Intensity and Scatter from the interlaced image.
      List<Image> outImagelist = new List<Image>();
      int NumberOfProfiles = InputImage.Height / countImages;
      //Data is transmitted in one single image of the size width * ProfilesPerFrame * 3 at 16 bpp. All three images are in 16 bpp.
      Image RangeImage = new Image(InputImage.Width, NumberOfProfiles, 1, DataTypes.Int16BppUnsigned);
      Image IntensityImage = new Image(InputImage.Width, NumberOfProfiles, 1, DataTypes.Int16BppUnsigned);
      Image ScatterImage = new Image(InputImage.Width, NumberOfProfiles, 1, DataTypes.Int16BppUnsigned);

      LinearAccessData<UInt16> imageDataInputImage = InputImage.Planes[0].GetLinearAccess<UInt16>();
      LinearAccessData<UInt16> imageDataRangeImage = RangeImage.Planes[0].GetLinearAccess<UInt16>();
      LinearAccessData<UInt16> imageDataIntensityImage = IntensityImage.Planes[0].GetLinearAccess<UInt16>();
      LinearAccessData<UInt16> imageDataScatterImage = ScatterImage.Planes[0].GetLinearAccess<UInt16>();
      for (int y = 0; y < NumberOfProfiles; y++)
      {
        for (int x = 0; x < InputImage.Width; x++)
        {
          //Iterate through input image by stepping every third line for each image type.
          imageDataIntensityImage[x, y] = imageDataInputImage[x, y * countImages];
          if(countImages > 1)
            imageDataScatterImage[x, y] = imageDataInputImage[x, (y * countImages) + 1];
          if(countImages > 2)
            imageDataRangeImage[x, y] = imageDataInputImage[x, (y * countImages) + 2];
        }
      }
      outImagelist.Add(RangeImage);
      if(countImages>1)
        outImagelist.Add(IntensityImage);
      if(countImages>2)
        outImagelist.Add(ScatterImage);

      return outImagelist;
    }
  }
}