GetLinearAccess returns incorrect xInc values for 16Bit camera images

Hi,

I work with 3D cameras that transmit 16Bit images to my software via the GenICam driver. I modified one of your examples so that I can access each pixel using the function GetLinearAccess. However, the pixel values do not match my expectations. Also, according to the documentation, the xInc and yInc values returned by this function should be in bytes, yet with the 16Bit image handles xInc is always 1. Shouldn't it be 2 bytes for 16Bit?

I am using CVB version 12.01.003

Hi, you are right! When accessing 16Bit images with GetLinearAccess, xInc should have a value of 2. So in your case it looks like the bit depth of the image you received is really just 8Bit.
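
For illustration, pixel access with GetLinearAccess typically looks roughly like this (a minimal sketch with the C-Interface; theImage, x and y are assumed to exist already, and the increment types shown as intptr_t here may be declared differently in your CVB headers):

void* pBase = NULL;
intptr_t xInc = 0;
intptr_t yInc = 0;
if (GetLinearAccess(theImage, 0, &pBase, &xInc, &yInc))
{
  // with a true 16Bit mono image xInc is 2 and yInc is typically width * 2 (both in bytes)
  unsigned short* pPixel = (unsigned short*)((char*)pBase + x * xInc + y * yInc);
  unsigned short value = *pPixel;
}

If xInc comes back as 1 here, the image handle really only carries 8Bit data.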

If you are sure that you set the image format in the camera to 16Bit, one possible reason might be the GenICam driver. By default, CVB automatically detects the correct preprocessing for the pixel format / colour format the camera delivers (default parameter Auto Detect in the Device Options -> CVB Color Format menu, or directly in the GenICam.ini file).
For example, if the camera delivers 8 Bit Bayer data, CVB converts the image to an RGB 24 image using a 2x2 RGB conversion.
If an 8 Bit Mono image is detected, the raw 8 Bit image is used.

However, with 16Bit images the Auto Detect option maps the 16Bit data into 8Bit image data. Therefore it is necessary to set the CVB color format of the driver to Raw (0).

Hope this helps.


Thank you, this is it! Changing the color format in the GenICam.ini file made it work.

You can also query the data type of an image plane in code like this (C-Interface):

cvbdatatype_t dt = ImageDatatype(theImage, 0);

You can get the bits per pixel like this:

int bitsPerPixel = dt & 0xFF;

Bytes per pixel is a bit more complicated, so CVB has a function for this:

cvbval_t bytesPerPixel = BytesPerPixel(dt);

There are also further flags in the data type, for example for signed or floating point data:

cvbbool_t isFloatingPoint = (dt & DT_Float) != 0;
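
Putting these together, a quick sanity check for the original 16Bit problem could look like this (a small sketch; the expected value 16 assumes a Mono16 format):

cvbdatatype_t dt = ImageDatatype(theImage, 0);
if ((dt & 0xFF) != 16)
{
  // the driver still delivers 8Bit data - check the CVB color format / Raw (0) setting
}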

Is it possible to set the color format in code?

This assumes that you configured the desired Std::PixelFormat via the GenApi:

The simplest way would be to use Device Discovery (which doesn't use the GenICam.ini):

https://forum.commonvisionblox.com/t/raw-color-format-in-device-discovery/253/2?u=parsd

With pre-configuration via the GenICam Browser/Management Console (and thus the GenICam.ini), you need to modify the ini-file itself:

  1. Find the wanted camera section
    (like [Channel_0])
  2. Change the PixelFormat entry in that section as hinted above
    (e.g. PixelFormat = 0 for RAW, see the excerpt below)
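
For example, the relevant part of the GenICam.ini could look like this (the section name and the other entries will differ on your system; only the PixelFormat line needs changing):

[Channel_0]
PixelFormat = 0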