I work with 3D cameras that transmit 16-bit images to my software via the GenICam driver. I modified one of your examples so that I can access each pixel using the function GetLinearAccess. However, the pixel values do not match my expectations. According to the documentation, the xInc and yInc values returned by this function should be in bytes, but with my 16-bit image handles xInc is always 1. Shouldn't that be 2 bytes for a 16-bit image?
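For reference, this is roughly the access pattern I use, simplified down to the relevant calls (I am assuming the C-style declarations from iCVCImg.h here; the exact parameter types may differ between CVB versions):

```cpp
#include <cstdio>
#include <cstdint>
#include <iCVCImg.h>

void DumpIncrements(IMG img)
{
  void    *pBase = nullptr;
  intptr_t xInc  = 0;
  intptr_t yInc  = 0;

  // Linear access to plane 0; xInc/yInc are documented as byte offsets.
  if (!GetLinearAccess(img, 0, &pBase, &xInc, &yInc))
    return;

  // For a Mono16 image I would expect xInc == 2, but I always get 1.
  std::printf("xInc = %lld, yInc = %lld\n",
              static_cast<long long>(xInc), static_cast<long long>(yInc));
}
```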
Hi, you are right! When accessing 16-bit images with GetLinearAccess, xInc should have a value of 2. So in your case it looks like the bit depth of the image you received is actually only 8 bit.
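You can verify this directly from the image handle. A minimal sketch, assuming the C-style ImageDatatype/BitsPerPixel declarations from the CVB Image DLL (check iCVCImg.h for the exact signatures in your version):

```cpp
#include <cstdio>
#include <iCVCImg.h>

void PrintBitDepth(IMG img)
{
  cvbdatatype_t dt = ImageDatatype(img, 0);   // datatype descriptor of plane 0
  std::printf("bits per pixel: %d\n", static_cast<int>(BitsPerPixel(dt)));
  // A value of 8 means the driver has already reduced the data to 8 bit.
}
```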
If you are sure that you set the image format in the camera to 16 bit, one possible reason might be the GenICam driver. By default, CVB automatically detects the appropriate preprocessing for the pixel/colour format the camera delivers (the default parameter Auto Detect under Device Options -> CVB Color Format, or directly in the GenICam.ini file).
For example, if the camera delivers 8-bit Bayer data, CVB converts the image to an RGB24 image using a 2x2 RGB conversion.
If an 8-bit mono image is detected, the raw 8-bit image is used as-is.
With 16-bit images, however, the Auto Detect option maps the 16-bit data down to 8-bit image data. Therefore it is necessary to set the CVB Color Format of the driver to Raw (0).
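Once the driver is set to Raw and really delivers Mono16 data, xInc should come back as 2 and you can read the pixels as 16-bit values. A sketch under the same assumptions as above (C-style API, error handling omitted):

```cpp
#include <cstdint>
#include <iCVCImg.h>

uint16_t PixelAt(IMG img, int x, int y)
{
  void    *pBase = nullptr;
  intptr_t xInc  = 0;
  intptr_t yInc  = 0;

  if (!GetLinearAccess(img, 0, &pBase, &xInc, &yInc))
    return 0;

  // xInc/yInc are byte offsets, so do the address arithmetic on a byte
  // pointer and only then reinterpret the result as a 16-bit value.
  const uint8_t *p = static_cast<const uint8_t*>(pBase)
                   + static_cast<intptr_t>(x) * xInc
                   + static_cast<intptr_t>(y) * yInc;
  return *reinterpret_cast<const uint16_t*>(p);
}
```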