Our software uses the CVB parameters DataType and BitsPerPixel to determine the pixel format of the camera. It seems the CVB behaviour has changed from 14.0 (and earlier) to 14.1.
| PixelFormat in-camera | CVB 14.0                         | CVB 14.1                         |
|-----------------------|----------------------------------|----------------------------------|
| mono8                 | DataType = 8, BitsPerPixel = 8   | DataType = 8, BitsPerPixel = 8   |
| mono10                | DataType = 10, BitsPerPixel = 10 | DataType = 16, BitsPerPixel = 16 |
| mono12                | DataType = 12, BitsPerPixel = 12 | DataType = 16, BitsPerPixel = 16 |
As you can see, the bit depth is no longer reported as the camera's native depth in 14.1 (or at least not in the same way as before).
Am I missing something? Is this the intended behaviour, or is there perhaps a better approach or a newly added function in CVB 14.1?
AFAIK we are using the (deprecated) .vin stack with C.
In 14.1 the images are transferred fine; only the detection of the bit depth has changed from previous versions.
Thank you, I will dig in and look into what you are reporting here, but it would be great if you could say more about how you access the values you call "CVB parameters". There is an export in iCVCImg.h called ImageDatatype that returns a cvbdatatype_t for an image (IMG). Is that what you mean?
I cross-checked the behaviour with a simple camera and a changed pixel format, where it worked as expected. However, this function can be used in several contexts, so we need more specifics about your application. Could you break it down to a minimal working example and provide it?
Sorry for the long delay.
Here is some example code:
```cpp
// TestCVB
#include <iostream>
#include <fstream>

#include <iCVCDriver.h> // CanCameraSelect2, CS2SetCamPort
#include <iCVCImg.h>    // IMG, LoadImageFile, ImageDatatype, ...

static int TestCVBCam(const char* pCVBDriver, int camPort)
{
    std::cout << "\nCamera driver: " << pCVBDriver;

    // check if the CVB camera driver file exists
    std::fstream fs;
    fs.open(pCVBDriver, std::fstream::in);
    if (fs.fail())
    {
        std::cout << "\nDriver file not found!";
        return -1;
    }
    fs.close();

    // open the CVB camera driver
    std::cout << "\nTry to load camera driver... ";
    IMG cvImg = nullptr;
    cvbbool_t bResultLoadFile = LoadImageFile(pCVBDriver, cvImg);
    if (bResultLoadFile)
    {
        std::cout << "OK";
    }
    else
    {
        std::cout << "FAILED!";
        return -2;
    }

    // try to select the given camera port
    int cameraPort = 0;
    if (CanCameraSelect2(cvImg))
    {
        IMG cvImgLocal = nullptr;
        cvbres_t result = CS2SetCamPort(cvImg, camPort, 5, cvImgLocal);
        if (IsImage(cvImgLocal))
        {
            if (cvImgLocal != cvImg)
            {
                result = ReleaseImage(cvImg);
                cvImg = cvImgLocal;
                cameraPort = camPort;
            }
            else
            {
                result = ReleaseImage(cvImgLocal);
            }
        }
        else
        {
            std::cout << "\nFailed to select camera on port " << camPort;
            result = ReleaseImage(cvImg);
            return -3;
        }
    }

    // read some camera pixel format properties
    int nCamDimX = ImageWidth(cvImg);
    int nCamDimY = ImageHeight(cvImg);
    TColorModel colorModel = ImageColorModel(cvImg);
    cvbdatatype_t dataType = ImageDatatype(cvImg, 0);
    cvbval_t bitsPerPixel = BitsPerPixel(dataType);
    cvbval_t bytesPerPixel = BytesPerPixel(dataType);

    std::cout << "\nSelected camera port: " << cameraPort;
    std::cout << "\nDimensions: " << nCamDimX << " x " << nCamDimY << " pixels";
    // ColorModel2String is an application-defined helper
    std::cout << "\nImageColorModel: " << colorModel << " (" << ColorModel2String(colorModel).c_str() << ")";
    std::cout << "\nImageDataType: " << dataType;
    std::cout << "\nBitsPerPixel: " << bitsPerPixel;
    std::cout << "\nBytesPerPixel: " << bytesPerPixel;

    cvbval_t ret = ReleaseImage(cvImg);
    return 0;
}
```
For a test camera set to PixelFormat mono12, this produces the following results (no camera settings were changed between runs):
| CVB 14.00.006                        | CVB 14.01.001                        |
|--------------------------------------|--------------------------------------|
| Try to load camera driver… OK        | Try to load camera driver… OK        |
| Selected camera port: 0              | Selected camera port: 0              |
| Dimensions: 1920 x 1200 pixels       | Dimensions: 1920 x 1200 pixels       |
| ImageColorModel: -1 (CM_Guess_Mono)  | ImageColorModel: -1 (CM_Guess_Mono)  |
| ImageDataType: 12                    | ImageDataType: 16                    |
| BitsPerPixel: 12                     | BitsPerPixel: 16                     |
| BytesPerPixel: 2                     | BytesPerPixel: 2                     |
Not being able to correctly detect the camera's bit depth breaks our software in its current implementation.
Note: for PixelFormat mono12p, the detection curiously works correctly.