Hi guys,
I'll try to do a better job than last week with my topic.
So: I have acquisition software, used only with AT CX-series cameras, that has worked for many years. We use the C5 as the main camera to get range images, and we use the CVB "imager" tools to retrieve them. The C5 runs at a frame rate of 811 Hz, so it takes 1.233 milliseconds to take one image of 16 profiles; we use a buffer count of 300, an AOI 220 pixels high, and an exposure time of 1200.
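To make the timing explicit, here is the back-of-the-envelope arithmetic I am relying on (a standalone sketch, not CVB code; the helper name is mine):

```cpp
#include <cassert>

// Hypothetical helper: period in milliseconds for a given frame rate in Hz.
constexpr double frame_period_ms(double frame_rate_hz) {
    return 1000.0 / frame_rate_hz;
}
```

At 811 images per second, one 16-profile image should land every ~1.233 ms, which is the number I compare my measurements against.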
Here is an example of the code we use:
int CCameraCVB::NextGrab()
{
    if (m_imager == NULL) return -1;
    int ret = (*PTR_G2Wait)(m_imager);
    if (ret < 0) { // Timeout
        m_timeout_count++;
        return -1;
    }
    return 0;
}
This NextGrab method takes ~25 milliseconds the first time we call it; after the first shot, it averages 0.5 milliseconds. I suppose there is a delay between the acquisition and the retrieval of the acquired image, due to the acquisition time, and that is what I am measuring… or not?
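For completeness, this is roughly how I measure those times (a simplified sketch; the callable passed in stands in for NextGrab(), everything else is hypothetical):

```cpp
#include <cassert>
#include <chrono>

// Time a single call and return the elapsed wall-clock time in milliseconds.
template <typename F>
double time_call_ms(F&& call) {
    const auto t0 = std::chrono::steady_clock::now();
    call();  // e.g. camera.NextGrab() in our real code
    const auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}
```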
With the change to the C6 model, we are trying to upgrade our use of the CVB library and have started rewriting our code to get images from stream objects. The C6 parameters are similar to those of the C5 described above (same frame rate, same AOI, etc.), so normally the acquisition time should be the same too.
Here is an example of what we wrote:
int CCameraCvbGenicamATC6::UpdateStream() {
    if (m_device == nullptr)
        throw std::exception("cannot access device");
    if (m_stream) {
        m_stream->TryStop();
        m_stream = nullptr;
    }
    if (m_compositeStream) {
        m_compositeStream->TryStop();
        m_compositeStream = nullptr;
    }
    switch (m_device_scan_type)
    {
    default:
    case CCameraCvbGenicamAT::DeviceScanTypeEnums::Areascan:
        return CCameraCvbGenicam::UpdateStream();
    case CCameraCvbGenicamAT::DeviceScanTypeEnums::Linescan3d:
        m_compositeStream = m_device->Stream<Cvb::CompositeStream>();
        if (m_compositeStream == nullptr)
            throw std::exception("cannot access composite stream");
        m_is_grabbing = false;
        m_compositeStream->DeregisterFlowSetPool();
        m_compositeStream->RegisterManagedFlowSetPool(300);
        break;
    }
    return NO_ERROR; // this return was missing on the Linescan3d path
}
int CCameraCvbGenicamATC6::StartGrab() {
    switch (m_device_scan_type) {
    case DeviceScanTypeEnums::Areascan:
        return CCameraCvbGenicam::StartGrab();
    case DeviceScanTypeEnums::Linescan3d:
        if (m_compositeStream == nullptr)
            throw std::exception("cannot access stream");
        if (!m_is_grabbing) {
            m_compositeStream->Start();
            m_is_grabbing = true;
            return NO_ERROR;
        }
        break;
    }
    return ERR_BAD_STATE;
}
int CCameraCvbGenicamATC6::NextGrab() {
    switch (m_device_scan_type)
    {
    case CCameraCvbGenicamATC6::DeviceScanTypeEnums::Areascan:
        return CCameraCvbGenicam::NextGrab();
    case CCameraCvbGenicamATC6::DeviceScanTypeEnums::Linescan3d: {
        if (m_is_grabbing) {
            Cvb::CompositePtr cvb_composite;
            Cvb::WaitStatus waitStatus;
            Cvb::NodeMapEnumerator enumerator;
            std::tie(cvb_composite, waitStatus, enumerator) = m_compositeStream->Wait();
            switch (waitStatus) {
            default:
            case Cvb::WaitStatus::Abort:
            case Cvb::WaitStatus::Timeout:
                return ERR_TIMEOUT;
            case Cvb::WaitStatus::Ok:
                break;
            }
            auto multiPart = Cvb::MultiPartImage::FromComposite(cvb_composite);
            auto cvb_image = Cvb::get<Cvb::ImagePtr>(multiPart->GetPartAt(0));
            int cvb_image_width = cvb_image->Width();
            int cvb_image_height = cvb_image->Height();
            int dst_image_depth_width = m_image_depth->GetWidth();
            int dst_image_depth_height = m_image_depth->GetHeight();
            if (dst_image_depth_width * dst_image_depth_height == 0)
                return -1;
            if (cvb_image_width != dst_image_depth_width || cvb_image_height != dst_image_depth_height)
                return -2;
            unsigned short* pCvb_image = (unsigned short*)cvb_image->Plane(0).LinearAccess().BasePtr();
            if (pCvb_image == nullptr)
                return -3;
            unsigned short** pDst_image_depth = (unsigned short**)m_image_depth->GetLineAccessTable();
            if (pDst_image_depth == nullptr)
                return -3;
            // Copy the acquired frame line by line into the destination image.
            for (int y = 0; y < cvb_image_height; y++) {
                unsigned short* ptr_dst = pDst_image_depth[y];
                for (int x = 0; x < cvb_image_width; x++)
                    *ptr_dst++ = pCvb_image[x + y * cvb_image_width];
            }
            return NO_ERROR;
        }
        break;
    }
    }
    return UNKNOWN;
}
As you can see, we create a composite stream to get range images first (and reflectance images too, in a second step). The stream has 300 buffers allocated in RAM to make sure we do not miss any images.
Calling the new NextGrab method takes 1.6 to 2.2 milliseconds on average, mostly because of the
std::tie(cvb_composite, waitStatus, enumerator) = m_compositeStream->Wait();
call. That seems logical to me.
In both examples, we put the 16-profile images into a bigger image to recreate the movement of the objects we have to inspect, applying our own flip X / flip Y / flip Z methods after acquisition.
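To show what I mean by "putting the 16-profile images into a bigger image", here is a minimal sketch of the stitching step (made-up names and plain buffers, not our production code or the CVB API):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Append one acquired frame (frame_h rows of width w, row-major uint16)
// into the big destination image, starting at the given destination row.
void append_frame(std::vector<uint16_t>& dst, int w,
                  const uint16_t* frame, int frame_h, int dst_row_offset) {
    std::memcpy(dst.data() + static_cast<size_t>(dst_row_offset) * w,
                frame,
                static_cast<size_t>(frame_h) * w * sizeof(uint16_t));
}
```

In the real code each frame is 16 rows high, so successive frames land at row offsets 0, 16, 32, …, and the flips are applied afterwards.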
I have two problems:
- When I measure the time of the second NextGrab, I get ~25 milliseconds the first time, and then… 1.6 to 2.2 milliseconds on average. That is a lot compared to the first version's times. I don't understand why the acquisition time is not the same (I should mention that I did not write the imager part, so I may be missing something);
- My second problem is that this new code misses images when I use 300 buffers, but when I use 811 buffers (following our rule of allocating as many buffers as we need images per second), I get all the profiles, which is perfect! But strangely, while getting to grips with this new C6 camera, when I go back to 300 buffers I now get the same image quality, not a degraded one like before…
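Our buffer-count rule of thumb, written out explicitly (my own sketch, not a CVB function):

```cpp
// Rule of thumb: allocate as many buffers as images are produced in the
// time window we want to be able to absorb without losing frames.
constexpr int buffers_needed(double frame_rate_hz, double seconds_to_cover) {
    return static_cast<int>(frame_rate_hz * seconds_to_cover + 0.5);
}
```

By that arithmetic, 811 buffers cover a full second at 811 images/s, while 300 buffers only cover about 0.37 s.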
Here are two example reconstructions: the first one (bad) with missing frames, and the second one (good) with all the frames:
(Both images represent the same objects, but not with the same flip X / flip Y / flip Z.)
Thanks in advance to anyone who can answer me!
Best regards,