Imager use vs Stream use for C5/C6 acquisition

Hi guys,
I’ll try to do a better job than I did last week with my topic.

So, I have acquisition software, used only with AT CX series cameras, that has worked for many years. We use the C5 as the main camera to get range images, and we use the CVB “imager” tools to get those images. The C5 has a frame rate of 811 Hz, so one profile takes 1/811 s ≈ 1.233 milliseconds, and one image of 16 profiles covers about 19.7 ms of acquisition. We use a buffer count of 300, an AOI 220 pixels high, and an exposure time of 1200 µs.

Here is an example of the code that we use:

int CCameraCVB::NextGrab()
{
	if (m_imager == NULL)
		return -1;

	// Wait for the next acquired image via the classic CVB G2Wait call.
	int ret = (*PTR_G2Wait)(m_imager);
	if (ret < 0) { // Timeout
		m_timeout_count++;
		return -1;
	}
	return 0;
}

This NextGrab method takes ~25 milliseconds the first time we call it, and after the first shot an average of 0.500 milliseconds to execute. I suppose this means there is a delay between the acquisition and the retrieval of the image, due to the acquisition time (16 profiles × 1.233 ms ≈ 19.7 ms per image) that I measure… Or not?
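For reference, here is roughly how I measure those times (a minimal sketch; the helper and the camera instance are just illustrations, not our real code):

#include <chrono>
#include <cstdio>

void MeasureGrabs(CCameraCVB& camera)
{
	// Time each NextGrab() call to compare the first call with the steady state.
	for (int i = 0; i < 100; ++i)
	{
		const auto t0 = std::chrono::steady_clock::now();
		const int ret = camera.NextGrab();
		const auto t1 = std::chrono::steady_clock::now();
		const double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
		printf("grab %3d: ret=%d, %.3f ms\n", i, ret, ms);
	}
}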

With the change to the C6 model, we are trying to upgrade our use of the CVB library and have started rewriting our code to get images from stream objects. The parameters of the C6 are similar to those of the C5 that I described above (same frame rate, same AOI, etc.), so the acquisition time should normally be the same too.

Here is an example of what we wrote:

int CCameraCvbGenicamATC6::UpdateStream() {
	if (m_device == nullptr)
		throw std::runtime_error("cannot access device");

	// Stop and release any previously created streams.
	if (m_stream) {
		m_stream->TryStop();
		m_stream = nullptr;
	}
	if (m_compositeStream) {
		m_compositeStream->TryStop();
		m_compositeStream = nullptr;
	}

	switch (m_device_scan_type)
	{
	default:
	case CCameraCvbGenicamAT::DeviceScanTypeEnums::Areascan:
		return CCameraCvbGenicam::UpdateStream();
	case CCameraCvbGenicamAT::DeviceScanTypeEnums::Linescan3d:
		m_compositeStream = m_device->Stream<Cvb::CompositeStream>();
		if (m_compositeStream == nullptr)
			throw std::runtime_error("cannot access composite stream");
		m_is_grabbing = false;
		// Replace the default flow set pool by a managed pool of 300 buffers.
		m_compositeStream->DeregisterFlowSetPool();
		m_compositeStream->RegisterManagedFlowSetPool(300);
		break;
	}
	return NO_ERROR;
}

int CCameraCvbGenicamATC6::StartGrab() {
	switch (m_device_scan_type) {
	case DeviceScanTypeEnums::Areascan:
		return CCameraCvbGenicam::StartGrab();
	case DeviceScanTypeEnums::Linescan3d:
		if (m_compositeStream == nullptr)
			throw std::runtime_error("cannot access stream");
		if (!m_is_grabbing) {
			m_compositeStream->Start();
			m_is_grabbing = true;
			return NO_ERROR;
		}
		break;
	}
	return ERR_BAD_STATE;
}

int CCameraCvbGenicamATC6::NextGrab() {
	switch (m_device_scan_type)
	{
	case CCameraCvbGenicamATC6::DeviceScanTypeEnums::Areascan:
		return CCameraCvbGenicam::NextGrab();
	case CCameraCvbGenicamATC6::DeviceScanTypeEnums::Linescan3d: {
		if (m_is_grabbing) {
			// Wait for the next composite delivered by the stream.
			Cvb::CompositePtr cvb_composite;
			Cvb::WaitStatus waitStatus;
			Cvb::NodeMapEnumerator enumerator;
			std::tie(cvb_composite, waitStatus, enumerator) = m_compositeStream->Wait();
			switch (waitStatus) {
			default:
			case Cvb::WaitStatus::Abort:
			case Cvb::WaitStatus::Timeout:
				return ERR_TIMEOUT;
			case Cvb::WaitStatus::Ok:
				break;
			}

			// Extract the range image (part 0) from the multipart composite.
			auto multiPart = Cvb::MultiPartImage::FromComposite(cvb_composite);
			auto cvb_image = Cvb::get<Cvb::ImagePtr>(multiPart->GetPartAt(0));
			int cvb_image_width = cvb_image->Width();
			int cvb_image_height = cvb_image->Height();

			int dst_image_depth_width = m_image_depth->GetWidth();
			int dst_image_depth_height = m_image_depth->GetHeight();

			// The destination image must exist and match the source dimensions.
			if (dst_image_depth_width * dst_image_depth_height == 0)
				return -1;
			if (cvb_image_width != dst_image_depth_width || cvb_image_height != dst_image_depth_height)
				return -2;

			unsigned short* pCvb_image = (unsigned short*)cvb_image->Plane(0).LinearAccess().BasePtr();
			if (pCvb_image == nullptr)
				return -3;

			unsigned short** pDst_image_depth = (unsigned short**)m_image_depth->GetLineAccessTable();
			if (pDst_image_depth == nullptr)
				return -3;

			// Copy the CVB buffer line by line into our destination image.
			for (int y = 0; y < cvb_image_height; y++)
			{
				unsigned short* ptr_dst = pDst_image_depth[y];
				for (int x = 0; x < cvb_image_width; x++)
					*ptr_dst++ = pCvb_image[x + y * cvb_image_width];
			}
			return NO_ERROR;
		}
		break;
	}
	}
	return UNKNOWN;
}

As you can see, we create a composite stream to get range images first (and reflectance images too in a second step). This stream has 300 buffers allocated in RAM to make sure we don’t miss images.
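For that second step, the idea would be to read another part of the same composite. A sketch, assuming the camera is configured to also send a reflectance component and that it arrives as part 1 of the multipart image (the actual index depends on the enabled components):

// Sketch: extract a second component from the same composite.
// Assumes reflectance is enabled on the camera and delivered as part 1.
auto multiPart = Cvb::MultiPartImage::FromComposite(cvb_composite);
auto range_image = Cvb::get<Cvb::ImagePtr>(multiPart->GetPartAt(0));
auto reflectance_image = Cvb::get<Cvb::ImagePtr>(multiPart->GetPartAt(1));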

Each call of the new NextGrab method takes an average of 1.6 to 2.2 milliseconds, mostly because of the

std::tie(cvb_composite, waitStatus, enumerator) = m_compositeStream->Wait();

call. That seems logical to me.

In both examples, we stack the 16-profile images into a bigger image to recreate the movement of the objects we have to inspect, applying our own flip X / flip Y / flip Z methods after acquisition.
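The stacking itself is just a copy at a growing line offset; a simplified sketch of the idea (hypothetical buffer names, flips omitted, both images assumed to share the same width):

void AppendChunk(unsigned short* dst, int dst_width,
                 const unsigned short* chunk, int chunk_height, int chunk_index)
{
	// Copy one 16-profile image into the big destination image,
	// starting at the line offset reserved for this chunk.
	const int line_offset = chunk_index * chunk_height;
	for (int y = 0; y < chunk_height; ++y)
		for (int x = 0; x < dst_width; ++x)
			dst[(line_offset + y) * dst_width + x] = chunk[y * dst_width + x];
}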

I have two problems:

  • When I measure the time of the second NextGrab, I get ~25 milliseconds the first time, and then 1.6 to 2.2 milliseconds on average. That’s a lot compared to the times of the first version. I don’t understand why the acquisition time is not the same (I should mention that I didn’t write the imager part, so I may be missing something);
  • My second problem is that this new code gives me missed images when I use 300 buffers, but when I use 811 buffers (following the rule that we allocate as many buffers as we need images in 1 second), I get all the profiles, so that’s perfect! But on my way to mastering this new C6 camera, when I went back to 300 buffers, I got the same image quality and not a degraded one like before…

Here are two examples of the reconstructions: a bad one (first) with missing frames, and a good one (second) with all the frames:

(Both images represent the same object, but not with the same flip X / flip Y / flip Z.)

Thanks in advance to anyone who can answer me!

Best regards,

Hi @AxelDosSantosDoAlto

thank you for your detailed questions.
At the moment I have some trouble following all of your information.

In your code there are some methods that call themselves recursively. I don’t know if this actually happens when you see the 25 ms or if this is simply overhead that never occurs in your tests.

As for the buffers: if I understand correctly, you have no problems with space for 811 images in the buffer, but you see lost frames with space for 300?

To break this problem down, I think what might help is if you write a very minimalistic console application that simply loads your camera, assigns the correct number of buffers and then performs a Wait() call n times.
This way we should be able to measure the time it takes for the Wait to return and see if it is consistent or if we already have a problem here.
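Something along these lines (a sketch against the CVB++ acquisition stack; the discovery call, header paths and error handling may need adjusting to your CVB version):

#include <chrono>
#include <iostream>
#include <tuple>

#include <cvb/device_factory.hpp>
#include <cvb/composite_stream.hpp>

int main()
{
	// Open the first discovered device (adjust the flags/index to your setup).
	auto devices = Cvb::DeviceFactory::Discover(Cvb::DiscoverFlags::IgnoreVins);
	auto device = Cvb::DeviceFactory::Open(devices.at(0).AccessToken(),
	                                       Cvb::AcquisitionStack::GenTL);

	auto stream = device->Stream<Cvb::CompositeStream>();
	stream->RegisterManagedFlowSetPool(300); // same buffer count as in your code
	stream->Start();

	// Time n Wait() calls to see whether the return time is consistent.
	for (int i = 0; i < 1000; ++i)
	{
		const auto t0 = std::chrono::steady_clock::now();
		Cvb::CompositePtr composite;
		Cvb::WaitStatus status;
		Cvb::NodeMapEnumerator nodes;
		std::tie(composite, status, nodes) = stream->Wait();
		const auto t1 = std::chrono::steady_clock::now();
		std::cout << i << ": status=" << static_cast<int>(status) << ", "
		          << std::chrono::duration<double, std::milli>(t1 - t0).count()
		          << " ms\n";
	}

	stream->TryStop();
	return 0;
}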

I think this will help a lot

Cheers
Chris

Hi @Chris, thanks for your reply.

I found the reason for my problem. I was using the C6’s Cust::SensorRateMax node to get the maximum frame rate I can reach in free run, without knowing that this value is only updated after at least one picture has been taken… So I was always getting a lower frame rate than I expected, and it was the frame rate of the full 2D image…
It’s sometimes hard to know which node to use to get the full performance out of the C6 camera.
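For anyone else who hits this, this is roughly how we read the node now, after grabbing at least one frame (a sketch; the "Device" node map name and the float node type are assumptions on my side):

// Sketch: read Cust::SensorRateMax only after at least one frame has
// been acquired, since the value is only updated from that point on.
// The "Device" node map name and Cvb::FloatNode type are assumptions.
auto nodeMap = m_device->NodeMap(CVB_LIT("Device"));
auto rateNode = nodeMap->Node<Cvb::FloatNode>(CVB_LIT("Cust::SensorRateMax"));
double max_rate = rateNode->Value();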

Thanks for your reply anyway. :slight_smile:

Hi @AxelDosSantosDoAlto

great to hear you figured that out.

One more thing I saw when reading your code:
The stream has a property that indicates whether it is still running or not.
So you might want to check this property rather than m_is_grabbing, just to be sure you actually have the correct state.
Stream.IsRunning might turn false without your member noticing, which might result in timeouts or exceptions.
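For example (a sketch; assuming the C++ stream exposes this state as IsRunning()):

// Sketch: query the stream's own state instead of a cached member flag.
if (m_compositeStream == nullptr || !m_compositeStream->IsRunning())
	return ERR_BAD_STATE;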

Cheers
Chris