Modifying individual PixelFormat settings for Dual streams of a Multispectral Camera

Dear team,

We are working with a multi-spectral camera that is capable of outputting two streams (RGB and NIR). We have been able to use C++ to modify NodeMap parameters and access the images of these streams. We are using code similar to the one from a discussion on accessing dual streams of a camera, as linked here.

As a result of this implementation, we get two streams at different addresses and both of them can output images. The RGB images (first stream) come out fine, but the NIR images (second stream) sometimes show a blank white image. Sometimes these NIR images show just one horizontal white line and the rest of the image is black.

The camera provides only one NodeMap.

In the GenICam browser, setting the correct pixel format for NIR (i.e. Component Selector > NIR, Pixel Format > Mono8) shows a pixelated output. When we move something under the camera, we can see the motion in the image display window, but this pixelation / white noise is overlaid on top of it.

A discussion here mentions that if the image has noise, it may be a connectivity issue. The descriptors have been set to their maximum values, and jumbo frames have been activated in the device configuration (vin file) and on the network card. The RGB images in the GenICam browser are displayed fine.

I would be grateful if you could point me to a class I might need to use to configure the PixelFormats of both (synchronous, same FPS) streams (RGB & NIR) independently and then output the images through code, or tell me if there is an error I might be missing on the GenICam side. I see a MultiStreamHandler class with a Setup() method; would it be of relevance here?

It should be noted that this camera provides two ways to output multispectral images; I am currently trying the second method.

I look forward to hearing your comments. Wishing you a great afternoon,

Kind regards,
Maggi

Hi @maggi ,

Can you give us more information on your setup?
CVB version, OS and so on…

Cheers
Chris

Hi @maggi,

First, I do not believe you have transport issues. The image you are showing suggests there is something wrong regarding the pixel format (i.e. it is misinterpreted by the code/driver).

Do you by any chance have a property called “Std::SourceSelector” or “SourceSelector” in your nodemap?
If yes, this needs to be adjusted before opening the individual streams.
I.e. go to “DS1”, change the SourceSelector to “Source1”, then press Play; then go to “DS0”, select “Source0”, then press Play.

Hi @maggi

Did my reply help?

Hi @c.hartmann

Apologies for the delay in response. The pixelated output was indeed gone after properly setting the PixelFormat node settings in the GenICam browser (specifically RGB, RGB8 to enable DS0 and NIR, Mono8 to enable DS1).

As we are working more on the C++ code, we need to be able to access the individual PixelFormat settings of both streams simultaneously to enable dual-stream output. If we use the Device nodemap, how can we set the nodemap parameters separately for the individual streams?

We are accessing multiple streams, so we are opening the camera via the generic transport layer, but the nodemap is part of the Device. Do you have an idea whether there is a way to link the nodemap with the generic transport layer so that we are able to access the individual PixelFormat settings?

The code we are currently using is as follows:

    // open the device via the GenTL acquisition stack
    auto line_scan_device = Cvb::DeviceFactory::Open(infoList[device_index].AccessToken(), Cvb::AcquisitionStack::GenTL);

    // vector to take all the streams
    std::vector<Cvb::ImageStreamPtr> streams;
    std::generate_n(std::back_inserter(streams), line_scan_device->StreamCount(), [&line_scan_device, i = 0]() mutable
    {
      return line_scan_device->Stream<Cvb::ImageStream>(i++);
    });

    // the only nodemap the camera provides
    auto device_node_map = line_scan_device->NodeMap(CVB_LIT("Device"));

    std::cout << "Total Streams Available : " << streams.size() << std::endl;

The CVB version we are using is 13.04.006 and the OS is Windows 10. Thanks a lot and I wish you a great day ahead,

Hi @maggi,

As we are working more on the C++ code, we need to be able to access the individual PixelFormat settings of both streams simultaneously to enable dual-stream output. If we use the Device nodemap, how can we set the nodemap parameters separately for the individual streams?

I would be pragmatic and set the appropriate PixelFormat after opening the device, but before creating the streams.

i.e. (a rough sketch follows after the list):

0. open the device
1. auto device_node_map = line_scan_device->NodeMap(CVB_LIT("Device"));
2. change the SourceSelector to the first stream
3. set the appropriate PixelFormat
4. change the SourceSelector to the second stream
5. set the appropriate PixelFormat (like 3.)
6. open the streams
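
A rough sketch of those steps (purely illustrative: the node names "SourceSelector"/"PixelFormat", their enumeration entries and the Cvb::EnumerationNode / SetValue() usage are assumptions and may need adapting to your camera's nodemap):

    // open the device via the GenTL acquisition stack (as in your snippet)
    auto line_scan_device = Cvb::DeviceFactory::Open(infoList[device_index].AccessToken(), Cvb::AcquisitionStack::GenTL);

    // the single device nodemap
    auto device_node_map = line_scan_device->NodeMap(CVB_LIT("Device"));

    // assumption: SourceSelector and PixelFormat are enumeration nodes
    auto source_selector = device_node_map->Node<Cvb::EnumerationNode>(CVB_LIT("SourceSelector"));
    auto pixel_format = device_node_map->Node<Cvb::EnumerationNode>(CVB_LIT("PixelFormat"));

    // first stream (e.g. RGB)
    source_selector->SetValue(CVB_LIT("Source0"));
    pixel_format->SetValue(CVB_LIT("RGB8"));

    // second stream (e.g. NIR)
    source_selector->SetValue(CVB_LIT("Source1"));
    pixel_format->SetValue(CVB_LIT("Mono8"));

    // only now create / open the streams
    std::vector<Cvb::ImageStreamPtr> streams;
    for (int i = 0; i < line_scan_device->StreamCount(); ++i)
      streams.push_back(line_scan_device->Stream<Cvb::ImageStream>(i));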

Otherwise just build a tiny wrapper around the stream, which also gets the nodemap ptr from the device.
Alternatively you can just access the device from a stream via StreamBase::Parent().
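
For the StreamBase::Parent() route, a rough sketch (assuming Parent() hands back the owning device, as stated above):

    auto stream = line_scan_device->Stream<Cvb::ImageStream>(0);
    auto parent_device = stream->Parent();
    // reach the same device nodemap from the stream
    auto node_map = parent_device->NodeMap(CVB_LIT("Device"));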

Dear @c.hartmann ,

Thanks for the response. This looks like a plausible method.

There is no SourceSelector property in our GenICam browser, even with the property visibility set to Guru level. The closest we can see is:

Device Link Selector (Std::DeviceLinkSelector)
Selects which Link of the device to control.

Full Name: Std::DeviceLinkSelector
Type: Integer
Access Mode: Read/Write
Visibility: Beginner
Caching Mode: Write Through
Streamable: True
Minimum: 0
Maximum: 0
Increment: 1
Representation: Pure Number

Do you have an idea where we could find it? Thanks a lot for the reply.

Hi @maggi,

Sorry, the SourceSelector is used in generic “multi stream” scenarios. In your case the “ComponentSelector” applies (instead of the SourceSelector). I am silly :expressionless:

Hi @c.hartmann

We just tried the ComponentSelector, but this particular setting seems to be linked for both streams at the backend. Changing the ComponentSelector value in DS0 causes the ComponentSelector value in DS1 to change to the same value automatically, and vice versa. Do you have an idea if there is another setting that may be causing this linking behaviour?

Have a great day ahead,
Kind regards,

Hi @maggi

There is no linking behaviour; it is the same resource, and there is also no way to unlink it.
I.e. there is only one ComponentSelector regardless of how many streams are opened, and changing it in one data stream will affect the other one.

It is designed this way in the GenICam SFNC document.
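
So, translated to your camera, the sequence from above would simply use the shared ComponentSelector on the single device nodemap before starting acquisition. A sketch (the entry names "RGB"/"NIR" are taken from your GenICam browser description and may differ on your device):

    // assumption: ComponentSelector and PixelFormat are enumeration nodes on the device nodemap
    auto component_selector = device_node_map->Node<Cvb::EnumerationNode>(CVB_LIT("ComponentSelector"));
    auto pixel_format = device_node_map->Node<Cvb::EnumerationNode>(CVB_LIT("PixelFormat"));

    // RGB component -> RGB8 (DS0)
    component_selector->SetValue(CVB_LIT("RGB"));
    pixel_format->SetValue(CVB_LIT("RGB8"));

    // NIR component -> Mono8 (DS1)
    component_selector->SetValue(CVB_LIT("NIR"));
    pixel_format->SetValue(CVB_LIT("Mono8"));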

Hi @c.hartmann

Thanks for the response. We will try this out and let you know.

While trying this, we aim to set the buffer size. We are opening the camera as:

auto device = Cvb::DeviceFactory::Open(infoList[device_index].AccessToken(), Cvb::AcquisitionStack::GenTL);

and then accessing the ring buffer as follows results in the error below:

device->Stream()->RingBuffer()->Count();

Failed to open the device: legacy stream cannot be created from handle

Do you have an idea what could be causing this, or any direction we could explore to troubleshoot the issue? Kind regards

Yes, essentially you are mixing APIs / stacks. To set the ring buffer size with AcquisitionStack::GenTL you need to use RegisterManagedFlowSetPool(…) (https://help.commonvisionblox.com/NextGen/14.0/cvbpp/d6/d84/class_cvb_1_1_driver_1_1_composite_stream_base.html#a42c8dcf66f58afa436cb8a5c3c092fd4).
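
A sketch of the per-stream call (assuming RegisterManagedFlowSetPool() takes the number of buffers/flow sets to allocate; see the linked CompositeStreamBase documentation for the exact overloads):

    // GenTL stack: allocate the buffer pool per stream before starting acquisition
    auto stream_rgb = line_scan_device->Stream<Cvb::ImageStream>(0);
    auto stream_nir = line_scan_device->Stream<Cvb::ImageStream>(1);

    stream_rgb->RegisterManagedFlowSetPool(50);
    stream_nir->RegisterManagedFlowSetPool(50);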

But a simpler way to do this is to manipulate the infoList[device_index] before opening the camera.

    // find all cams
    auto tokens = Cvb::DeviceFactory::Discover(flags);

    // open all cams serially
    for (auto& token : tokens)
    {
      // set the number of acquisition buffers before the device is opened
      token.SetParameter("NumBuffer", "50");

      auto device = Cvb::DeviceFactory::Open(token.AccessToken());
    }

Hi @c.hartmann
So, as @maggi was trying out with the buffers: if I set the buffer count to 50 for the dual-stream camera, will the count of 50 be set for each individual stream, or will it be divided into 25 each?

I am not entirely certain how it is implemented, but each stream should get 50 buffers.

If it doesn't, … will set it for the individual stream.

Hi @c.hartmann
Just a small question. When I use the RegisterManagedFlowSetPool(…) function to set the buffers, I need to set it individually for each stream, right? E.g. stream_1->RegisterManagedFlowSetPool(NumBuffers) and stream_2->RegisterManagedFlowSetPool(NumBuffers).

If this is the case, can I set a different number of buffers for each stream, e.g. 100 buffers for stream 1 and 50 for stream 2?

Yes, you have to call it for each stream individually.

Streams at this point should be viewed as independent.
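
So, picking up the example from your question (sketch):

    stream_1->RegisterManagedFlowSetPool(100);  // 100 buffers for stream 1
    stream_2->RegisterManagedFlowSetPool(50);   // 50 buffers for stream 2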


Hello @c.hartmann
Just a question regarding the buffers set via RegisterManagedFlowSetPool(). Do these buffers behave the same way as the ring buffer?

Consider this scenario:
I set the buffer size to 50 for one of my streams and the buffers are allocated as buffer[0], buffer[1], buffer[2], …, buffer[49]. Now the camera pushes the first image to buffer[0], the next to buffer[1] and so on. In my code, when I do stream->Wait() or stream->WaitFor(TimeOut), the image present in buffer[0] is read out; once I release the memory by saving the image, buffer[1] moves to buffer[0], buffer[2] moves to buffer[1] and so on, right?

Will the Wait() function always point to buffer[0]? If so, do I have to manually free the memory using delete, or will it happen automatically? (I know the images are given as shared pointers, so the memory should be freed automatically.) Is my understanding of the flow correct?

Each buffer is requeued into the pool once your buffer (the return value of Wait()/WaitFor()) goes out of scope, i.e. the Cvb::ImagePtr is destructed.

This might be important: if you keep the Cvb::ImagePtr alive (forever), the memory will (never) be overwritten, i.e. it is as if you had one less buffer available for acquisition.

We (and GenTL) have a strict model of lifetime and ownership over buffers.

No, it might point to buffer[0] (this is statistically the most likely scenario).
It is a buffer POOL, meaning there is no specific order of returning.

buffers returned might look like this (talking about memory layout here):
image 0: buffer[0]
image 1: buffer[0]
image 2: buffer[0]
image 3: buffer[1]
image 4: buffer[2]
image 5: buffer[0]
image 6: buffer[1]

The underlying technology is the GenTL standard, which also uses a pool.

For our GenTL implementation for GigEVision a std::vector is used:
std::vector<std::shared_ptr<CBufferInfo>>
(Re-)queueing is implemented via push_back(), so it will behave like a queue, i.e. our implementation of the “pool” is really a queue. But this is internal.

So … (talking about memory layout here)
image 0: buffer[0]
image 1: buffer[1]
image 2: buffer[2]
image 3: buffer[3] // I am just assuming we have only 4 buffers.
image 4: buffer[0]
image 5: buffer[1]
image 6: buffer[2]
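
To make the lifetime rule concrete, a sketch of an acquisition loop (assuming the WaitFor() overload that returns the image, a wait status and the node maps, as in the CVB streaming examples; details may differ per version):

    // stream: one of your Cvb::ImageStreamPtr objects
    stream->Start();
    for (int i = 0; i < 100; ++i)
    {
      auto [image, wait_status, node_maps] = stream->WaitFor(std::chrono::seconds(10));
      if (wait_status != Cvb::WaitStatus::Ok)
        continue; // timeout / abort: no buffer was handed out

      // ... use "image" here (e.g. hand it over to a writer thread) ...

    } // "image" goes out of scope here -> its buffer is requeued into the pool
    stream->Stop();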

This is the point where I should ask about your use case.
Do you intend to hold ownership of these buffers longer?
We do have ringbuffer APIs available, but what do you intend to do?

Hi @c.hartmann
Thank you for the explanation.

So, What I am doing is this

I have a local queue to which the buffers from Wait()/WaitFor() are enqueued. Now, I have two writer threads which access the front of this local queue, and the images are written to disk (I am locking and unlocking the local queue before popping them out). So now, whenever I save an image to disk, the buffer memory set by RegisterManagedFlowSetPool() is freed.

I would hold on to the buffer until the images are saved to the disk.
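
For reference, a very small sketch of that hand-off (illustrative only; Image::Save() and the exact types are assumptions, the point is that the pool buffer is only requeued once the last shared pointer to the image is gone):

    #include <mutex>
    #include <queue>

    std::queue<Cvb::ImagePtr> pending_images; // images waiting to be written to disk
    std::mutex queue_mutex;

    // acquisition side: keep the shared pointer alive by enqueuing it
    void Enqueue(const Cvb::ImagePtr& image)
    {
      std::lock_guard<std::mutex> lock(queue_mutex);
      pending_images.push(image);
    }

    // writer side (run by each writer thread)
    bool SaveNext()
    {
      Cvb::ImagePtr image;
      {
        std::lock_guard<std::mutex> lock(queue_mutex);
        if (pending_images.empty())
          return false;
        image = pending_images.front();
        pending_images.pop();
      }
      image->Save(CVB_LIT("out.bmp")); // assumption: Image::Save() writes the image to file
      return true;                     // "image" destructs here -> buffer goes back to the pool
    }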

Now, I would like to ask if there is a way to know the number of buffers that are already filled? I mean the pool buffers.

I see that there are two functions, FlowSetCount() and FlowSetInfo(). Does FlowSetCount() give the total number of buffers allocated, or does it give the number of buffers that currently hold images?

Also, what does FlowSetInfo() give? This function is a little unclear to me.

Thanks in advance