MultiStreamHandler can't handle cvb.ImageStreams from Multispectral Camera

Hi,

I’m trying to follow the QmlStreamDisplay example (https://help.commonvisionblox.com/API/Python/cvb_2_qml_stream_display-example.html) to get a stream from a multispectral camera (jai Fusion with 3 sensors), which works as long as I only capture one single stream (cvb.Stream).

However, it is not possible to capture all of the camera's streams in parallel this way, so I followed the approach described here https://help.commonvisionblox.com/API/C++/gen3acq.html#multiStream for multi-stream acquisition with the GenTL acquisition stack, which requires opening the device streams as ImageStream objects.

        discover = cvb.DeviceFactory.discover_from_root(flags=cvb.DiscoverFlags.IgnoreVins)
        device_info = discover[0]

        # open the camera with the GenTL acquisition stack
        self.dev = cvb.DeviceFactory.open(device_info.access_token, cvb.AcquisitionStack.GenTL)
        self.dev_node_map = self.dev.node_maps["Device"]
        self.stream = [[], [], []]
        self.img = [[], [], []]
        # open one ImageStream per sensor (3 on the jai Fusion)
        for n_stream in range(self.dev.stream_count):
            self.stream[n_stream] = self.dev.stream(cvb.ImageStream, n_stream)
        # this is the line that raises the ValueError below
        handler = cvb.MultiStreamHandler(self.stream)

Unfortunately, the MultiStreamHandler does not seem to be able to handle ImageStream objects and throws an error:

ValueError: type miss match - cannot convert object to cvb.Stream

Is it possible to make the ImageStream work with the Single-/MultiStreamHandler, or do I need another approach to display a real-time image from the multispectral camera on the screen?

Using the .wait() method on the ImageStream objects works and I can get all the streams from the camera, but the display (using PySide2, a QLabel and a QTimer to repeatedly call the wait() method) seems rather slow and delayed and yields errors in the images, at least in my implementation.
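For reference, my polling approach boils down to something like this stripped-down sketch (names are placeholders, the Mono8/Grayscale8 conversion is just what I use on my test setup, and the unpacking of wait() may need adjusting):

    import sys

    import cvb
    from PySide2.QtCore import QTimer
    from PySide2.QtGui import QImage, QPixmap
    from PySide2.QtWidgets import QApplication, QHBoxLayout, QLabel, QWidget


    class PollingDisplay(QWidget):
        """One QLabel per sensor stream, refreshed by a QTimer that calls wait()."""

        def __init__(self, streams):
            super().__init__()
            self.streams = streams
            self.labels = [QLabel(self) for _ in streams]
            layout = QHBoxLayout(self)
            for label in self.labels:
                layout.addWidget(label)
            self.timer = QTimer(self)
            self.timer.timeout.connect(self.grab_frames)
            self.timer.start(33)  # aim for ~30 Hz, but see the comment below

        def grab_frames(self):
            # wait() blocks the GUI thread until the next frame of each stream
            # arrives, which is probably a large part of the delay I'm seeing
            for stream, label in zip(self.streams, self.labels):
                image, *_ = stream.wait()             # ignore status/node map info
                arr = cvb.as_array(image, copy=True)  # copy into a numpy array
                h, w = arr.shape[:2]
                # assumes Mono8 data; a color sensor needs a different QImage format
                qimg = QImage(arr.data, w, h, arr.strides[0], QImage.Format_Grayscale8)
                label.setPixmap(QPixmap.fromImage(qimg))


    if __name__ == "__main__":
        app = QApplication(sys.argv)
        discover = cvb.DeviceFactory.discover_from_root(flags=cvb.DiscoverFlags.IgnoreVins)
        dev = cvb.DeviceFactory.open(discover[0].access_token, cvb.AcquisitionStack.GenTL)
        streams = [dev.stream(cvb.ImageStream, i) for i in range(dev.stream_count)]
        for s in streams:
            s.start()
        display = PollingDisplay(streams)
        display.show()
        sys.exit(app.exec_())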

Help would be very much appreciated. Thank you very much!

I’m using:
cvb: ‘v13.04.005’
Windows 10
Python 3.8.3
and a jai Fusion Camera with 3 CMOS Sensors

Ahoi,

can you show an image of the errors?
(I suspect missing lines in the images.)
This might be related to “normal” transmission parameters.

Cheers.

@JRo85 forgot to tag you.

[image: captured frame showing the missing lines]

Hi, yes! It’s just the common missing lines, but annoying anyway, and it doesn’t seem to appear in the QmlStreamDisplay example. Do you have tips on how to make the lines disappear and how to display the frames in real time, with little delay and at a frequency >20 Hz, on the screen using Python? As said, the QmlStreamDisplay example seems like a good start if it worked for the cvb.ImageStream objects from the jai camera.

I’m not 100% convinced that it’s just packet loss. But this should be fixed first.

  1. Test with “Genicambrowser”.
  2. Follow these instructions: https://help.commonvisionblox.com/GenICam-User-Guide/html_english_howto_jumbopackages_e.htm (full manual is here: https://help.commonvisionblox.com/Configurator/)

Do you see these stripes with Genicambrowser?

(And a minimal qml gui with python can be found in: %CVB%\Tutorial\ImageManager\CvbPy\QmlStreamDisplay)

Hope this helps.

Hi c.hartmann,

thanks for your help! The stripes only appear right after starting the stream and disappear after one to a couple of frames. Setting up the network card and the camera as described seems to have improved the issue with the stripes, thanks a lot! In the GenICamBrowser I haven’t noticed the stripes. The logging, however, states once “Corrupt frames were delivered due to lost packets. Packet resend is already activated. Please optimize network card settings (e.g. receive descriptors)” right after clicking “grab all”.

My initial question was mostly related to the QmlStreamDisplay example, because I cannot use the cvb.ImageStream objects with the Single-/MultiStreamHandler used for displaying the stream. The example works fine for me without any visible stripes when using the GenICam.vin, but there I can only see one stream and not three.

Cheers

@JRo85 forgot to tag again -.-

And if you want to change the PacketSize via Python (you only need this if you use discover): cvb.DiscoveryInformation.set_parameter("PacketSize", "8960")

With PacketSize it is best to use 8960 or 8192 (depends on the camera, but usually PacketSize % X == 0, where X might be 16, 256 or 512). Higher is better.
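A rough sketch of where that call would go, reusing the discovery code from your snippet (untested, value as above):

    import cvb

    # discover without the Vin drivers, as in your snippet
    discover = cvb.DeviceFactory.discover_from_root(flags=cvb.DiscoverFlags.IgnoreVins)

    # request jumbo packets on every discovered entry before opening the device;
    # 8960 only works if the network card's MTU is configured accordingly
    for info in discover:
        info.set_parameter("PacketSize", "8960")

    dev = cvb.DeviceFactory.open(discover[0].access_token, cvb.AcquisitionStack.GenTL)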

@JRo85 Oh in that case, you have to modify the example to have multiple displays.

We don’t have a code snippet or example showcasing (with UI) multiple images/streams/devices, as far as I recall.

This was exactly my question from the beginning: how to do this, given that the cvb classes used in the example (Single- and MultiStreamHandler) don’t work with the GenTL acquisition stack required for acquiring multiple streams from one camera. Do I have to write my own handler function, or is there some other workaround?

Pretty much yes. I would just rewrite the handler to work with the new stack.

Sadly the stack works quite differently (especially with regard to pulling the data out of the buffer).
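To sketch what I mean (rough, untested idea only; the class name and callback are made up for illustration): one worker thread per ImageStream that pulls the buffers via wait() and hands them to a callback.

    import threading

    import cvb


    class SimpleMultiStreamHandler:
        """Rough stand-in for cvb.MultiStreamHandler on the GenTL stack:
        one worker thread per ImageStream pulls buffers via wait() and
        forwards them to a user callback."""

        def __init__(self, streams, callback):
            self._streams = streams
            self._callback = callback   # called as callback(stream_index, image)
            self._threads = []
            self._running = False

        def _worker(self, index, stream):
            while self._running:
                image, *_ = stream.wait()     # pull the next buffer, ignore status
                self._callback(index, image)

        def start(self):
            self._running = True
            for stream in self._streams:
                stream.start()
            self._threads = [
                threading.Thread(target=self._worker, args=(i, s), daemon=True)
                for i, s in enumerate(self._streams)
            ]
            for thread in self._threads:
                thread.start()

        def stop(self):
            self._running = False
            for stream in self._streams:
                stream.abort()                # should unblock pending wait() calls
            for thread in self._threads:
                thread.join()

Getting the images from the callback into a Qt widget then still has to happen on the GUI thread (e.g. via a queued signal); that marshalling is essentially what the handler classes do for you in the QmlStreamDisplay example.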

Hmmm … maybe there is a way … does the camera offer a mode where the data is transmitted via a single stream (but not as multipart data)? Then you could just work with the old “Vin” stack.

Pretty much yes. I would just rewrite the handler to work with the new stack.

Ok, I guess that will take a while… Can you give me a hint on how to rewrite the handler to work with the new stack? Looking at the implementation of the MultiStreamHandler in the __init__.pyi wasn’t very helpful for my current level of understanding.

does the camera offer a mode where the data is transmitted via a single stream (but not as multipart data)

I think it doesn’t. It’s a jai FS-3200T-10GE-NNC and I didn’t find anything in the manual about this.