This is a rather broad question, and I think it mostly relates to best practices which might be useful for other people too.
I’m using three cameras, where one of the three is streaming. Upon pressing a button, the streaming should stop and all three cameras should acquire one image each. However, in some cases the snapped images come out distorted. This looks like a network issue; for streaming cameras I’d solve this with an interpacket delay in the camera settings.
But in my application (in C#), the preview camera is grabbing, then stops, and then all three cameras take an image. What is the best way to pipeline this? I have no idea what the actual frame rate is (it’s three snaps after all), and all three cameras are on the same NIC.
Attached is an example of how the images look.
This is heavily transport-technology dependent – and also a bit device-capability dependent. From your description I assume you are using GigE Vision cameras? But of course all transport technologies using a shared network/bus can have these problems.
First a few questions:
- How do you stop acquisition of the preview stream? Stop (.Grab = false) or abort?
- Are the snaps taken in parallel or one after the other?
- Which frames are affected? Does it change?
- How is your hardware setup?
Three cameras attached to one switch which is connected to one NIC?
Indeed, this is done with GigE Vision cameras.
I stop the acquisition via the following loop. I don’t really know which of the two (stop or abort) that is?

camImages is an array:

for (int i = 0; i < camImages.Length; i++)
    camImages[i].Grab = false;
Snaps are simply taken by giving the three image objects a snap command in a similar manner.
I’ve got the following hardware set up, all on the same subnet, router and NIC.
I’ve been messing around with the camera-specific stuff (interpacket delay and suchlike) and this already reduced the frequency of the issue occurring quite a bit. I’m busy reproducing the error (to answer question 3).
Update: by limiting the data stream that each camera can transmit, the effects are much rarer than they were before. Not a very elegant approach, as this also limits the max frame rate for the preview (for which I’d rather not use 1/3 of the max bandwidth, but the full allowed bandwidth, of course).
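To put rough numbers on that trade-off, here is a back-of-the-envelope sketch. The link rate and frame size below are assumptions for illustration (a ~1 Gbit/s GigE link and an 8-bit image of about 1.6 MB), not values from this thread:

```csharp
using System;

class BandwidthBudget
{
    static void Main()
    {
        // Assumed values for illustration only.
        const double linkBytesPerSec = 125_000_000.0; // ~1 Gbit/s GigE link
        const int cameraCount = 3;
        const double frameBytes = 1_600_000.0;        // e.g. ~1600x1000 @ 8 bit mono

        // If all three cameras may transmit at once, each gets a third of the link.
        double perCameraBytesPerSec = linkBytesPerSec / cameraCount;

        // The maximum sustainable frame rate per camera at that limit:
        double maxFps = perCameraBytesPerSec / frameBytes;

        Console.WriteLine($"Per camera: {perCameraBytesPerSec / 1e6:F1} MB/s, max ~{maxFps:F1} fps");
    }
}
```

With these assumed numbers the preview camera is capped at roughly 26 fps instead of the roughly 78 fps the full link would allow – which is exactly the inelegance described above.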
The effect seems to occur quite randomly, for all three cameras, and sometimes on two images at the same time. It seems to have little to do with which camera was grabbing before the snap command came.
A somewhat better image of the effect:
Ok, I’ll try to break it up a bit…
Regarding the Image ActiveX control
The stop mode depends on a Windows Registry value:
HKEY_LOCAL_MACHINE\SOFTWARE\Common Vision Blox\Image Manager\Global ACQ Kill Enabled
When set to 0 (the default) the acquisition is stopped (i.e. the current transmission is waited for). With 1 it is aborted. The resulting behavior depends on the camera: it may either immediately stop transferring data or finish its frame.
This is one of the reasons I prefer the CVB.Net API, as it is free from such side effects (the other non-ActiveX APIs, too; but as you are currently using C#…).
Grab Stop (.Grab = false)

As sending by the camera and receiving of data by the acquisition engine are parallel operations, it may happen that the camera is still in the process of sending an image although the acquisition engine has stopped. If you want to be really safe in that regard, either use triggers or wait one frame time after setting .Grab = false.
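A sketch of that safeguard, assuming the setup from this thread (camImages is the array of CVB ActiveX Image controls; the 40 ms value is an assumption for a 25 fps preview – in a real application you would derive it from the configured frame rate):

```csharp
// Stop the preview stream on every control first ...
foreach (var cam in camImages)
    cam.Grab = false;

// ... then give the camera one frame time to finish a transmission
// that may still be in flight before snapping.
System.Threading.Thread.Sleep(40); // ~1 frame time at an assumed 25 fps

// Only now snap one image per camera.
foreach (var cam in camImages)
    cam.Snap();
```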
Something as described in the stop case can also happen with Snap. In essence, a snap is:
- start acquisition
- wait for one frame to arrive
- stop acquisition
The GenICam.vin tries to set the camera to single-frame mode (GenICam SFNC features AcquisitionMode and AcquisitionFrameCount), which should then circumvent “double send” scenarios. If the camera does not support that, the problem and the solution are the same as in the stop case.
Last but not least, the other hardware components. I talked a bit about the capabilities of the cameras, but the router(?)/switch and the network interface cards (NICs) are also important. Depending on their memory and processing power, we have also seen issues where a (single) fast camera could overload a switch/NIC. That can also be circumvented via interpacket delay. Sometimes jumbo frames also help (less processing power is needed as fewer packets arrive).
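Why jumbo frames help can be seen from a quick packet count. The frame size and payload sizes below are typical assumed values (a ~1.6 MB image, GVSP payloads for a 1500-byte and a 9000-byte MTU), not measurements from this setup:

```csharp
using System;

class PacketCount
{
    static void Main()
    {
        const double frameBytes = 1_600_000.0; // assumed image size
        const double standardPayload = 1476.0; // typical payload at 1500-byte MTU
        const double jumboPayload = 8964.0;    // typical payload at 9000-byte MTU

        // Packets the switch/NIC must process per image:
        double standardPackets = Math.Ceiling(frameBytes / standardPayload);
        double jumboPackets = Math.Ceiling(frameBytes / jumboPayload);

        Console.WriteLine($"Standard MTU: {standardPackets} packets per frame");
        Console.WriteLine($"Jumbo frames: {jumboPackets} packets per frame");
    }
}
```

Roughly a factor of six fewer packets per frame, i.e. far fewer interrupts and much less per-packet processing on the switch and the NIC.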
How to decide?
After all the ifs and maybes: a quick test would be to wait one frame time after each .Snap. If the error is gone, it is probably a double image send. Also use the acquisition statistics (G2GetGrabStatus) for analysis. See Stream Statistics and the following Transfer Monitoring post for more info. The corrupted frames might not always be that visible, as the old data may look no different from the new.
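To read those statistics programmatically, a rough sketch might look like this. It assumes the classic CVB C# wrappers around iCVCDriver.dll; the exact namespace and enum member names may differ in your CVB version, and camImage.Image is assumed to be the IMG handle of the control – verify all names against your installation:

```csharp
// Hypothetical sketch – check the names against your CVB version.
double acquired = 0, lost = 0;
Cvb.Driver.IGrab2.G2GetGrabStatus(camImage.Image,
    Cvb.Driver.IGrab2.TGrabInfoCMD.GRAB_INFO_NUMBER_IMAGES_AQUIRED, out acquired);
Cvb.Driver.IGrab2.G2GetGrabStatus(camImage.Image,
    Cvb.Driver.IGrab2.TGrabInfoCMD.GRAB_INFO_NUMBER_IMAGES_LOST, out lost);
Console.WriteLine($"acquired: {acquired}, lost: {lost}");
```

If the lost counter grows while the distortions appear, the network path is dropping packets; if it stays at zero, the double image send is the more likely explanation.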
For a detailed analysis we would need a Wireshark dump sent to firstname.lastname@example.org.
Well, I found out what the problem was:
My own code
Yeah, I did something weird in the order of things, where it could happen that the three snaps would fire while the preview camera was still grabbing.
You were spot on about the double image send being the culprit, though!