I am currently working with a linescan setup (Linea GigE) and CVB 2017. The camera is connected to an encoder to synchronize the line frequency with the movement of the belt. In our application it is important that the belt can stop for a while and start moving without losing lines.
I would expect the camera to wait until the belt is moving again before finishing the current image, so that the first X lines of the image are acquired before the stop and the rest afterwards. Of course the stop-start period should be shorter than the default timeout I set in the Management Console.
Unfortunately this is not the case. The camera acquires images without problems, but as soon as the belt stops the current image is aborted and I get an unfinished image: X lines acquired before the stop, with the rest of the image filled with old data. With the next start of the belt the camera starts acquiring images again.
I hope you can tell me what exactly is happening here.
This is a very specific issue with linescan cameras and start-stop applications. Besides the “default timeout” from CVB there is a hidden “packet timeout” in the GenICam.vin which monitors the time between the network packets sent by the camera. If the transfer stops, the packet timeout triggers and the current image is delivered unfinished. Normally there is no reason why a camera should stop the transfer in the middle of an image as long as nothing is wrong, so most users never come across the “packet timeout”.
Of course there is this one exception you have there: the GigE linescan camera stops the image transfer as soon as the belt in your application stops, and no more packets are sent as long as the belt is not moving. Therefore the packet timeout does not work with your application and should be deactivated.
This is a GenICam.vin setting, but it is not part of the GenICam.ini, so you will have to set it each time you load your application.
You will find the “PacketTimeout” node in the Data Stream Port nodemap.
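For reference, setting it from code could look roughly like the sketch below with the Stemmer.Cvb (.NET) API. This is only a sketch: the key under which the data stream nodemap appears in Device.NodeMaps and the type of the “Cust::PacketTimeout” node (integer vs. float) are assumptions you should verify first, e.g. in the GenICam Browser.

```csharp
using Stemmer.Cvb;
using Stemmer.Cvb.GenApi;

class DisablePacketTimeoutExample
{
  static void Main()
  {
    // Open the GenICam.vin driver; adjust the path if you load a different vin/config.
    using (Device device = DeviceFactory.Open("GenICam.vin"))
    {
      // ASSUMPTION: the Data Stream Port nodemap is published under a key like
      // "TLDatastream"; list device.NodeMaps' keys (or check the GenICam Browser)
      // if it is named differently on your system.
      var dataStreamNodeMap = device.NodeMaps["TLDatastream"];

      // "Cust::PacketTimeout" is the node discussed in this thread; whether it is
      // an integer or a float node may depend on the driver version.
      if (dataStreamNodeMap["Cust::PacketTimeout"] is IntegerNode packetTimeout)
        packetTimeout.Value = 0;   // 0 disables the packet timeout completely

      device.Stream.Start();       // (re)start acquisition after changing the value
      // ... acquire images ...
      device.Stream.Stop();
    }
  }
}
```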
Is this Packet Timeout the same as the Default Timeout? If not, what is the difference, when should you use which, and is there a way to set this second one programmatically? (I’ve already found Cust::PacketTimeout.)
No, the Default Timeout refers to the timeout between two images: if image B is not acquired within the “Default Timeout” after image A was acquired, you will receive a timeout error. The Packet Timeout, however, is a timeout between two network packets and is set to 200 ms by default. So if your linescan system pauses the image transfer (because your belt stopped), this timeout will occur. You can change the Packet Timeout only programmatically; the “Cust::PacketTimeout” node you are referring to should be the correct one.
Set it to zero to deactivate the Packet Timeout completely.
OK thanks, I’ve already set Cust::PacketTimeout. But if I also want to avoid timeouts between images, is there a way to set the Default Timeout programmatically?
Update: is there any relationship between this Default Timeout and the log in the GenICam Browser (which always seems to say “Acquisition started (AcquisitionTimeout= 60000 msec)”, whatever the value of Default Timeout is)?
We also see:
WARNING ::ID->00-01-0D-C2-56-F2::192.168.36.29|DS0 : Acquisition timeout occurred (counter= 0 ). A reason might be no or bad trigger signal. The AcquisitionTimeout is 60000 msec.
With the new APIs like Stemmer.Cvb the setting in the Management Console has no meaning anymore: the .Wait call waits forever (or until the acquisition is aborted) and the .WaitFor call waits for the specified amount of time.
And this is always a timeout for the method call, not anything inside the acquisition engine.
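To illustrate the difference, here is a minimal Stemmer.Cvb sketch. It assumes an elapsed WaitFor timeout is reported as a TimeoutException; please check the Stemmer.Cvb documentation for the exact behavior (there may also be a non-throwing Try variant).

```csharp
using System;
using Stemmer.Cvb;

class WaitVersusWaitFor
{
  static void Main()
  {
    using (Device device = DeviceFactory.Open("GenICam.vin"))
    {
      device.Stream.Start();

      // Wait(): blocks until the next image arrives or the acquisition is aborted.
      // No driver-side timeout is involved here.
      StreamImage image = device.Stream.Wait();
      Console.WriteLine($"Got image {image.Width} x {image.Height}");

      // WaitFor(): blocks for at most the given time. The timeout belongs to this
      // call only, not to the acquisition engine.
      // ASSUMPTION: an elapsed timeout surfaces as a TimeoutException.
      try
      {
        StreamImage next = device.Stream.WaitFor(TimeSpan.FromSeconds(10));
        Console.WriteLine($"Got image {next.Width} x {next.Height}");
      }
      catch (TimeoutException)
      {
        Console.WriteLine("No image within 10 s (e.g. the belt stopped for too long).");
      }

      device.Stream.Stop();
    }
  }
}
```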
We have a Teledyne Linea, and even though we’ve set the Packet Timeout to 0 we’re still getting back a half-black image if we stop triggering halfway through an image. I’m wondering whether this is something that actually happens on the camera itself. Does anyone know if there’s a way to stop this, or at least to work out where it’s happening (in the camera or in the driver)?
No, actually this should definitely be the packet timeout. Or in other words: let’s try something.
Can you use the CVB Viewer and try to reproduce this behavior? The viewer has the option (“GenICam -> Transport Layer Nodemaps”) to access the Data Stream Port nodemap and change the Packet Timeout manually. If zero does not work, please try the maximum and send us your findings.
Not sure how to connect with the CVB Viewer; how do you open a camera from there?
Using the GenICam Browser we can connect to the camera and modify the Packet Timeout. Do we need to execute the “Update settings” command for this as well?
We can’t seem to get consistent, understandable results: sometimes with the Packet Timeout set to 0 we still get timeouts, then with the Packet Timeout set to a non-zero value we don’t get any. Or with a short packet timeout (I’m presuming the number is in milliseconds), the timeout takes minutes to occur.
We’re going to carry on testing, but if you have any ideas, please let me know.
We could not reproduce the behavior. Did you restart the acquisition after setting the parameter? Our testing showed that this is important for the new parameter value to take effect.
However, I think the next step should be a remote session, as I wrote in the PM I sent you.
Thanks to some psychic debugging from @Theo, this is fixed.
The timeout is reset to 200 ms every time you connect to a camera, so if you want it to be zero you have to set it explicitly after every connect. I was doing this, but then disconnecting and reconnecting afterwards!
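In case it helps anyone else, this is roughly what we ended up doing (same caveats as the sketch further up: the nodemap key and the node type are assumptions to verify on your own system):

```csharp
using Stemmer.Cvb;
using Stemmer.Cvb.GenApi;

static class LineScanSetup
{
  // Call this right after *every* DeviceFactory.Open(...), before starting the
  // stream, because the packet timeout falls back to its 200 ms default whenever
  // the camera is (re)connected.
  // ASSUMPTION: nodemap key "TLDatastream" and integer node type, as above.
  public static void DisablePacketTimeout(Device device)
  {
    var dataStreamNodeMap = device.NodeMaps["TLDatastream"];
    if (dataStreamNodeMap["Cust::PacketTimeout"] is IntegerNode packetTimeout)
      packetTimeout.Value = 0;
  }
}
```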