I’ve followed the setup guide to enable PTP, and checking the cameras’ raw_timestamp values shows them to be within around 25 microseconds of each other, which is more than good enough for my application.
However, visually inspecting the captured images shows them to be around 3 or 4 frames out of sync with one another (images acquired in the same iteration of handle_async_wait_result). I’m acquiring at 120 FPS.
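For context, the reported timestamp offset is tiny compared with one frame period at 120 FPS, so the clocks themselves can’t explain the gap. A quick sketch (assuming raw_timestamp is in nanoseconds, which may differ on your camera):

```python
def frames_apart(ts_a_ns, ts_b_ns, fps):
    """Express the gap between two PTP timestamps (in ns) in frame periods."""
    period_ns = 1e9 / fps
    return abs(ts_a_ns - ts_b_ns) / period_ns

# 25 us at 120 FPS is a tiny fraction of the ~8.33 ms frame period,
# so the timestamps alone cannot account for a 3-4 frame offset.
print(frames_apart(0, 25_000, 120))  # ~0.003 frames
```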
Is this a known issue? Anything I can look at to resolve it? If the behaviour is consistent I could work around it, but it seems odd!
Are the images asynchronous right from the beginning? Be aware that you should start the acquisition on both cameras before enabling the trigger; otherwise one camera will run ahead.
Please also check whether you have lost images due to locked buffers (see the stream statistics). By default only three buffers are allocated for images to be delivered to the host. If your application does not fetch them in time, images may be lost, which can create the out-of-sync impression you describe.
As a rule of thumb, the number of buffers should be high enough to store one second of acquisition. That way your application is more resilient against OS hiccups.
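That rule of thumb can be sketched as a small helper. The commented `change_count` call is my guess at the CVB ring-buffer API, so treat it as a placeholder and check the documentation:

```python
import math

def recommended_buffer_count(fps, seconds=1.0, minimum=3):
    """Enough ring buffers for roughly one second of acquisition."""
    return max(minimum, math.ceil(fps * seconds))

count = recommended_buffer_count(120)  # 120 buffers at 120 FPS
# Hypothetical application to a CVB stream (method name assumed, verify it):
# stream.ring_buffer.change_count(count, 0)
```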
Forgive me, I’m a bit confused by the first point. I have enabled PTP in the GenICam Browser. I don’t change those settings, but then run the cameras with the code in the linked issue, so they are both running via handle_async_wait_result from the beginning. Could you send an example of how you mean to run them?
The ring buffer count is changed before running any acquisition, so I don’t think that’s the issue. At the moment I run at 120 FPS, so I have allocated 200 buffers on each camera.
Also, in the attached issue, I only use images where both cameras have returned cvb.WaitStatus.Ok, so that should take care of situations where one camera has an issue? Or is there another check I can do?
You should open the cameras and start the acquisition (run the handler) while PTP is disabled. Otherwise one camera might already be acquiring images while the other is still starting, and they will be out of sync from the application’s perspective. Starting the stream is quite an expensive operation; after that, the acquisition engine on the host simply waits for new images to arrive. Once it is started, enabling PTP is fast.
You should also enable PTP on the slave first, for extra safety.
Unfortunately I do not have a code sample for this so I hope my explanation is sufficient.
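The closest I can offer is an untested sketch of the order described above. The `PtpEnable` node name and the `node_maps`/`stream` accessors are assumptions from memory, so verify them against the CVB documentation and your camera’s node map:

```python
# Start order for two PTP-synchronised cameras (sketch, not verified on hardware):
# 1. open both devices with PTP disabled
# 2. start both acquisitions / handlers (the expensive step)
# 3. enable PTP on the slave, then on the master
PTP_START_ORDER = ["open devices", "start streams",
                   "enable PTP slave", "enable PTP master"]

def enable_ptp_after_start(slave_device, master_device, ptp_node="PtpEnable"):
    """Enable PTP only once both streams are already running (slave first)."""
    for dev in (slave_device, master_device):
        dev.stream().start()  # assumed accessor; with handlers, start via the handler instead
    slave_device.node_maps["Device"][ptp_node].value = True   # assumed node map key / node name
    master_device.node_maps["Device"][ptp_node].value = True
```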
Please note that your ring buffer count is fine, but you cannot detect all possible issues via cvb.WaitStatus. For example, it will always be Ok, even if you lose images, as long as new ones keep arriving within the given timeout. Monitor the stream statistics to detect transport issues.
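On top of the stream statistics, a cheap cross-check is to look for gaps in each camera’s frame counter (or timestamps) inside handle_async_wait_result. A generic sketch, assuming the camera provides a monotonically increasing frame ID:

```python
def count_missing(frame_ids):
    """Count frames missing from a monotonically increasing ID sequence."""
    return sum(max(0, cur - prev - 1)
               for prev, cur in zip(frame_ids, frame_ids[1:]))

print(count_missing([1, 2, 3, 5, 6, 9]))  # 3 (frames 4, 7 and 8 were dropped)
```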
I had been sent the Alvium GigE guide from UK support, which I have followed. Looking through this it seems broadly the same. I’ll play around with trying to check some of the parameters in Python and report back.
In general, though, I don’t think it’s an unreasonable ask to have more Python examples of multi-camera acquisition in a few common hardware setups (I doubt I’m the only person using two GigE cameras with PTP).