Best way to do time lapse: stream.wait(), or stream.get_snapshot()

Hello,

I am running time lapses with typical intervals on the order of seconds to minutes, i.e. much longer than the time the camera needs to acquire a single image. I was wondering whether there is any difference between the following two strategies:

  1. start a stream with stream.start(), then use stream.wait() with a timer, and finally stream.stop() at the end

  2. use stream.get_snapshot() with a timer to get single images upon request.
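To be concrete, here is roughly what I mean by the two strategies, sketched with a fake stream object standing in for the real camera stream (the exact CVB call signatures, e.g. what wait() returns, are assumptions on my part):

```python
import time

INTERVAL_S = 0.01   # time-lapse interval in seconds (shortened so the sketch runs fast)
NUM_IMAGES = 3
saved = []

def save(image):
    saved.append(image)

def strategy_stream(stream):
    """(1) Keep the stream running; pull one image per interval."""
    stream.start()
    try:
        for _ in range(NUM_IMAGES):
            image, status = stream.wait()   # return signature is an assumption
            save(image)
            time.sleep(INTERVAL_S)
    finally:
        stream.stop()

def strategy_snapshot(stream):
    """(2) Acquire a single image on demand, once per interval."""
    for _ in range(NUM_IMAGES):
        save(stream.get_snapshot())
        time.sleep(INTERVAL_S)

class FakeStream:
    """Stand-in for the real camera stream, so the sketch runs without hardware."""
    def __init__(self):
        self.frame = 0
    def start(self):
        pass
    def stop(self):
        pass
    def wait(self):
        self.frame += 1
        return "frame%d" % self.frame, "ok"
    def get_snapshot(self):
        self.frame += 1
        return "frame%d" % self.frame

strategy_stream(FakeStream())
strategy_snapshot(FakeStream())
print(len(saved))   # 6 images saved in total
```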

I’m assuming (1) wastes resources by continuously leaving the camera running and rarely taking images, while (2) only uses resources on demand, but might create some “fatigue” by constantly opening and closing communication with the camera …

Any thoughts on what should be best practice in this situation?

Hi,

Both 1) and 2) effectively do the same thing, so I would suggest going with get_snapshot(), since it is easier to use.

Thank you for your reply. I’m surprised that there would be no difference between the two. Doesn’t get_snapshot() open and close the stream each time, while stream.wait_for() allows the stream to be opened just once and closed only at the end?

Sorry I don’t really know what’s going on under the hood and what it really means to open and close a stream.

Yes, it’s exactly as you describe it. From your first description I thought you would do a stream.start(), stream.wait() and stream.stop() manually for each image; that is what get_snapshot() does for you. When you only need a new image once in a while (not at, say, 30 fps), it should be fine to call get_snapshot().

Otherwise, the ring buffer fills up during streaming (after stream.start()) and you would get older images when calling stream.wait(); once the ring buffer is full, new images are dropped. To circumvent that (besides using get_snapshot()), you can either set the ring buffer lock mode to ON, call stream.wait() in a loop and directly .unlock() each image, or get the image normally as you do now (no lock mode set = lock mode AUTO) and delete the image object after use. Both methods free the buffer slot so it can be filled with a new camera image. A third way would be ring buffer lock mode OFF, but then you never know whether the image has been overwritten, so try to avoid that mode.
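To illustrate the "older images" effect: here is a toy pure-Python model of the ring buffer (not CVB code), where the camera keeps delivering frames while streaming and wait() always pops the oldest buffered one:

```python
class ToyRingBuffer:
    """Toy model (NOT the CVB API): the camera fills buffer slots while
    streaming; wait() hands out the oldest pending frame and frees its slot;
    a full buffer drops new frames."""
    def __init__(self, num_buffers):
        self.num_buffers = num_buffers
        self.pending = []   # frames waiting to be picked up by wait()
        self.lost = 0       # frames dropped because the buffer was full

    def camera_delivers(self, frame):
        if len(self.pending) < self.num_buffers:
            self.pending.append(frame)
        else:
            self.lost += 1  # buffer full -> the new frame is lost

    def wait(self):
        return self.pending.pop(0)  # oldest frame first

rb = ToyRingBuffer(num_buffers=3)
for f in range(10):          # stream runs, but nobody calls wait() in between
    rb.camera_delivers(f)

oldest = rb.wait()
print(oldest, rb.lost)       # 0 7 -> you get an OLD frame, and 7 were dropped
```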

All in all, for normal use I would recommend the get_snapshot() function. Do you have any hardware limitations such that starting and stopping the stream would use too many resources?

Thank you for your detailed explanation, I hadn’t thought of the problem of the ring buffer and getting older images, this is a very good point. Just to be sure I understand, in normal operation, the buffer discards any new images once full, and needs to be reset in order to accept new ones? Where can I find documentation on how the ring buffer works and interacts with start(), stop() and wait()?

In any case, I agree that in most of the situations we encounter (time intervals of 1-10 s between images), get_snapshot() seems to be the way to go for us. Sometimes we also want to look at images with better time resolution (~10 fps), and I’m wondering at which point get_snapshot() limits the performance; I will do some quick tests.

Hi @ovinc
here is some further information regarding the RingBuffer:

https://forum.commonvisionblox.com/t/where-does-the-image-point-to-using-rgbufferseq-with-numbuffers-1/140/5

https://forum.commonvisionblox.com/t/getting-started-with-cvb-net/246/14

And the online reference:
https://help.commonvisionblox.com/Reference/

Basically, whenever you start your Stream, the buffers of the RingBuffer are filled with image data until either the buffer is full and you get lost images (which can be checked with the acquisition statistics on the Stream), or you do a wait() or wait_for() call on the Stream.

Every wait() will hand you a more or less recent image from your ring buffer, thus unlocking that part of the buffer so it can be filled with new image data.

In the best-case scenario you don’t have to cache your images in the buffer at all, because your processing is slightly faster than your acquisition, or because you trigger your camera and thus only take a new image once your processing is done.

There are use cases, however, where you want high frame rates for a limited amount of time to acquire images that you will process later on.
With the RingBuffer you can do exactly that.
E.g. acquire images into a RingBuffer with NumBuffers = 1000 to keep 1000 images in RAM, and once your recording is done (or in parallel to the recording), process your images. This way you are (almost) not limited in your fps.
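A toy sketch of that record-then-process pattern (plain Python, not CVB calls): as long as NumBuffers is at least as large as the burst, nothing is dropped, no matter how slow the later processing is:

```python
NUM_BUFFERS = 1000
recorded, lost = [], 0

# Phase 1: burst acquisition at full camera speed -- only RAM copies, no processing.
for frame in range(1000):
    if len(recorded) < NUM_BUFFERS:
        recorded.append(frame)
    else:
        lost += 1            # would only happen if the burst exceeded the buffer

# Phase 2: process at leisure, after (or in parallel to) the recording.
processed = [f * 2 for f in recorded]   # stand-in for real image processing

print(len(processed), lost)  # 1000 0 -> all frames kept, none lost
```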

In your case, you only need to be aware that wait() has a default timeout of 10 s.
You should also always check whether you still have buffers pending.

In the best case you call wait_for(13000 ms) and either get a timeout exception if no new image data arrived within this interval, or a new image (within 12 s) to process.

In your case, however, we have intervals between 1 s and 12 s, which is no problem as long as your processing is always faster than the shortest interval between images.
If at any time your processing, and thus the interval to the next wait() call, COULD take longer than 1 second, you do not simply want to call another wait(), as you still have image data in the buffer: the image that was taken while you were still processing.
This effect will now add up to the point where all your buffers are full and you start losing images.
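This add-up effect can be seen in a small tick-based simulation (pure Python, not CVB): with NumBuffers = 3, one new frame per second, and processing that frees only one buffer every two seconds, the backlog hits the buffer size after a few seconds and frames start getting lost:

```python
NUM_BUFFERS = 3
pending, lost = 0, 0

for t in range(20):              # one tick = one second of streaming
    if pending < NUM_BUFFERS:
        pending += 1             # camera delivers one frame per second
    else:
        lost += 1                # ring buffer full: the new frame is dropped
    if t % 2 == 1:               # processing (and thus the next wait()) only
        pending -= 1             #   frees one buffer slot every two seconds

print(pending, lost)             # 2 8 -> steady losses once the buffer is full
```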

I hope this information is what you needed, in any case just let me know if you need further information.

Cheers
Chris
