Async Saving of Images

I am currently using CVB 13.06 with the .NET WPF bindings. My application connects to a TwinCAT 2 ADS server and listens for a trigger. Once a trigger is received, two images should be captured and saved to disk as soon as possible.

I've created a DeviceDependency class inheriting from DependencyObject. It contains a DependencyProperty for the Device, just like in the samples you provide.

In my ViewModel I create an instance of that DeviceDependency class in the constructor, and the Display in my View binds to the DeviceImage of the Device in that DeviceDependency object.

Now, what is great is that there are functions like GetSnapshotAsync(). Unfortunately, my trigger event gets fired from another thread, so I have to invoke it on the ViewModel thread, right? That's why I am currently holding an instance of my Window in my ViewModel, which is not MVVM friendly. So in my trigger handler I call Window.Dispatcher.InvokeAsync and do all my capturing and saving in there. However, the whole window freezes whenever a trigger occurs and unfreezes when saving is done. I am not sure (at the time of writing) whether it is the saving or the capturing that blocks the GUI. For saving images there are unfortunately no async methods; do you have a workaround for that? For example, is it safe to await a Task.Factory.StartNew() task in which I call the Save method? Will that even work? How would you achieve that properly?

Thanks in advance

I think the answer is easier shown with a project: (11.9 KB). The Stemmer.Cvb and Stemmer.Cvb.Wpf assemblies also need to be installed (see the downloads). The app expects the camera to be in software trigger mode. If it is not, it will save a lot of images :slight_smile:.

In the project I implemented ISoftwareTrigger via a Button. The MainWindow also shows a live image. The MainViewModel provides the live Image and the commands for triggering and closing the app. The whole program logic is in the TriggerSaver model class.

The MainWindow creates the MainViewModel, which in turn creates the TriggerSaver, which opens the first GenICam device it finds. Then the app logic is run asynchronously via WPF's Dispatcher. Thus the main logic runs on the UI thread. That is not a bad thing, as all the code is written async. Image.Save is executed on .NET's thread pool (Task.Run).

The :cvb: Device is encapsulated in the model class TriggerSaver. Only the live Image is publicly available plus the use cases of running the app and sending (double) software triggers. Depending on the camera used you probably want to modify the TriggerSaver.SendSoftwareTrigger method.

The app logic is written in a way that we wait for two images and save them as soon as they arrive. We utilize :cvb:'s IRingBuffer interface to keep the acquisition code simple without losing images. We use RingBufferLockMode.On and manually unlock acquired Images via the using block in SaveAndUnlockImageAsync. Depending on your acquisition rate and the hard disk/SSD used, you might need to increase the buffer count of the ring buffer to handle processing peaks (see TriggerSaver.SetupDevice).
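As a hedged sketch of that buffer-count tuning: the ChangeCount call and the DeviceUpdateMode value below are taken from my reading of the Stemmer.Cvb API and should be checked against your CVB version; the count of 10 is just an example value.

```csharp
using Stemmer.Cvb;
using Stemmer.Cvb.Driver;

static class RingBufferSetup
{
  // Illustrative only: enlarge the ring buffer so the save tasks have
  // headroom during processing peaks, then enable manual unlocking.
  public static void SetupDevice(Device device)
  {
    var ringBuffer = device.Stream.RingBuffer;
    // We unlock each image ourselves once it has been saved.
    ringBuffer.LockMode = RingBufferLockMode.On;
    // More buffers than the default bridge short bursts where saving
    // is slower than acquisition (10 is an assumed example value).
    ringBuffer.ChangeCount(10, DeviceUpdateMode.UpdateDeviceImage);
  }
}
```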

Here is the core logic:

public async Task RunAsync(CancellationToken cancellationToken)
{
  var stream = _device.Stream;
  stream.RingBuffer.LockMode = RingBufferLockMode.On;

  long counter = 0;
  while (!cancellationToken.IsCancellationRequested)
  {
    try
    {
      var savesFinished = new Task[2];
      for (int i = 0; i < 2; i++)
      {
        var imageTask = stream.WaitForAsync(AcquisitionTimeout);
        savesFinished[i] = SaveAndUnlockImageAsync(imageTask, counter++);
      }

      await Task.WhenAll(savesFinished);
    }
    catch (TimeoutException)
    {
      // swallow timeouts, but we need to check for cancellation
    }
  }
}

private async Task SaveAndUnlockImageAsync(Task<StreamImage> imageTask, long imageNumber)
{
  Debug.Assert(imageTask != null);

  // Disposing the StreamImage unlocks its ring buffer slot again.
  using (var image = await imageTask)
  {
    await SaveImageAsync(image, imageNumber);
  }
}

private static Task SaveImageAsync(Image image, long imageNumber)
{
  Debug.Assert(image != null);

  // Image.Save is blocking, so run it on the thread pool.
  return Task.Run(() =>
    image.Save(Path.Combine(StorageDirectory, $"{imageNumber}.bmp")));
}

Hi, that is interesting, I will have a look into the project. Thank you for the example and the great explanation of how it is set up.

But I have another small question. I was using GetSnapshotAsync for now, because when I started a stream, took an image, changed the ExposureTime node and took another image, the exposure time did not change in that next image. It first took effect some frames after the two frames I grabbed. Is there a way to wait for the camera to apply the exposure time and then take a shot? For now I had to do it via GetSnapshotAsync, but I can imagine it is much slower than taking single frames from a running stream.

We seldom use .GetSnapshot as it is systematically slow:

  1. it starts the acquisition engine
  2. sets the camera to single-frame mode if possible
  3. starts the camera acquisition
  4. waits for one frame
  5. stops the acquisition engine
  6. restores the former acquisition mode in the camera

If you have non-moving objects and enough time available this is still a viable way to go.

Regarding the application of settings in the camera I can imagine two things:

  1. The intended contract for a GenApi device is that the setting is applied when the value write was acknowledged. Some cameras try to improve UI responsiveness by immediately acknowledging the write and then applying the value later.
  2. Acquisition is configured free running and was active and the camera is delivering frames relatively fast. Thus you will have a new frame in the Device's ring buffer while you set the new exposure time. The current frame in the camera will still be exposed with the old exposure time and then the next one will use your newly set one.

In both cases you end up with frames using the old exposure time. If your software wants to control everything, the only way around that is to use triggered acquisition or the .GetSnapshot method. Setting up the camera in software trigger mode and sending software triggers is much more efficient and helps to nicely separate the concerns in your application: the trigger logic is then independent of the acquisition.
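Configuring software trigger mode could look roughly like the sketch below. The node names follow the GenICam SFNC and the node-map access follows my understanding of the Stemmer.Cvb.GenApi types; your camera may use different node names, so treat all of this as an assumption to verify.

```csharp
using Stemmer.Cvb;
using Stemmer.Cvb.GenApi;

static class SoftwareTriggerSetup
{
  // Illustrative only: put a GenICam camera into software trigger mode
  // via its device node map (SFNC-style node names assumed).
  public static void EnableSoftwareTrigger(Device device)
  {
    var nodeMap = device.NodeMaps[NodeMapNames.Device];
    ((EnumerationNode)nodeMap["TriggerSelector"]).Value = "FrameStart";
    ((EnumerationNode)nodeMap["TriggerMode"]).Value = "On";
    ((EnumerationNode)nodeMap["TriggerSource"]).Value = "Software";
  }

  // Fire one frame; typically wrapped by a method like SendSoftwareTrigger.
  public static void Trigger(Device device)
  {
    var nodeMap = device.NodeMaps[NodeMapNames.Device];
    ((CommandNode)nodeMap["TriggerSoftware"]).Execute();
  }
}
```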

@parsd The provided example solution shows how software triggering works. Am I correct in thinking that on button click it will trigger two times, and that the whole process has a 100 ms timeout for both images?

So basically, if I wanted to set the exposure time in between those two, I would first set the exposure time, then send a software trigger, set the exposure time to another value, and send a second software trigger. Won't this also end up with unapplied exposure times because of the ring buffer, or is a software trigger real-time acquisition?

Hi @johmarjac,

regarding the example: yes, in this simple example two software triggers are sent. Depending on your application requirements and the camera used, you need to adapt that trigger code. We also use a timeout of 100 ms, but not for the whole process: we try to acquire two images with a 100 ms timeout each. So if two images don't arrive within 200 ms, we won't save anything. This timeout also enables us to cancel our acquisition app even when no images are arriving (see the outer loop with !cancellationToken.IsCancellationRequested).

Regarding the triggering and exposure time:

  1. Triggering
    The example here is quite silly and will only work for very short exposure times. If you trigger too quickly you will "overtrigger" the camera and probably lose a frame. Thus you should wait in between two triggers (at least for the duration of the exposure time).

  2. Exposure Time
    If you want to control the acquisition sequence as you described you can do it like this:

    1. set the exposure time for the first frame
    2. send the software trigger
    3. wait at least as long as the exposure time is
    4. set the exposure time for the second frame
    5. send the software trigger
    6. depending on your acquisition logic, you can either wait here for the duration of the second exposure time or sync to your save handling.

    With this approach you can set the exposure time for each frame and avoid overtriggering your device.

  3. The RingBuffer
    The RingBuffer and the camera are two different entities. They both run in parallel, completely independent of each other (they are in two different devices: the camera and your PC). If you send a software trigger you simply tell the camera to take a picture and send it to you. If the acquisition engine is active and the RingBuffer has space left, it will store that picture until you collect it via the .Wait function. The .Wait call itself doesn't transfer the data; it just syncs to the next acquired image. The acquisition handling collects the incoming images in parallel to your application code.
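The six-step sequence from 2. could be sketched like this. The "ExposureTime" node name (in microseconds, per SFNC convention), the FloatNode cast, the _nodeMap field and the SendSoftwareTrigger helper are all assumptions borrowed from the example project and GenICam naming; adapt them to your camera.

```csharp
using System;
using System.Threading.Tasks;
using Stemmer.Cvb.GenApi;

partial class TriggerSaver
{
  // Hypothetical sketch: trigger two frames with individual exposure times,
  // waiting at least the exposure time between triggers to avoid overtriggering.
  public async Task SendAcquisitionSequenceAsync(double firstExposureUs, double secondExposureUs)
  {
    var exposure = (FloatNode)_nodeMap["ExposureTime"];

    exposure.Value = firstExposureUs;                     // 1. exposure for frame one
    SendSoftwareTrigger();                                // 2. trigger frame one
    await Task.Delay(MicrosecondsToMs(firstExposureUs));  // 3. wait at least the exposure time

    exposure.Value = secondExposureUs;                    // 4. exposure for frame two
    SendSoftwareTrigger();                                // 5. trigger frame two
    await Task.Delay(MicrosecondsToMs(secondExposureUs)); // 6. or sync to the save handling
  }

  // Round up and add a small margin so we never wait shorter than the exposure.
  private static int MicrosecondsToMs(double us) => (int)Math.Ceiling(us / 1000.0) + 1;
}
```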

To sum up: in 2. I only described the camera setup/communication. I would also rename the .SendSoftwareTrigger method to something along the lines of .SendAcquisitionSequence. Perhaps you will even find a better name :slight_smile:. The acquisition handling on the receiving end doesn't change at all: we simply get two images. You need to change your timeout settings, though, if you use differing exposure times. I would suggest using the short exposure time first, as this makes cancelling your acquisition code faster.

I hope that makes what's happening between the camera and your software clearer.