Getting started: Displaying multiple Cameras

Thank you @Lukas for pushing this topic forward.

Now, let's use the code we have at this point and do some processing.
Until now we only had two displays that did nothing but show the image that was currently being acquired.
After this little session we will have one display that shows an image we manipulated in some way; we did some processing on the image, if you will.

So… let's remove everything that had to do with the second device and only keep the Image2 property.

All that is left now should be this:

MainViewModel.cs
using Stemmer.Cvb;
using Stemmer.Cvb.Async;
using Stemmer.Cvb.Driver;
using System.Threading.Tasks;

namespace WpfApp3
{
  class MainViewModel : ViewModelBase
  {
    public MainViewModel()
    {
      Device1 = DeviceFactory.Open("CVMock.vin", 0, 0);
      Image1 = Device1.DeviceImage;
    }

    public bool Grabbing
    {
      get => _grabbing;
      set
      {
        if (value != _grabbing)
        {
          _grabbing = value;
          if (value)
          {
            StartPlayingAsync(Device1);
          }
          OnPropertyChanged();
        }
      }
    }
    private bool _grabbing;

    public Image Image1
    {
      get
      {
        return _image1;
      }
      set
      {
        _image1 = value;
        OnPropertyChanged();
      }
    }
    private Image _image1;

    public Image Image2
    {
      get
      {
        return _image2;
      }
      set
      {
        _image2 = value;
        OnPropertyChanged();
      }
    }
    private Image _image2;

    public Device Device1
    {
      get
      {
        return _device1;
      }
      set
      {
        _device1 = value;
        OnPropertyChanged();
      }
    }
    private Device _device1;    

    public async Task StartPlayingAsync(Device toStartStreamingFrom)
    {
      toStartStreamingFrom.Stream.Start();
      try
      {
        while (Grabbing)
        {
          using (StreamImage image = await toStartStreamingFrom.Stream.WaitAsync())
          {
          }
        }
      }
      finally
      {
        if (toStartStreamingFrom.Stream.IsRunning)
          toStartStreamingFrom.Stream.Abort();
      }
    }
  }
}

Now, the Image2 property still exists, and our second display on the View also still exists and still binds to this property.
What we do now is set Image2 in the constructor to a black image of the same size as Image1, just to have something other than blank white space on startup:

Image2 = new Image(new Size2D(Image1.Width, Image1.Height));
Image2.Initialize(0);

Before we can use any filters on the image, we need to add a new reference and use it.
So go to your references, add Stemmer.Cvb.Foundation and don't forget to add it to the usings:

using Stemmer.Cvb.Foundation;

Short sync:

This is what the ctor should look like now:
public MainViewModel()
{
  Device1 = DeviceFactory.Open("CVMock.vin", 0, 0);
  Image1 = Device1.DeviceImage;
  Image2 = new Image(new Size2D(Image1.Width, Image1.Height));
  Image2.Initialize(0);
}

Now, let's go into StartPlayingAsync(Device toStartStreamingFrom), add some filtering to our camera images, and display the result on the second display:

StartPlayingAsync implementation with filter
public async Task StartPlayingAsync(Device toStartStreamingFrom)
{
  toStartStreamingFrom.Stream.Start();
  try
  {
    while (Grabbing)
    {
      using (StreamImage image = await toStartStreamingFrom.Stream.WaitAsync())
      {
        Image2 = Filter.Sobel(image, FilterOrientation.Horizontal, FixedFilterSize.Kernel3x3);
      }
    }
  }
  finally
  {
    if (toStartStreamingFrom.Stream.IsRunning)
      toStartStreamingFrom.Stream.Abort();
  }
}
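
If you want to compare results, you can also run the Sobel in the other direction; this assumes the FilterOrientation enum offers a Vertical member in addition to Horizontal:

// vertical edges instead of horizontal ones - same call, different orientation
Image2 = Filter.Sobel(image, FilterOrientation.Vertical, FixedFilterSize.Kernel3x3);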

That's it… if you run your example right now and hit ‘Grab’ you should see something like this:

Now, that was simple, wasn’t it?

Well, there is a little problem with this that you might stumble upon in the future if you are doing high-speed processing or really complex image manipulation, but you don't need to worry about it for now. If you are interested, feel free to reveal the spoiler below.

There is just one thing we have to keep in mind whenever we do any processing.
Currently the filter method takes 2 ms-5 ms (on my machine and in debug mode).
My CVMock.vin's FrameRate is set to 5, which means I get 5 images per second.
Thus I have 200 ms from acquiring one image until the next image arrives.
So we are all good here.
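
If you want to check this budget on your own machine, you can time the filter call with a plain .NET Stopwatch (nothing CVB-specific here); a minimal sketch that would go inside the using block:

// needs: using System.Diagnostics;
var stopwatch = Stopwatch.StartNew();
Image2 = Filter.Sobel(image, FilterOrientation.Horizontal, FixedFilterSize.Kernel3x3);
stopwatch.Stop();
Debug.WriteLine($"Sobel took {stopwatch.ElapsedMilliseconds} ms");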

If you aim at higher frame rates or have more complex processing tasks, your processing might eventually take longer than the time until the next image is acquired.
To handle these situations and prevent you from losing images, :cvb: has a nice little RingBuffer interface, which by default holds 3 buffers.
So, by default you are able to buffer 3 images for processing without data getting lost. Depending on how often your processing takes longer than the acquisition, this might still not be enough.
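
If you ever need more buffers, the size can be changed before starting the stream. The sketch below is only a rough idea based on the linked explanation; it assumes the stream exposes its buffers via a RingBuffer property with a ChangeCount method and that your driver supports the ringbuffer interface, so treat it as a starting point rather than gospel:

// assumption: the .vin driver supports the ringbuffer interface, otherwise RingBuffer is null
var ringBuffer = Device1.Stream.RingBuffer;
if (ringBuffer != null)
{
  // grow the buffer from the default of 3 to, say, 10 images before calling Stream.Start()
  ringBuffer.ChangeCount(10, DeviceUpdateMode.UpdateDeviceImage);
}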

For now, don't worry about this as it won't affect this tutorial, but if you like, have a look at the explanation here:
https://forum.commonvisionblox.com/t/getting-started-with-cvb-net/246/14

Or just search the forum for RingBuffer.

Summary
As always, here is the MainViewModel so you can sync your code; we did not touch the MainView this time.

MainViewModel.cs
using Stemmer.Cvb;
using Stemmer.Cvb.Async;
using Stemmer.Cvb.Driver;
using System.Threading.Tasks;
using Stemmer.Cvb.Foundation;

namespace WpfApp3
{
  class MainViewModel : ViewModelBase
  {
    public MainViewModel()
    {
      Device1 = DeviceFactory.Open("CVMock.vin", 0, 0);
      Image1 = Device1.DeviceImage;
      Image2 = new Image(new Size2D(Image1.Width, Image1.Height));
      Image2.Initialize(0);
    }

    public bool Grabbing
    {
      get => _grabbing;
      set
      {
        if (value != _grabbing)
        {
          _grabbing = value;
          if (value)
          {
            StartPlayingAsync(Device1);
          }
          OnPropertyChanged();
        }
      }
    }
    private bool _grabbing;

    public Image Image1
    {
      get
      {
        return _image1;
      }
      set
      {
        _image1 = value;
        OnPropertyChanged();
      }
    }
    private Image _image1;

    public Image Image2
    {
      get
      {
        return _image2;
      }
      set
      {
        _image2 = value;
        OnPropertyChanged();
      }
    }
    private Image _image2;

    public Device Device1
    {
      get
      {
        return _device1;
      }
      set
      {
        _device1 = value;
        OnPropertyChanged();
      }
    }
    private Device _device1;    

    public async Task StartPlayingAsync(Device toStartStreamingFrom)
    {
      toStartStreamingFrom.Stream.Start();
      try
      {
        while (Grabbing)
        {
          using (StreamImage image = await toStartStreamingFrom.Stream.WaitAsync())
          {
            Image2 = Filter.Sobel(image, FilterOrientation.Horizontal, FixedFilterSize.Kernel3x3);
          }
        }
      }
      finally
      {
        if (toStartStreamingFrom.Stream.IsRunning)
          toStartStreamingFrom.Stream.Abort();
      }
    }
  }
}

Happy coding and see you soon!
Cheers!
Chris