Getting Started with CVB.Net

The simplest way to get started is using Visual Studio. Make sure you have the .Net desktop development workload installed. Also a current version of CVB needs to be installed (download for Windows 32-bit or Windows 64-bit). Starting with :cvb: 13.02.000, CVB .Net is included in the standard installers for Windows. If you use an older 13.x version, you can still download it separately (see here).

First create a Console, Windows Forms or WPF App as you like (or take an existing one). In this post we just add the necessary assemblies. Later on we will see some code.


But before we start, let’s take care of a gotcha when using :cvb: 64-bit: the default project setting is AnyCPU, which is exactly what we need. Additionally, uncheck the Prefer 32-bit check box in the build settings for both Debug and Release:

If this stays checked, the process will be 32-bit. Then some things will not work as expected, or you immediately get a BadImageFormatException, as we install either 32-bit or 64-bit :cvb:.


I recommend reading this from top to bottom, but here are the main topics for the impatient:


Add Reference to Stemmer.Cvb Assemblies

Before you can do anything with :cvb: in .Net, you need to add the references to our Stemmer.Cvb assemblies:

In the opened dialog switch to Browse and click on the Browse… button:


Enter

%CVB%Lib\Net

in the address bar to jump to the :cvb: library directory for .Net. Ignore the iXXX.dll group of DLLs (e.g. iCVCImg.dll) – these are the C-style P/Invoke assemblies. We are interested in the Stemmer.Cvb.* assemblies.

The Assemblies

Image Manager (and thus CameraSuite)

Stemmer.Cvb.dll
This is the core assembly containing basic Image and Device handling including acquisition via Streams and configuration (like GenICam GenApi or IDeviceControl).

This now combines the functionality of CVCError.dll, iCVCImg.dll, iCVCDriver.dll, iCVCUtilities.dll, and iCVGenApi.dll.

Stemmer.Cvb.Forms.dll
The :cvb: Display and GenApiGrid for Windows Forms apps. If you reference this one, Stemmer.Cvb.Extensions.dll and Stemmer.Cvb.Aux.dll are also needed.

This combines the functionality of the CVDisplay ActiveX and CVGenApi ActiveX controls.

Stemmer.Cvb.Wpf.dll
The :cvb: Display and GenApiGrid for WPF apps. You also need Stemmer.Cvb.Extensions.dll if you reference this assembly.

In principle this also combines the functionality of the CVDisplay ActiveX and CVGenApi ActiveX controls. The display is a pure WPF implementation, though.

Stemmer.Cvb.Aux.dll
This is needed by the Stemmer.Cvb.Forms.Controls.Display for native interop handling (mostly overlay related).

Stemmer.Cvb.Extensions.dll
This contains extension methods, especially for the System.Drawing namespace, to be able to support .Net Core.

Foundation Package

Stemmer.Cvb.Foundation.dll
All Foundation functions and Foundation tools are bundled in here, covering for example correlation, filtering, non-linear calibration, blob analysis, optical flow and more.

Combined in here is the functionality of the iCVCFoundation.dll, iBayerToRGB.dll, iCVCEdge.dll, and iLightMeter.dll.

Tools

  • Stemmer.Cvb.Manto.dll
  • Stemmer.Cvb.Minos.dll
  • Stemmer.Cvb.Movie2.dll
  • Stemmer.Cvb.Polimago.dll
  • Stemmer.Cvb.SampleDatabase.dll
    (Sample Image List (SIL) handling)
  • Stemmer.Cvb.ShapeFinder.dll

Loading and Saving Images (Console)

Let’s start very simple: we want a Console app that takes two arguments:

  1. Image input path
  2. Image output path

The app loads the image given as the first argument and saves to the path in the second argument:

  1. Add namespace (we need the Stemmer.Cvb.dll for this):

    using Stemmer.Cvb;

  2. Load the input image via Image.FromFile factory method:

    var image = Image.FromFile(args[0]);

    (we assume that the user provided the input in args[0])

  3. Save the image:

    image.Save(args[1]);

    (we assume that the user provided the output in args[1])

:cvb: determines the file format based on the file’s extension (.bmp, .jpg, .png,…).

The full app with (simple) error handling.
using Stemmer.Cvb;
using System;
using System.IO;

namespace CvbPlayground
{
  class Program
  {
    static void Main(string[] args)
    {
      if (args.Length != 2)
      {
        PrintHelp();
        return;
      }

      try
      {
        using (var inputImage = Image.FromFile(args[0]))
        {
          inputImage.Save(args[1]);

          Console.WriteLine("Convert");
          Console.WriteLine(" " + Path.GetFullPath(args[0]));
          Console.WriteLine("->");
          Console.WriteLine(" " + Path.GetFullPath(args[1]));
        }
      }
      catch (IOException ex)
      {
        Console.WriteLine(ex);
      }
    }

    private static void PrintHelp()
    {
      Console.WriteLine("Usage: " + AppName + " <input> <output>");
      Console.WriteLine();
      Console.WriteLine("<input>  Path to input file to convert.");
      Console.WriteLine("<output> Path to output file to convert.");
    }

    private static string AppName
    {
      get
      {
        return Path.GetFileNameWithoutExtension(typeof(Program).Assembly.Location);
      }
    }
  }
}

If you read this far :clap:, you get a little bonus: perhaps you noticed the using (var inputImage… block. :cvb: Images are IDisposable types. That means you can free their resources (memory in this case) immediately via image.Dispose(). A using block always does that when the program exits its scope, even if exceptions occur. So we use the image only as long as we actually need it.
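To make that concrete, the using block is just compiler shorthand for a try/finally; the following sketch shows the equivalent hand-written code:

```csharp
// Equivalent to: using (var inputImage = Image.FromFile(args[0])) { ... }
var inputImage = Image.FromFile(args[0]);
try
{
  inputImage.Save(args[1]);
}
finally
{
  inputImage.Dispose(); // runs even if Save() throws
}
```

This is standard C# semantics, nothing :cvb:-specific – any IDisposable type behaves this way.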

Loading Image Stream Sources (Console)

Previously we have seen how to load and save single images with :cvb::

In Common Vision Blox we mostly deal with cameras which send a stream of images. So let’s have a look at how to do this (with an AVI video for starters):

  1. Add namespaces (we only need Stemmer.Cvb.dll):

    using Stemmer.Cvb;
    using Stemmer.Cvb.Driver; // for IndexedStream
    using Stemmer.Cvb.Utilities; // for SystemInfo
    
  2. We use a tutorial video:

    var aviPath = Path.Combine
    (
      SystemInfo.InstallPath, 
      "Tutorial", 
      "Foundation", 
      "Videos", 
      "Cycling (xvid).avi" 
    );
    

    We get the :cvb: directory from CVB’s SystemInfo (which also contains defaults and license information).

  3. Stream sources are Devices:

    Device videoDevice = DeviceFactory.Open(aviPath);
    

    (everything that provides an image stream is a Device: vin-driver, avi and emu)

  4. And these have Streams and DeviceImages:

    IndexedStream stream = videoDevice.Stream as IndexedStream;
    DeviceImage deviceImage = videoDevice.DeviceImage;
    

With these objects we can print out some very basic information:

Console.WriteLine("Video:");
Console.WriteLine("  Resolution:  " + deviceImage.Size);
Console.WriteLine("  Color model: " + deviceImage.ColorModel);
Console.WriteLine("  Frames:      " + stream.ImageCount);

If we execute this, we get:

Video:
  Resolution:  [1024; 576]
  Color model: RGBGuess
  Frames:      300

We will take a look at acquisition later on.

Quick Overview of the Types

Stream vs IndexedStream

The IndexedStream is a special kind of Stream. A Stream is a possibly endless stream of Images (e.g. from a camera). An IndexedStream is a finite stream with an ImageCount. To access this special property we needed to downcast the Stream into an IndexedStream via as. Both video-streams and emu-streams are indexed.
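If you are not sure whether a given Device delivers a finite stream, the as cast simply returns null for an endless stream; a defensive check (sketch, using C# 7 pattern matching) avoids a NullReferenceException later on:

```csharp
// Only enter the branch if the cast to IndexedStream succeeds
if (videoDevice.Stream is IndexedStream indexed)
  Console.WriteLine("Finite stream with " + indexed.ImageCount + " frames");
else
  Console.WriteLine("Endless stream (e.g. a live camera)");
```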

ColorModel and Planes

Perhaps you wondered about ColorModel.RGBGuess. :cvb: Images only know ImagePlanes. A Mono8 image would have one plane with a DataType DataTypes.Int8BppUnsigned (8 bit per pixel unsigned integer – or byte for short :slight_smile:). An RGB8 image on the other hand would have three planes with a byte DataType each.

This information is not enough to really determine the color model: YUV or HSV images also have three planes. When :cvb: was conceived 20 years ago, the Windows Bitmap was widely used and it only worked with monochrome or RGB images (even indexed images resulted in RGB images). Thus we assumed three planes to be RGB. (A lot of our algorithms actually do not care about the color model or even work on one ImagePlane.)

If you use Stemmer.Cvb.Foundation.ConvertColorSpace methods, you get the actual ColorModel from the resulting images.
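To see what this means in practice, here is a small sketch that inspects an image’s planes; it assumes the Planes collection and the per-plane DataType property described above (the file path is, of course, just a placeholder):

```csharp
var image = Image.FromFile(@"C:\path\to\image.bmp"); // hypothetical path
Console.WriteLine("Planes:      " + image.Planes.Count);
foreach (ImagePlane plane in image.Planes)
  Console.WriteLine("  DataType:  " + plane.DataType);
Console.WriteLine("Color model: " + image.ColorModel);
```

A Mono8 file would report one plane and a Mono color model; an RGB8 file three planes and RGBGuess.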

Image vs DeviceImage

A DeviceImage is a very special kind of Image. Normally the pixel buffer stays the same for a given Image. A DeviceImage on the other hand represents the latest image from a Device. The object stays the same, but the pixel buffer changes. Thus you should be careful when using these images when you process their image data – especially in multi-threaded environments. For display purposes on the other hand these images are perfect.

But the pixel buffer does not change randomly: only if you call one of the Stream.Wait or Stream.GetSnapshot methods. Acquisition will be handled later in more detail.

Acquire Images from a Stream (Console)

We continue with the loaded stream from Loading Image Stream Sources (Console):

The goal is to write all images from that video stream as single images to a target folder. For simplicity we create a folder on the Desktop (everybody loves additional folders on the desktop :wink:):

var targetDir = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.Desktop), "CvbNetDemo");
if (!Directory.Exists(targetDir))
  Directory.CreateDirectory(targetDir);

From the post above we know the number of images in the video (ImageCount = 300). So here is the snippet with the loop and the code to get the stream running:

stream.Start();
try
{
  for (int i = 0; i < stream.ImageCount; i++)
  {
    using (StreamImage image = stream.Wait())
    {
      // Save here...
    }
  }
}
finally
{
  stream.Stop();
}

So, stream.Start() starts the acquisition, stream.Stop() halts it and stream.Wait() waits for the next image to arrive in the stream. The StreamImage by default is only valid until the next call to Stream.Wait() (after which it is disposed). Thus using the using block (no pun intended) makes that clear in the code (the acquisition engine doesn’t double-dispose – we take care of that :wink:).

We also have learned how to save images – it will be easy to complete the app:

Snippet to put into Main()
var targetDir = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.Desktop), "CvbNetDemo");
if (!Directory.Exists(targetDir))
  Directory.CreateDirectory(targetDir);

var aviPath = Path.Combine(SystemInfo.InstallPath, "Tutorial", "Foundation", "Videos", "Cycling (xvid).avi" );

using (var videoDevice = DeviceFactory.Open(aviPath))
{
  var stream = videoDevice.Stream as IndexedStream;
	
  stream.Start();
  try
  {
    for (int i = 0; i < stream.ImageCount; i++)
    {
      using (StreamImage image = stream.Wait())
      {
        var targetFileName = Path.Combine(targetDir, string.Format("{0:000}.jpg", i));
        image.Save(targetFileName);
      }
    }
  }
  finally
  {
    stream.Stop();
  }
}

Fun Interlude (Linq-style Processing)

Ok, this is at least fun for me… :nerd_face: If you like language integrated query (Linq) style, you can use that also on Streams :tada:.

All you need is the Reactive Extensions (Rx)

To install these best use NuGet

Tools > NuGet Package Manager > Manage NuGet Packages for Solution…

And there Browse for System.Reactive and Install it for your project:

With that you can write the whole thing like this (boilerplate from above excluded):

int i = 0;
await videoDevice
  .Stream
  .ForEachAsync(image => image.Save(Path.Combine(targetDir, $"{i++:000}.jpg")));

Want to do some processing? (Now it starts to become linq-ish):

await videoDevice
  .Stream
  .Select(image => image.Map(new Size2D(image.Width / 2, image.Height / 2)))
  .ForEachAsync(image => image.Save(Path.Combine(targetDir, $"{i++:000}.jpg")));

Attention: Reactive extensions are nice if you need a full set of images. But if you need adaptive error handling (e.g. if you have a supply chain app and must cope with camera disconnects and the like), this is not the right solution. Rx stops processing on the first encountered error!
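If only the per-image work (like saving) may fail and you want to keep the sequence alive, you can catch inside the callback; note that this does not help with acquisition errors, which still terminate the Rx chain (sketch):

```csharp
int i = 0;
await videoDevice
  .Stream
  .ForEachAsync(image =>
  {
    try
    {
      image.Save(Path.Combine(targetDir, $"{i++:000}.jpg"));
    }
    catch (IOException ex)
    {
      // a failed save skips this frame but keeps the stream running
      Console.WriteLine("Skipped frame " + i + ": " + ex.Message);
    }
  });
```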

Explanations

Rx uses a so-called push model. We do the pushing by running an acquisition loop like the one in the above example in a long-running Task. Thus all processing starts from a different thread context! If we wouldn’t await, our app would finish before anything could happen, as all the calls are async. The Linq chain is called by this acquisition loop Task.

Using this directly in a Console app is only easily possible with C# 7.1 or newer (which introduced async Main). In Windows Forms or WPF you can use it more easily.

This is just an appetizer and Rx can do a lot more. But this is not covered here. We also have more classical approaches available if async processing is too complex for now.

Acquire from a Camera (Console)

In general this is similar to the things we have done so far:

You have two general options on how to open devices:

  1. DeviceFactory.Open and its related methods to open a vin-driver.
  2. DeviceFactory.Discover for GenICam GenTL based drivers (like GigE Vision or USB3 Vision)
Don't have a camera?

Then let’s simulate one with the :cvb: GigE Vision Server! (You need the full :cvb: install – not CameraSuite – for that.)

Depending on your operating system find the

GEVServer > C# GEVServer Example

This can be found either directly in the start menu (e.g. Windows 7) or via the Tutorials link (e.g. Windows 10).

When starting the app select one of the available IP addresses (best one not in your company network to keep your admins on your good side :wink:). Also select the Socket Driver instead of the Filter Driver as we want to do loop-back (communication on the same machine). Then hit Start and you have a camera.

If you have no available IP address (e.g. offline usage) or no private one you can install a virtual network adapter as described here.

Case 1: Load Vin-Driver

For configuration I just assume you use a GenICam based device (like GEV, U3V). For other vin-drivers please consult the driver’s documentation on how to configure them. You can use the :cvb: default way of configuring your devices via the GenICamBrowser or the Common Vision Blox Management Console.

Configure for local GEV Server

When you want to acquire images from a locally running GEV Server, you also need to set the used driver to Socket. This can currently only be done in the Common Vision Blox Management Console or the GEV Config Manager. For the Management Console you find this setting in the camera’s context menu.

To access these configured devices you can simply call

Device firstDevice = DeviceFactory.Open("GenICam.vin");

to open the firstDevice. Depending on the driver you can also access other configured cameras like this:

Device secondDevice = DeviceFactory.OpenPort("GenICam.vin", 1);

(It is zero-based indexing.)

Case 2: Discover devices

This so far only works for GenICam GenTL based devices (like GEV, U3V). Here you don’t need the additional configuration step. Simply call

DiscoveryInformationList foundDevices = DeviceFactory.Discover();

to find all available devices in your environment (this is the same as the output in the Management Console).

Discover for local GEV Server

For this you need to change the default filter:

var foundDevices = DeviceFactory.Discover(DiscoverFlags.IgnoreVins | DiscoverFlags.IgnoreGevFD);

Assuming we found at least one device (foundDevices.Count > 0), we can open it like this:

Device device = DeviceFactory.Open(foundDevices[0]);

Streaming

As we have a Device again, we can do the streaming like with the video file. The only difference now is that we just have an (endless) Stream instead of an IndexedStream:

Stream stream = device.Stream;

stream.Start();
try
{
  for (int i = 0; i < 10; i++)
  {
    using (StreamImage image = stream.Wait())
    {
      // Processing...
    }
  }
}
finally
{
  stream.Stop();
}

Looks nearly the same, except for the Stream type and the loop over 10 images.

Configure a Device with the GenApi

Often, before acquiring images from a Device, you want to configure it. So we open a Device as described in the previous post:

If the Device supports GenICam GenApi, then there will be NodeMaps:

device.NodeMaps.Count > 0

Most of the time we are interested in the features of the camera (remote device):

NodeMap deviceNodeMap = device.NodeMaps[NodeMapNames.Device];

For remote Devices we will always have at least the NodeMapNames.Device NodeMap.

With this NodeMap we can access the camera’s features. Let’s start with some reporting:

Console.WriteLine("Vendor:  " + deviceNodeMap["DeviceVendorName"]);
Console.WriteLine("Model:   " + deviceNodeMap["DeviceModelName"]);
Console.WriteLine("Version: " + deviceNodeMap["DeviceVersion"]);

This results in something like this:

Vendor:  STEMMERIMAGING
Model:   CVGevServer
Version: 3.4.0.797_C#-Example

What happened here? The indexer of the NodeMap returns a Node object. Console.WriteLine then calls .ToString() on these which is mapped to the derived types’ Value property:

// for StringNode you need: using Stemmer.Cvb.GenApi

var vendorName = deviceNodeMap["DeviceVendorName"] as StringNode;
Console.WriteLine("Vendor:  " + vendorName.Value);

In case of a StringNode the .Value property is a string.

In case of ExposureTime this would be a FloatNode (according to the GenICam Standard Features Naming Convention, SFNC):

var exposure = deviceNodeMap["ExposureTime"] as FloatNode;
exposure.Value = 40000.0; // unit is micro-seconds

You can also get other feature/type specific information like

  • .Max: the current maximum allowed value
  • .Min: the current minimum allowed value
  • .Increment: the step-size: feature.Min + x * feature.Increment

Common to all Nodes is also the AccessMode which shows whether this feature is readable and/or writable or currently not available at all. For better readability you can also use the .IsAvailable, .IsReadable or .IsWritable properties before trying to access Nodes.
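Putting the range and access checks together, a defensive write could look like this (sketch; the ExposureTime name follows the SFNC, and the value is clamped to the currently allowed range before writing):

```csharp
if (deviceNodeMap["ExposureTime"] is FloatNode exposure && exposure.IsWritable)
{
  double target = 40000.0; // micro-seconds
  // clamp to the current limits before writing
  exposure.Value = Math.Max(exposure.Min, Math.Min(exposure.Max, target));
}
```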

How to find the Node's type?

You can do it

  1. Programmatically
    By getting the Node and looking up its type in the debugger (or calling .GetType().Name). This gives you its most derived type, of course. Normally you don’t want the RegisterNode interfaces, so you can simply use a StringRegNode as a StringNode, since StringRegNode derives from StringNode.

  2. Use the GenApi Grid. For the Windows control (e.g. in the Management Console or as in Stemmer.Cvb.Forms.Controls.GenApiGrid) you can use the context menu on a feature and open Properties…:

    (screenshot of the feature’s Properties dialog)

    You can see its basic type in the second category name: String. Thus this is a StringNode.

Display a Stream (Windows Forms)

For this post, let’s create a new Windows Forms App (Visual C#) and call it CvbViewer:

After the wizard finishes its work you should see an open Form1.cs [Design].

Don’t forget to disable the Prefer 32-bit check-box.

Open your Toolbox, right-click on the General tab and click on Choose items… After loading finishes you can click on Browse…:


Enter

%CVB%Lib\Net

and choose Stemmer.Cvb.Forms. Then confirm this and the Choose Toolbox Items dialog with Ok. You now should see three new entries under General:

  • Display
  • GenApiGrid
  • StreamHandler

Before we can do anything useful we also need to add the Stemmer.Cvb.dll, Stemmer.Cvb.Forms.dll, Stemmer.Cvb.Aux.dll and Stemmer.Cvb.Extensions.dll to the references.

  1. We quickly add an “Open”-Button (openButton) and a “Grab”-CheckBox (grabCheckBox) and anchor them to the Bottom, Left.
  2. We add the StreamHandler as streamHandler to the Form1. It will be placed below the Designer window in the Components section.
  3. We add the Display as display and anchor it Top, Bottom, Left, Right.

Having finished our UI design, we can double-click on the Open… button to create its Click handler and enter the following code:

private void openButton_Click(object sender, EventArgs e)
{
  try
  {
    Device device = FileDialogs.LoadByDialog(path => DeviceFactory.Open(path), "CVB Devices|*.vin;*.emu;*.avi");
    if (device == null)
      return; // canceled

    Device = device;
  }
  catch (IOException ex)
  {
    MessageBox.Show(ex.Message, "Error loading device", MessageBoxButtons.OK, MessageBoxIcon.Error);
  }
}

and

private Device Device
{
  get { return _device; }
  set
  {
    if (!ReferenceEquals(value, _device))
    {
      display.Image = value?.DeviceImage;
      streamHandler.Stream = value?.Stream;

      _device?.Dispose(); // old one is not needed anymore
      _device = value;
    }
  }
}
private Device _device;

To make this work add the following namespaces:

using Stemmer.Cvb;
using Stemmer.Cvb.Forms;
using System.IO;

We use the helper FileDialogs.LoadByDialog to easily open a FileDialog that returns the opened Device or null if canceled. This dialog supports avi- and emu-files as well as preconfigured vin-drivers.

We also utilize a Device property (Form1.Device) to manage the life-time of previously loaded devices and to assign the DeviceImage to the display and the Device.Stream to the streamHandler. (If ?. looks unfamiliar: that is automatic null handling: null is assigned if value == null.)

When we run the app and load e.g. the GenICam.vin, we see a black image in the Display. This is normal as we don’t do “Auto Snap” and no data has been acquired, yet.

Finally let’s connect to the grabCheckBox's CheckedChanged event by double-clicking on the CheckBox in the designer. Add the following code:

private void grabCheckBox_CheckedChanged(object sender, EventArgs e)
{
  if (grabCheckBox.Checked)
    streamHandler.Start();
  else
    streamHandler.Stop();

  openButton.Enabled = !grabCheckBox.Checked;
}

And that’s it. You have your live-display :cvb: app.

If you want to be correct and properly clean up, move the void Dispose(bool disposing) method (see the Dispose pattern for more info about this) from the .Designer.cs into your implementation cs file and change it to:

protected override void Dispose(bool disposing)
{
  if (disposing)
  {
    components?.Dispose();
    Device?.Dispose();
  }
  base.Dispose(disposing);
}

components?.Dispose() will dispose the streamHandler, which will abort any ongoing acquisition. And last but not least we clean up our opened Device. This is especially useful if you use your window as a dialog in your application and must ensure that the Device is not open anymore after the dialog is closed.

Processing with the StreamHandler (Windows Forms)

This post continues on the project created at:

The StreamHandler is designed to work like the Image ActiveX control (without being an ActiveX control :+1:). Thus it also has events to notify you on new images and errors:

  • NewImage event
  • Error event

If you know how the old Image ActiveX and Display ActiveX controls worked or you simply wondered how the Display knew that a new image arrived: The :cvb: Image has a PixelContentChanged event which the Display registers to. All our SDK methods changing the pixel data raise this event. That is why we didn’t need to implement the NewImage events to refresh the display in our simple CvbViewer app.

To implement the events we select our streamHandler on the Form’s Designer view and switch to the events in the Properties:

Here we double-click on both the Error and the NewImage events to register them. These events are called from the UI thread-context! So the good news is that you are able to directly manipulate your UI elements. The bad news is that, if you do lengthy processing, your UI hangs or at least becomes laggy.

There are ways to work around the lags. C# has the async/await keywords for asynchronous processing. And you probably saw the NewImageAsync and ErrorAsync events. These are not the preferred solution, though – they are for the scenario that you started with the NewImage event and later found out that your UI becomes too laggy. They offer an easier migration path to async/await without rewriting your app. The preferred way is

Stemmer.Cvb.Async

Attention: It is beyond the scope of this getting started guide to go into the details of asynchronous processing. Often this is also paired with parallel processing, which is not a beginner’s topic anymore. If you are an expert: yes, we support that in multiple ways. :slight_smile:

Processing with Stemmer.Cvb.Async (Windows Forms)

In the last post we have seen how to do image processing with the StreamHandler:

And there I also stated that if you want a responsive UI, you should use the Stemmer.Cvb.Async functionality.

Disclaimer: this quickly becomes an advanced topic if you do parallel processing!

You have been warned… :wink: We integrate nicely with the asynchronous and parallel processing capabilities of .Net.

Stream is an IObservable

One way we have seen above with Rx:

:cvb: Streams implement the IObservable&lt;T&gt; interface which enables you to .Subscribe to the Stream. The Stream pushes the acquired Images from a long-running Task to your registered IObserver&lt;T&gt; (where T is StreamImage). The StreamHandler in fact utilizes this; on top of that implementation it adds the synchronization to the UI thread-context to make your life easier. But if you want to do processing outside the UI thread-context, IObservable&lt;T&gt;.Subscribe is a good place to start.
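As a sketch of that route: implement IObserver&lt;StreamImage&gt; yourself and subscribe it; OnNext is then called from the acquisition Task, not the UI thread. (Whether you still need to call Stream.Start() yourself may depend on the driver; check the Stream documentation.)

```csharp
class SaveObserver : IObserver<StreamImage>
{
  private int _counter;

  public void OnNext(StreamImage image)
    => image.Save($"{_counter++:000}.jpg"); // runs on the acquisition Task

  public void OnError(Exception error)
    => Console.WriteLine("Acquisition failed: " + error.Message);

  public void OnCompleted()
    => Console.WriteLine("Stream finished");
}

// Subscribe returns an IDisposable that unregisters the observer:
using (IDisposable subscription = device.Stream.Subscribe(new SaveObserver()))
{
  device.Stream.Start();
  Console.ReadKey(); // keep the app alive while images are pushed
  device.Stream.Stop();
}
```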

Stemmer.Cvb.Async Acquisition

In this post I describe a way that is more like the StreamHandler (it originates from the UI thread-context), but is more flexible. To make this work you first need to add these two namespaces:

using Stemmer.Cvb.Async;
using Stemmer.Cvb.Driver;

The first one is for the async extension methods and the second one for the StreamImage which is returned by the .Wait method. For this to work you only need to rewrite your grabCheckBox_CheckedChanged event handler like this:

private async void grabCheckBox_CheckedChanged(object sender, EventArgs e)
{
  if (grabCheckBox.Checked)
  {
    Device.Stream.Start();
    openButton.Enabled = false;
    try
    {
      while (grabCheckBox.Checked)
      {
        using (StreamImage image = await Device.Stream.WaitAsync())
        {
          // processing
        }
      }
    }
    catch (OperationCanceledException)
    {
      // acquisition was aborted
    }
    finally
    {
      openButton.Enabled = true;
    }
  }
  else
  {
    if (Device.Stream.IsRunning)
      Device.Stream.Abort();
  }
}

Note the async keyword added in the method signature and the

await Device.Stream.WaitAsync()

call. Windows Forms is async/await capable, so if you check the grabCheckBox this method enters the then-branch of the if-conditional. It all runs in the UI thread-context. All these calls are reasonably fast, so you won’t experience any lags – until you reach the .Wait. This is now an awaitable .WaitAsync. The await cooperatively waits for the operation to finish and continues afterwards. While waiting, though, it gives control back to the UI so it stays responsive.

If you uncheck the grabCheckBox this method is re-entered(!), but now goes to the else branch which .Aborts the acquisition. When there is a free slot in the Windows Forms message queue handling, the still running grabCheckBox_CheckedChanged then-branch is resumed and also exits (as .Checked is now false). Also, depending on when the .Abort() was issued, the .Wait may have been canceled (resulting in the OperationCanceledException).

Ok, why use this instead of the events? Because you are more flexible. You could

  • use WaitFor if you expect images to arrive in a certain time-frame
  • wait on two or more cameras simultaneously
  • have a driver that does not support .Abort()
  • :wink:
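For example, a bounded wait per frame could look like this; the sketch assumes a WaitFor overload taking a TimeSpan that throws a TimeoutException on expiry – check the actual signatures in the API documentation:

```csharp
try
{
  // wait at most 500 ms for the next frame
  using (StreamImage image = stream.WaitFor(TimeSpan.FromMilliseconds(500)))
  {
    // processing
  }
}
catch (TimeoutException)
{
  Console.WriteLine("No image within 500 ms – is the camera still connected?");
}
```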

Exit Handling

Exit handling should always be a priority if you do async or even parallel processing (e.g. via Task.Run). As long as you only perform short tasks in the UI thread, you are fine if you call

grabCheckBox.Checked = false;

in the void Dispose(bool disposing) method if disposing is true.

If you do your own parallel processing, then you must wait until that is finished in exit scenarios. I recommend creating a TaskCompletionSource&lt;bool&gt; (it has to have some type) like this (this code is inserted where the comment // Processing... is placed):

var tsc = new TaskCompletionSource<bool>();
_processingFinished = tsc.Task; // _processingFinished is of type Task
await Task.Run(() =>
{
  // lengthy processing

  // when finished:
  tsc.TrySetResult(true);
});

And then register the FormClosing event:

if (grabCheckBox.Checked)
{
  grabCheckBox.Checked = false;
  await _processingFinished;
}

This is safe as the TaskCompletionSource is created in the UI thread-context and the FormClosing event is also executed in the UI thread-context.

Display a Stream (WPF)

We do a non-MVVM demo of a WPF app for :cvb:. You can create an MVVM app as our controls support binding (see my answer here for an example). Let’s create a new WPF App (Visual C#) and call it CvbWpfViewer:

After the wizard finishes you should see an empty MainWindow.xaml.

Don’t forget to disable the Prefer 32-bit check-box.

For this app we need the Stemmer.Cvb.dll, Stemmer.Cvb.Wpf.dll, and Stemmer.Cvb.Extensions.dll as references. To be able to use our :cvb: UI controls, we need to add our namespace to the <Window ...>:

xmlns:cvb="http://www.commonvisionblox.com/wpf"

UI Design

First we add two rows to the Grid:

<Grid.RowDefinitions>
  <RowDefinition Height="*" />
  <RowDefinition Height="Auto" />
</Grid.RowDefinitions>

The first row is for the Display and the second row for the controls. Let’s add them:

<cvb:Display x:Name="display" Grid.Row="0" />

<StackPanel Grid.Row="1"
            Orientation="Horizontal" Margin="0,4,0,0">
  <Button x:Name="openButton" Content="Open..." 
          MinWidth="75" Margin="0, 0, 4, 0" />
  <CheckBox x:Name="grabCheckBox" Content="Grab"
            VerticalAlignment="Center" />
</StackPanel>

I also gave the Grid a Margin="8". Yes, this is just for brevity’s sake. In a real app you should put the visual stuff in <Style>s.

Adding Behavior

We click on the Button and open the events in the Properties:


Double-click in the text box right to the Click event to create the event handler. Enter the following code:

private void openButton_Click(object sender, RoutedEventArgs e)
{
  try
  {
    Device device = FileDialogs.LoadByDialog(path => DeviceFactory.Open(path), "CVB Devices|*.vin;*.emu;*.avi");
    if (device == null)
      return; // canceled

    Device = device;
  }
  catch (IOException ex)
  {
    MessageBox.Show(ex.Message, "Error loading device", MessageBoxButton.OK, MessageBoxImage.Error);
  }
}

and

private Device Device
{
  get { return _device; }
  set
  {
    if (!ReferenceEquals(value, _device))
    {
      display.Image = value?.DeviceImage;

      _device?.Dispose(); // old one is not needed anymore
      _device = value;
    }
  }
}
private Device _device;

To make this compile add the following namespaces:

using Stemmer.Cvb;
using Stemmer.Cvb.Wpf;
using System.IO;

We use the helper FileDialogs.LoadByDialog to easily open a FileDialog that returns the opened Device or null if canceled. This dialog supports avi- and emu-files as well as preconfigured vin-drivers.

We also utilize a Device property (MainWindow.Device) to manage the life-time of previously loaded devices and to assign the DeviceImage to the display. (If ?. looks unfamiliar: that is automatic null handling: null is assigned if value == null.)

When we run the app and load e.g. the GenICam.vin, we see a black image in the Display. This is normal as we don’t do “Auto Snap” and no data has been acquired, yet.

Finally we connect the grabCheckBox's Checked and Unchecked events by double-clicking in the text-boxes right of them. We add the following code:

private async void grabCheckBox_Checked(object sender, RoutedEventArgs e)
{
  Device.Stream.Start();
  openButton.IsEnabled = false;
  try
  {
    while (grabCheckBox.IsChecked ?? false)
    {
      using (StreamImage image = await Device.Stream.WaitAsync())
      {
        // processing
      }
    }
  }
  catch (OperationCanceledException)
  {
    // acquisition was aborted
  }
  finally
  {
    openButton.IsEnabled = true;
  }
}

(note the async in the event handler’s signature)
and

private void grabCheckBox_Unchecked(object sender, RoutedEventArgs e)
{
  if (Device != null && Device.Stream.IsRunning)
    Device.Stream.Abort();
}

For this code to compile you need these namespaces:

using Stemmer.Cvb.Async;
using Stemmer.Cvb.Driver;

Ok, this is a lot. But it is nearly the same as in the previous post, apart from the adjustments for WPF.

Exit Handling

As described in the previous post, exit handling is important. Sadly, WPF Windows do not implement IDisposable. As we store IDisposable types (the Device), we had better implement that interface ourselves. But this alone doesn't help us here, as nobody calls .Dispose() on our MainWindow. The correct solution is to implement the IDisposable interface and .Abort() the acquisition in the Window's Closing event. For brevity's sake I put the exit handling directly into the event handler (go back to the designer, select the Window and double-click in the text box to the right of the Closing event):

private void Window_Closing(object sender, CancelEventArgs e)
{
  grabCheckBox.IsChecked = false;
  Device?.Dispose();
}

(We use the Closing event because at this point in time the MainWindow is still alive.)

This is what the correct way looks like:

The Closing event also has to be registered, and the MainWindow must implement IDisposable:

public partial class MainWindow : Window, IDisposable

Then we need a finalizer:

~MainWindow()
{
  Dispose(false);
}

then the .Dispose() method:

public void Dispose()
{
  Dispose(true);
  GC.SuppressFinalize(this);
}

then the actual dispose handler:

protected virtual void Dispose(bool disposing)
{
  if (disposing)
  {
    Device?.Dispose();
  }
}

and, last but not least, our Closing event handler:

private void Window_Closing(object sender, CancelEventArgs e)
{
  grabCheckBox.IsChecked = false;
}

In this case we defer clean-up to the user of our MainWindow. If it is opened manually, the implementer is responsible for disposing it. In case the framework opens the MainWindow, the process exits anyway and the garbage collector will clean up the Device.

Stream Statistics

When streaming from a camera you can query statistics to monitor your acquisition health.

A good point in time to query these statistics is right after your Stream.Wait call (or any (async) variant of it – also inside the NewImage event which is fired right after the Wait call). You can do this via

device.Stream.Statistics

The reason for querying the statistics right after the wait is that acquisition is asynchronous. The Wait method is your only way to sync to that process. The statistic values are also only valid while the acquisition IsRunning (between Start and Stop/Abort).

There are several measurement points defined, of which every driver implements its own subset. You can print the available ones with the following snippet:

foreach (var info in device.Stream.Statistics)
{
  Console.WriteLine(info.Key);
}

If you only work with one specific driver (e.g. for evaluation) you can query a value directly:

double numLostRBOverflow = device.Stream.Statistics[StreamInfo.NumBuffersLostLocked];

It is safer, though, to query the statistic without throwing an exception if it isn't available:

bool isPresent = device.Stream.Statistics.TryGetValue(StreamInfo.NumBuffersLostLocked, out double numLostRBOverflow);

In later posts I will describe two typical monitoring use cases.

RingBuffer Fill Level

This is for when you want to monitor whether your processing can keep up with the incoming images.

Interlude: The IRingBuffer Interface

Vin drivers receive image data, e.g. as frames from a camera. This data needs to be stored somewhere: in a buffer, which is a contiguous chunk of memory. A ring buffer is a circular buffer used to store multiple of these buffers. It is called a ring buffer because single buffers are “taken” from it and “returned” when not needed anymore. When you have reached the last buffer in the ring buffer, the first one is retrieved again, and thus the ring closes. With this data structure you can keep acquiring image data for as long as you wish with a finite and constant amount of memory.

Aside from the ring buffer’s basic properties we use it for these three main reasons:

  1. Enable parallel acquisition and processing

    A ring buffer must have at least three buffers (no really, don’t use less than three buffers :wink:). One to acquire into, one for the current processing, and the last one to be able to switch between the first two without losing data.

  2. Compensate for varying processing times

    We use more than three buffers to compensate for jitter in processing time. If your processing time is always less than the time between two acquired images, you are fine. But if there are peaks of longer processing time, you would lose images. With more buffers you can handle those peaks.

  3. Improve acquisition speed and reduce memory fragmentation

    Memory allocation takes time. Even more so when memory becomes fragmented. The last reason might only be partially intuitive to a .Net developer as .Net has a managed heap. For .Net it “only” takes additional work to compact the memory; for native apps it may mean “out of memory” as no chunk of memory large enough for one image is available anymore.

    Hardware normally does not use the managed heap or at least needs pinned memory. Some hardware even needs to know all these addresses before starting the acquisition.

We will see more of the IRingBuffer interface in later posts. You can do a lot of cool stuff with it.
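
To make the wrap-around concrete, here is a minimal, self-contained sketch of the indexing idea behind a ring buffer. This is illustrative only and not the CVB IRingBuffer API; all names here are made up:

```csharp
int count = 3;                       // number of buffers in the ring
byte[][] buffers = new byte[count][];
for (int i = 0; i < count; i++)
    buffers[i] = new byte[4];        // tiny stand-in for an image buffer

int next = 0;                        // index the acquisition engine fills next

// "Take" the next buffer and advance; after the last buffer the first
// one is reused. This wrap-around is what closes the ring.
byte[] Acquire()
{
    byte[] buffer = buffers[next];
    next = (next + 1) % count;
    return buffer;
}
```

With a ring of three buffers, the fourth Acquire call hands out the very first buffer again, so memory usage stays constant no matter how long you acquire.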

Most of the vin drivers (including the aforementioned GenICam.vin) support the IRingBuffer interface. This can be checked via the presence of an IRingBuffer object on the Stream:

bool isPresent = device.Stream.RingBuffer != null;

Useful Measurement Points

Virtually all vin drivers supporting the IRingBuffer interface have these statistics:

  • StreamInfo.NumBuffersPending

    The number of buffers that have been filled by the acquisition engine but have not yet been consumed by the client app (via .Wait). When this count is greater than 0, .Wait calls return immediately.

  • StreamInfo.NumBuffersLocked

    Number of buffers currently locked (meaning not fillable by the acquisition engine). By default this comprises the current image buffer returned by .Wait and all pending image buffers.

These two, in combination with the device.Stream.RingBuffer.Count property (or the StreamInfo.NumBuffersAnnounced statistic if available), give you all the information necessary to evaluate the acquisition engine status.
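
As a sketch, that evaluation boils down to two numbers. RingBufferHealth is a hypothetical helper; in real code the inputs would come from device.Stream.Statistics and device.Stream.RingBuffer.Count:

```csharp
// Judge acquisition health from the two statistics plus the ring buffer size.
(double FillLevel, bool OverflowImminent) RingBufferHealth(
    double numBuffersPending, double numBuffersLocked, int ringBufferCount)
{
    return (numBuffersPending / ringBufferCount,    // how far processing lags behind
            numBuffersLocked >= ringBufferCount);   // no free buffer left to fill
}
```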

If NumBuffersPending increases, you fall behind the acquisition rate. This needn’t be bad if you anticipated it. Reasons can be the processing time peaks mentioned in the IRingBuffer section, or that you acquire multiple images in bursts to process them as a group.

If at any point in time NumBuffersLocked equals the number of buffers in the IRingBuffer (.Count), you are prone to lose images. If such a buffer overflow happens, NumBuffersLostLocked is increased. This counter is reset on acquisition .Start.

If you want to keep track of the number of lost images between two .Wait calls, you need to store that value and calculate the difference yourself. We don’t do that bookkeeping, to keep our acquisition engine as lean as possible (also, we don’t know your actual use case: it could be exactly that, a series of lost frames measured at certain points in time, or simply the total number as returned).
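
If you do that bookkeeping yourself, a small closure is enough. LostSinceLastWait is a hypothetical helper; in real code you would feed it device.Stream.Statistics[StreamInfo.NumBuffersLostLocked] right after every .Wait:

```csharp
double lastTotalLost = 0.0;

// Returns how many images were lost since the previous call,
// given the running NumBuffersLostLocked total.
double LostSinceLastWait(double totalLost)
{
    double delta = totalLost - lastTotalLost;
    lastTotalLost = totalLost;
    return delta;
}
```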

Symptoms of too few buffers are, in addition to the lost image data, unexpected drops in the acquisition frame rate. You can increase the number of buffers either in the driver’s configuration file, which can be found in %CVBCONFIG%Drivers, or programmatically via

device.Stream.RingBuffer.ChangeCount(newSize, DeviceUpdateMode.UpdateDeviceImage);

UpdateDeviceImage is the default change mode; NewDeviceImage is used in scenarios where you switch the ring buffer count while still processing image data.

Transfer Monitoring

Here the focus is on the transfer “over the wire” and possible errors during it. This is not connection monitoring: it’s about transfer statistics.

This topic is heavily dependent on the transport technology. That means that the available measurement points change from vin driver to vin driver. If a vin driver supports different transport technologies (like the GenICam.vin), some statistics may not be available even though the driver itself supports them.

At the time of writing the GenICam.vin supports the following transfer related measurement points (descriptions are GenICam.vin specific):

  • NumBuffersAcquired

    Counts all buffers seen by the acquisition engine since acquisition .Start, including lost or corrupted ones (during transfer from camera to PC).

  • NumBuffersLostTransfer

    Buffers lost on the wire since DeviceFactory.Open. This is not increased for USB3 Vision transfers, as a reliable hardware protocol is used. That doesn’t mean that there will never be an error: you still should use connection monitoring (INotify interface), and delays may occur.

  • NumBuffersDelivered

    The number of buffers handled by the app (returned by .Wait) since DeviceFactory.Open.

  • NumBuffersCorruptOnArrival

    Only increased if PassCorruptFrames is False (feature of the NodeMapNames.DataStream NodeMap). Corruption is due to transfer errors.

  • NumBuffersCorruptOnDelivery

    Only increased if PassCorruptFrames is True (feature of the NodeMapNames.DataStream NodeMap). If False the frames/buffers won’t be delivered, but simply requeued. Corruption is due to transfer errors.

  • NumPacketsReceived

    Number of packets received since DeviceFactory.Open (mostly GigE Vision related). A full image (or frame) buffer is normally transferred in smaller packets over the wire. If a transfer error occurs in a packet-based protocol (like GigE Vision), the packet size defines the granularity of the affected region(s). If a buffer is corrupt, at least one packet is not present. Due to the ring buffer this results in old image data being present in that region!

  • NumResends

    Number of initiated packet resends since DeviceFactory.Open (mostly GigE Vision related). A resend is issued if the acquisition engine is detecting missing packets. If this counter increases you have either cabling problems or too many transfers/too high data rate on the connection. In combination with NumPacketsReceived you can determine the packet error rate.
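
The error rate mentioned in the last bullet can be computed from the two counters. PacketErrorRate is a hypothetical helper; the inputs would come from the NumResends and NumPacketsReceived statistics:

```csharp
// Fraction of received packets that required a resend; 0 if nothing was received yet.
double PacketErrorRate(double numResends, double numPacketsReceived)
    => numPacketsReceived > 0 ? numResends / numPacketsReceived : 0.0;
```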

How to get the PassCorruptFrames BooleanNode:
var streamNodeMap = device.NodeMaps[NodeMapNames.DataStream];
var passCorruptFrames = streamNodeMap["PassCorruptFrames"] as BooleanNode;
passCorruptFrames.Value = false; // true is the default

If set to False only fully valid image buffers are seen by the app on .Wait calls. Bad buffers are simply requeued.

So for transfer monitoring regarding errors the important measurement points are:

  • NumBuffersLostTransfer
  • NumBuffersCorruptOnArrival/NumBuffersCorruptOnDelivery
  • NumResends

An increased NumResends count needn’t directly result in corrupted buffers. It is just a hint that you have problems with your connection. Packet resends can alleviate the problem (at the cost of increased transfer time). You should investigate your setup if that occurs:

  • test/exchange cables if no additional devices were added
  • use e.g. interpacket delay if many devices share bandwidth (GigE Vision)

In case of an increased NumBuffersLostTransfer/NumBuffersCorruptOnArrival you actually lost buffers, and you cannot do anything about it anymore. If the NumBuffersCorruptOnDelivery count increased since the last call, then the last image buffer returned by .Wait contains errors (e.g. because of missing packets there are “holes” containing old buffer data).

Remember: Collect the statistics right after the .Wait call to get correct information!

How to identify corrupted ranges

The easiest solution is to initialize the image buffer before unlocking it (by default this happens via the next .Wait call after you finished your processing). Use a value you don’t care about (one that represents no feature) or which is not present in your images, e.g.:

streamImage.Initialize(0.0);
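
With the buffer initialized like that, corrupted regions show up as runs of the sentinel value in the delivered pixel data. Here is a self-contained sketch of such a scan over a plain byte array standing in for one pixel row (FindSentinelRuns is a hypothetical helper; in real code you would access the pixels via the CVB image classes). Keep in mind that a run of the sentinel value can also be legitimate image content, which is why the value should not occur in your images:

```csharp
using System.Collections.Generic;

// Find runs of the sentinel value, i.e. candidate "holes" left by lost packets.
List<(int Start, int Length)> FindSentinelRuns(byte[] pixels, byte sentinel)
{
    var runs = new List<(int Start, int Length)>();
    int i = 0;
    while (i < pixels.Length)
    {
        if (pixels[i] != sentinel) { i++; continue; }
        int start = i;
        while (i < pixels.Length && pixels[i] == sentinel)
            i++;
        runs.Add((start, i - start));
    }
    return runs;
}
```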