Getting Started with CVBpy

Download “Anaconda” with Python 3.6 and install it.

Make sure that you have the newest released CVB version.

Use pip to install the cvb wheel from %CVB%/Lib/Python, e.g.:

python -m pip install cvb-1.1-cp35.cp36.cp37-none-win_amd64.whl 

Now start Spyder 3 (included in Anaconda) and type “import cvb” into the console.

If the cvb wrapper is not found, check whether the correct path was set:

import sys

sys.path

Check out this example.

Regards,
Theo

Ps.: It does not have to be Anaconda of course, but according to @parsd Anaconda is the best platform for beginners like me ^^


I think PYTHONPATH should instead be pointing at %CVB%/Lib/Python

Update (18. 10. 2018): PYTHONPATH is not required anymore, see above!


Setup Hints

Hi, it has been a while since the last post. Meanwhile, however, many features have been added to CVBpy.

That is right, but sometimes setting the Python path is not enough. In such a case you may manually copy the contents of %CVB%/Lib/Python to your installation’s Lib/site-packages.
Please be aware that this is not an actual installation.

Before manually installing CVBpy, check whether your interpreter actually sees this environment variable. E.g. in VS Code you have to add PYTHONPATH through the settings.

Update 18. 10. 2018: PYTHONPATH is not required anymore, see above!

Finally some advanced stuff.

CVBpy Module Layout

CVBpy contains one main module called "cvb", which covers everything that is referred to as the Image Manager in the C-style API. In addition there will be sub modules for each available CVB Tool. Currently "cvb.movie" and "cvb.foundation" (not yet complete) are available.

CVBpy Versioning

CVBpy is built on top of CVB++, which is a header-only C++ wrapper for the C-style API. This makes versioning quite challenging, as CVBpy's version also depends on the CVB++ version used to build it. You may get this information by calling cvb.version(). A possible output could be:

CVBpy-v1.0.0.build-79.rev-2793+ Release vc14 @ x64 & CVB++-v1.0.0.build-64.rev-2789+ & CVB-v13.00.004

If you report a bug or just have a question, it would be great to provide the CVBpy version string, as CVBpy is currently in beta state and we provide nightly builds.

Loading and Saving Images

Let’s start simple: load an image from disk, get some properties from it, and save it back under a different name and location.

  1. Add the main module to your script.
import cvb
  2. Load an image that comes with a standard installation.
    Note: the lower case path allows your script to run on Linux as well.
image = cvb.Image(cvb.install_path() + "/tutorial/Clara.png")
  3. Get the image dimensions.
print("Image Size: " + str(image.width) + " x " + str(image.height))
  4. Save it to your current location as BMP.
image.save("Clara.bmp")

Loading Image Stream Sources

Loading images from disk is fine.

However, usually you will deal with live images from a stream. A stream can come from a streaming device like a camera, from a video file on disk, or from an emulated stream composed of single images you provide.

Here is how you load an EMU file, which can be used to simulate your application with a defined set of images.

  1. Add the main module to your script.
import cvb
  2. Streams are provided by devices.
emu_device = cvb.DeviceFactory.open(cvb.install_path() + "/tutorial/ClassicSwitch.emu")
  3. Now you can get the stream and a special device image.
stream = emu_device.stream
device_image = emu_device.device_image
  4. Finally print some information.
print("Emu:")
print("Resolution: " + str(device_image.width) + " x " + str(device_image.height))
print("Frames: " + str(stream.image_count))

The output would be:

Emu:
Resolution: 640 x 480
Frames: 5

Additional information can be found at CVB.NET. With CVBpy things are quite similar.

Acquire from a Camera

This is very similar to the things shown above.

  1. Load the device. This will open the first configured GenICam device.
vin_device = cvb.DeviceFactory.open(cvb.install_path() + "/drivers/GenICam.vin")
stream = vin_device.stream
  2. Before you can get any images, start the stream.
stream.start()
  3. Wait for 10 images and print some information about them.
try:
    for i in range(10):
        image, status = stream.wait_for(1000)
        if status == cvb.WaitStatus.Ok:
            print("Acquired image " + str(i) + " into buffer " +
                  str(image.buffer_index) + ".")
        else:
            raise RuntimeError("timeout during wait"
                               if status == cvb.WaitStatus.Timeout
                               else "acquisition aborted")
except:
    pass
finally:
    stream.try_abort()

Please note that, although errors are unlikely in this simple example and I just pass them here, it is usually a good idea to handle them, and at least to always stop the stream so that you stay in a defined state.

The output, given that you have three (default) buffers allocated, would be:

Acquired image 0 into buffer 0.
Acquired image 1 into buffer 1.
Acquired image 2 into buffer 2.
Acquired image 3 into buffer 0.
Acquired image 4 into buffer 1.
Acquired image 5 into buffer 2.
Acquired image 6 into buffer 0.
Acquired image 7 into buffer 1.
Acquired image 8 into buffer 2.
Acquired image 9 into buffer 0.

Configure a Device with the GenApi

Usually you will have to configure some things in a camera before the image fits your needs. Sometimes you might even have a lot of cameras which all need the same settings applied automatically. Here is how you can script it.

  1. Open your device as described above.
  2. Get the device node map, which groups all settings for your camera (device).
device_node_map = vin_device.node_maps["Device"]
  3. Now you can access the camera features. Let’s start by reading some stuff.
print("Vendor:  " + device_node_map["DeviceVendorName"].value)
print("Model:   " + device_node_map["DeviceModelName"].value)
print("Version: " + device_node_map["DeviceVersion"].value)

The value property is also used to write features to the camera (if the node is writable). In the debugger you can see a lot of additional information that will help you to pick the right values. E.g.:

exposure = device_node_map["ExposureTime"]



Live Node Map in a REPL Console

A very nice Python feature is that it can be run in a REPL console, which means you can interactively enter instructions and see them evaluated (Read-Eval-Print Loop).

IPython is one way to launch such a console. Usually it also provides code completion and easy access to the built-in documentation.

Node maps provide a nice feature in interactive mode, as all GenApi nodes are mapped as dynamic properties.


This way you can use auto completion to search for specific nodes or simply explore your device’s features.
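The mechanism behind this can be illustrated in plain Python with a toy stand-in (this is not CVBpy's actual implementation, just a sketch of how dynamic attributes feed auto completion):

```python
class ToyNodeMap:
    """Toy stand-in for a node map: exposes node names as attributes."""

    def __init__(self, nodes):
        self._nodes = nodes

    def __getattr__(self, name):
        # called only for unknown attributes -> dynamic node lookup
        try:
            return self._nodes[name]
        except KeyError:
            raise AttributeError(name)

    def __dir__(self):
        # feeds tab completion in IPython and similar consoles
        return list(self._nodes)


nm = ToyNodeMap({"ExposureTime": 10000.0, "Gain": 1.0})
print(nm.ExposureTime)  # dynamic attribute access
print(dir(nm))          # what the console's completion sees
```

A real CVBpy node map works on live camera features, of course, but the completion experience comes from exactly this kind of hook.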


Image Processing and Pixel Access

Image acquisition is fine, but rather useless on its own, so how do you do image processing with CVBpy?
First I recommend having a look at the foundation package, which provides many standard processing algorithms.

import cvb.foundation

However, there are valid use cases for implementing a custom algorithm and accessing the pixel data directly.
Unfortunately plain Python does not provide an efficient way to do such low-level work. Usually this is handled through the widely used Python module numpy. CVBpy seamlessly works together with numpy, so you can directly access the pixel data of your recently acquired image.

Here is how it works…

  1. Load an image

image = cvb.Image(cvb.install_path() + "/tutorial/Clara.bmp")

  2. Convert it to a numpy array

np_array = cvb.as_array(image, copy=False)

This is very efficient, as the data is not copied by default. You may choose otherwise by setting copy=True.
Please note that the copy parameter is only a request. Depending on the image’s memory layout, a copy might be unavoidable.

  3. Check if you got a copy or a view into the actual image buffer.

print(np_array.flags["OWNDATA"])

The flag OWNDATA=False means that the numpy array does not own the actual buffer; the CVB image does.
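You can get a feeling for this flag with plain numpy, without any CVB image involved: slicing yields a view (no ownership), while .copy() creates an owning array.

```python
import numpy as np

base = np.zeros((4, 4), dtype=np.uint8)

view = base[1:3, 1:3]         # a view into base: no data of its own
copy = base[1:3, 1:3].copy()  # a real copy: owns its buffer

print(view.flags["OWNDATA"])  # False
print(copy.flags["OWNDATA"])  # True

# writing through the view changes base, writing to the copy would not
view[:] = 255
print(base[1, 1])  # 255
```

With cvb.as_array(image, copy=False) the CVB image plays the role of base here, which is exactly why its lifetime matters (see the ownership discussion further down).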

  4. Access the image data (quite similar to Matlab)

np_array[83 : 108, 48 : 157] = 0

In this case we just set a few pixels to black.

  5. Save the CVB image.

image.save("ClaraUnknown.png")


The other way around

It is just as easy to get a CVB image from a numpy array.

wrapped_image = cvb.WrappedImage.from_buffer(np_array, cvb.BufferLayout.HeightWidthPlane)

You may use the optional plane parameter to change the meaning of the array dimensions.


Rendering the Image

When testing image processing algorithms, we usually want to see the result rendered in some sort of UI, as verifying pixels one by one is cumbersome work. We are lucky that this can be done easily through numpy using pyplot.

import matplotlib.pyplot as plt
...
plt.imshow(np_image, cmap="gray")
plt.show()


This provides a simple method to visualize your image data.

But what if the plotted image pop-up is not enough, because an actual interactive custom UI is required? E.g. you may also want to render some custom overlays on top of the image to visualize your results, or some controls to receive user input.

UI libraries usually define their own image/buffer formats optimized for on-screen rendering. This format usually does not match the image format provided by a streaming device, so at least one low-level copy of the image data is required.

What does this look like in a real example? E.g. using PyQt5 as the UI framework.

The following steps can be used to copy a mono CVB image into an ARGB QImage optimized for rendering. Similar steps are required to copy into an OpenGL texture, for instance.

  1. Load a mono image from file.
cvbMonoImage = cvb.Image(cvb.install_path() + "/tutorial/Clara.bmp")
  2. Link the image buffer into a format suitable for on screen drawing (no copy).
cvbRGBImage = cvb.Image.from_images(cvb.MappingOption.LinkPixels, [cvbMonoImage.planes[0].map(), cvbMonoImage.planes[0].map(), cvbMonoImage.planes[0].map()])
  3. Create a numpy array that matches the buffer layout of QImage (includes an alpha channel).
npImage = numpy.empty([cvbMonoImage.height, cvbMonoImage.width, 4], numpy.uint8, order="C")
npImage.fill(255) # initialize the alpha channel
  4. Create a QImage as a view into the numpy array (no copy).
qImage = QImage(npImage.data, npImage.shape[1], npImage.shape[0], npImage.strides[0], QImage.Format_ARGB32_Premultiplied)
  5. Create a CVB image as a view into the numpy array (no copy).
cvbWrappedImage = cvb.WrappedImage.from_buffer(npImage)
  6. Use CVB's optimized copy routine to copy the data to the buffer owned by the numpy array.
cvbRGBImage.copy(cvbWrappedImage)

This is the only step that copies data and therefore the only potentially expensive step.

  7. Use a QPainter to render into the image.
painter = QPainter()
painter.begin(qImage)
painter.setBrush(QBrush(Qt.red))
painter.setPen(Qt.NoPen)
painter.drawRect(48, 83, 109, 25)
painter.end()

  8. Show the image in a QLabel.

app = QApplication(sys.argv)

lb = QLabel()
lb.setPixmap(QPixmap.fromImage(qImage))
lb.setAlignment(Qt.AlignCenter)
lb.show() 

sys.exit(app.exec_())

(rendering a QImage actually requires another internal copy, but I prefer to keep it simple :wink:)
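The mono-to-four-channel expansion that the copy step performs can also be sketched with plain numpy (toy data, no CVB or Qt required). The mono values land in the three color channels of an H x W x 4 buffer, with the fourth channel kept fully opaque, matching the per-pixel layout QImage expects:

```python
import numpy as np

mono = np.arange(12, dtype=np.uint8).reshape(3, 4)  # toy 3x4 mono image

rgba = np.empty((3, 4, 4), dtype=np.uint8)          # H x W x 4 buffer
rgba[..., :3] = mono[..., np.newaxis]               # replicate mono into the color channels
rgba[..., 3] = 255                                  # opaque alpha channel

print(rgba.shape)                                   # (3, 4, 4)
```

The actual channel order seen by Qt depends on the pixel format and machine endianness; for a mono source all three color channels are identical anyway, so the sketch ignores that detail.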


Please note that this is not the most elegant or most efficient way to draw an image into a QWidget, but the most illustrative.


Lifetime and Ownership

If you are using the numpy interface, as described above, it is especially important to keep track of the buffer ownership.

Of course, lifetime and ownership are always important for reliable, safe software. Unfortunately, scripting languages in particular tend to hide such things from the user and do some magic in the background. You probably know that Python has a garbage collector, so you don’t need to care?

I really wish this were the case, but unfortunately it is not true!

Of course Python has a garbage collector, but it does not release external resources like cameras at a predictable point in time.
Huge images might even remind you that RAM is limited :wink:.

So how is it done in CVBpy? Consider the following:

device = cvb.DeviceFactory.open("GenICam.vin")

Who owns the device, and what is its lifetime? When is it closed again?

You as the user own the device, and it will live for the entire runtime of the script.
So the device will be open and locked for all other users until the python process has finished.

This might not be what you want, so you can shorten the lifetime of the device manually to allow others to access it.

del device

This is dangerous as python objects implement shared ownership. Therefore if between creation and destruction someone does this:

device2 = device

Suddenly there are two owners, device and device2. As a consequence, calling del once does nothing except reduce the reference count. The device won’t close until there is no owner left.
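You can watch this happen with a toy class (a stand-in for the real device, whose __del__ models closing it). In CPython, reference counting makes the moment of destruction deterministic:

```python
closed = []

class ToyDevice:
    """Stand-in for a device; __del__ models closing it."""
    def __del__(self):
        closed.append("closed")

device = ToyDevice()
device2 = device   # a second owner: the reference count goes up
del device         # drops one reference only, nothing is closed
print(closed)      # [] -> the device is still open
del device2        # last owner gone, CPython closes it immediately
print(closed)      # ['closed']
```

A real cvb device behaves the same way with respect to ownership, except that what is held open is a camera, not a harmless toy object.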

Typically it is not an easy task to keep track of all the owners, as sharing happens much more subtly in real-world applications. In addition, you would have to make sure that every creation or share is matched by a del call. This is definitely not easy if exceptions can alter your execution path in unexpected ways!

Python would not be python if there wasn’t a nice and clean way around it - the with statement.
So instead of simply opening a device you can write:

with cvb.DeviceFactory.open("GenICam.vin") as device:
    pass

This way you put the device into a scope, which ensures that it will be closed once the interpreter leaves that scope. In addition, with guarantees this even if an exception is thrown from within the scope and caught outside of it.

In CVBpy all objects that hold significant resources support the with statement. Use it for your own benefit!
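How that guarantee works can be demonstrated with any context manager; again a toy class rather than a real device:

```python
events = []

class ToyDevice:
    def __enter__(self):
        events.append("opened")
        return self

    def __exit__(self, exc_type, exc, tb):
        events.append("closed")  # runs even when an exception escapes the scope
        return False             # do not swallow the exception

try:
    with ToyDevice() as dev:
        raise RuntimeError("boom")
except RuntimeError:
    events.append("handled outside")

print(events)  # ['opened', 'closed', 'handled outside']
```

Note that "closed" is recorded before the exception is handled outside: __exit__ always runs on the way out of the with block, which is exactly the guarantee you want for a camera handle.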


2 posts were split to a new topic: Acquire from multiple devices with CVBpy

A post was merged into an existing topic: Accessing CVB Management Console Camera Properties via Python

3 posts were split to a new topic: Setting Exposure Time via Node Map

2 posts were split to a new topic: Problem loading driver

5 posts were split to a new topic: Using PyQt with CVB

In this case, if I want to connect to cameras on ports > 0, I need to use discovery and connect to them using their access_token, right?
With this approach, how can I set the number of buffers and the CVB color format?
I did not find these settings in node_maps.

@Fatemeh
The Discovery Interface does not use or have ports. You open the camera you want with its access token, which is available after your discovery.

To set the CVB color format (“PixelFormat”) or number of buffers (“NumBuffer”) you need to set the parameters in the token before opening the device:

token.SetParameter("PixelFormat", "2")
token.SetParameter("NumBuffer", "30")