Basic usage of Intel RealSense cameras with CVB

This guide applies to the Intel® RealSense™ cameras D415, D435 and D435i. It describes the steps from driver installation and configuration to basic sample programs.


1. Requirements and installation:

The following steps describe the requirements for a standard setup of the 3D camera.

  • The camera is connected to and powered by a USB 3.0 interface.

  • The host system must run Ubuntu® 16.04 or Microsoft® Windows® 10.

  • Download and install the latest version of :cvb:.
    Select the installer matching your operating system and platform.

  • Download and install the transport layer (TL) driver:

    • As a Windows® user download and install the ‘RealSense GenTL producer’.

    • For Linux users, an installer (.deb package) is available via support request.

      Note: The terms of service of :cvb: apply.


2. Licensing:

For the RealSense™ cameras to work properly with :cvb:, a license is required. See the :cvb: licensing guide for detailed information on purchasing and licensing methods. If :cvb: is used without a valid license, a watermark (in the shape of the :cvb: logo) appears on displayed images. The license terms of third-party software can be found on the :cvb: website.


3. Configuration and Discovery:

For a complete overview of the functionality, and especially of the driver properties, a first dive into the :cvb: System Browser (> 13.02.000, Windows) or the :cvb: GenICam Browser can be helpful.

Fast access (System Browser, Windows only)

  • Connect the camera to a USB 3.0 port: The RealSense™ camera must be connected to an external USB or on-board host controller card with USB 3.0 ports (usually marked blue). Also recommended are a USB 3.0 cable no longer than 5 meters and the use of machine-vision-validated USB cables. See this documentation for more information about the USB3 Vision standard.
  • Type “System Browser” in the Start menu
  • The device will be available in the list on the left-hand side.

  • Double-clicking the DEPTH camera in this list opens a depth map live view.

Advanced setup (GenICamBrowser)

The TL implementation allows the standard configuration for GenICam devices.

  • Connect the camera to a USB 3.0 Port:

    The RealSense™ camera must be connected to an external USB or on-board host controller card with USB 3.0 ports (usually marked blue). Also recommended are a USB 3.0 cable no longer than 5 meters and the use of machine-vision-validated USB cables. See this documentation for more information about the USB3 Vision standard.

  • Open the GenICam Browser from the Start menu (Windows) or from the Dash (Linux).

  • Search for available devices. The RealSense™ cameras may appear as two separate devices (RGB and DEPTH).
    For further details refer to the GenICam user guide.

  • Configure and save the device driver parameters to an .ini file:

  • Select the cameras (1. and 2.) and add them to the ‘Configured Devices’ section by clicking on the ‘add’ button (3.).

  • To select the suitable depth image driver format, click on the configured depth device (right-hand side), hit ‘Options’ (4.) and select ‘Raw’ as the CVB Color Format.

  • Save the configuration by selecting ‘Save’ (5.).

  • Open the device view by double-clicking the desired device in the ‘Configured Devices’ section to see a live capture.

  • Selecting a higher ‘max visibility’ (Beginner, Expert, Guru) in the properties menu (6.) displays advanced features.

  • Change RealSense™-specific settings using the NodeMap in the property grid (7.). Advanced options for sensor settings and processing methods can be adjusted here; a short sketch for doing the same from code follows after this list. For feature availability and options refer to the camera manuals in the ‘Downloads’ section.

  • Intel publishes several documents on methods for improving camera performance (e.g. this guide or the calibration instructions).

  • The device-specific information stored in the property grid (7.) can be saved to a .gcs file via ‘Save device properties’ in the properties menu (6.).

  • To reload these settings programmatically, please refer to the C-API function NMLoadSet().
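
Below is a minimal CVBpy sketch (not part of the original walkthrough) showing how the NodeMap features configured above can also be read or adjusted from code. It reuses the discovery pattern from the CVBpy sample in section 4; which nodes exist and whether they are writable depends on the camera and producer, so check the property grid (7.) for what your device actually exposes.

import re
import cvb

# discover the RealSense DEPTH device (same pattern as the CVBpy sample below)
flags = cvb.DiscoverFlags.IgnoreGevSD | cvb.DiscoverFlags.IgnoreVins
discover = cvb.DeviceFactory.discover_from_root(flags)
info = next(d for d in discover
            if re.match(".*Intel RealSense.*_DEPTH", d[cvb.DiscoveryProperties.DeviceModel]))

with cvb.DeviceFactory.open(info.access_token) as device:
    nm = device.node_maps["Device"]            # device NodeMap, as shown in the property grid
    print("DepthUnits:", nm.DepthUnits.value)  # read a device feature
    # nm.DepthUnits.value = 1000               # write a feature, if the node is writable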


Other GenTL Consumers

For other software to use the TL, an environment variable called GENICAM_GENTL64_PATH (or GENICAM_GENTL32_PATH for 32-bit consumers) is created during installation; it points to the location of the interface. Most GenTL consumers, :cvb: included, use this variable to locate and load the TL, so it should be available automatically.
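
As a quick check (a minimal sketch, not part of the original guide), the variable can be inspected from Python:

import os

gentl_path = os.environ.get("GENICAM_GENTL64_PATH")  # or "GENICAM_GENTL32_PATH" for 32-bit consumers
if gentl_path:
    print("GenTL producers are searched in:", gentl_path)
else:
    print("GENICAM_GENTL64_PATH is not set - the GenTL producer may not be found")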


4. Sample Programs:

These sample programs show a basic acquisition and processing routine in different APIs. A single frame is acquired and transferred from a depth map to a point cloud by means of a calibrated projection. The snippets make use of the new APIs CVB.Net, CVB++ and CVBpy. A valid :cvb: installation (version > 13.2.000) is required.

Please note: The 3D calibration method (pinhole camera model) required for the transfer between depth image and point cloud for RealSense™ cameras isn’t available yet for CVB++/CVBpy, but will be included as soon as possible. Interested developers are asked to use the .Net wrappers or to send a support request. The function can also be found in the C-API and can be included via a handle.

CVB.Net

using System;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;
using Stemmer.Cvb;
using Stemmer.Cvb.Driver;
using Stemmer.Cvb.Utilities;
using Stemmer.Cvb.GenApi;

namespace GrabPointCloud
{
    class Program
    {
        static void Main()
        {
            // discover devices and open the RealSense DEPTH camera
            // (alternatively, the GenICam.vin driver in the CVB install path could be opened directly, as in the CVB++ sample below)
            var discovery = DeviceFactory.Discover();
            var depthDiscover = discovery.First(i => Regex.IsMatch(i[DiscoveryProperties.DeviceModel], @".*Intel RealSense.*_DEPTH"));
            using (var device = DeviceFactory.Open(depthDiscover))
            {
                NodeMap nm = device.NodeMaps[NodeMapNames.Device];
                // get The camera intrinsics and create a Calibrator
                var fXNode = nm["IntrinsicsFocalLengthX"] as FloatNode;
                var fYNode = nm["IntrinsicsFocalLengthY"] as FloatNode;
                var pXNode = nm["IntrinsicsPrincipalX"] as FloatNode;
                var pYNode = nm["IntrinsicsPrincipalY"] as FloatNode;
                var sZ = nm["DepthUnits"] as IntegerNode;
                Point2Dd principalPoint = new Point2Dd(pXNode.Value, pYNode.Value);
                var calibrator = Calibrator3D.FromPinholeCameraModel(fXNode.Value, fYNode.Value, principalPoint, sZ.Value);
                // start acquisition
                Stemmer.Cvb.Driver.Stream stream = device.Stream;
                stream.Start();
                try
                {
                    using (StreamImage image = stream.Wait())
                    {
                        image.Save("rangemap.tif");
                        var PointCloud = calibrator.CreatePointCloud<Point3Df>(image);
                        PointCloud.Save("pointcloud.tif");
                    }
                }
                catch (System.IO.IOException e)
                {
                    Console.WriteLine("Error :{1}", e.Message);
                }
                finally
                {
                    stream.Stop();
                }
            }
        }
    }
}

CVBpy

import cvb
import os
import re

flags = cvb.DiscoverFlags.IgnoreGevSD | cvb.DiscoverFlags.IgnoreVins
discover = cvb.DeviceFactory.discover_from_root(flags)
info = next(filter(lambda x: re.match(".*Intel RealSense.*_DEPTH", x[cvb.DiscoveryProperties.DeviceModel]), discover), None)
with cvb.DeviceFactory.open(info.access_token) as device:
    # get device intrinsic information
    nmp = device.node_maps["Device"]
    zDivisor = nmp.DepthUnits.value
    focalX = nmp.IntrinsicsFocalLengthX.value
    focalY = nmp.IntrinsicsFocalLengthY.value
    principalX = nmp.IntrinsicsPrincipalX.value
    principalY = nmp.IntrinsicsPrincipalY.value
    principalPoint = cvb.Point2D(principalX,principalY)
    calibration = cvb.Calibrator3D(focalX,focalY,zDivisor,principalPoint)
    
    # start acquisition
    stream = device.stream
    stream.start()
    rec_image, status = stream.wait()
    if status != cvb.WaitStatus.Ok:
        raise RuntimeError("acquisition failed with status: " + str(status))
    image = rec_image.clone()
    rec_image.save("rangeimage.tif")
    stream.stop()
    
    # create and save point cloud
    flags = cvb.PointCloudFlags.NoExtrinsic | cvb.PointCloudFlags.Double
    pc = cvb.PointCloudFactory.create(image.planes[0], calibration, flags)
pc.save("pointcloud.tif")

CVB++

#include <iostream>
#include <cvb/string.hpp>
#include <cvb/image.hpp>
#include <cvb/image_plane.hpp>
#include <cvb/point_cloud.hpp>
#include <cvb/point_cloud_factory.hpp>
#include <cvb/calibrator_3d.hpp>
#include <cvb/device_factory.hpp>
#include <cvb/utilities/system_info.hpp>
#include <cvb/driver/stream.hpp>

int main(int /*argc*/, char* /*argv*/[])
{
	try
	{
		// open device and grab a depth image
		auto path = Cvb::InstallPath();
		path += CVB_LIT("Drivers/GenICam.vin");
		auto device = Cvb::DeviceFactory::Open(path);
		auto nodeMap = device->NodeMap(CVB_LIT("Device"));
		auto stream = device->Stream();
		stream->Start();
		auto waitResult = stream->WaitFor(std::chrono::seconds(10));
		if (waitResult.Status == Cvb::WaitStatus::Timeout)
			throw std::runtime_error("acquisition timeout");
		auto image = waitResult.Image;
		stream->Stop();

		// get the camera's intrinsics
		double fx = nodeMap->Node<Cvb::FloatNode>(CVB_LIT("IntrinsicsFocalLengthX"))->Value();
		double fy = nodeMap->Node<Cvb::FloatNode>(CVB_LIT("IntrinsicsFocalLengthY"))->Value();
		double px = nodeMap->Node<Cvb::FloatNode>(CVB_LIT("IntrinsicsPrincipalX"))->Value();
		double py = nodeMap->Node<Cvb::FloatNode>(CVB_LIT("IntrinsicsPrincipalY"))->Value();
		Cvb::Point2D<double> principal(px, py);
		double Sz = nodeMap->Node<Cvb::IntegerNode>(CVB_LIT("DepthUnits"))->Value();

		// transform to point cloud
		auto calib = Cvb::Calibrator3D::FromPinholeModelParams(fx, fy, Sz, principal);
		auto flags = Cvb::PointCloudFlags::Double | Cvb::PointCloudFlags::NoExtrinsic;
		auto pc = Cvb::PointCloudFactory::Create(image->Plane(0), *calib.get(), flags);
		pc->Save(CVB_LIT("pointcloud.tif"));
		image->Save(CVB_LIT("rangemap.tif"));
	}
	catch (const std::exception& error)
	{
		std::cout << error.what() << std::endl;
	}
	return 0;
}

Result:

The sample scene shows a cup, a RealSense box and a garbage can. With the code above, a depth map and the corresponding point cloud are created.

The acquired depth map and the generated point cloud are both stored as .tif files. As a result, they can be visualized using the :cvb: Viewer and the :cvb: Player3D.

Range map:

Point cloud:


5. FAQ

How is the depth data encoded?
The Depth module provides its data as a 16-bit image by default.
A value of 0 means that no distance value is available.
Values 1 to 65535 represent calibrated depth data; the actual distance depends on the ‘DepthUnits’ parameter of the device.
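
As an illustration (a minimal sketch, not from the original post), the 16-bit values can be inspected with numpy, masking out the invalid pixels. It assumes `image` is the depth image acquired in the CVBpy sample above and that your CVBpy version provides cvb.as_array():

import numpy as np
import cvb

depth = cvb.as_array(image, copy=True)   # raw 16-bit depth values as a numpy array
valid = depth > 0                        # 0 means "no distance value available"
print("valid pixels:", int(valid.sum()), "of", depth.size)
print("raw depth range:", int(depth[valid].min()), "-", int(depth[valid].max()))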

The TL or the device does not appear
Antivirus software such as Avast and Kaspersky might block the TL. Typically, the C-API call fails when opening the camera device. An antivirus exception for the calling software might be needed.
