Combining point clouds

Hi,

I don't mind which language (Python, C++, or C#), but is it possible to combine multiple sets of 2.5D data from N cameras into one 3D point cloud using CVB? Thanks

Hi Adam,

Combining measurement data from multiple 3D sensors that each output individual metric point clouds (or 2.5D range maps) requires a so-called extrinsic calibration. The output of such an extrinsic calibration is a transformation matrix for each sensor. These matrices can then be used to transform all point clouds into the same coordinate system.
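To illustrate the idea independently of any particular CVB call: once you have one 4x4 homogeneous matrix per sensor, merging boils down to transforming each cloud and concatenating the results. A minimal NumPy sketch with made-up data (the function names are just for illustration):

```python
import numpy as np

def transform_cloud(points_xyz, T):
    """Apply a 4x4 homogeneous transformation to an (N, 3) point cloud."""
    ones = np.ones((points_xyz.shape[0], 1))
    homogeneous = np.hstack([points_xyz, ones])   # (N, 4)
    return (homogeneous @ T.T)[:, :3]             # back to (N, 3)

def merge_clouds(clouds, transforms):
    """Transform every sensor's cloud into the common frame and stack them."""
    return np.vstack([transform_cloud(c, T) for c, T in zip(clouds, transforms)])

# Example: two fake clouds, sensor 1 shifted 0.5 m along X relative to sensor 0.
cloud0 = np.random.rand(1000, 3)
cloud1 = np.random.rand(1000, 3)
T0 = np.eye(4)                  # sensor 0 defines the common frame
T1 = np.eye(4); T1[0, 3] = 0.5  # extrinsic calibration result for sensor 1
merged = merge_clouds([cloud0, cloud1], [T0, T1])
print(merged.shape)             # (2000, 3)
```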

CVB offers a function to calculate a transformation matrix from point pairs determined with a target called AQS12. It can be used to do exactly what you want. It is specifically designed to extrinsically calibrate laser triangulation sensors, where a linear motion is part of the acquisition, but it can also be used for other imaging technologies. Whether this is the best match for your application depends on the type of 3D sensors you want to use, on the field of view and on the arrangement of the sensors relative to one another.
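CVB's AQS12 calibration itself is not shown here, but the underlying math of "a transformation matrix from point pairs" is a least-squares rigid fit (Kabsch/Arun style). A small NumPy sketch of that step, with synthetic correspondences standing in for the target points measured by two sensors:

```python
import numpy as np

def rigid_transform_from_point_pairs(src, dst):
    """Least-squares rigid transform (rotation + translation) mapping src -> dst.

    src, dst: (N, 3) arrays of corresponding points, e.g. the same target
    corners measured by two different sensors. Returns a 4x4 matrix.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[2, :] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Quick self-check with synthetic correspondences:
rng = np.random.default_rng(0)
pts = rng.random((12, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
pts_moved = pts @ R_true.T + np.array([0.1, -0.2, 0.3])
T = rigid_transform_from_point_pairs(pts, pts_moved)
print(np.allclose(T[:3, :3], R_true))   # True
```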

Can you give us more details about your planned setup?


The spec given is:

Using multiple RealSense D415 cameras (up to 8), arranged in an inward-facing circle, each attached via a 3-12 metre NewNex repeater cable to a dedicated USB-C 3.1 port on a single PC (top-spec Intel Hades Canyon). The software is Microsoft Windows 10, the Intel RealSense SDK, and Unity with the RealSense camera plug-in, all at the latest release. Multiple cameras are already working independently in Unity.

The program will need to precisely locate the position and XYZ coordinates of each camera. The number of cameras used will vary and could in future be up to 32 units. This data can be obtained from the cameras, with the possible exception of their physical position in the room layout. If that is the case, one camera should first be used to scan the room and locate the positions of the other cameras. If necessary, this may also be achieved by using other recommended equipment (e.g. Lightform, HoloLens, Structure Sensor etc.).

Once it is possible to determine the position of all the cameras and understand their orientation, it is necessary to synchronise data from all the cameras into a consolidated single 3D view in three formats: as a point cloud, a series of polygons, and a rendered RGB view. The software must automatically adjust the required settings to achieve these views in Unity.
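For reference, this is roughly what per-camera depth acquisition looks like in Python with Intel's pyrealsense2 wrapper; it is only a sketch of the starting point for any calibration or merging step, and the resolution, frame rate and vertex handling are assumptions rather than values tested against this exact setup:

```python
import numpy as np
import pyrealsense2 as rs

# Enumerate all connected RealSense devices and start one pipeline per camera.
ctx = rs.context()
serials = [dev.get_info(rs.camera_info.serial_number) for dev in ctx.query_devices()]

pipelines = []
for serial in serials:
    cfg = rs.config()
    cfg.enable_device(serial)
    cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    pipe = rs.pipeline(ctx)
    pipe.start(cfg)
    pipelines.append(pipe)

pc = rs.pointcloud()
clouds = []
for pipe in pipelines:
    frames = pipe.wait_for_frames()
    depth = frames.get_depth_frame()
    points = pc.calculate(depth)
    # Vertices come back as a structured buffer; view them as an (N, 3) float array.
    xyz = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
    clouds.append(xyz)

for pipe in pipelines:
    pipe.stop()

print([c.shape for c in clouds])   # one (N, 3) cloud per connected camera
```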

If I understand correctly, you need to calibrate the cameras first, since you don't know where they are located. This is usually done with a specific calibration target such as the AQS12 or spheres. The type of point cloud you need is a sparse point cloud. This can still be done with a calibration file, but you need to generate it first for each setup, hence the necessary calibration.

I will have to check with the rest of our team how far this is integrated into the currently available CVB version.


Thanks, do let me know

I've done a project with this already using CVB. The situation is actually even a bit more complex, depending on the desired accuracy.

The most straightforward approach is to assume that the intrinsic calibration of the RealSense is fine, which means that we assume there is no (systematic) distortion present in the 3D point cloud. I don't think this holds true for the RealSense (it most certainly does not for the color data!), but it does make matters easier.

Then we somehow need to find the extrinsic calibration. This is indeed generally done by looking at some known target that is easy to find in the 3D image, such as a sphere (location only) or the AQS12 (rotation + translation). After finding the location of such a target in the images of different cameras, it becomes possible to calculate their relative location.

I wouldn't directly call it easy to build (in particular, this application has a lot of noisy cameras, which makes things extra complicated), but it is most definitely in a usable state in CVB 🙂


Good morning, I checked with the rest of the team. As CvK said, it can be done, but you need the calibration. We are currently working on adding more functionality to CVB regarding extrinsic calibration and detecting objects in 3D point clouds, but this is not public yet. This will also include automation of some steps that CvK had to do himself.

We haven't combined CVB with Unity, so I'm not sure how well this will work. Also, point cloud to polygons (meshing) is not public yet. I'm not sure when this will become available, as it depends on how strong the demand from customers is.

If you are attending one of the Technologieforen starting tomorrow in Munich, or later in England, France, Sweden or the Netherlands, you can hear more about calibration for 3D point clouds and we can talk about it in person. Here is a link: https://www.stemmer-imaging.com/de-de/technologieforum-bildverarbeitung-2019/

Thanks to Phil and CvK for their great input. Let me summarize all the options for aligning data from multiple sensors into the same coordinate system with CVB (extrinsic calibration):

AQS12

If you want to perform the extrinsic calibration using an AQS12, you need to ensure that all RealSenses see all of the target. This can get kind of tricky, depending on the arrangement. In the ring arrangement mentioned by Adam it might work, but it is probably not the most accurate solution. Furthermore, for the best accuracy the AQS12 should fill most of the field of view, and it can be difficult to manufacture a solid AQS12 of an adequate size.
However, as CvK mentioned before, the RealSense itself is not the most accurate measuring device, so this might be sufficient anyway?!

Spheres

Especially for cases with large fields of view, challenging arrangements and where accuracy is of great importance, we developed a new extrinsic calibration based on the detection of spheres. With multiple spheres you not only get a location, but rotation + translation parameters as well.
One major advantage is also that spheres can be seen from every direction. So it is easy to create 360° fields of view with multiple 3D sensors, because all sensors can measure the same location of the sphere, but from different viewing directions.

However, as Phil mentioned, it is not released yet.
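The CVB sphere detection itself is, as mentioned, not released yet; purely to illustrate why spheres are convenient targets, here is a toy least-squares sphere fit in NumPy. It assumes the points belonging to the sphere surface have already been segmented out, and shows that the centre can be recovered even from a partial view. The matched centres seen by different cameras would then serve as the point pairs for a rigid-transform fit like the one sketched earlier.

```python
import numpy as np

def fit_sphere(points_xyz):
    """Algebraic least-squares sphere fit.

    Rewrites |p - c|^2 = r^2 as a linear system in (c, r^2 - |c|^2).
    points_xyz: (N, 3) points lying on (part of) the sphere surface.
    Returns (center, radius).
    """
    A = np.hstack([2 * points_xyz, np.ones((points_xyz.shape[0], 1))])
    b = np.sum(points_xyz ** 2, axis=1)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = w[:3]
    radius = np.sqrt(w[3] + center @ center)
    return center, radius

# Synthetic partial view of a sphere (radius 0.1 m, centre at (0.2, 0.0, 1.0)):
rng = np.random.default_rng(1)
theta = rng.uniform(0, np.pi / 2, 2000)    # only part of the sphere is visible
phi = rng.uniform(0, np.pi, 2000)
pts = 0.1 * np.c_[np.sin(phi) * np.cos(theta),
                  np.sin(phi) * np.sin(theta),
                  np.cos(phi)] + np.array([0.2, 0.0, 1.0])
center, radius = fit_sphere(pts + rng.normal(0, 1e-3, pts.shape))
print(center.round(3), round(float(radius), 3))   # ≈ [0.2 0. 1.] 0.1
```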

Match3D

There is also a third possibility in CVB to extrinsically calibrate multiple sensors. Using the tool Match3D, you can align two point clouds of the same object to each other. If successful, this gives you the transformation matrix to rotate and translate the data of one device (e.g. a RealSense) onto the other. Doing this step for each RealSense, you end up with a single point cloud from all devices.
The advantage of this algorithm is that you do not need a specific target. You only have to make sure that there is a large overlap between the data seen by two neighbouring RealSenses and that the object has a reasonable amount of 3D structure.
However, the alignment algorithm is strongly influenced by noisy point clouds, which is what you get from RealSenses. It might still work when using filters (e.g. the temporal filter) in the RealSense.
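Match3D is CVB's own tool, so as a stand-in here is the same "align two overlapping clouds, get a 4x4 matrix" workflow sketched with the open-source Open3D ICP registration (file names, voxel size and the 5 cm correspondence threshold are placeholders; requires Open3D ≥ 0.10 for the pipelines namespace):

```python
import numpy as np
import open3d as o3d   # open-source stand-in, not CVB's Match3D

# Two overlapping clouds captured by neighbouring cameras (placeholder file names).
source = o3d.io.read_point_cloud("realsense_cam0.ply")
target = o3d.io.read_point_cloud("realsense_cam1.ply")

# Downsample to tame the RealSense noise a little before registration.
source = source.voxel_down_sample(voxel_size=0.01)
target = target.voxel_down_sample(voxel_size=0.01)

# Point-to-point ICP: identity initial guess, 5 cm correspondence threshold.
result = o3d.pipelines.registration.registration_icp(
    source, target, 0.05, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

print(result.transformation)   # 4x4 matrix mapping 'source' into 'target's frame
source.transform(result.transformation)
merged = source + target       # concatenated cloud in the common coordinate system
```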
