Multiple Intel Realsense D400 + ROS

robot
3d
acquisition
#1

I have seen from the Intel D400 page that you will be supporting the RealSense drivers out of the gate. I am very interested in the potential of using multiple cameras to build up a highly detailed colour scan of our environment, and I see CVB as a good option for that, especially if it can pre-stitch the point clouds before passing them to ROS modules and image analysis modules.

Intel Realsense D400 Stitching

Is it possible to use CVB for the acquisition of 4+ D400 point clouds and to stitch them together? For example, if I know the transforms between the cameras, will it be able to handle both the acquisition and the stitching, then output an image?
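
For illustration, this is roughly what I mean, as a NumPy sketch with made-up transforms (not CVB code): once the camera-to-world extrinsics are known, stitching is just transforming each cloud into a common frame and concatenating.

```python
import numpy as np

def stitch_clouds(clouds, extrinsics):
    """Transform each camera's (N_i, 3) point cloud into a common world
    frame using its 4x4 camera-to-world matrix, then concatenate."""
    stitched = []
    for points, T in zip(clouds, extrinsics):
        # p_world = R @ p_cam + t
        stitched.append(points @ T[:3, :3].T + T[:3, 3])
    return np.vstack(stitched)

# Two toy cameras looking at the same scene; camera B sits 1 m along x.
cloud_a = np.array([[0.0, 0.0, 1.0]])
cloud_b = np.array([[0.0, 0.0, 1.0]])
T_a = np.eye(4)
T_b = np.eye(4)
T_b[0, 3] = 1.0
merged = stitch_clouds([cloud_a, cloud_b], [T_a, T_b])
```

The hard part in practice is obtaining those 4x4 matrices (calibration and sync), which is exactly what I am hoping CVB could take care of.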

Do you have any example applications of 3D stitching?

Passing pointclouds/images to ROS

Is it possible to pass point clouds straight from CVB to ROS modules, or to use CVB within the ROS architecture?

If so, would it be able to pass on the pre-processed, stitched point cloud discussed above?
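
For context, the ROS side of that hand-off looks simple enough: a `sensor_msgs/PointCloud2` is essentially a flat byte buffer plus a field layout. A minimal sketch of packing xyz float32 points into that unpadded layout (point_step = 12 bytes; pure standard library, no ROS needed):

```python
import struct

def pack_xyz(points):
    """Pack (x, y, z) triples as little-endian float32, the flat byte
    layout an unpadded xyz PointCloud2 'data' field uses
    (point_step = 12 bytes per point)."""
    return b"".join(struct.pack("<fff", x, y, z) for x, y, z in points)

points = [(0.0, 0.0, 1.0), (0.5, -0.25, 1.5)]
data = pack_xyz(points)
```

If I remember right, `sensor_msgs.point_cloud2.create_cloud_xyz32` in rospy does this packing for you, so anything that can hand over a buffer of points should interoperate.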

#2

This is very interesting; I would be interested in it for different applications as well.

#3

Hello @FredI and @BBQRibs,

I’m terribly sorry for the delay. I should have answered earlier.

Regarding Release:
Our intention is to deliver the integration with CVB 13.1; however, we might not be able to deliver it in the first release of CVB 13.1. Short answer: it will be available soon rather than now.

Regarding Realsense, CVB and ROS:
Well, yes and no. A camera like the RealSense is the typical 3D camera for a ROS system, so there are several examples for this.

With CVB we don’t actually have plans to create a CVB-ROS integration (right now!). However, we won’t change datatypes in a way that makes these systems incompatible (short answer: “raw” datatypes will exist and should play well with ROS).
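
To make “play well with ROS” a little more concrete: a raw depth buffer plus the pinhole intrinsics is all either side needs to produce a point cloud. A minimal NumPy sketch (the intrinsics here are made up; real values come from the camera's calibration):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Deproject a depth image (metres) to an (H*W, 3) cloud with the
    pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Made-up intrinsics for a tiny 2x2 depth image, everything at 1 m.
depth = np.full((2, 2), 1.0)
pts = depth_to_points(depth, fx=600.0, fy=600.0, cx=0.5, cy=0.5)
```

As long as the raw buffers stay accessible, a conversion like this can bridge CVB and ROS regardless of which side does the processing.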

Link to librealsense: https://github.com/IntelRealSense/librealsense
Link to Realsense Examples (some with ROS): https://github.com/IntelRealSense

@FredI Could you tell me which “image analysis modules” you meant? This is useful information for us.
@FredI Secondly, could you tell me the parameters of a “highly detailed colour scan”?

Once the RealSense integration for CVB is finished and released, you can acquire via CVB and pass the image to ROS. Technically it is possible right now with some tweaking (for GigE or U3V cameras; we just don’t have the RealSense integration in CVB yet).

Regarding Stitching Pointcloud:
This is a planned feature of our 3D tools, but I can’t name a release date yet.
Again, knowing the timeframe of your projects is useful for prioritizing development.

Hope this helps. Did I miss anything?

#4

Hi @BBQRibs,
Do you mean the RealSense in general, or its integration into CVB?

#5

No worries about the delay, @c.hartmann; it was a speculative enquiry.

Realsense + ROS

We are already using this integration, but thank you for highlighting it. Intel supports this code base very well and it’s very simple to use.

In regards to the questions

1. We are doing a couple of different detections using TensorFlow and Keras with some custom algorithms, not any CVB modules, unfortunately.
2. As we are using deep learning, the parameters for a ‘highly detailed colour scan’ are simply “detailed enough for our detector to work”. It’s not a fixed thing; we believe a number of D415s could be the correct solution.

Stitching

Once you have this module developed, we would be extremely interested in purchasing it. The effort of syncing, calibrating and stitching cameras is significant, so a solution for stitching 2+ 3D cameras in any setup (as long as they overlap) would be great.

Project Timeframe

ASAP, we are actively developing this at the moment. If you had it now we would buy it.

#6

@FredI
Yes, the D415 should be the best solution.

Pro:

  • Smaller FOV at the same resolution, i.e. more detail per pixel and a better image.
  • Better projector. This might be insignificant.

Cons:

  • Rolling shutter: images with movement might be a problem. For SLAM (or similar) it should be fine.

#7

@c.hartmann thanks for confirming our thought on which camera is most suitable.

Could you give any indication of when you might have the stitching tool set created? Would it be within the next 6 months? 12 months? Or beyond?

#8

Hi @FredI,

What we will be releasing in :cvb: 2018 in terms of 3D processing and handling capabilities will be a first step and a foundation to build upon. To give some background: The 3D functionality in :cvb: in the past (in the version range [10.x, … 12.1]) was based on parts of the Sal3D library by AQSense. In 2016, AQSense was bought by Cognex and the Sal3D library was discontinued as a product. As a side effect of that decision, availability of the 3D functionality in :cvb: ended in October 2017 (which is why :cvb: 13.0 has no 3D tools in it), and STEMMER had to start building a replacement from scratch.

This is why initially in the 2018 release, we will basically cover what was there before with a few added details here and there:

  • Functionality to acquire, handle, transform and work on 3D data (this time, however, with GenICam compliant data format coverage, i.e. we will be able to talk directly to GenICam compliant cameras capable of outputting 3D point clouds once they become available).
  • A component for displaying 3D point clouds interactively.
  • An ICP-based point-cloud-matching algorithm comparable to what was called CVB Match 3D before.
  • A 3D calibration tool mostly for extrinsic camera calibration, this time with support for intrinsic calibration as well (initially, intrinsic calibration will cover the calibration models stored in AT cameras, later on also LMI and custom calibration).
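
Incidentally, the ICP matching and the extrinsic calibration in the list above both boil down to estimating a rigid transform from point correspondences. As a rough sketch of that core least-squares step (the SVD-based Kabsch algorithm; an illustration, not our actual implementation):

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t such that dst ≈ src @ R.T + t (Kabsch
    algorithm), the inner step of ICP and extrinsic calibration."""
    H = (src - src.mean(axis=0)).T @ (dst - dst.mean(axis=0))
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Recover a known 90-degree rotation about z plus a translation.
src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 2, 0], [0, 0, 3]])
Rz = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
t_true = np.array([1.0, 2.0, 3.0])
dst = src @ Rz.T + t_true
R, t = rigid_transform(src, dst)
```

Full ICP then alternates this step with re-matching nearest-neighbour correspondences until the transform converges.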

Once we have that rolled out, we will of course not stop there, but if you ask about the time frame for the release of e.g. a general purpose 3D stitching implementation, we are rather looking at 12 months or more.

#9

@illusive thanks for the clarification; it is good to understand a bit more about the roadmap. When the CVB 2018 3D release and the Intel RealSense support are out, we will definitely investigate using them. If the stitching tool were there, we would definitely be using it.

#10

Point noted (and frankly, that is exactly the kind of feedback we hope to get on this platform).

#11

Hi @FredI,

I thought about this a little bit and realized the pro point “better image” might be complete nonsense. These cameras don’t use the same optics/lenses, so that statement is a bad guess at best.

Sorry.

#12

@c.hartmann I understood the point, though: you will have more pixels per mm.