I have seen from the Intel D400 page that you will be supporting the RealSense drivers out of the gate. I am very interested in the potential of using multiple cameras to build up a highly detailed colour scan of our environment, and CVB looks like a good option for that, especially if it can pre-stitch the point clouds before passing them to ROS modules and image-analysis modules.
Intel RealSense D400 Stitching
Is it possible to use CVB to acquire point clouds from four or more D400 cameras and stitch them together? For example, if I know the transforms between the cameras, could CVB handle both the acquisition and the stitching and then output a single image? (A rough sketch of the stitching step I have in mind is below.)
Do you have any example applications of 3D stitching?
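To make the stitching question concrete, here is a minimal sketch of the step I would like CVB to take care of. It is plain NumPy rather than any CVB API, and the point arrays and extrinsic transforms are placeholders:

```python
import numpy as np

def stitch_point_clouds(clouds, extrinsics):
    """Merge per-camera point clouds into one cloud in a common frame.

    clouds     -- list of (N_i, 3) arrays of XYZ points, one per camera
    extrinsics -- list of 4x4 homogeneous transforms mapping each
                  camera frame into the common (world) frame
    """
    merged = []
    for points, T in zip(clouds, extrinsics):
        # Promote to homogeneous coordinates, apply the known transform,
        # and drop back to XYZ.
        homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
        merged.append((homogeneous @ T.T)[:, :3])
    return np.vstack(merged)

# Placeholder data: four cameras, identity transforms just for illustration.
clouds = [np.random.rand(1000, 3) for _ in range(4)]
extrinsics = [np.eye(4) for _ in range(4)]
stitched = stitch_point_clouds(clouds, extrinsics)
print(stitched.shape)  # (4000, 3)
```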
Passing point clouds/images to ROS
Is it possible to pass point clouds straight from CVB to ROS modules, or to use CVB within the ROS architecture?
If so, could it also pass on the pre-processed, stitched point cloud discussed above? (A sketch of the ROS publishing side is below.)
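For reference, this is roughly how I would expect to hand a stitched cloud to ROS on our side, assuming CVB can deliver it as a plain XYZ array. It uses the standard rospy/sensor_msgs API; the node name and topic are just placeholders:

```python
import numpy as np
import rospy
from std_msgs.msg import Header
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2

def publish_stitched_cloud(points, frame_id="world", topic="/stitched_cloud"):
    """Publish an (N, 3) XYZ array as a sensor_msgs/PointCloud2 message."""
    pub = rospy.Publisher(topic, PointCloud2, queue_size=1, latch=True)
    header = Header(stamp=rospy.Time.now(), frame_id=frame_id)
    # create_cloud_xyz32 expects an iterable of (x, y, z) tuples.
    msg = point_cloud2.create_cloud_xyz32(header, points.tolist())
    pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("cvb_stitch_bridge")
    stitched = np.random.rand(4000, 3).astype(np.float32)  # placeholder cloud
    publish_stitched_cloud(stitched)
    rospy.spin()
```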