GigE CVB with CUDA

#1

Hi,
is it possible to use CVB with CUDA?

Thanks for your help

#2

Hi @pavik1994
Our CVB tools do not use CUDA to speed up processing, if that is what you mean.

To develop your own GPU algorithms while using the CVB Image Manager for acquisition, and to combine the results with CVB tools, we have our GPUprocessing tool. With it you can write your own shaders in the High Level Shading Language (HLSL).

If you want to develop your own algorithms with CUDA and combine them with CVB, you can access the image data e.g. with GetLinearAccess, which gives you a pointer to the pixel buffer. You can then copy it to GPU memory via the NVIDIA SDK, run your processing, and copy the result back to system memory.
From that system-memory buffer you can create a new CVB image object with CreateImageFromPointer and do additional processing with CVB tools.

Regards
Sebastian

#3

You might also be interested in a document that @Frank mentioned in this discussion: we have an application note that describes how to interface :cvb: to OpenCV. This is not directly about CUDA, but the concept is the same: if you want to use the content of a :cvb: image as the input buffer for a CUDA kernel, you effectively need to do the same things you would do to use a :cvb: image as the data source for OpenCV (and vice versa). Things will probably be a lot easier if the image width is a multiple of four; beyond that, all you need to verify is that GetLinearAccess (pointed out by @Sebastian) returns true, which is usually the case unless you made use of e.g. the RotateImage flag in the GenICam.ini file.

You can download the full document from here: ftp://ftp.commonvisionblox.com/forum/documents/CVBInterop%201.1.0.pdf

@Sebastian: It is not apparent from this post, but @pavik1994 is using a TX1, which means GPUprocessing + HLSL is not an option.