Parsing the line count out of a StreamImage from a Linea camera

(I’m not sure if this is on topic here)

I’ve got the code for getting the line counter from a SapBuffer, but I’m not sure if/how that translates when I have a StreamImage via Cvb. Has anyone got an example of this anywhere?

I only need to read the counter of one line of the image, if that makes a difference.

I hope I understand correctly: you want the valid lines delivered by the camera via :cvb:?

You can use the following code right after the .Wait() call:

// from iDC_GenICam.h: DC_BUFFER_INFO_DELIVERED_TRAILER_HEIGHT   = 0x00001A00,
var getDeliveredHeight = new DeviceControlCommand(DeviceControlOperation.Get, 0x00001A00);
device.DeviceControl.SendCommand(getDeliveredHeight, 0, out long deliveredLines);

Also you can crop the image without copying it via a MappedImage:

var croppedImage = image.Map(new Rect { Width = image.Width, Height = (int)deliveredLines});

But take care: because the data is not copied, the original image returned from .Wait() must still be kept valid (not disposed) for as long as the mapped image is in use.


Thanks. I don’t think that’s what I mean. I’ve found the name for what I’m looking for: the End-of-Line metadata. But I see in the manual that the content of the meta-data is actually configurable, which I guess means custom parsing! Is GetPixel() the best option for this? I’m not sure how to translate from an array of doubles into a qword.

Also, what is a Frame in the context of a line scan? Because I see there’s a LineCounter and a FrameCounter.

I’m no expert with the Linea, but with GEV linescan cameras you send a preconfigured number of lines in one frame to reduce protocol overhead. So you receive such a logical frame when that number of lines have been acquired. Except if you use e.g. a start/stop trigger. Then the frame may be some lines shorter. The actual number of valid lines can be retrieved by my code above.

Regarding GetPixel: that is the slowest and most complicated method. Use linear access on the plane of the image instead (e.g. .GetLinearAccess&lt;byte&gt;() on a Mono8 image frame). I will write about this in a later post, when I have more than my phone at hand :slight_smile:.


Here just a quick example on how to access Mono8 frame information:

var plane = image.Planes[0];

var access = plane.GetLinearAccess<byte>();
int x = 17, y = 42;
var pixelAtXY = access[x, y];

If you have Mono16 (anything between Mono10 and Mono16 to be exact) you can use this variation:

var plane = image.Planes[0];

var access = plane.GetLinearAccess<ushort>();
int x = 17, y = 42;
var pixelAtXY = access[x, y];

Thanks, I’ll give that a try.
I’ve got BiColorRGBG8 :slight_smile:
The metadata isn’t really ‘pixels’, of course. I’ve managed to configure it so that I just get the FrameCount in the second-to-last two pixels, but I haven’t quite worked out how they are encoded.


Hi @Ben,

@parsd asked me to comment on this topic as I already have some experience with the Linea EoL metadata. The idea behind the metadata, encoding information into the image itself, is actually used quite often.
It is easy if you only have an 8 bit value to put into, say, a Mono8 image: the pixel at a known coordinate then simply holds that 8 bit value.
It gets more complex with values larger than 8 bit (16, 32, 64 and so on). The basic idea then is to split the value into 8 bit parts and use two or more pixels of the image for those parts.

I am not sure, but in your case BiColorRGBG8 can be interpreted as Mono8, so I would guess that the FrameCounter value you are referring to is a 16 bit value encoded into two 8 bit pixels, where the first pixel holds the MSB (Most Significant Byte) and the second the LSB (Least Significant Byte).
To extract the value you typically just use LinearAccess (as @parsd suggested) to read both pixels and then combine them with a bit shift and an addition.

Example: ushort frameCount = (ushort)((pixelMsb << 8) + pixelLsb); (note the parentheses: in C#, + binds more tightly than <<)

Depending on the programming language you may also need a mask to zero out bits that are “not initialized”.

If you look at the images you are acquiring, one of the two pixels will increase with each image. That pixel holds the LSB; the other one holds the MSB.
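A self-contained sketch of that shift-and-add (the pixel values here are made up for illustration; if your camera delivers the LSB first, swap the two operands):

```csharp
// Combine two 8-bit metadata pixels into one 16-bit frame counter.
byte pixelMsb = 0x01; // hypothetical pixel holding the most significant byte
byte pixelLsb = 0xA5; // hypothetical pixel holding the least significant byte

// Parenthesize the shift: in C#, '+' binds more tightly than '<<',
// so 'pixelMsb << 8 + pixelLsb' would shift by (8 + pixelLsb) instead.
ushort frameCount = (ushort)((pixelMsb << 8) + pixelLsb);
// frameCount == 0x01A5 == 421
```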

Kind regards,
Theo


Ps.: If you transfer the image as BiColorRGBG8, make sure that it is not automatically transformed into RGB by CVB. I would guess that the transformation makes it hard to extract the correct FrameCount value. So use “PixelFormat = RAW” for your first tests. :wink:


OK, trying now.

I don’t think I have RAW format, just BiColorRGBG8, RGB8, BGRa 8-bit, Mono12 & G8. Are any of those RAW?


The Raw option refers to how the data is interpreted in :cvb:. Raw means: as delivered by the camera. Here is an example:


Yes, apparently there is a difference in pixel values between Raw and non-Raw :frowning:. Is there a way to convert the image to the correct format after checking the pixel values?

One thing I don’t understand: the Sapera documentation says that the FrameCounter and CameraId are in QWORD 6 (64 bits), and yet in the UI it says they occupy 4 pixels (which would only be 32 bits, so only 16 each).

(I’m going to try all 4 pixels/bytes to see if any makes any difference or sense - I can’t see what use a counter that only goes to 65535 is)

Can anyone shed any light on this?

(image attachment: metadata pixel table)

:frowning: And of my four pixels, only one changes, so that means my counter range is 256!


Well, you of course have the option to transform your BiColorRGBG8 into RGB24 directly on the camera. I think the camera will then write the correct values into the RGB image. Regarding manual transformation of the data via CVB, I would like to hand that question over to @parsd :wink: .

I think the QWORD is divided into FrameCounter and CameraID, so each value gets 4 pixels (as mentioned in the table you included in your post). So according to the documentation you have 32 bits for your FrameCounter, which should be 4 pixels. If that is not the case I would have to double check it here or at your desk. On the other hand, I do not understand why a 16 bit value would be of no use to you. What is your use case here? The value will overflow eventually and start from zero again, but that is something your software could track.

The pixel which counts up to 255 is the pixel with the LSB. After those first 256 frames this pixel overflows and the second pixel increases by one. When the second pixel overflows, the third pixel increases, and so on and so forth =). Did you wait long enough for an overflow? :slight_smile:
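Tracking that overflow in software could look like this (a minimal sketch; Extend is a hypothetical helper, and it assumes no frames are dropped between reads):

```csharp
// Extend a wrapping 16-bit hardware frame counter to 64 bits by
// counting overflows between successive frames.
long wraps = 0; // number of times the 16-bit counter has wrapped
int last = 0;   // counter value seen in the previous frame

long Extend(int current)
{
    if (current < last)  // counter wrapped around, e.g. 65535 -> 0
        wraps++;
    last = current;
    return wraps * 65536L + current;
}

// Feeding 65534, 65535, 0, 1 yields 65534, 65535, 65536, 65537.
```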

regards,
Theo


Thanks, there’s something weird in the way the values are encoded that I’m missing. It seems that even though it says 8 bits per pixel, when they arrive in memory there are actually two bytes for each pixel. And for the metadata, there is ‘other’ information in the second byte.

So, for example, when I look at the last four pixels of the first row I get this:

var access = image.Planes[0].GetLinearAccess<byte>();
var pixel1 = access[1020, 0]; 
var pixel2 = access[1021, 0];
var pixel3 = access[1022, 0];
var pixel4 = access[1023, 0];

This returns 1, 0, 101, 184

But if I access memory directly:

for (int i = 0; i < 4096; i++)
{
    var byt = Marshal.ReadByte(access.BasePtr, i);
    System.Diagnostics.Debug.WriteLine($"{i}: {byt}");
}

I find this:

2040: 1
2041: 0
2042: 0
2043: 0
2044: 101
2045: 81
2046: 184
2047: 0

(The second half of that is probably the CameraId.) So it seems there’s extra information ‘hiding’ in there that I can’t see using the access indexers. I guess this has something to do with XInc being 2? This is weird, because I’d have expected GetLinearAccess&lt;ushort&gt; to work in that case, but that throws.
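That would explain the pattern in the dump: a linear access indexer computes addresses from the increments, so with XInc = 2 every second byte is skipped. A small sketch (the increment values here are assumed from the dump above, not read from a real buffer):

```csharp
// Why access[x, y] skipped bytes: the indexer address is
// BasePtr + x * XInc + y * YInc.
long basePtr = 0;           // stand-in for access.BasePtr
long xInc = 2, yInc = 2048; // assumed increments for this 1024-pixel buffer

long AddressOf(int x, int y) => basePtr + x * xInc + y * yInc;

// Pixel 1020 of row 0 maps to byte offset 2040, matching the dump;
// 1021 maps to 2042, so offsets 2041 and 2043 are never visited.
```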

Now that I know this, I can just do a Marshal.ReadInt32(access.BasePtr, 2040) directly to find the FrameCounter. (Correction: it needs to be uint, which Marshal doesn’t handle, and I also haven’t tested beyond 2^16 to verify.)
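Since Marshal has no ReadUInt32, one workaround is to reinterpret the bits of the ReadInt32 result. A sketch against a scratch unmanaged buffer (the byte values are made up to show the sign issue):

```csharp
using System;
using System.Runtime.InteropServices;

// Read a 32-bit little-endian counter from unmanaged memory as uint.
uint counter = 0;
IntPtr buf = Marshal.AllocHGlobal(4);
try
{
    // hypothetical metadata bytes with the top bit set
    Marshal.Copy(new byte[] { 0xFF, 0xFF, 0xFF, 0xFF }, 0, buf, 4);
    // ReadInt32 yields -1 here; reinterpreting the bits gives the
    // unsigned counter value 4294967295.
    counter = unchecked((uint)Marshal.ReadInt32(buf, 0));
}
finally
{
    Marshal.FreeHGlobal(buf); // always release the scratch buffer
}
```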

Switching away from RAW format, I have XInc = 3, and for the data range related to the FrameCounter I see this:

3060: 1
3061: 0
3062: 32
3063: 51
3064: 0
3065: 0

I’m not at all sure what the middle two are for. Actually, looking at the images, it looks like that could actually be sensor information ‘bleeding’ through; maybe the metadata only writes onto two out of three bytes?

Anyway, I’ve tried this up to 2^16 + 1, and it seems to work:

var byt1 = Marshal.ReadByte(access.BasePtr, 3060);
var byt2 = Marshal.ReadByte(access.BasePtr, 3061);
var byt3 = Marshal.ReadByte(access.BasePtr, 3064);
var byt4 = Marshal.ReadByte(access.BasePtr, 3065);
//3 & 4 reversed is on purpose! - found empirically
var id = (byt3 << 24) + (byt4 << 16) + (byt2 << 8) + byt1;

In my context, one image is 20 cm, so 2^16 images is about 13 km, which is not unrealistic. An extra byte pushes me up beyond 3000 km, which is better :slight_smile:

Well, that XInc being 2 really does seem strange and probably should be one. That is something we will have to look into (@dusalf :hugs:).

As mentioned before, there is still the possibility to switch the pixel format to RGB24 on the camera. Is your data rate too high to do that?

We don’t seem to have that option on these cameras (Linea C2048-7um).

The only reason I can think of that the PixelFormat does not support RGB24 is that you have activated TurboDrive. Otherwise RGB24 (or RGB8) should be supported.

Yes, I’ve got RGB8. Not sure what TurboDrive is though (I can’t find it in the Property map), and I’ve got the default settings. I’ll try RGB8 now and let you know what I find.

OK, very strange findings…

With RGB8:

  • The Metadata pixel count now reads (correctly) 8 pixels
  • The image now also contains 8 extra pixels on the right-hand side; the last 4 are black (I didn’t activate CameraId). This is the expected behaviour.
  • XInc still says 3, and I have to offset BasePtr by 3048* to find the first byte of the FrameCount.
  • But strangely (even though XInc says 3), the four counter bytes appear to be consecutive, so I can just do (uint)Marshal.ReadInt32(access.BasePtr, 3048), and that appears to work.

* 2048 pixel sensor, but HorizontalBinning = 2, so 1024 pixels; minus the 8 metadata pixels = 1016; times 3 (because of XInc?) = 3048.

This case with XInc == 3 is correct. :cvb: has a plane concept where each color component is a plane:

Plane

  1. Red
  2. Green
  3. Blue

Thus when the data is stored as RGBRGB…RGBMETA, the offset from one pixel to the next within a plane is 3 (e.g. R to R). To get to the metadata region you can use the width times 3 as the offset from the start of the line. The metadata should lie in the padding region, so the YInc should be width * 3 + the metadata size.

This also means the metadata should not be encoded within the width of the image. Otherwise our viewers, like the GenICamBrowser, would show invalid images.
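Putting the numbers from this thread together (sensor width, binning and metadata size are taken from the posts above), the offset works out like this:

```csharp
// Locate the metadata region of one interleaved RGB8 line.
int sensorWidth = 2048;             // Linea sensor pixels
int binnedWidth = sensorWidth / 2;  // HorizontalBinning = 2 -> 1024
int metaPixels = 8;                 // EoL metadata pixels per line
int imageWidth = binnedWidth - metaPixels; // 1016 visible pixels
int xInc = 3;                       // bytes per RGB8 pixel (R, G, B)

int metaOffset = imageWidth * xInc; // first metadata byte in the line
// metaOffset == 3048, matching the empirically found offset
```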

Well I can see the metadata pixels in GenICamBrowser, is that a bug?

I did wonder about that, but I think using Marshal is still better than getting the three values from GetPixel?