Accurate timestamping of image capture

Hello,
I’m working on a camera positioning system.
I would like to know if there is a way to accurately timestamp the images sent by the camera (the timestamp assigned by the OS during event handling is too late) => Is the camera able to provide this information itself?
With a static target my measurement is very good. With movement it degrades with the speed of the motion: my measured position lags behind reality.

Development on:

  • Windows OS (7 & 10) 64 bit
  • VS 2015
  • CVB 12.01.003

Hardware:

  • AVT Mako G419 (free running or snap image)…
  • Intel Pro 1000 PT

Thank you
Best regards
Xavier

Hi Xavier,

First up: may I ask what your use case is exactly?

Some ideas:
Your camera should have a node in the NodeMap that indicates the number of ticks the camera's internal timer performs per second.
You can read the tick count at the moment you acquire the image (RawTimestamp property on the image) and compare this number to the current DateTime of your system.
From then on you can calculate the time that has passed since the last acquired image and convert it to a DateTime.
This, however, is not a very accurate way of tracking the time at which an image was acquired.
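A rough sketch of that idea (firstImageRawTimestamp, newImageRawTimestamp and tickFrequency are placeholders for values you read from the image and from the NodeMap, not CVB API):

// One known camera/system time pair as reference:
long referenceTicks = firstImageRawTimestamp;   // RawTimestamp of the first image
DateTime referenceTime = DateTime.UtcNow;       // system time taken at that moment

// Later, for every newly acquired image:
long passedTicks = newImageRawTimestamp - referenceTicks;
TimeSpan passed = TimeSpan.FromSeconds((double)passedTicks / tickFrequency);
DateTime acquiredAt = referenceTime + passed;   // approximate acquisition time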

Another way would be to get the system time every time you acquire an image and write it onto the image or use it for the image's file name, for example:
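
A minimal sketch of that (the handler name is just an illustration of the usual CVB image control event pattern):

// Take the system time as soon as the snap event fires and keep it with the
// image, e.g. in the file name.
private void cvImage_ImageSnaped(object sender, System.EventArgs e)
{
	DateTime stamp = DateTime.UtcNow;
	string fileName = stamp.ToString("yyyyMMdd_HHmmss_fff") + ".bmp";
	// ... save or process the image together with this timestamp
}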

Hello Chris,
We use this camera in an offshore robotics project.
The measurements made by the camera serve as input data for the computer, which must predict the state of the sea to allow good guidance.
This is why precise timestamping of the images (=> measurements) is important. A delay of 10 or 15 ms is too large for us… The maximum acceptable deviation is 1 ms.
I use:
“this.m_cvImage.ImageSnaped += new System.EventHandler(this.cvImage_ImageSnaped);”
to get images for processing, and I use “DateTime.UtcNow” to timestamp them.

I am currently on leave. As soon as I get back to work, I will test the accuracy of your solution.

Thanks
Best regards
Xavier

Hello,
Following your indication, I am now getting the timestamp of the last image through the IGrab2 interface.
Cvb.Driver.IGrab2.G2GetGrabStatus() returns a counter in ns, is that right?
Accuracy is significantly improved at the beginning of the acquisition, but it degrades as time passes.
I also tried to implement the other method, with the IDeviceControl interface, like the VB .Net or VC++ example, adapted for C#.
But neither of the two functions returns a result.
Last try in C#:
m_cvGrabber.SendBinaryCommand(Cvb.Driver.IDeviceControl.DeviceCtrlCmd(0x00000800, Driver.IDeviceControl.DC_OPERATION.GET),
ref iParam, 0, ref _iOut, ref iSize);
Which value should iParam have?
Nothing is returned.
Same problem with m_cvGrabber.SendCommand()…
Cvb.Driver.IDeviceControl.DeviceCtrlCmd(0x00000800, Driver.IDeviceControl.DC_OPERATION.GET) itself works and returns 2048 (0x800).

Thank you
Best regards
Xavier

Hello,
Some bad news… The accuracy does not decrease over time; instead, the time shift oscillates (“flapping”) with a period of approximately 1 min.

I made a test with a linear axis and my camera (snap acquisition, linear velocity from 0 to 1 m/s) and compared the camera's position estimate with the axis position feedback.
I use a cross-correlation algorithm to compare the camera measurements with the axis positions.
With a static target my error is about ~2.5 mm, but in motion the error grows with the velocity of the movement.
From the position error and the speed at a given moment T, I deduce the temporal gap at that moment.

With the function “Cvb.Driver.IGrab2.G2GetGrabStatus()”, we obtain +/- 50 ms of time shift (errors well spread over that interval).
With computer timestamping, we obtain [-35;+20] ms ([min;max] error, [-15;0] for the majority of measurements) of time shift, and no flapping.
So “Cvb.Driver.IGrab2.G2GetGrabStatus()” is not a good solution in any case.

Is the RawTimestamp property from the NodeMap more reliable than “Cvb.Driver.IGrab2.G2GetGrabStatus()”?
Which C# code should I use to read this information?

With a perfectly calibrated periodic external trigger (10 Hz for example), is it possible to obtain a constant gap?
If the gap is constant, it will be easy to correct.

Thanks
Best regards

Hi @XBo,

if you want to go the G2GetGrabStatus() way, you have to pass GRAB_INFO_TIMESTAMP to get the timestamp of the current image (see the documentation):
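
For example (sketch; m_cvImage.Image is your grabbed CVB image and the out parameter receives the tick counter):

double timestampTicks = 0;
Cvb.Driver.IGrab2.G2GetGrabStatus(m_cvImage.Image,
	Cvb.Driver.IGrab2.TGrabInfoCMD.GRAB_INFO_TIMESTAMP, out timestampTicks);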

You can get the timestamp in .NET with the following call (assuming you use the new .NET wrappers, which you can get here: Wrapper):

var stamp = image.RawTimestamp;
With image being a Stemmer.Cvb.Driver.StreamImage.

Also, the values might get more accurate with higher frame rates in the camera. Have you tested that?

Regards
Chris

Thanks for the response,
My call seems correct, I think:
Cvb.Driver.IGrab2.G2GetGrabStatus(m_cvImage.Image,
Driver.IGrab2.TGrabInfoCMD.GRAB_INFO_TIMESTAMP, out val);

I am currently working @ ~14 fps (a higher frame rate is not possible in snap mode) with heavily multithreaded pose estimation (all logical cores at ~80% occupation on an i7 6700K). In my opinion it is not reasonable to use a higher frame rate for my application.

What gain can be expected, and at what frequency?
(In free running, the camera’s frame rate is about 21 fps.)

Regards
Xavier

Hi @XBo,

I had a short talk with @parsd and it seems your problem is that the quartz of the camera runs at a slightly different frequency than the one of your system.
This results in the drift that you are currently facing.
What you want to do is synchronize the time that passed in the system with the time that passed in the camera.

I will provide some pseudocode for now; a coded example will follow.

Tuple<T, T> SyncTimes()
{
	startTime = DateTime.Now;
	LatchTicks();                        // execute the GevTimestampControlLatch node
	ticksInLatch = timestamp tick value; // read the GevTimestampValue node
	endTime = DateTime.Now;
	passedTimeSpan = endTime - startTime;
	// assume the latch happened halfway between startTime and endTime
	timeOfLatch = startTime + (passedTimeSpan / 2);
	return Tuple(timeOfLatch, ticksInLatch);
}

Routine(Tuple<T, T> syncTuple)
{
	image = G2Wait();
	timeStamp = image.RawTimestamp;
	passedTicks = timeStamp - syncTuple.Item2;
	passedTime = CalculatePassedTimeFrom(passedTicks);
	imageWasTakenAt = syncTuple.Item1 + passedTime;
}

I hope I got everything correct; as mentioned above, a coded example will follow.
This logic will also drift over time, so you will want to call SyncTimes() periodically to keep the needed accuracy, for example as sketched below.
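
A minimal sketch of such a periodic resynchronization (pseudocode again, reusing SyncTimes() and Routine() from above; the 10 s interval is only an illustration):

resyncInterval = 10 seconds
syncTuple = SyncTimes();
lastSync = DateTime.Now;

while (acquiring)
{
	if (DateTime.Now - lastSync > resyncInterval)
	{
		syncTuple = SyncTimes();
		lastSync = DateTime.Now;
	}
	Routine(syncTuple);
}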

Regards
Chris

Hello Chris,
Thanks for your response.
I will try to implement this code now.
Regards
Xavier


Hi @XBo,

as promised my example:

I used the new .NET wrappers that I already linked above.
You can just open the example in LINQPad to test it.
You will need to reference the Stemmer.Cvb.dll.

void Main()
{
	using (Device device = DeviceFactory.Open("GenICam.vin"))
	{
		var cameraQuartzTicks = GetCameraQuartzTicks(device);
		device.Stream.Start();
		// Call this whenever your timedrift gets too big.
		var syncTuple = SyncTimes(device);
		// Some kind of loop around this block
		for (int numTestWaitCalls = 0; numTestWaitCalls < 10; numTestWaitCalls++)
		{
			using (var image = device.Stream.Wait())
			{
				var newImageTimestamp = (long)image.RawTimestamp;
				long passedTicks = newImageTimestamp - syncTuple.Item2;
				var passedTime = CalculatePassedTimeFrom(passedTicks, cameraQuartzTicks);
				var timeImageWasTaken = syncTuple.Item1.Add(passedTime);
				Console.WriteLine("Image was taken at " + timeImageWasTaken.ToString("HH:mm:ss.fff"));
			}
		}
	}
}

private TimeSpan CalculatePassedTimeFrom(long passedTicks, long cameraQuartzTicks)
{
	// cast to double, otherwise integer division truncates the fractional seconds
	return TimeSpan.FromSeconds((double)passedTicks / cameraQuartzTicks);
}

private Tuple<DateTime, long> SyncTimes(Device device)
{
	DateTime startTime = DateTime.Now;
	LatchTicks(device);
	DateTime endTime = DateTime.Now;
	var timeStampTickValue = GetTimeStampTickValue(device);
	var passedTimeSpan = endTime - startTime;
	// assume the latch happened halfway between startTime and endTime
	var timeOfLatch = new DateTime(startTime.Ticks + (passedTimeSpan.Ticks / 2));
	Console.WriteLine(timeOfLatch.ToString("HH:mm:ss.fff"));
	return new Tuple<DateTime, long>(timeOfLatch, timeStampTickValue);
}

private void LatchTicks(Device device)
{
	using (var nodemap = device.NodeMaps[NodeMapNames.Device])
	{
		var latchNode = nodemap["GevTimestampControlLatch"] as CommandNode;
		latchNode.Execute();
	}
}

private long GetTimeStampTickValue(Device device)
{
	using (var nodemap = device.NodeMaps[NodeMapNames.Device])
	{
		var timeStampTickValue = nodemap["GevTimestampValue"] as IntegerNode;
		return timeStampTickValue.Value;
	}
}

private long GetCameraQuartzTicks(Device device)
{
	using (var nodemap = device.NodeMaps[NodeMapNames.Device])
	{
		var quartzFrequency = nodemap["GevTimestampTickFrequency"] as IntegerNode;
		return quartzFrequency.Value;
	}
}

Hope it helps!

Regards
Chris


Hi Chris!
Big thanks for your example.
I will try it on Monday.

Based on your previous message, I tried to implement the correction in my own software.
The camera is now working in free running @ ~17 fps, and I use the functions IGrab2.G2Grab(image)/G2Wait().
The charts show my results:

  • In orange, the linear movement (scale: 1/50).
  • In dark blue, the difference between my measurement and the feedback from the linear axis, using the camera timestamp.
  • In green, the difference between my measurement and the feedback from the linear axis, using the computer DateTime.
    => The “flapping” disappears compared with my previous test.
    => The computer DateTime is still always better. :frowning:

I will check my version against your example on Monday.
Many thanks.
Have a good weekend.
Regards
Xavier


Hello @XBo, as @Chris said, we talked about your problem. He pointed out that you need sub-millisecond accuracy. The current method improves the situation, but it only achieves a few milliseconds. That is normally enough to properly identify images, but if you have more demanding needs you should look at

IEEE 1588 Precision Time Protocol (PTP)

Your camera should support that, but you need additional components to get a working system. Best contact our support team or your sales contact:

https://www.stemmer-imaging.de/en/request-support/
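
Enabling PTP in the camera itself is usually just a NodeMap write. As a sketch only (the SFNC feature name is "GevIEEE1588", but AVT cameras may expose it differently, e.g. as a "PtpMode" enumeration, so check the camera documentation first):

using (var nodemap = device.NodeMaps[NodeMapNames.Device])
{
	// Sketch: node name is an assumption, verify it in the camera's feature list.
	var ptpEnable = nodemap["GevIEEE1588"] as BooleanNode;
	ptpEnable.Value = true;
}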

Hello @Chris and @parsd,
I checked my code against your code example.
I am close to your code; I use the GenApi functions:

// get timestamp tick frequency node
res = GenApi.NMGetNode(_nodeMap, "GevTimestampTickFrequency", out _nodeTickFrequency);
// get timestamp control reset node
res = GenApi.NMGetNode(_nodeMap, "GevTimestampControlReset", out _nodeReset);
// get timestamp control latch node
res = GenApi.NMGetNode(_nodeMap, "GevTimestampControlLatch", out _nodeControlLatch);
// get timestamp tick value node
res = GenApi.NMGetNode(_nodeMap, "GevTimestampValue", out _nodeTimestampValue);

and

// get the timestamp tick frequency
GenApi.NGetAsInteger(_nodeTickFrequency, out dTickFrequency);
// execute the timestamp control latch
GenApi.NSetAsBoolean(_nodeControlLatch, true);
// read the current (latched) timestamp tick value
GenApi.NGetAsInteger(_nodeTimestampValue, out value);

OK, I am going to contact support about PTP.
But I am not sure about the gain in my case: we would increase the precision of the absolute time of the system (computer & camera), but not the relative timestamping of the data (in my opinion).
Regards
Xavier

There are PCIe Intel network cards that support IEEE 1588. They come with an SDK with which you can query the hardware clock.

With this setup you have the same time base.


Yes, it is a good solution to have an “absolute timestamp” on the system.

With my new acquisition process with the free-running camera, my best results come neither from the PC DateTime nor from the camera timestamp, but from assuming a fixed acquisition frequency…

So maybe the solution is to use the camera's exposure output signal to know exactly when the sensor is exposed, together with an “FPGA” (shorthand for a dedicated electronic board…) to determine the “right ticks” for each image…

For the moment, I am stopping my investigation of this subject.
Thank you very much for your help.
Regards
Xavier

Hi XBo,

yesterday we met at the Technology Day in Paris. Unfortunately, we are still not completely sure we fully understand what you are trying to do.

The camera itself has a timestamp that you can read in the way Chris showed you in the previous examples. This timestamp is a relative value and should be very precise (<1 ms). However, according to your measurements this timestamp is inaccurate by up to 30 ms. Is this your issue?

And how do you measure the inaccuracy? Do you compare the differences between two consecutive images to the pulses used for triggering? Or do you run the camera in freerun?

One possible explanation could be that you lose images in your acquisition. To verify that, please check all of the streaming statistics, e.g. the number of buffers lost:

double numLostBuffers = device.Stream.Statistics[StreamInfo.NumBuffersLost];
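
In context this could look like this (sketch; checked once after stopping the stream):

// After the test run, verify that no buffers were lost during acquisition.
device.Stream.Stop();
double numLostBuffers = device.Stream.Statistics[StreamInfo.NumBuffersLost];
Console.WriteLine("Buffers lost during acquisition: " + numLostBuffers);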

Hi @Moli,
Yes, that is exactly my problem.
I mount my camera on a linear axis (Vmax < 1 m/s) and acquire images of my target both statically and in motion (exposure time under 4 ms in all acquisitions, free-running mode).
My camera runs @ ~17.728 fps (the maximum fps at 12 bits on the Mako G419-B).
As Chris explained to me, I use the camera timestamp (software in C#).

In my data (my vision software) I checked dTmax and dTmin of the image timestamps => dT = [28;88] ms, for 56.4 ms on average. The distribution:

[Histogram: delay (in ms) between acquisitions]

I compare the recorded position of my linear axis (Facq = 100 Hz) with the result of my “Vision Pose Estimation” using the camera timestamps (cross-correlation algorithm); I apply a transform to minimize the error (defects in the linear axis position).
Under these conditions the maximum error is 28 mm (not acceptable); at ~1 m/s this corresponds to a timing error of roughly 28 ms.
If I instead assume dT is always 56.4 ms (17.728 Hz), the maximum error is about 10 mm (acceptable).
With a static target it is about 1 mm.

For the moment, I cannot check this value: device.Stream.Statistics[StreamInfo.NumBuffersLost]
The camera is at the customer’s site and I have no opportunity to test right now.
But there seem to be no “holes” (no dTmax > 100 ms), so I am quite sure no images are lost.
We need to order a new camera; I will measure it then, but I doubt that images are being lost.

According to Allied Vision, the timestamp is assigned either at the start or at the end of the exposure of the corresponding frame. This should be really precise: the clock is specified with a drift of only ±50 ppm, which would be around 0.05 ms per second. Thus, much less than what you see in your measurements.

Can you try to set the frame rate in the camera to a fixed (and not the maximum) value, for example as sketched below? Do the same variations occur then as well?
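
A sketch of how that could look via the NodeMap (the node name is an assumption; on AVT GigE cameras the frame rate feature is typically called AcquisitionFrameRateAbs, please verify against the Mako's feature list):

// Sketch only: fix the acquisition frame rate instead of running at maximum.
using (var nodemap = device.NodeMaps[NodeMapNames.Device])
{
	var frameRate = nodemap["AcquisitionFrameRateAbs"] as FloatNode;
	frameRate.Value = 15.0; // fixed rate below the ~17.7 fps maximum
}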

And did you disable the flow control in your network card? This parameter automatically regulates the network traffic, which might influence timings.

If the timing variations are still present afterwards, and since this question is not CVB related, I advise you to contact our support team (support@stemmer-imaging.de) for further help.