I’m looking for a simple way to merge two training sets, but I can’t find an option for that in TeachBench or Minos Teach.
You’re right, neither TeachBench nor TeachNT (aka Minos Teach) offers an option to merge two training sets (as far as I remember, the only teach program in Common Vision Blox to ever offer something like that was Manto Teach). So merging two training sets into one will need to be done in software.
You didn’t mention which programming language you’re using, but it doesn’t really matter: the approach is the same for all the currently supported languages:
- Load both training sets you want to merge into `mts1` and `mts2`.
- Get the number of images in `mts2` with `NumMTSImages` and loop over all the images in `mts2`, using `MTSImage` to get the current MTS image from `mts2`.
- Retrieve the underlying Common Vision Blox image with `GetImageFromImage` (MTS images and Common Vision Blox images are not exactly the same thing; one contains the other, but there’s also extra information) and add it to `mts1`.
- Get the number of marked instances in the current MTS image with `NumMTSImageInstances` and loop over all the instances, using `MTSImageInstance` to retrieve the current instance.
- If the instance belongs to a model that already exists in `mts1`, just add it with `NewMTSInstance`; otherwise add the model to `mts1` first with `NewMTSModel` (you can use `GetModelFromInstance` to get the required model information for `NewMTSModel`).
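Since the question is explicitly language-agnostic, the loop can be sketched in plain Python with mock data structures: a training set here is just a dict of models plus a list of images with their marked instances. None of these names (`merge_mts`, the dict layout) come from Common Vision Blox; the comments indicate which Minos function each step would map to in real code.

```python
# CVB-independent sketch of the merge loop described above.
# A "training set" is mocked as:
#   {"models": {name: model_info},
#    "images": [{"image": ..., "instances": [(model_name, position), ...]}]}

def merge_mts(mts1, mts2):
    """Merge mts2 into mts1 (mts1 is modified in place and returned)."""
    for mts_image in mts2["images"]:              # NumMTSImages / MTSImage
        # GetImageFromImage: take the plain image and add it to mts1
        merged_image = {"image": mts_image["image"], "instances": []}
        mts1["images"].append(merged_image)
        # NumMTSImageInstances / MTSImageInstance
        for model_name, position in mts_image["instances"]:
            if model_name not in mts1["models"]:
                # GetModelFromInstance + NewMTSModel for unknown models
                mts1["models"][model_name] = mts2["models"][model_name]
            # NewMTSInstance for every instance
            merged_image["instances"].append((model_name, position))
    return mts1
```

In real code the dict lookups would of course be replaced by the corresponding queries against the loaded training set handles.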
If in doubt, it’s a good idea to check the resulting training set for inconsistencies like contradicting or exceedingly similar models.
It generally works, but there are pitfalls… If `mts1` and `mts2` contain two different models with the same name, just adding the instances might not yield the results you want.
You’re right, in that case it’ll be necessary to try to determine whether the two models with the identical name might in fact show the same thing.
A heuristic initial check could be done by comparing the models’ feature windows - if those are already different between the two training sets, then in all likelihood the two models are indeed different.
Otherwise one could correlate the model image from `mts1` against the one from `mts2`; if the correlation result is below a certain threshold, then you assume a new and different model that happens to have the same name as an already existing one, otherwise you assume them to be identical.
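Put together, the two checks could look like this in a CVB-independent sketch. The helper name `models_look_identical` and the data layout are made up: a model is mocked as its feature window plus a small grayscale patch, and the correlation is a plain normalized cross-correlation.

```python
import math

def models_look_identical(model1, model2, threshold=0.6):
    """model = {"feature_window": (left, top, right, bottom), "patch": 2D list}.

    Stage 1 compares the feature windows; stage 2 correlates the patches
    (both patches are assumed to have the same size)."""
    # Different feature windows -> in all likelihood different models.
    if model1["feature_window"] != model2["feature_window"]:
        return False
    # Normalized cross-correlation of the flattened model images.
    a = [p for row in model1["patch"] for p in row]
    b = [p for row in model2["patch"] for p in row]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    if den == 0:  # at least one patch is completely flat
        return model1["patch"] == model2["patch"]
    return num / den >= threshold
```

The 0.6 threshold is only a placeholder here, chosen to match the TeachNT default mentioned below.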
This is, by the way, basically the same thing that happened in the older TeachNT application if you clicked a location in an image and provided the name of a model that already exists: the program would correlate the existing model’s image against the location that was clicked. If the match was below a predefined threshold (0.6 by default), it would show a message box asking whether or not the new location should be added to the existing model despite the poor match; if the answer was no, it would then prompt for the generation of a new model.
The message box in this case did not come from TeachNT but from MinosCVC.dll itself (one of the few cases where a Common Vision Blox function may actually show a message box…). The function that does this is `NewMTSInstance` (if the `AskForce` parameter is `true`) - and you could in fact leverage this if you want the operator to interactively select whether the models should be forced together or not. The correlation threshold for this functionality can be defined through the API as well.
Again, not an entirely stable solution. Imagine a case where I have a pattern of e.g. 100x100 pixels, trained at the center (50, 50) and named “A” in `mts1`, and the same pattern, this time trained with a size of only 50x50 pixels but at the same center location and under the same name “A”, in `mts2`. If I go ahead as described above, this would add this pattern to the bigger model “A” in `mts1`.
True and fair - but then again: what’s the point? If the same pattern has been trained in both training set files, but once with the central quadrant only and once as the entire pattern, then a merged training set would necessarily produce a classifier where one class contradicts the other - a situation that should always be avoided.
Avoiding that sort of problem generically is going to be tough, but the general recommendation should probably have been to check the merged training set for such inconsistencies.
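Such a consistency check could, for instance, pairwise-correlate the model images of the merged set and flag suspiciously similar pairs. A rough sketch with mocked patches; the helper names and the 0.9 similarity threshold are assumptions, not anything Common Vision Blox provides:

```python
import math
from itertools import combinations

def correlation(patch_a, patch_b):
    """Normalized cross-correlation of two same-sized 2D grayscale patches."""
    a = [p for row in patch_a for p in row]
    b = [p for row in patch_b for p in row]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 1.0

def find_suspicious_pairs(models, threshold=0.9):
    """models: {name: 2D patch}. Returns pairs of distinct model names whose
    images correlate above `threshold` - candidates for contradicting or
    exceedingly similar classes in the merged training set."""
    return [(n1, n2)
            for (n1, p1), (n2, p2) in combinations(models.items(), 2)
            if correlation(p1, p2) >= threshold]
```

Any pair reported this way would then be reviewed manually, e.g. in TeachBench, before training a classifier from the merged set.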