Hello,
I have a Polimago application using both a search and a classification classifier (search for a defect, then classify it). I would like to make a restricted ‘Teach’ application that will let me control users’ access to the parameters. I’ve been looking at the manual, and I believe I will have to:
create a SIL: SilCreate(…). Really, I’ll need one for the search, one for the classification
add images: SilAddNamedImageItem(…). This looks straightforward
Next I get a little confused - I think I need to go through TTrainParams and TTrainSearchParams before I call PMTrainClassifierFromSil and PMTrainSearchClassifierFromSil. Is that right?
A C# code snippet would be useful but I think I might not be too far away.
This is just a rough outline, but it should point you in the right direction. The method names below refer to the C++ API, and there are a few additional methods for configuration and training set interaction. But in general (a code sketch follows each of the two lists):
For search you need:
Create a training set:
CreateMTS()
NewMTSImage()
NewMTSModel() (in the case of Polimago Search there is only one)
NewMTSInstance() (for every MTSImage)
WriteMTSFile()
Train the classifier:
PMTrainSearchClassifierFromMts() (here you need to pass the TInvarianceParams)
PMSaveSearchClf()
Do the search (applying the search classifier) on a given image or image ROI:
PMOpenSearchClf()
PMGridSearch() to get a list of search results
I think you could also create a search training set based on a SIL, similar to how the classification training set is handled below.
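To make the MTS-based sequence concrete, here is a rough C++ sketch of the search workflow. The function names are the ones from the list, but the parameter lists and the handle type names are placeholders from memory, not the real signatures - verify everything against the Minos and Polimago headers before use.

```cpp
// Build the Minos training set (exactly one model for Polimago Search).
// NOTE: parameter lists are placeholders - check the actual headers!
MTS mts = nullptr;
CreateMTS(/* feature window geometry, ... */ mts);

MTSIMAGE img   = NewMTSImage(mts, trainingImage);      // one per image
MTSMODEL model = NewMTSModel(mts, "defect");           // only one model
NewMTSInstance(img, model, posX, posY);                // one per MTSIMAGE
WriteMTSFile(mts, "defects.mts");

// Train and persist the search classifier (TInvarianceParams go in here).
PMSCLF searchClf = nullptr;                            // handle type assumed
PMTrainSearchClassifierFromMts(mts, invarianceParams, /* ... */ searchClf);
PMSaveSearchClf("defects.psc", searchClf);

// Later, in the restricted application: load and run the grid search.
PMOpenSearchClf("defects.psc", searchClf);
PMGridSearch(searchClf, image, /* ROI, grid step, threshold, ... */ results);
```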
For classification you need:
Create a training set:
SilCreate()
SilAddItem() (with SilCreateImageData() and SilCreateStringLabel() prior)
SilStore()
For training:
PMTrainClassifierFromSil()
PMSaveClf()
Classify a given position in an image:
PMClassify() to get the classification result for that position
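And the classification counterpart, with the same caveat - the call order follows the list above, while the parameter lists and handle type names are assumptions to be checked against Sil.h and Polimago.h:

```cpp
// Build the Sample Image List.
// NOTE: parameter lists are placeholders - check the actual headers!
SIL sil = nullptr;
SilCreate(/* data format, feature window extent, fringe, ... */ sil);

// Per training sample: wrap pixel data and class label, then add the item.
SILIMAGEDATA data  = SilCreateImageData(image, samplePosX, samplePosY);
SILLABEL     label = SilCreateStringLabel("scratch");  // example class name
SilAddItem(sil, data, label);
SilStore(sil, "classes.sil");

// Train and persist the classification classifier.
PMCLF clf = nullptr;                                   // handle type assumed
PMTrainClassifierFromSil(sil, /* training parameters */ clf);
PMSaveClf("classes.pcc", clf);

// Classify a position (e.g. one returned by the search step).
PMClassify(clf, image, posX, posY, /* result: class + quality */);
```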
The main difference is the method you call for the actual training, what you do with the resulting classifier and, of course, how the training set is structured: for search you only have a single class/model, for classification you have at least one but potentially multiple classes.
The topic that you touched upon is actually one of the more difficult-to-grasp aspects of Polimago. @Frank's answer pretty much covers all the details already, so I'll try to also give the big picture.
First of all: When looking at Polimago it’s important to keep in mind that what you’re looking at are in fact almost two different tools:
On one side we have the Classification/Regression functionality. Long-time users have certainly noticed the similarities between the classification aspect of Polimago and the older Manto tool. This is no coincidence: both tools come from the same author and draw upon the same basic ideas - they only differ in how they achieve their goal.
While Manto was, to my knowledge, the first commercially available machine vision tool based on Support Vector Machines, the classification aspect of Polimago can be seen as the logical next step in that evolution: an improvement upon Manto in some key aspects, but not radically different. The entire preprocessing approach that maps input images to the classification space is common to both tools (with a few detail improvements like the Retina-Paxel approach in Polimago). The SVM was replaced by Tikhonov regularization, which enables the convenience of the holdout test and yields slightly better results, but that's about it.
(Regression has not been specifically mentioned thus far, but at the end of the day the main difference between classification and regression boils down to the fact that classification training matches the input data to the discrete results { -1, +1 }, while regression may draw upon a continuum of values.)
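Written as formulas, this is just a restatement of the parenthetical above (with X standing for the preprocessed input space and the target interval chosen purely as an example):

```latex
% Classification: the trained function maps each input to a discrete label
f_{\mathrm{clf}} : X \to \{-1, +1\}

% Regression: the trained function maps each input into a continuum of
% target values, e.g. an angle or a coordinate
f_{\mathrm{reg}} : X \to [t_{\min}, t_{\max}] \subset \mathbb{R}
```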
On the other side there is the search classifier of Polimago which is a unique, new and very versatile approach to the problem of determining an object’s position and shape using a classification method. Reducing the search classifier to what MCSearchAll did in Manto wouldn’t do it justice because by determining rotation and scale state it can actually do much more than what was possible with Manto - and faster.
Technically the search classifier is a small group of classification and regression classifiers that result from a fairly specialized training approach - and this is where the two entirely different aspects of the Polimago tools meet: They are both built on the same mathematical approach and using the same methods with different use cases in mind.
These different aspects almost necessarily result in very different training requirements and these in turn result in different database types being used:
| Aspect | Classification / Regression | Search |
|---|---|---|
| Use case | Classification into one of N different classes / mapping into the range of trained target values | Determination of object position, scale, rotation or another affine transformation state |
| # of classes trainable | Classification: practical limit lies somewhere between 20 and 50; Regression: the class concept does not apply | Only one class per search classifier possible (but: the images in that class do not need to look similar! e.g. training "0" through "9" into just one classifier is ok) |
| Training set size | Usually 100+ per class | Typically some 10 to 30 |
| Automatic instance generation | None | Typically in the 4-digit range |
| Training from negative samples | None | Implicit |
From the table above one can deduce that for classification/regression training we'll need a database that can handle a large number of small images just the size of the feature window, whereas for search training it is essential that the images are significantly larger than the feature window, so that the training process has pixel material from which to generate the implicit negative samples. This is the reason why TeachBench handles the two project types very differently and with different training database formats.
Fun fact: the first iterations of Polimago did not even foresee a database format of any kind being used. Instead, one had to fill a structure of callback functions which the tool would use to find out anything it needed to know about the training data. These functions are still there: in the C and Delphi headers they are TGetLearningDataNumImagePlanes through TLearningDataFinalRelease, and you can create a set of them by calling PMCreateLearningDataAccessSearch or PMCreateLearningDataAccess (or the Unicode counterparts of these two…). In C# you would create an object that implements one of the interfaces ILearningDataAccessClassificationAndRegression or ILearningDataAccessSearch for the same purpose.
This approach is very versatile and allows the user to interface to anything that can provide annotated image data for training. However, for something like the TeachBench we needed a native format for both use cases and ended up using the Sample Image Lists known from Manto for classification and - with slight adaptations - Regression, while the search aspect is covered using Minos training sets (rename your *.pts files to *.mts and TeachBench will open them as a Minos project…). This is why the first five functions that @Frank mentioned are not actually found in the Polimago.dll but in the MinosCVC.dll, whereas all the Sil* functions are located in the Sil.dll. It’s also the reason why the PMTrain* functions are named the way they are.
And - finally - the differences in the use cases (see the table above) are also the reason why training a search classifier requires quite a few more parameters than regular training - hence the difference between TTrainParams (which you would actually never need to populate as a struct yourself, because the training functions accept its members as individual parameters) and TTrainSearchParams.
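As an illustration of that difference (the parameter lists are again placeholders, the real declarations live in Polimago.h):

```cpp
// Classification/regression: the TTrainParams members are passed as plain
// arguments, so the struct never needs to be built by the caller.
PMTrainClassifierFromSil(sil, /* individual TTrainParams members */ clf);

// Search: the richer parameter set travels as an explicit struct, which
// also carries the invariance settings for rotation and scale.
TTrainSearchParams tsp = {};      // fill members per the documentation
PMTrainSearchClassifierFromMts(mts, tsp, /* ... */ searchClf);
```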
I would strongly advise against training a search classifier from a SIL. It is technically possible: you can generate a SIL that has a huge fringe, but for this to make sense for search classifier training you'd need to make the fringe so big that the feature window only covers about 1/9th of each image (a fringe as wide as the feature window itself on every side makes each image three times the window size in both dimensions, hence nine times its area). This quickly becomes unwieldy - not least because in a SIL the position of the feature window is fixed for each image, so you'd need to make sure that your training samples always sit at the same location in each training image.
The Minos Training Set removes these restrictions and is generally much better suited to the use case of search classifier training. What it unfortunately lacks is support for color images (a search classifier could in principle also be trained on color images!) and annotation of rotation and scale state (i.e. at the moment you'll need to make sure that all images that enter search classifier training are given to the tool at 0° rotation and 100% scale). But we're working on that, and you can expect these restrictions to be lifted no later than version 14.0.
I think I need to read that a couple more times, @illusive, but I really appreciate the detail - there is nothing worse than being asked questions in front of a customer and not even being able to provide a partial answer without further research (thank you, STEMMER support…).
A couple of points:
“fringe” - you mean the border outside of the feature window? The neighbourhood information? I understand why this would be important for a search classifier.
So I should use the Minos mts functions for search - I know there are some Quickteach tutorials (handy for alignment applications) but I haven’t seen any tutorials that build an MTS - I will play and see how I get on.
Thanks for the information.
PS - good to hear that there is more in the works: the difference between the original release of Polimago and the second release was really nice.
In the context of a Sample Image List, fringe means a belt of pixels that surrounds the feature window on all sides. The size of the fringe is a property of the Sample Image List and is therefore the same for each sample in the list. Its original purpose was to provide the information necessary for Manto's instance generation feature: Manto offered an option to increase the number of training samples by shifting and rotating the trained samples by random amounts; the limits for these shifts and rotations had to be specified during Sample Image List creation, and based on those limits Manto added a fringe around the feature window to make sure that for all rotations and translations within the specified limits there were always valid pixels available.
With Polimago, this fringe has lost its meaning: currently Polimago (classification & regression) does not offer the shift & rotate option that Manto used to have. Yet the fringe is still part of the Sample Image List specification - basically a remnant from previous versions.
Should not be too difficult. Just remember two things:
Restrict yourself to just one class. MinosCVC.dll will let you build an MTS with more than one class, because with Minos this makes perfect sense - so when using the MTS for Polimago you will have to adhere to the one-class limit yourself rather than rely on the software to enforce it.
Building and feeding an MTS is not actually very difficult. The most surprising aspect to me was that although the MTS objects (MTSIMAGE, MTSMODEL, MTSINSTANCE) are in fact reference-counted objects, MinosCVC.dll will usually not modify the reference count when you query the MTS for one of these objects. I recommend working with at least version 13.0 - in that version the Minos API documentation was updated, and the reference counting behavior is now described more explicitly than in previous versions.
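A tiny illustration of that pitfall - note that GetMTSImage is just a hypothetical stand-in for whichever MinosCVC.dll query function you actually use; the ownership rule is the point:

```cpp
// "GetMTSImage" is a hypothetical stand-in for the real query function.
MTSIMAGE img = GetMTSImage(mts, 0);  // the query does NOT add a reference

// ... work with img ...

// Do NOT call ReleaseObject(img) here: that would decrement a reference
// the training set itself still owns. Only release what you explicitly
// shared yourself (e.g. after a ShareObject(img) of your own).
```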