Hi and welcome @schulz!
Sorry, but that isn’t possible: that sort of data isn’t actually gathered during the search operation (gathering it would significantly increase search times). The tool goes through the image pixel by pixel and evaluates the local gradient by matching it against all transformations covered by the classifier. For every transformation that matches on this single feature, the corresponding counter in an accumulator is incremented by one, and the result candidates are then the positions with the highest accumulator values. No connection is ever made between the different evaluated features, so the information you have in mind simply isn’t present once the image has been processed.
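To make the point concrete, here is a minimal Python sketch of that accumulator-style voting. All names (`gradient_dir`, `model_features`, `vote`) are illustrative, not the real API, and the model is simplified to pure translations:

```python
import numpy as np

def vote(gradient_dir, model_features, tol=0.1):
    """gradient_dir: 2-D array of local gradient directions (radians),
    NaN where there is no edge. model_features: list of
    (row_offset, col_offset, direction) relative to the candidate origin.

    Each pixel that matches a SINGLE feature casts one independent vote
    for the implied origin position. No record is kept of WHICH feature
    produced a given vote, so per-feature match info is lost by design.
    """
    h, w = gradient_dir.shape
    acc = np.zeros((h, w), dtype=int)
    rows, cols = np.nonzero(~np.isnan(gradient_dir))
    for r, c in zip(rows, cols):
        for dr, dc, d in model_features:
            if abs(gradient_dir[r, c] - d) < tol:  # match on this one feature
                orow, ocol = r - dr, c - dc
                if 0 <= orow < h and 0 <= ocol < w:
                    acc[orow, ocol] += 1  # anonymous vote in the accumulator
    return acc  # result candidates = highest values in acc
```

As the code shows, once the loop finishes only the vote totals remain; reconstructing which features contributed would require a second, much more expensive pass.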
(The approach is different for e.g. Minos: Minos goes through the image pixel by pixel and determines how many features of the entire classifier match at the investigated position, so there it would be easy to gather such a list on the fly.)
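For contrast, a sketch of the Minos-style evaluation under the same simplified, hypothetical model (again, the names are mine, not the real API). Because the whole feature set is tested together at each position, the list of matched features falls out for free:

```python
import numpy as np

def match_counts(gradient_dir, model_features, tol=0.1):
    """For each candidate origin, test the ENTIRE feature set and count
    how many features match. The indices of the matched features are
    available on-the-fly as a by-product of the evaluation."""
    h, w = gradient_dir.shape
    best_count, best_pos, best_matched = 0, None, []
    for r in range(h):
        for c in range(w):
            matched = [i for i, (dr, dc, d) in enumerate(model_features)
                       if 0 <= r + dr < h and 0 <= c + dc < w
                       and abs(gradient_dir[r + dr, c + dc] - d) < tol]
            if len(matched) > best_count:
                best_count, best_pos, best_matched = len(matched), (r, c), matched
    # best_matched is exactly the kind of per-feature list you asked about
    return best_count, best_pos, best_matched
```

The trade-off is visible in the loop structure: this per-position evaluation retains the feature-level detail but has to touch every feature at every position, which is why the voting approach above is used when speed matters.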