First, you might want to get rid of the imbalance in the training set. Having a balanced training set (i.e. all classes have the same number of samples) should always be the goal. Otherwise you create a bias toward the class that has the most (or more) samples, which distorts the classifier's performance and makes your evaluation less accurate. In your case, you create a bias toward the Failed class; you've provided many more samples for that class, so the classifier "knows" it much better. This is usually not desirable.
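One common way to balance the set is to randomly undersample the majority class down to the size of the minority class. A minimal sketch (the list names and file names are made up for illustration; substitute your own sample lists):

```python
import random

random.seed(0)  # reproducible sampling

# Hypothetical sample lists standing in for your two classes.
failed = [f"failed_{i}.png" for i in range(1200)]  # majority class
ok = [f"ok_{i}.png" for i in range(300)]           # minority class

# Undersample the majority class so both classes are equally represented.
n = min(len(failed), len(ok))
balanced = random.sample(failed, n) + ok

print(len(balanced))  # 600 samples, 300 per class
```

Note that undersampling throws away data; if you can, collecting more samples for the minority class is usually the better fix.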
Your screenshot does not reflect the 300/1200 split of your classes; it states a 1496/391 split. Where do the additional images come from?
Regarding the training parameters, it is hard (or impossible) to say what will improve your results without seeing the actual training set images. But you could start with only "a" in your pre-processing code for maximum accuracy and detail (again, this also depends on the images/use case) and toggle "Interpolate". You could also increase the feature resolution.
Also, you already have an accuracy of about 98% (given 1887 samples and 37 errors), which is already really good for almost any use case. I would not expect much improvement beyond that; you will never reach 100% in practice.
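To spell out where the ~98% comes from (using the sample and error counts quoted above):

```python
# Accuracy from the numbers in the question: 37 errors out of 1887 samples.
total_samples = 1887
errors = 37

accuracy = (total_samples - errors) / total_samples
print(f"{accuracy * 100:.2f}%")  # 98.04%
```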