Given a set of 128x128 images from three classes, I obtained 50% accuracy with an SVM trained on the flattened images (16384 'features' per image).
Is this an upper bound on the performance of an SVM using any features extracted from the images?
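For concreteness, a minimal sketch of the setup (the images and labels below are random stand-ins, and names such as `X_imgs` are placeholders; any SVM implementation would do, scikit-learn is just one choice):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data: N grayscale 128x128 images and labels for three classes.
rng = np.random.default_rng(0)
X_imgs = rng.random((300, 128, 128))
y = rng.integers(0, 3, size=300)

# Hold out a test set, then flatten each image into a 16384-dim feature vector.
X_tr_imgs, X_te_imgs, y_train, y_test = train_test_split(
    X_imgs, y, stratify=y, random_state=0)
X_train = X_tr_imgs.reshape(len(X_tr_imgs), -1)
X_test = X_te_imgs.reshape(len(X_te_imgs), -1)

# SVM on the raw flattened pixels.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print("raw-pixel accuracy:", clf.score(X_test, y_test))
```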
> Is this an upper bound on the performance of an SVM using any features extracted from the images?

No, at least not necessarily.
From the comments:
> If the classifier reaches a certain accuracy by using all the available information (the raw pixel data), is it even possible to improve this by using any feature extraction methods on the data?
Yes. I can imagine two scenarios:
1. You overfit to all of the pixels. Reducing the dimension of the feature space through some feature extraction technique leads to a simpler model with less opportunity to overfit, possibly improving out-of-sample performance (see the PCA sketch after this list).
2. You use your domain knowledge to extract useful features that an SVM would struggle to figure out on its own, and these features improve its ability to distinguish between the categories (see the HOG sketch after this list).
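To illustrate the first scenario, here is a sketch that puts a PCA step in front of the SVM, reusing the variables from the sketch in the question. PCA is just one stand-in for "some feature extraction technique", and 50 components is an arbitrary choice:

```python
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Project the 16384 raw pixels onto 50 principal components before the SVM.
# A much lower-dimensional input gives the model less room to overfit.
pca_clf = make_pipeline(StandardScaler(), PCA(n_components=50), SVC(kernel="rbf"))
pca_clf.fit(X_train, y_train)
print("PCA + SVM accuracy:", pca_clf.score(X_test, y_test))
```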
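And to illustrate the second scenario, a sketch using histogram-of-oriented-gradients descriptors (via scikit-image) as one example of a hand-crafted, domain-informed feature. Again the variables come from the earlier sketch, and the HOG parameters are arbitrary:

```python
import numpy as np
from skimage.feature import hog
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# HOG descriptors summarize local edge orientations, a kind of structure an
# SVM on raw pixels would have to discover on its own.
def hog_features(imgs):
    return np.array([hog(im, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
                     for im in imgs])

hog_clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
hog_clf.fit(hog_features(X_tr_imgs), y_train)
print("HOG + SVM accuracy:", hog_clf.score(hog_features(X_te_imgs), y_test))
```

Whether either of these actually beats the raw-pixel baseline depends on the data; the point is only that neither is ruled out by the 50% figure.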