It means that the data available to construct the displayed image exceeds one sample per displayed pixel.
For example, if you had a 256 x 256 data array that you were displaying at 64 x 64 pixel resolution, then the data would be sufficient to give you subpixel accuracy.
The extra data does not need to be uniformly spaced for the purpose of being "subpixel": if you have more (non-redundant) data than you have discrete pixels, then the term "subpixel" applies.
Another way of phrasing it is that your available data must be down-sampled for output purposes.
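To make the 256 x 256 example concrete, here is a minimal numpy sketch of that down-sampling: each displayed pixel averages a 4 x 4 block of underlying samples (block averaging is just one possible reduction, chosen here for simplicity; the function name is mine, not a standard one).

```python
import numpy as np

def downsample_block_mean(data, factor):
    """Down-sample a 2-D array by averaging non-overlapping factor x factor blocks."""
    h, w = data.shape
    assert h % factor == 0 and w % factor == 0
    return data.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# A 256 x 256 "data array" displayed at 64 x 64: each output pixel
# summarizes a 4 x 4 block of underlying samples.
data = np.arange(256 * 256, dtype=float).reshape(256, 256)
display = downsample_block_mean(data, 4)
print(display.shape)  # (64, 64)
```

Each of the 4096 display pixels here is backed by 16 samples, which is exactly the "more data than pixels" situation the term describes.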
One field where having extra data is common is CGI (Computer-Generated Imagery). When you are doing animation, for example, the extra data can allow you to be more realistic in constructing lighting, shadows, and specular reflections.
Likewise, suppose you were going to output a low-pass filtered version of an image. You could get a more accurate representation of "real life" if you applied the low-pass filter to the full (extra-data) version and then converted that down to your final resolution.
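A rough numpy sketch of that ordering, assuming a simple separable box blur as the low-pass filter (the filter choice and helper names are mine, purely for illustration): filter at full resolution first, then reduce, rather than reducing first and filtering the already-decimated data.

```python
import numpy as np

def box_blur(img, k):
    # Separable k x k box (moving-average) low-pass filter, same-size output.
    kernel = np.ones(k) / k
    img = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='same'), 1, img)
    img = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode='same'), 0, img)
    return img

def block_mean(data, f):
    h, w = data.shape
    return data.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

rng = np.random.default_rng(0)
full = rng.random((256, 256))
# Low-pass filter the full (extra-data) version, then convert down:
filtered_first = block_mean(box_blur(full, 5), 4)
# versus filtering only after the data has already been reduced:
reduced_first = box_blur(block_mean(full, 4), 5)
print(filtered_first.shape, reduced_first.shape)  # both (64, 64)
```

The two results have the same shape but differ in value, because the first pipeline lets the filter see detail that the second has already thrown away.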
An analogy in signal processing would be a situation in which you only need to output (say) 60 Hz for your signal, but you sample your input signal at (say) 720 Hz, apply all the processing and filtering to that 720 Hz signal, and then down-sample to 60 Hz for output. If you have more precise input data, using it will give you more precise results (except for round-off error perhaps). It might, however, be "more expensive" (time, electricity, complexity) to work with extra data, so it is not always done.
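The 720 Hz / 60 Hz analogy can be sketched in a few lines of numpy, using a crude block-average as the anti-alias step (a real pipeline would use a proper decimation filter; the specific 5 Hz test tone and the averaging step are my assumptions for illustration):

```python
import numpy as np

fs_in, fs_out = 720, 60             # sample at 720 Hz, output at 60 Hz
factor = fs_in // fs_out            # decimation factor of 12
t = np.arange(0, 1, 1 / fs_in)      # one second of input signal
signal = np.sin(2 * np.pi * 5 * t)  # a 5 Hz tone, well below the 30 Hz output Nyquist

# Crude anti-alias step: average each block of 12 input samples,
# then keep one output sample per block (i.e. down-sample to 60 Hz).
out = signal.reshape(-1, factor).mean(axis=1)
print(len(out))  # 60 samples for one second at 60 Hz
```

All the processing happens on the 720 Hz data; only the final step reduces to the 60 Hz output rate, which is the extra cost (12x the samples to filter) the answer mentions.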