With the arrival of digital audio it became practical to move from 16 to 24 bits, which raises the resolution from 65,536 levels to more than 16 million. That prompts the question of how many levels the human ear can actually distinguish. Many digital devices, for instance, offer a master volume with only sixteen steps (zero to fifteen) or even fewer. Is that enough? Some people have very acute hearing; another remarkable characteristic of the auditory system, however, is its ability to adapt itself to different circumstances.
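The jump in resolution follows directly from the bit depth. A minimal sketch (the function names here are illustrative, not from the text) of the arithmetic, including the common approximation of about 6 dB of dynamic range per bit:

```python
def levels(bits: int) -> int:
    """Number of distinct amplitude levels for a given bit depth."""
    return 2 ** bits

def dynamic_range_db(bits: int) -> float:
    """Approximate dynamic range: roughly 6.02 dB per bit."""
    return 6.02 * bits

for bits in (16, 24):
    print(f"{bits}-bit: {levels(bits):,} levels, ~{dynamic_range_db(bits):.0f} dB")
# 16-bit: 65,536 levels, ~96 dB
# 24-bit: 16,777,216 levels, ~144 dB
```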
This effect can be noticed after using a set of headphones that emphasizes a certain frequency range, bass for instance. After a while, the brain accommodates to that source. Switching to another set whose frequency response is flat, the listener then perceives a lack of bass.
There is also the risk of the brain losing objectivity when the source is a recognized piece of equipment, or has some singular feature that in theory improves the sound. Some people believe that an expensive audiophile system sounds better, but this may simply be bias. The best way to avoid both the downside of adaptation and biased judgments can be achieved almost without effort: blind A-B testing, in which the listener is kept unaware of which source is A and which is B, so he or she can judge objectively.
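The blinding step above can be sketched in a few lines. This is a hypothetical illustration, not a protocol from the text: the two sources are shuffled behind the labels "A" and "B" on every trial, and a `listen` callback (an assumed stand-in for the human listener) returns the label it prefers, so brand knowledge cannot influence the choice.

```python
import random

def blind_ab_trial(source_x: str, source_y: str, listen, rng: random.Random) -> str:
    """Present two sources under hidden labels A/B in random order and
    return the name of the source the listener actually preferred."""
    pair = [source_x, source_y]
    rng.shuffle(pair)                          # listener never sees the real names
    labels = {"A": pair[0], "B": pair[1]}
    choice = listen("A", "B")                  # listener answers "A" or "B"
    return labels[choice]

# Example: a listener with no genuine preference picks a label at random;
# over many trials the preferences should split roughly evenly.
rng = random.Random(42)
coin_flip = lambda a, b: rng.choice([a, b])
results = [blind_ab_trial("expensive", "cheap", coin_flip, rng) for _ in range(100)]
print(results.count("expensive"), "of 100 trials preferred the expensive source")
```

If a listener reliably picks one source across many blinded trials, the preference is real; if the split is near fifty-fifty, the perceived difference was likely bias.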
Using this method, people have discovered several aspects of human hearing, and it remains the ultimate proof for settling any such discussion.