Models exist in order to simplify the modeler’s relations with the world. We may gain in precision by adding more detail to a model, but this may reduce its usefulness.
We assume that even the most complex symbol system, like the brain, has a correct and detailed physical description, at least in principle, but we recognize that a correct model need not be a useful model. Recall Einstein’s reply when asked if everything has a correct physical description. He said, “Yes, that is conceivable, but it would be of no use. It would be a picture with inadequate means, just as if a Beethoven symphony were presented as a graph of air pressure.”
— Pattee (2004)
Maynard Smith and Szathmáry (1999, 146) take Einstein’s point a bit further:
… complex systems can best be understood by making simple models. … the point of a model is to simplify, not to confuse. … if one constructs a sufficiently complex model one can make it do anything one likes by fiddling with the parameters: a model that can predict anything predicts nothing.
According to Peirce, the main ‘difficulties of explanatory science’ have not been that adequate hypotheses were in short supply, but ‘that different and inconsistent hypotheses would equally account for the observed facts’ (EP2:467).