Big Ideas: Moving beyond ‘trusting your gut’

The output of a computer program predicts a big storm will hit your city. You’re the mayor and you have to decide whether or not the computer’s prediction is to be trusted.

Another computer program says a skyscraper will not vibrate dangerously in the prevailing winds if it’s built according to the specs programmed into the model. You’re the consulting design engineer and you have to sign a legal document attesting to this conclusion.

For perhaps a more gripping example, a recent PhD thesis uses differential equations and computer simulation to assess the safety of patients taking ‘holidays’ from an onerous treatment plan. The computer models say it’s safe, but you’re the doctor, and some of your patients’ lives hang on whether you believe the computer or not.


All of these vignettes were made up for this article, but each has similarities with real situations. Computer simulation is a principal tool of modern science, engineering, economics and medicine. Lives do depend on the reliability of the simulations, and more impersonally, so does money. In many cases, the computer simulations are indispensable.

There is no ‘trusting your gut.’

On the other hand, we all know that many people trust computers. Computers are right so often at simple things that we feel they must have been programmed correctly for the important things: national security, finance, medicine. If not, we feel betrayed: someone, somewhere, hasn’t done their job.

Some people know just how hard the problems being tackled are. Nonlinearity and high dimensionality translate almost instantly into problems we know (really know, in the sense of mathematical proof) are impossible to solve in general. Not only do we know that computers are sometimes wrong, owing to bugs; there are times when they must be wrong, because what we ask of them is impossible.
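A toy illustration of this point (my own example, not from the article): even a one-line nonlinear recurrence, the chaotic logistic map x → 4x(1−x), defeats long-range prediction. Two starting values that agree to twelve decimal places soon disagree completely, so any rounding error the computer makes must eventually dominate the answer.

```python
def first_divergence(x0, eps, tol=0.1, max_steps=200):
    """Iterate the logistic map x -> 4x(1-x) from two nearby starting
    values; return the step at which the orbits first differ by more
    than tol (or None if they never do within max_steps)."""
    a, b = x0, x0 + eps
    for n in range(1, max_steps + 1):
        a = 4.0 * a * (1.0 - a)
        b = 4.0 * b * (1.0 - b)
        if abs(a - b) > tol:
            return n
    return None

# Start two orbits that agree to 12 decimal places.
n = first_divergence(0.3, 1e-12)
print(n)  # after a few dozen steps, the agreement is gone
```

The tiny gap roughly doubles each step, so no finite precision can postpone the breakdown for long; it can only delay it by a few iterations per extra digit carried.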

Yet, we want to do the best we can.

We want our computer simulations to be robust (meaning they never fail with ‘did not converge’ or ‘data out of range’ errors), fast, and reliable. Oh, and cheap, while we’re making a wish list. Sometimes this is possible. Sometimes we get to ‘choose any two.’ Sometimes we have to take what we can get.
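A sketch of why ‘did not converge’ errors exist at all (my own toy example, with names I chose for illustration): Newton’s method, a workhorse of simulation codes, started at x = 0 on the equation x³ − 2x + 2 = 0, falls into the cycle 0 → 1 → 0 → … and never reaches the real root near −1.77. A robust solver must cap its iterations and admit defeat rather than loop forever.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: repeat x <- x - f(x)/f'(x).
    Returns (x, True) on convergence, (x, False) if it gives up."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x = x - step
        if abs(step) < tol:
            return x, True        # converged
    return x, False               # 'did not converge'

f = lambda x: x**3 - 2*x + 2
fp = lambda x: 3*x**2 - 2

root, ok = newton(f, fp, x0=0.0)    # trapped in the 2-cycle 0 -> 1 -> 0
print(ok)                           # False
root2, ok2 = newton(f, fp, x0=-2.0)
print(ok2, root2)                   # True, near -1.7693
```

The method itself is fine; the starting guess is not. That the same code succeeds from one start and fails from another is exactly the kind of fragility a simulation pipeline has to report honestly.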

But when? And how do we know when our computers have done a good job?

A simpler question is, how do we know when our computer simulations have been (reasonably) faithful to the model we’ve built?

The new discipline of ‘uncertainty quantification’ relies on older ideas of error analysis and statistics to begin to address these questions. The philosophical ‘Big Idea’ is that some of these tools shed new light on old philosophical problems, too. How do we know what we know? How do we know the truth when we see it?
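One of the simplest tools in that kit can be sketched in a few lines (a minimal example of my own, with a made-up model): instead of reporting a single number, push the known uncertainty in an input through the simulation by random sampling, and report the spread of the outputs as an error bar.

```python
import random
import statistics

def model(k):
    """A stand-in 'simulation': a nonlinear response growing like k**3."""
    return k**3

random.seed(0)
k_nominal, k_sigma = 2.0, 0.05   # input measured to about 2.5 per cent

# Monte Carlo propagation: sample the uncertain input many times.
samples = [model(random.gauss(k_nominal, k_sigma)) for _ in range(10_000)]
mean = statistics.mean(samples)
sd = statistics.stdev(samples)
print(f"output = {mean:.2f} +/- {sd:.2f}")  # a prediction with an error bar
```

Note that a 2.5 per cent uncertainty in the input becomes roughly a 7.5 per cent uncertainty in the output, because cubing triples small relative errors. That amplification, made visible, is the whole point.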



This suggests that investigations into what we call ‘computational epistemology,’ the study of the knowledge yielded by computational methods and of its reliability, might be very fruitful.

Distinguished University Professor Robert Corless is cross-appointed to the departments of Applied Mathematics, Philosophy and Computer Science.