Approximate computing is a novel paradigm that promises to reduce resource usage while still delivering satisfactory performance to end users. Its fundamental premise is that the result of a computation often does not need to be perfectly correct. Approximate computing has been demonstrated at various levels of the computing stack: at the hardware level, approximate adders have been designed that trade a lower probability of a correct result for reduced energy consumption, while at the compiler level, optimisations omit certain lines of code to lower the energy needed for video encoding. However, approximate computing lacks general applicability. An incorrect addition may be acceptable if it merely shuffles the order of suggested search results, but not if it causes a chronic patient to be reminded to take the wrong medication. For the paradigm to be widely embraced, it is crucial to automatically discover, and adapt to, situations in which approximate computation both delivers acceptable results and saves substantial resources. The utility of a nearly correct outcome depends on the context in which the result is used; gauging the limits of result acceptability across different contexts and different users is therefore essential to make approximate computation transparent to end users.
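To make the hardware-level idea concrete, the following is a minimal software sketch of one published family of approximate adders, the lower-part-OR adder: the low-order bits are combined with a bitwise OR instead of a full carry chain, which in hardware shortens the critical path and saves energy at the cost of occasional small errors. The function name and the choice of scheme are illustrative assumptions, not taken from the text above.

```python
def approx_add(a: int, b: int, k: int) -> int:
    """Lower-part-OR approximate addition (illustrative sketch).

    The low k bits are OR-ed (no carry propagation, the approximation);
    the remaining high bits are added exactly.
    """
    mask = (1 << k) - 1
    low = (a & mask) | (b & mask)          # approximate: drops low-order carries
    high = ((a >> k) + (b >> k)) << k      # exact addition of the high parts
    return high | low


# When no carries occur in the low bits, the result is exact:
print(approx_add(10, 5, 2))   # 15, same as 10 + 5
# When low-order carries are dropped, a small error appears:
print(approx_add(3, 1, 2))    # 3, whereas the exact sum is 4
```

The error is bounded by the magnitude of the discarded low bits, which is why such designs suit error-tolerant workloads (e.g. image processing) but not safety-critical ones like the medication-reminder example above.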