Orders of approximation

In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows.
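As a minimal sketch of this classification (the functions, names, and input sizes below are illustrative, not taken from the text above), consider one routine whose work grows linearly with the input and another whose work grows quadratically:

```python
import time

def linear_scan(items, target):
    """O(n): examines each element at most once."""
    for x in items:
        if x == target:
            return True
    return False

def has_duplicate(items):
    """O(n^2): compares every pair of elements."""
    n = len(items)
    for i in range(n):
        for j in range(i + 1, n):
            if items[i] == items[j]:
                return True
    return False

if __name__ == "__main__":
    # Doubling n roughly doubles the time of the O(n) routine but
    # roughly quadruples the time of the O(n^2) routine.
    for n in (1_000, 2_000, 4_000):
        data = list(range(n))
        start = time.perf_counter()
        has_duplicate(data)
        elapsed = time.perf_counter() - start
        print(f"n={n}: {elapsed:.4f}s")
```

Big O notation abstracts away constant factors and lower-order terms, so both the exact comparison counts and the machine speed are irrelevant to the classification: only the growth rate as n increases matters.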
In analytic number theory, big O notation is often used to express a bound on the difference between an arithmetical function and a better-understood approximation.
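A classical instance (a standard fact, offered here as an illustration rather than drawn from the text above) comes from the prime number theorem, where the prime-counting function π(x) is approximated by x/ln x:

```latex
% The error of the leading approximation to \pi(x) from the
% prime number theorem, expressed with big O notation.
\[
  \pi(x) \;=\; \frac{x}{\ln x} \;+\; O\!\left(\frac{x}{(\ln x)^{2}}\right)
\]
```

Here the big O term asserts that the difference between the arithmetical function π(x) and the approximation x/ln x is bounded by a constant multiple of x/(ln x)² for all sufficiently large x.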
