**Accuracy** is a [[Classification|classification]] [[Error Metrics|error metric]] that measures how close a set of predictions is to its expected labels. In measurement terms, accuracy describes systematic error: low accuracy reflects a statistical bias that shifts predictions away from their true values. It is often paired with (and sometimes mistaken for) [[Precision|precision]]. Its formal expression is: $ \textrm{Accuracy} = \frac{\textrm{correct classifications}}{\textrm{all samples}} $ In binary classification, the correct classifications are the sum of true positives and true negatives. In multiclass classification this is also referred to as top-1 accuracy, to distinguish it from more forgiving metrics like **top-5 accuracy**, where a prediction counts as correct if the true class falls within the top 5 predicted labels.
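
The definitions above can be sketched in plain Python. This is a minimal illustration, not a library implementation; the function names and the assumption that labels are integers and that top-k predictions arrive as per-sample lists ranked by score are my own.

```python
def accuracy(predictions, labels):
    """Top-1 accuracy: fraction of predictions that exactly match their labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)


def top_k_accuracy(ranked_predictions, labels, k=5):
    """A sample counts as correct if its label appears among the
    top-k classes, where each sample's classes are ranked by score."""
    correct = sum(y in ranked[:k] for ranked, y in zip(ranked_predictions, labels))
    return correct / len(labels)


# Binary example: 3 of 4 predictions match their labels.
print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75

# Top-2 example: label 2 is not the top prediction for the first
# sample, but it appears in its top 2, so the sample still counts.
print(top_k_accuracy([[0, 2, 1], [1, 0, 2]], [2, 2], k=2))  # 0.5
```

In the binary case, the 3 matches are exactly the true positives plus the true negatives from the formula above.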