x_normalized = (x - x_minimum) / range of x
The normalization formula is one way to process data to get easily comparable results within a data set and across several different data sets. It can be useful for anyone who is interpreting data, but those who are working with large amounts of data and machine learning may use it most frequently. You can learn about the normalization formula to understand whether it's the right approach to process your data set.
In this article, we discuss what the normalization formula is, how to use it, a variation for getting results within a custom range and the differences between the normalization formula and other statistical normalization processes.
What is the normalization formula?
The normalization formula is a statistics formula that can transform a data set so that all of its values fall between zero and one. This can be helpful when comparing two or more data sets with different scales. Applying the normalization formula lets you express data points as values from zero to one, with the smallest data point having a normalized value of zero and the largest data point having a normalized value of one. All the other data points have decimal values between these two, in proportion to where each data point falls within the range of the data set.
Example: If a data set had values of 2, 4 and 6, the normalized value of the first data point would be zero, the normalized value of the last data point would be one and the normalized value of the middle data point would be 0.5 since it's halfway between the two.
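The example above can be sketched in a few lines of Python (a minimal illustration; the variable names are our own):

```python
# Min-max normalization of a small data set.
data = [2, 4, 6]
x_min, x_max = min(data), max(data)

# Each point is expressed as its position within the range [x_min, x_max].
normalized = [(x - x_min) / (x_max - x_min) for x in data]
print(normalized)  # [0.0, 0.5, 1.0]
```

The smallest value maps to zero, the largest to one, and the midpoint to 0.5, matching the example.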
Related: How To Find Mean, Median and Mode
What is the normalization formula used for?
Normalization is useful in statistics for creating a common scale to compare data sets with very different values. This normalization formula, also called scaling to a range or feature scaling, is most commonly used on data sets when the upper and lower limits are known and when the data is relatively evenly distributed across that range.
Professionally, data analysts may use a normalization technique to mine or process data. It can also be useful for prediction modeling and forecasting. Some teachers and exam companies use normalization to grade exams when the questions are of varying difficulty, since the normalization process can distribute scores more evenly over a range and compensate for exams that may have more difficult questions.
How to use the normalization formula
Here are the steps to use the normalization formula on a data set:
1. Calculate the range of the data set
To find the range of a data set, find the maximum and minimum values in the data set, then subtract the minimum from the maximum. Arranging your data set in order from smallest to largest can help you find these values easily. Here's the formula:
Range of x values = x_maximum - x_minimum
Example: A scientist is using the normalization formula to analyze a set of data. They ran their experiment four times, and their results were 12, 25, 28 and 32. The largest data point in the set is 32, and the smallest is 12.
Range of x values = 32 - 12 = 20
Read more: How To Calculate Statistical Range
2. Subtract the minimum x value from the value of this data point
Next, take the x value of the data point you're analyzing and subtract the minimum x value from it. You can start with any data point in your set.
Example: The scientist's first data point is 25, so the scientist subtracts the minimum x value from that:
x - x_minimum = 25 - 12 = 13
3. Insert these values into the formula and divide
The final step of applying this formula to an individual data point is to divide the difference between the specific data point and the minimum by the range. In this process, that would mean taking the result of step two and dividing it by the result from step one.
Example: For this data point, the scientist fills in the complete equation:
x_normalized = (x - x_minimum) / range of x = 13 / 20 = 0.65
This result falls between zero and one, which is consistent with a correct application of the normalization formula.
Related: 17 Jobs That Use Statistics
4. Repeat with additional data points
Since the normalization formula is useful for analyzing and comparing complete sets of data, it's important to apply it to each data point so that you can compare your whole set. You might automate this with a spreadsheet program to save time.
Example: The scientist completes their analysis by using the normalization formula on the remaining three data points, 12, 28 and 32. Their results are 0, 0.8 and 1.
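The four steps above can be combined into a single helper function. Here is a minimal Python sketch (the function name is our own) applied to the scientist's data set:

```python
def normalize(data):
    """Min-max normalize a data set so all values fall between 0 and 1."""
    x_min, x_max = min(data), max(data)
    x_range = x_max - x_min  # step 1: calculate the range
    # steps 2-4: subtract the minimum and divide, for every data point
    return [(x - x_min) / x_range for x in data]

results = normalize([25, 12, 28, 32])
print(results)  # [0.65, 0.0, 0.8, 1.0]
```

A spreadsheet formula such as `=(A1-MIN(A:A))/(MAX(A:A)-MIN(A:A))` accomplishes the same thing if you prefer not to write code.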
Normalization formula for custom ranges
While this normalization formula brings all results into a range between zero and one, there is a variation on the normalization formula to use if you're trying to put all data within a custom range where the lowest value is a and the highest value is b:
x_normalized = a + ((x - x_minimum) * (b - a)) / range of x
This formula may be better if you're normalizing values for a particular use, like scoring exams or comparing data on a scale from one to 10.
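As a sketch, the custom-range variation looks like this in Python (function and variable names are our own), here rescaling the scientist's data to a one-to-ten scale:

```python
def normalize_to_range(data, a, b):
    """Scale a data set so its values fall between a (minimum) and b (maximum)."""
    x_min, x_max = min(data), max(data)
    x_range = x_max - x_min
    return [a + ((x - x_min) * (b - a)) / x_range for x in data]

scores = normalize_to_range([12, 25, 28, 32], 1, 10)
print(scores)  # [1.0, 6.85, 8.2, 10.0] (up to floating-point rounding)
```

Setting a = 0 and b = 1 recovers the standard zero-to-one formula.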
Similar analysis techniques in statistics
Other normalization techniques in statistics can help data analysts and scientists modify their data for other purposes. Here are some other common normalization techniques:
Z-score normalization is useful in machine learning settings since it can tell you how far a data point is from the average of the whole data set. It can be most appropriate when there are just a few outliers, since it provides a simple way to compare a data point to the norm. You might calculate a z-score when comparing data sets that are likely to be similar because of some genetic or experimental reason, like a physical attribute of an animal or results within a certain time frame.
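A z-score is computed as (x - mean) / standard deviation. A minimal Python sketch using the standard library (the function name is our own):

```python
from statistics import mean, pstdev

def z_scores(data):
    """Return how many standard deviations each point lies from the mean."""
    mu = mean(data)
    sigma = pstdev(data)  # population standard deviation
    return [(x - mu) / sigma for x in data]

print(z_scores([12, 25, 28, 32]))
```

Unlike min-max normalization, the results are not confined to zero and one: points below the mean get negative scores, and the transformed data has mean zero and standard deviation one.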
Read more: How To Calculate a Z-score
Feature clipping is the process of removing data points beyond a certain minimum or maximum. It's useful for removing extreme outliers from a data set. For instance, a scientist studying items orbiting a certain planet may remove all items orbiting beyond a certain distance, so that they can be sure the items they look at are orbiting the specific planet and not just flying nearby.
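Following the article's description of removing points beyond a cutoff, a minimal Python sketch might look like this (the function name is our own):

```python
def clip(data, lower, upper):
    """Keep only the data points that fall within [lower, upper]."""
    return [x for x in data if lower <= x <= upper]

print(clip([3, 50, 7, 1200, 12], 0, 100))  # [3, 50, 7, 12]
```

Note that some practitioners instead cap out-of-range values at the boundary rather than discarding them; which behavior you want depends on whether the outliers are errors or merely extreme observations.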
Log scaling is a method that uses logarithms to compress a wide range into a smaller range. This means that the distances between the data before and after the scaling process may not be proportional. It's best for measuring many natural phenomena, like the magnitude of earthquakes, the brightness of stars and acidity.
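A short sketch of log scaling with base-10 logarithms (the function name is our own):

```python
import math

def log_scale(data):
    """Compress a wide range of positive values using base-10 logarithms."""
    return [math.log10(x) for x in data]

print(log_scale([10, 1000, 1_000_000]))  # approximately [1.0, 3.0, 6.0]
```

Values spanning five orders of magnitude end up within a span of five, which is why logarithmic scales suit quantities like earthquake magnitude and acidity (pH). Log scaling only applies to positive values.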
How is normalization different from standardization?
Normalization generally refers to processes that achieve scales between zero and one, while standardization uses a principle called the standard deviation to describe the distribution of the data points. Calculating a z-score is a standardization process, since the results can fall outside the zero-to-one range. Normalization places data points within the range proportionally to the minimum and maximum of the range, while standardization relates data points to the mean, or average, of all the data points.