Today Aris is once again writing in his role as head of Cybenetics, and his remarks refer (not only) to the measurement and evaluation of power supply units. His comments can certainly be generalized, because the principle is usually the same everywhere. Many readers will still remember Fritz Hunter’s earlier explanations of the fan tests; today’s piece is an interesting addition to them, with an important distinction between error and uncertainty. But now it is better to let Aris speak for himself, because it is fairly dense material and therefore well suited as weekend reading (the article was originally published on hwbusters.com):
What is a measurement?
Our lab performs measurements on a wide variety of products throughout the day, from GPUs and CPUs to power supplies and fans. But before we delve deeper into today’s topic, let’s clarify some basic questions. A measurement is a recorded property of an object. For example, the efficiency of a power supply under certain conditions is a measurement, as is the voltage of the 12V rail read off a multimeter. To carry out a measurement, you need a measuring instrument, such as a multimeter or a thermometer. A measurement always consists of two parts: a numerical value and the corresponding unit, for example a temperature of 28 degrees Celsius or a power of 120 watts.
What is measurement uncertainty and how can it be expressed?
With every measurement, there is a certain amount of uncertainty regarding the result. How can I know that a particular laboratory will deliver precise measurements? Can I be sure that the thermometer I am using will give accurate readings? Even in state-of-the-art laboratories, such as those used by NASA, there is always some uncertainty as to whether a measurement result is 100% accurate. This uncertainty is called measurement uncertainty. Since every measurement is subject to a certain amount of uncertainty, this uncertainty must be quantified. Two values can describe the uncertainty:
- Interval: The range in which the actual value lies with a high probability.
- Confidence level: The probability with which the true value lies within the specified interval.
An example to illustrate this:
Assume a power supply has an efficiency of 90% at 50% load with a measurement uncertainty of ±0.1% and a confidence level of 95%. This is indicated as follows:
90% ±0.1% with a confidence level of 95%.
This means that we can assume with 95% probability that the actual efficiency is between 89.9% and 90.1%.
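For illustration, here is a minimal Python sketch that turns a measured value, an uncertainty, and a confidence level into exactly this kind of interval statement. The numbers are simply the ones from the example above.

```python
# Minimal sketch: express a measurement result as value ± uncertainty,
# using the efficiency example from the text.

def uncertainty_interval(value, uncertainty):
    """Return the lower and upper bound of the interval value ± uncertainty."""
    return value - uncertainty, value + uncertainty

efficiency = 90.0   # measured efficiency in %
u = 0.1             # measurement uncertainty in percentage points
confidence = 95     # confidence level in %

low, high = uncertainty_interval(efficiency, u)
print(f"{efficiency} % ±{u} % -> true value between {low} % and {high} % "
      f"(with {confidence} % confidence)")
```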
Types of measurement uncertainty
There are two methods for determining measurement uncertainty:
- Type A: Based on statistical analysis.
- Type B: Based on additional available information, such as calibration certificates or manufacturer specifications.
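As a rough illustration of the two approaches, here is a short sketch. The Type A part simply uses the statistical scatter of repeated readings (the standard deviation of the mean); the Type B part assumes, purely as an example, a manufacturer tolerance that is converted under a rectangular-distribution assumption. All numbers are invented for illustration.

```python
import math
import statistics

# Type A: uncertainty estimated from repeated readings of the same quantity.
readings = [12.02, 12.05, 12.03, 12.04, 12.03]   # invented 12V-rail readings in V
s = statistics.stdev(readings)                    # estimated standard deviation (n-1 divisor)
u_type_a = s / math.sqrt(len(readings))           # standard uncertainty of the mean

# Type B: uncertainty taken from other information, here an assumed
# manufacturer tolerance of ±0.01 V, treated as a rectangular distribution.
tolerance = 0.01
u_type_b = tolerance / math.sqrt(3)

print(f"Type A standard uncertainty: {u_type_a:.4f} V")
print(f"Type B standard uncertainty: {u_type_b:.4f} V")
```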
What is the difference between error and uncertainty?
These two terms are fundamentally different, so it is useful to explain them in more detail.
- Error refers to the deviation between a measured value and the actual, true value. This deviation can be caused by measurement errors or technical limitations. Errors can be divided into two categories:
- Systematic errors, which are constant and predictable, for example due to incorrect calibration of a measuring device.
- Random errors, which are unpredictable and inconsistent, for example due to noise or environmental influences.
- Uncertainty, on the other hand, describes the degree of doubt about a measurement result, i.e. how certain or uncertain we are about the value determined.
To summarize: Uncertainty indicates the range in which the true value probably lies, while error describes the actual difference between the measured and true value.
Errors can be corrected by calibrating measuring devices and optimizing measurement procedures. However, if the source of the error is unknown, for example when using an uncalibrated device, it is not possible to determine how far the measurement result is from the true value. Without this information, the measurement uncertainty cannot be calculated correctly. Therefore, it is necessary in every laboratory to use calibrated measuring devices to provide reliable uncertainty information.
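The difference between the two error categories can be made tangible with a small simulation. The sketch below assumes a known “true” value and adds an invented constant offset (systematic error, e.g. a mis-calibrated instrument) plus random noise (random error); in a real measurement the true value is of course unknown.

```python
import random

true_value = 12.00        # assumed true voltage in V (known only in this simulation)
systematic_offset = 0.05  # constant calibration error (invented for illustration)
noise_level = 0.02        # magnitude of random fluctuations (invented)

random.seed(1)
measurements = [
    true_value + systematic_offset + random.gauss(0, noise_level)
    for _ in range(5)
]

for m in measurements:
    # error = measured value minus true value; it contains both components
    print(f"measured {m:.3f} V, error {m - true_value:+.3f} V")
```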
Accuracy vs. uncertainty
Accuracy describes how close a measured value is to the true or generally accepted value. Uncertainty, on the other hand, indicates the variability of a measurement result, i.e. how much it could deviate from reality. While accuracy describes the correctness of a measurement, uncertainty refers to its reliability and confidence interval.
Multiple measurements for more precise results?
The well-known saying “Measure twice, cut once” describes the principle of carrying out a measurement several times in order to avoid errors. The same applies to scientific and technical measurements. In practice, three to five measurements are often taken to ensure that they are within a consistent range. If one value deviates significantly from the others, it can be identified as erroneous and excluded. For multiple measurements, two statistical tools are used:
- Arithmetic mean: This is the average of all measured values, calculated as the sum of the values divided by their number. Example: The arithmetic mean of 2, 4 and 6 is (2 + 4 + 6) / 3 = 4. The more measured values there are, the more reliable the mean becomes, although the benefit of each additional measurement diminishes.
- Standard deviation: This indicates how strongly the individual measured values scatter around the mean (a short calculation sketch follows this list).
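A minimal sketch of both tools, using Python’s statistics module and a set of invented repeated readings:

```python
import statistics

# Invented repeated measurements of the same quantity (e.g. power in watts)
values = [120.2, 119.8, 120.1, 120.4, 119.9]

mean = statistics.mean(values)   # arithmetic mean: sum of values / number of values
sd = statistics.stdev(values)    # standard deviation: scatter around the mean

print(f"mean = {mean:.2f} W, standard deviation = {sd:.2f} W")
```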
Deviation of the measurement results
It can be frustrating when repeated measurements of an identical object give different results. However, these deviations are useful for quantifying the measurement uncertainty. A large variation indicates a high uncertainty, while a low variation indicates a more accurate measurement. A simple comparison between the highest and lowest value is often not sufficient. Instead, the standard deviation (SD) is used to evaluate the spread. A low SD means that the values are close to the average value, while a high SD indicates large fluctuations.
A rule of thumb is that about 68% of all measured values fall within one standard deviation of the mean, while 95% fall within two standard deviations. The exact standard deviation requires a very large amount of data, which is rarely available in practice. Instead, the estimated standard deviation (s) is used, which is calculated with a limited number of measured values.
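The distinction between the “exact” standard deviation (over a very large data set) and the estimated standard deviation s shows up in how the spread is computed: s divides by n − 1 instead of n. In Python’s statistics module this is the difference between stdev and pstdev; the values below are invented.

```python
import statistics

values = [120.2, 119.8, 120.1, 120.4, 119.9]

s_estimated = statistics.stdev(values)     # estimated SD, divides by n-1 (limited sample)
sd_population = statistics.pstdev(values)  # population SD, divides by n

print(f"estimated s = {s_estimated:.3f}, population SD = {sd_population:.3f}")
```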
Distribution pattern
Measured values are not always evenly distributed. In many cases, they follow a normal or Gaussian distribution, where most values are close to the mean and only a few are very different. If, on the other hand, the values are evenly distributed between the highest and lowest values, this is referred to as a uniform or rectangular distribution. There are other types of distribution, but a detailed look at these is beyond the scope of this article.
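To get a feel for the two distribution patterns, here is a short sketch that draws random values from a normal and from a uniform distribution and compares how strongly they cluster around the mean. The parameters are arbitrary illustration values; note how the normal case reproduces the roughly 68% rule mentioned above, while the uniform case does not.

```python
import random
import statistics

random.seed(42)

# Normal (Gaussian) distribution: most values close to the mean
normal_samples = [random.gauss(100, 2) for _ in range(10_000)]

# Uniform (rectangular) distribution: values spread evenly between the limits
uniform_samples = [random.uniform(94, 106) for _ in range(10_000)]

for name, samples in [("normal", normal_samples), ("uniform", uniform_samples)]:
    m = statistics.mean(samples)
    sd = statistics.stdev(samples)
    share = sum(abs(x - m) <= sd for x in samples) / len(samples)
    print(f"{name:8s}: {share:.0%} of values within one standard deviation of the mean")
```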