MP211 Principles of Audio Technology


MP211 Principles of Audio Technology
Guide to Electronic Measurements
Copyright Stanley Wolfe. All rights reserved.
Berklee College of Music

GUIDE TO ELECTRONIC MEASUREMENTS AND LABORATORY PRACTICE
Selected Notes from the 2nd Edition, by Stanley Wolf

This document draws from material presented in the first three chapters of Stanley Wolf's Guide to Electronic Measurements and Laboratory Practice, 2nd Edition, as appropriate for MP 211 students. The reader is encouraged to consult the text for further information and background.

Electrical Measurements
Charge, Voltage and Current

Electrical Charge
Atoms consist largely of electrically charged particles. The nucleus of an atom is a central core consisting of protons (which have a positive charge) and neutrons. The nucleus is surrounded by a swarm of electrons. The electron has an electric charge that is equal in magnitude but opposite in polarity to the charge of a proton. Therefore, an electrically neutral atom must contain an equal number of electrons and protons.

If electrons are removed from an atom, that atom is no longer electrically neutral but instead has a net positive charge. If electrons are removed from many neutral atoms of a substance and are then removed from the boundaries of the substance, the entire substance acquires a positive charge. By the same token, if a neutral substance acquires extra electrons, it acquires a net negative charge. If two adjacent substances both acquire net positive or net negative charges (called polarities), they will repel each other. If two adjacent substances acquire different polarities or charges, they will attract each other. The forces of electricity are derived from these attractions and repulsions, and from the migration of electrons from one atom and one substance to another in response to those forces of attraction and repulsion.

Voltage
Voltage may be expressed as the force inherent in any electrical charge. It is equivalent to the difference in potential energy (see a physics text) between any two adjacent substances. Often, the electrical charge of the earth itself is used as a reference and is defined as 0 volts (hence the term "ground").

Current
Electrical current is defined as the number of charges (electrons) moving past a given point in a circuit in one second. The unit of current is the ampere (current itself is often expressed by the letter I). It may be thought of simply as the flow of electrons in a circuit in response to the forces of attraction and repulsion (voltages) acting upon that circuit. Most of the currents found in electric circuits involve the motion of electrons in solids (and in vacuums, in the case of vacuum tubes). Mostly, we concern ourselves with the flow of current in solid metal conductors (wires). Electrical conductors contain essentially free electrons, which can move about quite easily within the boundaries of the conductor. When an electric field or force is applied to the conductor, these electrons move in response to the applied field. The total number of electrons that move past some cross-sectional area of the wire per unit of time yields the magnitude of the current. The apparent velocity of current flow (the speed at which the electric field propagates along the conductor) is essentially the velocity of light, even though the individual electrons drift far more slowly.
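Since one ampere corresponds to one coulomb of charge passing a point each second, the definition can be made concrete by counting electrons. The following is a minimal Python sketch, assuming only the standard value of the elementary charge (about 1.602 × 10^-19 coulombs); the function name is illustrative, not from the text.

```python
# Rough sketch: how many electrons per second correspond to a given current?
# Assumes the standard elementary charge; names are illustrative only.

ELEMENTARY_CHARGE = 1.602e-19  # coulombs per electron

def electrons_per_second(current_amperes: float) -> float:
    """One ampere is one coulomb of charge passing a point per second."""
    return current_amperes / ELEMENTARY_CHARGE

if __name__ == "__main__":
    for amps in (1.0, 0.001):
        print(f"{amps} A  ~  {electrons_per_second(amps):.2e} electrons per second")
```

A current of 1 A works out to roughly 6.2 × 10^18 electrons passing the measurement point every second.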

Electrical Units
The following units are used in connection with electricity:

Quantity      Unit      Abbreviation
Time          second    s
Current       ampere    A
Voltage       volt      V
Resistance    ohm       Ω
Impedance     ohm       Z
Power         watt      W
Frequency     hertz     Hz

Sine Waves, Frequency and Phase
The instantaneous values of electrical signals can be graphed as they vary with time. Such graphs are known as the waveforms of the signals. Signal waveforms are analyzed and measured in many electrical applications. Generally speaking, if the value of a signal waveform remains constant with time, the signal is referred to as a direct-current (DC) signal. An example of a DC signal is the voltage supplied by a battery. If a signal is time-varying and has positive and negative instantaneous values, the waveform is known as an alternating-current (AC) waveform. If the variation is continuously repeated (regardless of the shape of the repetition), the waveform is called a periodic waveform.

The most basic waveform is the sinusoid. It is a waveform whose energy is contained at a single frequency, or rate, only. The amplitude of the sine wave describes the maximum value of the waveform, also called the peak value. In electrical signals, it is usually expressed in volts. The frequency, f, of the sine wave is defined as the number of cycles of that waveform occurring in one second. The time duration of any single cycle is called its period, T. The frequency and the period of any periodic waveform are inversely proportional and may be related to each other by the expression:

f = 1/T

If two sine waves of identical frequency exist simultaneously, the difference in their values is a function of their phase angle (φ), which may be thought of as their relative difference in time, expressed in degrees of the cycle of one period.

Average and Root Mean Square Values
The value of a DC signal is relatively easy to measure at any point in time. However, an AC signal varies in both amplitude and polarity over its period, and measurement of the voltage value at any one point in time during that period yields incomplete information about that AC signal. Therefore, when waveforms possess time-varying shapes, it is no longer sufficient to measure the value of the quantity they represent at only one instant of time. It is not possible from one measurement to determine all that must be known about the signal. However, if the shape of a time-varying waveform can be determined, it is possible to calculate some characteristic values of the waveform shape (such as its average value).
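Because frequency and period are reciprocals, a couple of worked values make the relationship concrete. The following is a minimal Python sketch; the example frequencies are chosen only for illustration.

```python
# Frequency and period are reciprocals: f = 1/T, so T = 1/f.
# The example frequencies are illustrative values only.

def period_seconds(frequency_hz: float) -> float:
    return 1.0 / frequency_hz

for f in (60.0, 440.0, 1000.0):          # mains hum, concert A, 1 kHz test tone
    print(f"{f:7.1f} Hz -> period {period_seconds(f) * 1000:.3f} ms")
```

For example, a 60 Hz sine wave has a period of about 16.7 ms, while a 1 kHz tone repeats every 1 ms.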

These values can be used to compare the effectiveness of various waveforms with other waveforms, and they can also be used to predict the effects that a particular signal waveform will have on the circuit to which it is applied. The two most commonly used characteristic values of time-varying waveforms are their average and their root-mean-square (RMS) values.

Average Value
The average value of a time-varying current waveform over its period T is the value that a DC current would have to have if it delivered an equal amount of electron charge in that same period T. Therefore the average value of any periodic waveform is found by dividing the area under the curve in one period T by the length of the period, so that:

A_av = (area under curve in one period) / (length of period, in seconds)

Note that the average value of a sinusoidal waveform (sine wave) is zero!

Root-Mean-Square Values
The second common characteristic value of a time-varying waveform is its root-mean-square (RMS) value. This value is used much more often than the average value to describe electrical signal waveforms. This is because the average value of symmetrical waveforms is zero, as noted above; such a value does not provide much useful information about the properties of the signal. The RMS value of a waveform refers to its power-delivering capability. In connection with this interpretation, the RMS value is sometimes called the effective value. This name is used because the RMS value is equal to the value of a DC waveform that would deliver the same power if it replaced the time-varying waveform.
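The claim that a sine wave averages to zero over one full period can be checked numerically. The following is a minimal Python sketch, assuming a unit-amplitude sine sampled evenly over one period; the sample count and names are illustrative.

```python
# Numerically approximate the average value of one period of a sine wave.
# "Area under the curve" is approximated by summing evenly spaced samples.
import math

N = 10_000                                   # samples per period (illustrative)
samples = [math.sin(2 * math.pi * n / N) for n in range(N)]

average = sum(samples) / N                   # area / period, in discrete form
print(f"average of one full period: {average:.6f}")   # ~0 for a symmetrical sine
```

The positive half-cycle cancels the negative half-cycle, which is exactly why the average value alone says little about a symmetrical AC waveform.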

To determine the RMS value of a waveform, we first square the magnitude of the waveform at each instant. (This makes the value of the magnitude positive for both positive and negative values of the original waveform.) Then the average (or mean) of these squared values is taken. Finally, the square root of this average is taken to get the result. Because of the sequence of calculations that is followed, the result is given the name root-mean-square.

When referring to sine waves, it is customary to describe them in terms of their RMS values. For example, the 115-volt, 60 Hz voltage that is delivered by electric power companies to domestic electricity consumers is really a sine wave whose peak amplitude is about 163 volts and whose RMS value is 115 volts. For a sine wave, the ratio between the RMS value and the peak value is 0.707:1. For a square wave, that ratio is 1:1.

Language of Digital Measurement Systems
Signal-handling systems can be divided into two broad categories: analog systems and digital systems. In analog systems the information is processed and displayed in analog form. The measured quantity is an analog quantity (i.e., a quantity whose value can vary in a continuous manner). In digital systems, the measurement information is processed and displayed in digital form. In digital systems the original information may also be acquired in the form of an analog electrical signal, but the signal is then converted to a digital signal (via a process known as quantization) for further processing and display. A digital electrical signal has the form of a group of discrete and discontinuous pulses.

Virtually all digital data formats are based on the fact that signal levels in digital systems are restricted to binary values (i.e., only one of two possible values). These two values are represented by the symbols 1 and 0, which are known as binary digits. A single binary digit is often referred to as a bit. The digits of the decimal numbering system (0, 1, 2, 3, ... 9) are known as decimal digits. A system using 16 symbols, called hexadecimal, uses the 10 decimal digits plus the first 6 letters of the alphabet (0 1 2 3 4 5 6 7 8 9 A B C D E F).

To represent a value of measured data in digital form, a group of such binary digits must be used. A value such as 25 (in decimal) could be represented in binary as 11001. This number consists of 5 bits. Electronic digital systems are typically designed to function by handling data formatted in groups containing a specific number of bits. Each decimal digit or alphabetic character may be represented by a group made up of a unique combination of bits. Such groups are known as digital words, and usually contain 8, 16 or 32 bits. Eight-bit words have acquired their own designation and have come to be known as bytes. Note that the left-most bit of a digital word is known as the most significant bit (MSB) and the right-most bit is the least significant bit (LSB).

Digital systems are designed to transfer digital words from one part of the system to another. Such transfers can be done in either a serial or a parallel fashion. In serial transmission, one bit of the digital word at a time is sent from one part of the system to the other, and only one signal path is required. In parallel transmission, all the bits of the word are transmitted simultaneously, and this requires an individual signal path for each bit.
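The square-then-average-then-root sequence is easy to verify numerically, and doing so reproduces the 0.707:1 and 1:1 ratios quoted above. The following is a minimal Python sketch under idealized assumptions (finely sampled, unit-amplitude waveforms); the function and variable names are illustrative.

```python
# Root-mean-square: square each sample, take the mean, then the square root.
# One period each of a unit-amplitude sine and an ideal square wave.
import math

def rms(samples):
    return math.sqrt(sum(x * x for x in samples) / len(samples))

N = 10_000
sine   = [math.sin(2 * math.pi * n / N) for n in range(N)]
square = [1.0 if n < N // 2 else -1.0 for n in range(N)]

print(f"sine   RMS/peak ratio: {rms(sine):.4f}")    # ~0.707
print(f"square RMS/peak ratio: {rms(square):.4f}")  # 1.000

# The mains example: an RMS value of 115 V corresponds to a peak of about 163 V.
print(f"peak of a 115 V RMS sine: {115 / 0.707:.0f} V")
```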

Several block diagrams referring to digital signal handling are described below.

The first shows an analog signal being amplified (in the analog realm), converted (i.e., quantized) to an array of digital numbers in an analog-to-digital converter (A/D), and then sent (as an array of bits) to any of several digital devices, such as a printer, a digital display and/or a computer.

The second shows the basis for quantization (in either D/A or A/D conversion): a measured voltage of less than 2 volts equals a "0" and a measured voltage of greater than 2 volts equals a "1".
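The 2-volt threshold described above is, in effect, a one-bit quantizer; extending the same idea to more bits produces the digital words discussed earlier. The following is a minimal Python sketch assuming a hypothetical 0-to-4-volt input range and an 8-bit word; the range, resolution, and function names are illustrative and not taken from the text.

```python
# Sketch of quantization: mapping a measured voltage onto a digital word.
# Assumes a hypothetical 0-4 V input range; values and names are illustrative.

def one_bit_quantize(volts: float, threshold: float = 2.0) -> int:
    """The single-threshold decision described in the text: <2 V -> 0, >2 V -> 1."""
    return 1 if volts > threshold else 0

def quantize_to_word(volts: float, full_scale: float = 4.0, bits: int = 8) -> str:
    """Map a voltage onto an n-bit binary word (MSB on the left, LSB on the right)."""
    levels = (1 << bits) - 1                                   # 255 steps for 8 bits
    code = round(max(0.0, min(volts, full_scale)) / full_scale * levels)
    return format(code, f"0{bits}b")

print(one_bit_quantize(1.3), one_bit_quantize(3.1))   # 0 1
print(quantize_to_word(2.5))                          # e.g. '10011111'
print(format(25, "b"))                                # decimal 25 -> '11001', as in the text
```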

A third drawing shows the analog representation (used for signal storage and transmission) of a digital number: in this case, the decimal number nine, converted to an 8-bit binary word, or byte (10001001). (The 1 at the beginning of the word marks the start of the word and is not part of the number 9, which is 1001 in binary.) In the final drawing, a binary stream of bits is sent to a digital-to-analog converter (D/A), where it is converted to an analog signal, which is then amplified and sent to any appropriate analog device.

Experimental Data and Errors
Measurements play an essential role in substantiating the laws of science. They are also essential for studying, developing and observing devices, processes and systems. The process of measurement itself involves many steps before it yields a useful set of information. For the purpose of studying measurement, the process of measurement can be viewed as a sequence of five operations:

1. Design an efficient and effective measurement system/setup. This includes the proper selection of equipment, correct interconnection, and verification of correct operation.
2. Correct and intelligent operation of the measurement system/setup.
3. Recording the data obtained from the measurement system in a manner that is clear and complete. This documentation should provide unambiguous and accurate data for any future interpretation.
4. Establishing (via estimation) the accuracy of the measurements and the magnitudes of possible attendant errors.

5. Preparing a report that describes the measurements and results for those who may be interested and who may need to use them.

All five of these items must be successfully completed before a measurement is truly useful.

Measurement Recording and Reporting
The original data sheet is a most important document. Mistakes can be made in transferring information, and therefore copies cannot have the validity of an original. If disputes arise, the original data sheet is the basis from which they are resolved (even in courts of law). It is essential to label, record and annotate data carefully and completely as they are taken. A short statement at the head of the data sheet should explain the purpose of the test and list the variables to be measured. Items such as the date, wiring diagrams used, equipment models and serial numbers, and unusual instrument behavior should all be included. The measurement data themselves should be neatly tabulated and properly identified. (All this should emphasize the fact that writing data on scrap paper and trusting memory to record data are not acceptable procedures. Such practices will certainly lead to the eventual loss of valuable data and the use of invalid and inaccurate data.) In general, the record of the experiment on the data sheet should be complete enough to specify exactly what was done and, if need be, to provide an accurate and effective guide for duplicating the work at a later date.

The report presented at the end of a measurement should also be carefully prepared. Its objective is to explain what was done and how it was accomplished. It should give the results that were obtained, as well as an explanation of their significance. In addition to containing all pertinent information and conclusions, the report must be clearly written with proper attention to spelling and grammatical structure. To aid in organizing the report and to avoid omitting important information, an outline and rough draft should always be used. The rough draft can later be polished to produce a concise and readable document. The report should consist of three sections:

1. Abstract of results and conclusions
2. Essential details of the procedure, analysis, data and error estimates
3. Supporting information, calculations and references

In industrial and scientific practice, the abstract is likely to be read by higher-level managers and other users who are scanning reports for possible information contained in the report body. The details, on the other hand, will probably be read by those needing specific information contained in the report or by others wanting to duplicate the measurement in some form. The latter groups will be interested in the details on the data sheets, the analysis of the level of accuracy, and the calculations and results that support the conclusions and recommendations. For these readers, the references from which source material and information were obtained should also be provided.

The results and conclusions of the report form its most important parts. The measurement was made to determine certain information and to answer some specific questions. The results indicate how well these goals were met.

Graphical Presentation of Data
Graphical presentation is an efficient and convenient way of portraying and analyzing data. Graphs are used to help visualize analytic expressions, to interpolate data, and to discuss errors. Graphs should always contain a title, the date the data was taken, and adequately labeled and scaled axes. A sharp pencil and straightedge should be used to draw the curves, to ensure neat and legible graphs. Plotting data on a graph as they are taken allows unexpected data points to be rechecked before an experimental setup is dismantled. It is typical that the independent variable be plotted along the horizontal axis (the X-axis) and the dependent variable along the vertical axis (the Y-axis). The data points are typically shown as small circles, the diameter of which can be proportional to the estimated error of the readings.

In addition to normal linear graphs, there are graphs utilizing linear/logarithmic axes, logarithmic axes, and polar plots. Linear/logarithmic graphs (called "semilog" or "lin/log") are graphs where one axis is scaled in a linear way and the other in a logarithmic way. Logarithmic graphs use logarithmic scales for both the vertical and horizontal axes (so-called "log/log"). Polar plots are single-axis graphs (similar to pie charts) that show the dependent variable as a line around a point, where the independent variable is degrees of a circle. These special graphs are all used extensively in audio, to the point where they are more prevalent than normal linear graphs.

Precision and Accuracy
In measurement analysis the terms accuracy and precision are often misunderstood and used incorrectly. Although they are taken to have the same meaning in everyday speech, there is a distinction between their definitions when they are used in descriptions of experimental measurements. The accuracy of a measurement specifies the difference between the measurement and the true value of a quantity. The deviation from the true value is the indication of how accurately a reading has been made. Precision, on the other hand, specifies the repeatability of a set of readings, each made independently with the same instrument. An estimate of precision is determined by the deviation of a reading from the mean (average) value.

For example, consider a defective measuring instrument. The instrument may be giving a result that is highly repeatable, yet far from accurate. The precision of the measurements made with that device would therefore be good, but the accuracy would be poor. It should be noted that precision does not guarantee accuracy, but accuracy is limited by the precision of a measuring system. If an instrument is specified to be accurate to within 10%, that means that no measurement will differ from the actual value of the measured item by more than plus or minus 10%. If the precision of that instrument is specified as 1%, then no measurement of the measured item will vary from the mean of repeated readings by more than plus or minus 1%.

Errors in Measurement
Errors are present in every experiment. They are inherent in the act of measurement itself. Since perfect accuracy is not attainable, a description of each measurement should include an attempt to evaluate the magnitudes and sources of its errors. From this point of view, an awareness of errors and their classification into general groups is a first step toward reducing them and minimizing their effect.
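The distinction between precision and accuracy can be seen with a small set of numbers. The following is a minimal Python sketch using made-up readings from a hypothetical voltmeter with a constant offset: the readings are tightly grouped (good precision) but consistently wrong (poor accuracy).

```python
# Illustration of precision vs. accuracy with made-up readings.
# A hypothetical voltmeter with a constant offset: repeatable, but not accurate.

true_value = 10.00                               # volts (assumed for illustration)
readings = [10.52, 10.49, 10.51, 10.50, 10.48]   # tightly grouped, offset by ~0.5 V

mean = sum(readings) / len(readings)
accuracy_error = mean - true_value                    # how far from the true value
spread = max(abs(r - mean) for r in readings)         # how repeatable the readings are

print(f"mean reading:   {mean:.3f} V")
print(f"accuracy error: {accuracy_error:+.3f} V  (poor accuracy)")
print(f"max deviation:  {spread:.3f} V  (good precision)")
```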

Sometimes a specific reading taken during an experiment is rather far from the mean value. If faulty functioning of the measurement instruments is suspected as the cause of such unusual data, the value can be rejected. However, even such data should be retained on the data sheet, properly annotated as suspect and rejected. Even when all items involved in a measurement setup appear to be operating properly, unusual data may still be observed. We can use a guide, based on statistical evaluation, to help decide when it is permissible to reject suspect data: individual measurement readings taken when all the instruments of a measurement setup appear to be operating properly may be rejected when their deviation from the average value is more than four times the probable error of one observation. Such a random error will occur less than 1% of the time, so it remains highly probable that some external influence affected the measurement. Keep in mind that when a large error does occur, it may signal the commission of a major measurement or system error. An attempt to locate such an error should be undertaken. Also, keeping (with proper annotation) such rejected readings can be of assistance in finding the extent and cause of error.

Principles of Measurement and Errors in Outline Form

1. Quantification
The conversion of a continuum to discrete increments.

2. Interpolation
The estimation of more precise discrete increments within the given quantification.

3. Accuracy
Accuracy is a specification of the error between the true value and the measured value.

4. Precision and reliability
The precision with which a measurement is made is an expression of the consistency of the measurement and the range of variation of repeated measurements. A measurement could be extremely precise, but inaccurate. Reliability is a function of the verified accuracy of a measurement and the precision of the set of measurements leading to that expression. It is a prediction of the likelihood of error in a set of measurements.

5. Measurement Errors

   1. Human
      A. Examples: Faulty reading of data, faulty calculations, poor choice of instruments, incorrect setup or adjustment, failure to account for side effects
      B. Mathematical estimation: Not possible
      C. Ways to reduce or eliminate: Careful attention to detail; awareness of instrumentation limits and problems; use of multiple observers; use of multiple readings; motivation and awareness of the need for results

   2. System
      A. Examples
         1. Equipment: Mechanical friction; calibration errors; damaged equipment; data tainted during transmission

            a. Mathematical estimation: Comparison with a standard; determining whether the error is constant or proportional
            b. Elimination or reduction: Calibration; inspection; correct application of corrections after errors have been found; use of multiple methods of measurement
         2. Environmental: Changes in temperature, humidity, and electrical and magnetic fields
            a. Mathematical estimation: Careful monitoring of variables, and calculation of predicted changes
            b. Elimination or reduction: Seal equipment; maintain temperature and humidity; shield equipment from electromagnetic and radio-frequency radiation; use equipment that is not affected by these factors

   3. Random
      Examples: Unknown effects
      Mathematical estimation: Use of many readings and application of the laws of probability
      Elimination or reduction: Careful design of equipment; use of statistical methods for evaluation

Statistical Evaluation of Measurement Data and Errors
Statistical methods can be very helpful in allowing one to determine the probable value of a quantity from a limited group of data. Further, the probable error of one observation and the extent of uncertainty in the best answer can also be determined. However, a statistical evaluation cannot improve the accuracy of a measurement. The laws of probability utilized by statistics operate only on random errors, not on system errors. Therefore, errors caused by the measurement system must be comparatively small compared to the random errors if the results of the statistical evaluation are to be meaningful. If the "zero adjustment" on an instrument is incorrectly set, statistical treatment will not remove this error. But a statistical analysis of two different measurement methods may reveal the discrepancy. In this way, the measurement of precision can lead to the detection of inaccuracy.

The following quantities are normally calculated using statistics:
1. Average or mean value of a set of measurements
2. Deviation from the average value
3. Average value of the deviations
4. Standard deviation (related to the concept of RMS)
5. Probability of error size in one observation

1. Average or mean value. The most likely value of a measured quantity is found from the arithmetic average or mean (both words mean the same thing) of the set of readings taken. The more readings that are taken, the more reliable the average will be.

The average value is calculated:

a_av = (a_1 + a_2 + ... + a_n) / n

where:
a_av = average value
a_1, a_2, a_3, ... = value of each reading
n = number of readings

2. Deviation from the average value. This number indicates the departure of each measurement from the average value. The value of the deviation may be either positive or negative.

3. Average value of the deviations. This value indicates the precision of the measurement. If there is a large average deviation, it is an indication that the data taken varied widely and the measurement was not very precise. The average value of the deviations is found by taking the absolute magnitudes (disregarding any minus signs) of the deviations and computing their mean.

4. Standard deviation and variance. The average deviation of a set of measurements is only one of the methods of determining the dispersion of a set of readings. However, the average deviation is not as mathematically convenient for manipulating statistical properties as the standard deviation (also known as the root-mean-square, or RMS, deviation). The standard deviation is found from the formula:

s = sqrt( (d_1² + d_2² + d_3² + ... + d_n²) / (n - 1) )

where:
s = standard deviation
d_1, d_2, d_3, ... = deviations from the average value
n - 1 = one less than the number of measurements taken

The variance V is the square of the standard deviation:

V = s²

5. Probable size of error and the Gaussian distribution. If a random set of errors about some average value is examined, we find that their frequency of occurrence relative to their size is described by a curve known as a Gaussian curve (or bell-shaped curve). Gauss was the first to discover the relationship expressed by this curve. It shows that the occurrence of small deviations from the mean value is much more probable than the occurrence of large deviations; in fact, it shows that large deviations are extremely unlikely. The curve also indicates that random errors are equally likely to be positive or negative. If we use the standard deviation as a measure of error, we can use the curve to determine the probability that an error greater than a certain value of s will occur in each observation.
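The quantities defined above can be computed directly from a set of readings. The following is a minimal Python sketch using hypothetical data; it follows the formulas as given, including the n - 1 divisor in the standard deviation.

```python
# Mean, deviations, average deviation, standard deviation (n - 1 divisor),
# and variance for a small set of hypothetical readings.
import math

readings = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2]       # made-up sample data
n = len(readings)

a_av = sum(readings) / n                             # 1. average (mean) value
deviations = [a - a_av for a in readings]            # 2. deviation of each reading
avg_dev = sum(abs(d) for d in deviations) / n        # 3. average of the |deviations|
s = math.sqrt(sum(d * d for d in deviations) / (n - 1))   # 4. standard deviation
V = s ** 2                                           #    variance

print(f"mean = {a_av:.3f}, average deviation = {avg_dev:.3f}")
print(f"standard deviation = {s:.3f}, variance = {V:.4f}")
```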

Error e (in units of s)    Probability of an error greater than +e (or less than -e) in one observation
0.675                      0.250
1.0                        0.159
2.0                        0.023
3.0                        0.0015

6. Probable error. From the above table we can determine the probable error that will occur if only one measurement is taken. Since a random error is equally likely to be positive or negative, an error of magnitude greater than 0.675 s (in either direction) is expected in 50% of the observations. Therefore, the probable error of one measurement is:

r = ±0.675 s
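Tying this together with the earlier data-rejection guideline (a reading may be rejected when its deviation from the average exceeds four times the probable error of one observation), both quantities can be computed in a few lines. The following is a minimal Python sketch with made-up readings; computing the statistics from the apparently well-behaved readings and then testing the suspect one against them is one reasonable interpretation, not a procedure spelled out in the text.

```python
# Probable error r = 0.675 * s, and the rejection guideline from the text:
# a reading may be rejected when its deviation from the average exceeds 4 * r.
import math

readings = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2]    # made-up, apparently well-behaved data
n = len(readings)
mean = sum(readings) / n
s = math.sqrt(sum((a - mean) ** 2 for a in readings) / (n - 1))
r = 0.675 * s                                     # probable error of one observation

suspect = 11.9                                    # an additional, unusually large reading
deviation = abs(suspect - mean)
print(f"mean = {mean:.2f}, s = {s:.3f}, probable error r = {r:.3f}")
if deviation > 4 * r:
    print(f"reading {suspect} deviates by {deviation:.2f} > 4r: reject, but annotate and keep it on the data sheet")
else:
    print(f"reading {suspect} is within 4r of the mean: keep it")
```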