EECS 6

III. Signal Similarity Measures

In many situations, we need a quantitative measure of the similarity of two signals. For example, suppose x(t) is the signal some system should ideally produce and y(t) is the signal the system actually produces. Then, as a measure of how well the system has performed, we need a quantitative measure of how similar y(t) is to x(t). As another example, suppose r(t) is a measured signal that is either the "desired" signal s1(t) plus some measurement noise, or the "desired" signal s2(t) plus some measurement noise, and suppose a system must be built that decides which of the two desired signals the measured signal r(t) contains. Such a system needs a signal similarity measure in order to compare r(t) to s1(t) and r(t) to s2(t). In summary, signal similarity measures are needed as quantitative performance measures for the systems we design and as an integral piece of certain systems. In the following we introduce and discuss the two most important signal similarity measures.

A. Difference Energy, Mean-Squared Difference and Mean-Squared Error

The difference energy between signals x(t) and y(t) is simply the energy of the difference signal x(t) - y(t). For continuous-time signals, the difference energy over the time interval (t1, t2) is

    E(x-y) = ∫_{t1}^{t2} (x(t) - y(t))² dt.

Similarly, for discrete-time signals x[n] and y[n], the difference energy over the time interval [n1, n2] is

    E(x-y) = Σ_{n=n1}^{n2} (x[n] - y[n])².

A closely related signal similarity measure is the mean-squared difference (MSD) between signals x(t) and y(t), which is simply the mean-squared value of the difference signal x(t) - y(t). For continuous-time signals, the MSD over the time interval (t1, t2) is

    MSD(x,y) = 1/(t2 - t1) ∫_{t1}^{t2} (x(t) - y(t))² dt = 1/(t2 - t1) E(x-y).

Similarly, for discrete-time signals, the MSD over the time interval [n1, n2] is

    MSD(x,y) = 1/(n2 - n1 + 1) Σ_{n=n1}^{n2} (x[n] - y[n])² = 1/(n2 - n1 + 1) E(x-y).
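For finite-length discrete-time signals, the two definitions above translate directly into code. The following is a small illustrative sketch in Python (the function names and sample signals are invented for illustration):

```python
def difference_energy(x, y):
    """Energy of the difference signal: sum of (x[n] - y[n])**2 over the record."""
    return sum((a - b) ** 2 for a, b in zip(x, y))

def msd(x, y):
    """Mean-squared difference: difference energy divided by the number of samples."""
    return difference_energy(x, y) / len(x)

x = [1.0, 2.0, 3.0, 4.0]
y = [1.0, 2.5, 2.5, 4.0]
print(difference_energy(x, y))  # 0.5
print(msd(x, y))                # 0.125
```

Note that identical signals give a difference energy of zero, and that MSD is just the difference energy averaged over the record length.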
When one of the signals, say x(t), is considered to be the "desired" signal and the other, say y(t), is considered to be an approximation to it, then the difference signal x(t) - y(t) is considered to be an error signal, and the mean-squared difference is called the mean-squared error, abbreviated MSE(x,y) or simply MSE. MSE is considered a measure of the quality of y(t) as an approximation to x(t), with small MSE indicating good quality. In many situations, the significance of a particular value of MSE depends on the size or strength of the signal x(t). For example, a given MSE value is considered large if the squared values of the desired signal are mostly smaller than it, and is considered small if the squared values of the desired signal are much larger than it. For such reasons, it is common to use signal-to-noise ratio as a measure of signal quality, defined by

    SNR(x,y) = σ²(x) / MSE,

where σ²(x), the variance of x(t), is used as the measure of signal size. A large signal-to-noise ratio indicates good quality.

January, 3 -- DLN -- P: Intro to sigs and systems
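As a sketch of how MSE and SNR interact, the fragment below (Python; the signal values are made up, and variance is used as the signal-size measure, as above) shows a small constant offset producing a small MSE relative to the signal's variance, and hence a large SNR:

```python
def mse(x, y):
    """Mean-squared error between a desired signal x and an approximation y."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def variance(x):
    m = sum(x) / len(x)
    return sum((a - m) ** 2 for a in x) / len(x)

def snr(x, y):
    """Signal-to-noise ratio: variance of the desired signal divided by MSE(x, y)."""
    return variance(x) / mse(x, y)

x = [0.0, 2.0, 0.0, -2.0] * 25   # desired signal; variance 2
y = [a + 0.1 for a in x]         # approximation with a small constant offset
print(snr(x, y))                 # about 200: small error relative to signal size
```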
B. Signal Correlation

Another measure of the similarity of signals x(t) and y(t) is their correlation, which is defined

    C(x,y) = ∫_{t1}^{t2} x(t) y(t) dt,

where (t1, t2) is the time interval of interest. Similarly, the correlation between two discrete-time signals x[n] and y[n] is defined as

    C(x,y) = Σ_{n=n1}^{n2} x[n] y[n],

where [n1, n2] is the time interval of interest. The discussion to follow focuses on continuous-time signals, but everything applies equally to discrete-time signals. To get a feeling for why correlation is a good measure of signal similarity, consider the signal x(t) shown below, and consider the similarity of each of the signals y1(t), y2(t), y3(t), y4(t) to x(t).

[Figure: the signal x(t); the four signals y1(t), y2(t), y3(t), y4(t); and below each, the product of x(t) with that signal, labeled with its correlation value.]

As a reference, x(t) is shown with a dotted line in each of the above plots. Also shown below each signal is a plot of the product of x(t) with the signal. The correlation between x(t) and the given signal, which is the area under this plot, is also marked on the plot. Intuitively, we see that x(t) is more like y1(t) than the other signals, and this is reflected in C(x,y1) being larger than the other correlations. What is happening is that y1(t) tends to be positive where x(t) is positive and negative where x(t) is negative. Thus, the product x(t) y1(t) is mostly positive, and the correlation C(x,y1) is large. The signal y2(t) has the same sign as x(t) less often. Thus x(t) y2(t) has negative area cancelling some of the positive area, leading to a smaller value of correlation. This is taken to the extreme in x(t) y3(t), for which the positive area is nearly completely cancelled by the negative area, causing C(x,y3) to be nearly zero. The fourth signal, y4(t), almost always has the opposite sign of x(t), causing x(t) y4(t) to be almost entirely negative, leading to C(x,y4) being very negative. These examples show that C(x,y) tends to be large when y(t) follows the same trends as x(t): positive at times that x(t) is positive, negative at times that x(t) is negative.
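The sign behavior just described is easy to reproduce numerically. Below is a small Python sketch; the sinusoidal test signals are invented for illustration and are not the plotted y1(t)-y4(t):

```python
import math

def correlation(x, y):
    """Discrete-time correlation: C(x, y) = sum of x[n] * y[n] over the interval."""
    return sum(a * b for a, b in zip(x, y))

N = 8
x = [math.cos(2 * math.pi * n / N) for n in range(N)]        # reference signal
y_like = [0.9 * a for a in x]                                # same trends as x
y_flip = [-a for a in x]                                     # opposite sign everywhere
y_orth = [math.sin(2 * math.pi * n / N) for n in range(N)]   # sign agrees only half the time

print(correlation(x, y_like))   # large positive
print(correlation(x, y_flip))   # large negative
print(correlation(x, y_orth))   # essentially zero: positive and negative areas cancel
```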
This explains why the everyday word "correlation" is taken as the name for the similarity measure C(x,y). We say that x(t) and y(t) are positively or negatively correlated, according to whether C(x,y) is positive or negative. When C(x,y) = 0,
we say the signals are uncorrelated, meaning that they are very different in the sense that the positivity of one at a given time gives no clues as to the positivity of the other.

As a next set of examples, consider correlating x(t) with y2(t), shown above, and also with y5(t) = 3 y2(t).

[Figure: y2(t) and y5(t), the products x(t)·y2(t) and x(t)·y5(t), and their correlation values.]

We observe that even though x(t) is intuitively no more similar to y5(t) than to y2(t), the correlation C(x,y2) is smaller than the correlation C(x,y5). What is happening is that the correlation is being heavily influenced by the fact that y5(t) is a considerably larger signal than y2(t), i.e. it has much larger energy. In many situations, it is important to prevent correlation from being influenced by signal size. In such cases, it is customary to use normalized correlation, defined by

    C_N(x,y) = C(x,y) / √(E(x) E(y)) = 1/√(E(x) E(y)) ∫_{t1}^{t2} x(t) y(t) dt,

as the signal similarity measure. Here, we have divided C(x,y) by the square root of the energies of both signals. The following lists the values of E(y), C(x,y) and C_N(x,y) for the five signals:

                y1(t)    y2(t)    y3(t)    y4(t)    y5(t)
    E(y)        5.4      5.44     4.95     5.4      49.
    C(x,y)      3.47     .6       -.8      -3.67    6.9
    C_N(x,y)    .89      .53      -.       -.94     .53

We see now that C_N(x,y5) = C_N(x,y2), i.e. normalized correlation is not affected by the size of y5(t). If, as suggested by the example above, normalized correlation is not affected by the sizes of the signals, then there ought to be some largest value that it can have. The following inequality, called the Cauchy-Schwarz inequality, shows that the normalized correlation can never be larger than one, nor less than negative one:

    -√(E(x) E(y)) ≤ C(x,y) ≤ √(E(x) E(y)),

or equivalently,

    -1 ≤ C_N(x,y) ≤ 1.

The proof of this inequality is beyond the scope of the course. (One may find a version of the Cauchy-Schwarz inequality in most linear algebra textbooks.) Notice that if y(t) is simply an amplitude scaling of x(t), as in y(t) = a x(t) for all t, where a > 0, then
in this case we see that

    E(y) = E(ax) = ∫_{t1}^{t2} (a x(t))² dt = a² ∫_{t1}^{t2} x²(t) dt = a² E(x)

and

    C(x,y) = ∫_{t1}^{t2} x(t) · a x(t) dt = a ∫_{t1}^{t2} x²(t) dt = a E(x),

so that

    C(x,y) = a E(x) = √(E(x) · a² E(x)) = √(E(x) E(y)), or equivalently, C_N(x,y) = 1,

i.e. the Cauchy-Schwarz relation holds with equality. In fact, this is the only way to obtain equality. That is, it can be shown that C(x,y) = √(E(x) E(y)), or equivalently C_N(x,y) = 1, when and only when x(t) and y(t) are the same except for a positive multiplicative scaling, i.e. when and only when y(t) = a x(t) for some a > 0 and all t. Similarly, it can be shown that the only way for C(x,y) to equal -√(E(x) E(y)), or equivalently for C_N(x,y) to equal -1, is when and only when x(t) and y(t) are the same except for a negative multiplicative scaling, i.e. when and only when y(t) = a x(t) for some a < 0 and all t. A related fact is that the correlation of a signal with itself equals the signal's energy, i.e. C(x,x) = E(x) for any signal x.

The relation between correlation and difference energy: The relation between difference energy and signal correlation is

    E(x-y) = E(x) - 2 C(x,y) + E(y).

Thus, for example, a large positive correlation C(x,y) implies a small difference energy E(x-y). This relation is demonstrated below:

    E(x-y) = ∫_{t1}^{t2} (x(t) - y(t))² dt
           = ∫_{t1}^{t2} (x²(t) - 2 x(t) y(t) + y²(t)) dt
           = ∫_{t1}^{t2} x²(t) dt - 2 ∫_{t1}^{t2} x(t) y(t) dt + ∫_{t1}^{t2} y²(t) dt
           = E(x) - 2 C(x,y) + E(y).

Since difference energy and correlation are closely related, the choice of which to use is a matter of taste, of convenience, or dependent upon other factors. For example, correlation C(x,y) tends to be preferred over difference energy in situations where one signal, say x, is much larger than the other, y. In this case E(x-y) ≈ E(x), which indicates that E(x-y) depends only very weakly on the smaller signal. Thus, it is very sensitive to noise and computational roundoff errors. In contrast, C(x,y) is always directly influenced by y. For example, when y is much smaller than x, doubling y causes C(x,y) to double, but has little effect on E(x-y). Thus correlation is less sensitive to noise and roundoff errors.
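Both the Cauchy-Schwarz bound on normalized correlation and the identity E(x-y) = E(x) - 2 C(x,y) + E(y) are easy to check numerically. A small Python sketch with randomly generated signals (purely illustrative):

```python
import math
import random

def energy(x):
    return sum(a * a for a in x)

def corr(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm_corr(x, y):
    """Normalized correlation: C(x, y) / sqrt(E(x) * E(y))."""
    return corr(x, y) / math.sqrt(energy(x) * energy(y))

random.seed(0)
x = [random.gauss(0, 1) for _ in range(100)]
y = [random.gauss(0, 1) for _ in range(100)]

# Cauchy-Schwarz: C_N always lies in [-1, 1]; scaling y leaves C_N unchanged;
# C_N = 1 exactly when y is a positive multiple of x.
print(-1.0 <= norm_corr(x, y) <= 1.0)                                   # True
print(abs(norm_corr(x, [3 * b for b in y]) - norm_corr(x, y)) < 1e-12)  # True
print(abs(norm_corr(x, [2.5 * a for a in x]) - 1.0) < 1e-12)            # True

# The identity E(x - y) = E(x) - 2*C(x, y) + E(y).
lhs = energy([a - b for a, b in zip(x, y)])
rhs = energy(x) - 2 * corr(x, y) + energy(y)
print(abs(lhs - rhs) < 1e-9)                                            # True
```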
The uses of correlation in EECS 6: Correlation will be used in a couple of the lab assignments as a method for detecting, classifying or recognizing signals. It will also be seen later that one of the principal analysis techniques that we study (Fourier analysis) and the principal kind of systems we study (linear time-invariant filters) are based on correlation. That Fourier analysis is based on correlation relates to the discussion below about "signal components".

Signal components: (This subsection should be skipped or skimmed on first reading. It becomes suggested, but not required, reading when Fourier analysis is introduced, as in Chapter 3 of our text.) The question addressed in this subsection is: What does it mean for one signal to be a component of another? Specifically, suppose we are given signals x(t) and p(t) (or x[n] and p[n] in the discrete-time case). Is there a component of x(t) that is like p(t)? (Or of x[n] that is like p[n]?) If so, how much p(t) is in x(t)? And how should we define "how much of p is in x"? For example, is there a component of x(t) that is like p(t) = cos(3t)?

Vector geometry: Such questions are similar to the following traditional questions in vector geometry. Suppose x = (x1,...,xN) and p = (p1,...,pN) are N-tuple vectors, illustrated below. Is there a component of x that is like p? How much of p is in x?

[Figure: the vectors x and p.]

The conventional approach to answering these questions in vector geometry is to find the value α such that αp is as close to x as possible, i.e. such that ||x - αp|| is as small as possible, where ||u - v|| denotes the Euclidean distance between u and v, defined by

    ||u - v|| = √( Σ_{i=1}^{N} (u_i - v_i)² ).

For example, αp for one choice of α is illustrated below.

[Figure: x, p, the vector αp along p, and the difference x - αp.]

Actually, it is a bit easier to find the value of α that minimizes ||x - αp||², because this avoids the square root. To find the proper α, let's equate to zero the derivative of ||x - αp||² with respect to α, and solve for α. First let's rewrite ||x - αp||²:

    ||x - αp||² = Σ_{i=1}^{N} (x_i - α p_i)²
                = Σ_{i=1}^{N} x_i² - 2α Σ_{i=1}^{N} x_i p_i + α² Σ_{i=1}^{N} p_i²
                = ||x||² - 2α (x∘p) + α² ||p||²
where ||x|| and ||p|| are the lengths of x and p, respectively, and (x∘p) is the dot product defined by

    (x∘p) = Σ_{i=1}^{N} x_i p_i.

Now differentiating and equating to zero gives

    0 = d/dα ||x - αp||² = d/dα ( ||x||² - 2α (x∘p) + α² ||p||² ) = -2 (x∘p) + 2α ||p||²,

which yields

    α = (x∘p) / ||p||².

We conclude that the component of x that is like p is ((x∘p) / ||p||²) p.

Fact: α = (x∘p) / ||p||² is the unique value of α that makes the residual vector (x - αp) and p orthogonal, where u and v are said to be orthogonal if u∘v = 0.

Proof: The dot product of (x - αp) and p is

    (x - αp)∘p = (x∘p) - α (p∘p) = (x∘p) - α ||p||²    (by the linearity of the dot product),

which is zero when and only when α = (x∘p) / ||p||², i.e. when and only when (x - αp) and p are orthogonal.

With this fact in mind, we see that the component of x that is like p is the vector in the direction of p obtained by projecting x onto the direction of p, as illustrated below.

[Figure: x projected onto the direction of p, yielding αp.]

Back to signals: Let us now return to the original questions for signals. Suppose we are given signals x(t) and p(t). Is there a component of x(t) that is like p(t)? If so, how much p(t) is in x(t)? How should we define "how much of p is in x"? Our approach will be to find the value α such that the difference energy E(x(t) - α p(t)) is as small as possible. We will then say that "α p(t) is the component of x(t) that is like p(t)" and "α is the amount of p(t) that is in x(t)". The same approach applies to discrete-time signals. The idea is that the question we are asking is just like the question for vectors, and we can use the same approach. The only difference is that instead of Euclidean distance as a measure of similarity we use difference energy. Indeed, for discrete-time signals the question is exactly the same, because the signals are vectors and difference energy is Euclidean distance squared. Thus in the discrete-time case, we can simply use the answers to the vector question. In doing so, we recognize that what is called "dot product" in the "vector domain" is just what we have called "correlation". Moreover, it is easy to check that with "correlation" replacing "dot product", "energy" replacing
"length squared", and "uncorrelated" replacing "orthogonal", the answer we found to the vector question applies to continuous-time signals as well as to discrete-time signals. Therefore, we immediately obtain the following:

- The value of α that minimizes the difference energy E(x(t) - α p(t)) is α = C(x,p) / E(p).
- The amount of p(t) that is in x(t) is C(x,p) / E(p).
- The component of x(t) that is like p(t) is (C(x,p) / E(p)) p(t).
- α = C(x,p) / E(p) is the unique value that makes the difference signal (x(t) - α p(t)) and p(t) uncorrelated.

These answers apply to discrete-time signals as well, with x[n] and p[n] replacing x(t) and p(t). They also apply to complex-valued signals, in discrete or continuous time. (Correlation between complex-valued signals is discussed below.)

Comments: Engineers have long recognized the connections between signals and vectors. As a result, basic ideas from geometry, and more generally from linear algebra, are commonly used in signals and systems analysis. One of the most beneficial transferences is the idea that we can draw geometric pictures that represent signals and their relationships, such as those on the previous pages. For example, uncorrelated signals are drawn at right angles to one another. It often happens that a geometric picture will help one to understand some complex signal situation. It is also true that studying linear algebra will lead to increased understanding of signals and systems. For example, you might wish to learn as much as possible about linear algebra in Math 6 and to take Math 49 as an elective.
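In code, the component of x that is like p can be computed exactly as derived above. The sketch below (Python; the test signal is invented, built from a cosine plus a sine that is uncorrelated with it over a full period) recovers the amount α and checks that the residual is uncorrelated with p:

```python
import math

def corr(x, y):
    return sum(a * b for a, b in zip(x, y))

def energy(x):
    return corr(x, x)

def component_amount(x, p):
    """The alpha minimizing E(x - alpha*p): alpha = C(x, p) / E(p)."""
    return corr(x, p) / energy(p)

N = 16
p = [math.cos(2 * math.pi * n / N) for n in range(N)]
q = [math.sin(2 * math.pi * n / N) for n in range(N)]  # uncorrelated with p over a full period
x = [2.0 * a + b for a, b in zip(p, q)]                # contains 2 units of p

alpha = component_amount(x, p)
residual = [a - alpha * b for a, b in zip(x, p)]
print(abs(alpha - 2.0) < 1e-9)        # True: recovers the amount of p in x
print(abs(corr(residual, p)) < 1e-9)  # True: residual is uncorrelated with p
```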
III. Basic Signal Processing Tasks

In this section, we describe three broad and nearly ubiquitous tasks that require the processing of signals. That is, there is a need to develop systems that perform these tasks. Much of the remainder of the course will be devoted to developing techniques to design and improve such systems.

The first two tasks have a similar flavor. In each, the signal to be processed contains a component that interests us and a component that does not. That is, the signal r(t) to be processed can be modeled as r(t) = s(t) + n(t), where s(t) is the component that interests us and n(t) is the component that does not. For example, the component that interests us might be the signal produced by someone speaking into a microphone, and the component that does not might be the signal produced by background noise. In the first task, called signal recovery or noise reduction, the goal is to recover the signal component s(t) that interests us. For example, we might wish to recover the speech signal without the background noise. In the second task, called signal detection or signal classification or signal recognition, we wish to make a decision about the signal component that interests us. For example, we might wish to decide the identity of the speaker or what the speaker has said. These two tasks will be introduced in the next two subsections.

In each of the tasks, the noise n(t) is not a known signal. If it were known, we could simply subtract it from r(t), and there would be no need for a signal recovery or signal detection system. We also assume that the desired signal s(t), or some aspect of it, is not known. If s(t) were entirely known, we could dispense with r(t) and simply display the signal s(t). On the other hand, there must be something we do know about s(t) and n(t), such as their signal value or signal shape characteristics. Indeed, there must be something we know that is different for s(t) than for n(t). Otherwise, we will have no way to separate one from the other. For example, much of the course will be devoted to developing systems that work when s(t) and n(t) have spectra that differ in known ways, e.g.
one contains only low frequencies and the other contains only high frequencies.

The third task to be discussed is signal digitization. Nowadays, when signals such as audio or pictures or video must be processed, stored or transmitted, this is generally done in digital fashion, i.e. the data is converted to binary. This is done because excellent digital techniques have been found, and because the bits so produced can be processed, transmitted and stored rapidly and reliably.

A. Signal Recovery/Extraction/Enhancement

Suppose we are given a signal r(t) with two components, r(t) = s(t) + n(t), and our task is to design a system, such as illustrated below, which processes r(t) in order to produce s(t), or more precisely, an approximation ŝ(t) to s(t).

    input signal r(t) --> [signal recovery / enhancement / extraction system] --> output signal ŝ(t)

We consider r(t) to be the original or measured or received signal, s(t) to be the desired signal, and n(t) to be noise. The task is sometimes called signal recovery, because the system is recovering the signal s(t) from the noise-corrupted signal r(t). It is also called noise reduction or noise suppression, because it attempts to do precisely this.
Examples of signals requiring recovery/extraction/enhancement include:

- An audio signal, especially when it is particularly faint, or when the microphone is part of a hearing aid, or when there is much background noise, such as in an automobile or helicopter or a crowded cocktail party.
- A photograph or movie or video taken in faint light.
- A signal being played back on an analog tape player (video or audio). Magnetic tapes introduce significant amounts of noise due to the granularity of the magnetic media.
- An AM or FM radio signal, or an analog TV signal, as it emerges from the receiving antenna. There is always lots of background noise, much of it due to other radio signals.
- A digital communication signal as it emerges from the receiving wire, antenna or other sensor. This signal must be extracted from background noise and from all other communication signals on the same medium.

Linear Filters: There are many possible approaches to signal recovery. In this course, we focus mostly on linear filtering, which is the most common approach. Let us introduce it with an example. Suppose s(t) is an audio signal, for example the one shown below.

[Figure: the audio signal s(t).]

Suppose the measured signal is r(t) = s(t) + n(t), where n(t) looks like the signal below.

[Figure: the noise signal n(t).]

Then r(t) is

[Figure: the noisy signal r(t) = s(t) + n(t).]

Since the noise signal fluctuates more rapidly than the audio signal (this is the signal-shape characteristic that differentiates s(t) from n(t) in this example), a natural approach to reducing the noise is to use a running-average filter. That is, we design a system that replaces r(t) by an average of r(t) over an interval up to time t. Specifically, it replaces r(t) with the average of r(t) over the time interval (t-T, t), where T is chosen small enough that the audio signal s(t) changes little in the interval and large enough that the noise signal fluctuates a great deal in the interval and, consequently, averages to a small value. In other words, the running-average filter produces the output signal
    ŝ(t) = 1/T ∫_{t-T}^{t} r(t') dt'.

When such a filter is applied to r(t), it has the effect of smoothing the signal r(t). In our example, it produces the signal shown below, which sounds much more like s(t) than does r(t). Notice that the filtering has not only reduced the noise, but has also modified the desired signal somewhat.

[Figure: the filter output ŝ(t).]

While the running-average filter is fairly common, there are many other linear filters. As a precursor to introducing the full variety of possible linear filters, let us note that by applying the change of variables t'' = t' - t to the above integral, we may rewrite the running-average filter as producing

    ŝ(t) = 1/T ∫_{-T}^{0} r(t + t'') dt'',

which in turn may be rewritten as

    ŝ(t) = ∫_{-∞}^{∞} r(t + t'') w(t'') dt'',

where

    w(t'') = 1/T for -T ≤ t'' ≤ 0, and 0 else.

Other linear filters are obtained by replacing the function w(t''), which we call a weighting function, by something else. That is, the output is produced by a running average, except that the average is with respect to a weighting function w(t''). We obtain different linear filters by making different choices of w(t''). For example, if we choose

    w(t'') = e^{3t''} for t'' ≤ 0, and 0 for t'' > 0,

then

    ŝ(t) = ∫_{-∞}^{0} r(t + t'') e^{3t''} dt''.

In this case, we see that ŝ(t) is the average of all past values of r(t). However, in computing the average, past values are multiplied by exponentially decreasing weights. By careful choice of the weighting function w(t''), one can develop filters that do a better job of extracting a signal from noise than the running-average filter. Quite a different sort of weighting function is needed to perform the complex task of extracting a single radio signal from all those at other frequencies. As the course progresses, we will develop better and better techniques for designing filters for recovering signals or suppressing noise.

Actually, in this course, we will focus primarily on discrete-time linear filters for filtering discrete-time signals. (Chapters 5-8 of our text.) Specifically, a discrete-time filter performs the analogous operation
    ŝ[n] = Σ_{k=-∞}^{∞} r[n+k] w[k],

where the w[k]'s are a sequence of weights that distinguish one linear filter from another. For example, if w[k] = 1/M for k = -M+1,...,0, and w[k] = 0 otherwise, then we obtain a discrete-time running-average filter, which produces

    ŝ[n] = 1/M Σ_{k=n-M+1}^{n} r[k].

Performance Measure: As engineers, wherever possible we wish to quantify the goodness of the systems that we build. In this course, for the signal recovery task, we will use mean-squared error (MSE) as our measure of goodness. Specifically, if the signal s(t) has support interval (t1, t2), then

    MSE = 1/(t2 - t1) ∫_{t1}^{t2} (s(t) - ŝ(t))² dt.

Our goal, then, is to design a system that makes the MSE as small as possible. One should be aware that MSE is sensitive to scale and to time shifts. For example, suppose the signal recovery system has completely eliminated the noise, but has slightly scaled and delayed the desired signal, producing, say, ŝ(t) = a s(t - t0) for some gain a near one and some small delay t0. Then, even though the system has done well, the measured MSE may be large. In such cases, we may wish to allow ŝ(t) to be scaled and time-shifted before measuring MSE.

Other Signal Recovery Tasks: There are other situations where the desired signal and noise are not simply added. Rather, r(t) depends on the desired signal s(t) in some more complicated way. For example, in AM radio transmission the audio signal we wish to recover is the envelope of the transmitted signal (minus a constant), and it is desired to recover this audio signal from the transmitted signal plus noise. In tomographic imaging (e.g. X-ray, MRI, PET, etc.), the desired signal is a two- or three-dimensional image, which must be extracted from a complex set of measurements. The same is true of synthetic aperture radar. These are advanced topics that will not be pursued in this course or in these notes.

B. Signal Detection/Classification/Recognition

Suppose we are given a signal r(t) with two components, r(t) = s(t) + n(t), and our task is to design a system, such as illustrated below, which processes r(t) and produces a decision about s(t).
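A direct, if naive, implementation of the discrete-time running-average filter might look like the sketch below (Python; the boundary handling, which treats samples before the start of the record as zero, is an assumption, not something fixed by the definition above):

```python
def running_average(r, M):
    """Discrete-time running-average filter:
    s_hat[n] = (1/M) * sum of r[k] for k = n-M+1 .. n,
    treating samples before the start of the record as zero (a boundary assumption).
    """
    s_hat = []
    for n in range(len(r)):
        window = r[max(0, n - M + 1):n + 1]
        s_hat.append(sum(window) / M)
    return s_hat

r = [1.0, 1.0, 1.0, 5.0, 1.0, 1.0]   # slowly varying signal with one noise spike
print(running_average(r, 2))          # [0.5, 1.0, 1.0, 3.0, 3.0, 1.0]
```

Note how the spike at the fourth sample is spread across two output samples and reduced in height, which is exactly the smoothing effect described above.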
    input signal r(t) --> [signal detection / classification / recognition system] --> decision

There are three closely related versions of this task, introduced below along with examples.

1. Signal/No Signal? In this case, s(t) = 0 or s(t) = v(t), where v(t) is some known or partially known desired signal. From r(t), the system must decide which of these two possibilities has occurred. This is considered to be a detection or recognition task because the goal is to detect or recognize whether or not v(t) has occurred. Some specific examples are given below.

- Radar: Decide if the signal r(t) from the receive antenna contains a reflected pulse at time t0. The same issues apply to sonar.
- Dollar bill changer: Decide if the signal r(t) obtained by optically scanning a bill is due to a genuine dollar bill.
- Fingerprint recognition: Decide if the signal r(t) obtained by optically scanning a fingerprint contains the fingerprint of John Smith. Similar tasks include recognition from retinal scans or voice prints.
- Heart monitoring: Decide if an EKG signal r(t) contains a characteristic indicating a heart defect.

2. Which Signal? Here, s(t) = v1(t) or v2(t) or ... or vM(t), where M is some finite integer and the vi(t) are known signals. From r(t), decide which of the vi(t)'s is contained in r(t). This is considered to be a classification or recognition task because the goal is to classify r(t) according to which vi(t) has occurred, or equivalently to recognize which vi(t) has occurred. Some specific examples are given below.

- Digital communication receiver: Decide if the received signal r(t) contains the signal representing "zero" or the signal representing "one". That is, the system must decide if the transmitter sent "zero" or "one". In some systems, the transmitter has more than two signals that it might send, and so the receiver must make a multivalued decision.
- Optical character recognition: Decide if a character printed on paper is a or b or c or .... This is especially challenging when the characters are handwritten.
- Spoken word recognition: Decide what spoken word is present in the signal r(t) recorded by a microphone.

The "signal/no signal" task may be considered to be a special case of the "which signal" task.

3. Signal? And if So, Which Signal? This is a combination of the two previous subtasks. Suppose s(t) equals 0 or v1(t) or v2(t) or ... or vM(t). From r(t), decide whether or not s(t) = 0, and if not, decide which of the vi(t)'s is contained in r(t). Examples:

- Digital communication receiver: Some digital communication systems operate asynchronously in the sense that the receiver does not know when the bits will be transmitted. In this case, the receiver must decide if a bit is present, and if so, whether it is a zero or a one.
- Personal identification system: Decide if a thumb has been placed on the electronic thumbpad, and if so, whose thumb.
- Touch-tone telephone decoder: Decide if the signal from a telephone contains a key press, and if so, which key has been pressed.
- Spoken word recognition: Decide if a word has been spoken, and if so, what word.

For brevity, we will use the term detection as a broad term encompassing all of the above.

Detection Systems: As illustrated below, a detection system ordinarily has two subsystems: the first processes the received signal in order to produce a number (or several numbers) from which a decision can be made; the second makes the decision based on the number (or numbers) produced by the first. The number or numbers produced by the first subsystem are called decision statistics or feature values, and the first subsystem is called a decision statistic calculator or a feature calculator. The second subsystem is called the decision maker or decision device. We will discuss two general types of detection systems, corresponding to two types of decision statistic generators: energy detectors and correlating detectors.
    input signal r(t) --> [decision statistic calculator] --> decision statistic --> [decision maker] --> decision

Quality/Performance Measures: For detection systems, the most commonly used measure of performance is the error frequency, which, as its name suggests, is simply the frequency with which its decisions are incorrect. We let the symbol f_e denote the error frequency. The typical goal is to design the detection system to minimize f_e. In some situations, certain types of errors are more significant than others. For example, from the point of view of the owner of a dollar bill recognizer, classifying a counterfeit bill as valid is a more significant error than classifying a genuine dollar bill as invalid. In such cases, one will want to keep track of the frequencies of the different types of errors, and one may choose to minimize the total frequency of errors subject to constraints on the frequencies of certain specific types of errors. For example, the owner of a dollar bill recognizer might insist that the detector make as few errors as possible, subject to the constraint that it classify counterfeit bills as valid no more than one time in a million.

Energy Detectors for Deciding Signal/No Signal: For the "signal/no signal" task, the detector must decide whether r(t) contains signal and noise, i.e. r(t) = v(t) + n(t), or just noise, i.e. r(t) = n(t). Since it is natural to expect that r(t) will have larger energy in the former case than in the latter, it is natural to choose the energy E(r) of r(t) as the decision statistic. (One would normally measure the energy of r(t) over the support interval of v(t).) The decision maker would then decide that v(t) is present if the energy is sufficiently large, and would decide that v(t) is not present otherwise. To make such a decision, one needs to specify a threshold, denoted τ, and the decision rule becomes: decide v(t) is present if E(r) ≥ τ, and decide v(t) is not present if E(r) < τ.

How to choose the threshold? The first thing to note is that the noise signal n(t) is usually random. That is, it is not known in advance, and it is different every time we measure it.
In particular, the energy of the noise will vary from decision to decision. However, based on past experience, it is usually possible to estimate the average value of the noise energy, which we denote E(n). Then we can say that when v(t) is not present, the signal r(t) = n(t) has a random energy value, with average E(n). On the other hand, when the signal v(t) is present, the energy of r(t), though still random, tends to be larger. Specifically, it ordinarily has average energy equal to E(v) + E(n), because v(t) and n(t) are usually uncorrelated. In summary, when the signal v(t) is present, the average energy of r(t) is E(v) + E(n), and when v(t) is not present, the average energy of r(t) is E(n). It is natural then to choose a threshold that lies halfway between these two average energy values. That is, we choose

    τ = ( (E(v) + E(n)) + E(n) ) / 2 = E(v)/2 + E(n).

Energy detectors can also be used for the "which signal" task, provided the signals v1(t), v2(t),..., vM(t) have sufficiently different energies -- so different that the differences will not be obscured by the noise. In this case, the typical decision-maker strategy is to compare E(r) to the average energies E(v1) + E(n), E(v2) + E(n),..., E(vM) + E(n) that one would expect if the various vi(t)'s were present. The decision maker then decides in favor of the signal vi(t) such that E(vi) + E(n) is closest to E(r).

Correlating Detectors for the "Which Signal" Task: For the "which signal" task, an alternate and usually more effective method of detection (than energy detection) is to directly compare r(t) to each of the signals v1(t), v2(t),..., vM(t).
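The energy detector's decision rule can be sketched in a few lines (Python; the pulse, noise level, and assumed average noise energy are made-up numbers for illustration):

```python
import random

def energy(x):
    return sum(a * a for a in x)

def energy_detector(r, E_v, E_n):
    """Decide "signal present" iff E(r) >= tau, where tau = E(v)/2 + E(n)
    lies halfway between the two average energies."""
    tau = 0.5 * E_v + E_n
    return energy(r) >= tau

v = [2.0, -2.0, 2.0, -2.0]                        # known pulse, E(v) = 16
random.seed(2)
noise = [random.gauss(0, 0.2) for _ in range(4)]  # weak random noise
E_v = energy(v)
E_n = 0.16   # assumed average noise energy (4 samples with variance 0.2**2 each)

print(energy_detector([a + b for a, b in zip(v, noise)], E_v, E_n))  # True: pulse present
print(energy_detector(noise, E_v, E_n))                              # False: noise alone
```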
Accordingly, we need a measure of similarity, and we will choose correlation. Specifically, the correlation between two continuous-time signals x(t) and y(t) is defined to be

    C(x,y) = ∫_{t1}^{t2} x(t) y(t) dt,

where (t1, t2) is the time interval of interest. Similarly, the correlation between two discrete-time signals x[n] and y[n] is defined to be

    C(x,y) = Σ_{n=n1}^{n2} x[n] y[n].

For brevity, we will continue the discussion presuming continuous-time signals. To see why correlation is a good measure of similarity to use in detection, consider the signal pairs shown below, in which a signal r(t) is compared to the three possibilities v1(t), v2(t) and v3(t). To aid the comparisons, r(t) is plotted above each signal. One can see that r(t) and v1(t) are similar in that, roughly speaking, where one is positive, the other is as well; where one is negative, the other is as well. Moreover, r(t) roughly follows the shape of v1(t). On the other hand, the signals r(t) and v2(t) are rather dissimilar. Where v2(t) is positive, r(t) is sometimes negative; where v2(t) is increasing, r(t) is sometimes decreasing. Finally, r(t) and v3(t) are very dissimilar. Indeed, r(t) is very much like the negative of v3(t). If one were to make a decision about which of the three signals v1(t), v2(t), v3(t) was contained in r(t) based on visually comparing r(t) to these signals, one would clearly choose v1(t). And indeed this is correct, because r(t) was generated by adding noise to v1(t).

Let us now consider how the same decision could be based on correlation. To do so, let's examine the value of the correlation for each pair of signals. The product of each pair of signals is shown below the pair. Correlation is the integral of the product, i.e. the area under the plot of the product signal. For the first pair, the product is almost entirely positive, and the correlation is large. For the second pair, the product is approximately half negative and half positive, and the correlation is small because the positive and negative areas of the product tend to cancel each other.
Finally, for the third pair, the product is mostly negative, and the correlation gives a large negative value.

[Figure: r(t) plotted above each of v1(t), v2(t) and v3(t), with the product signals r(t)*v1(t), r(t)*v2(t) and r(t)*v3(t) plotted below each pair. The resulting correlations are C(r,v1) = 6.7, C(r,v2) = 0.8 and C(r,v3) = -6.7.]
If a detection system had to decide from the three correlation values which of the three signals v1(t), v2(t), v3(t) was contained in r(t), clearly it should choose the one corresponding to the largest correlation, namely, v1(t).

Though correlation would work well in the example above, consider what would have happened if, for example, v2(t) were 10 times larger. In this case, it is easy to see that the correlation would be C(r,v2) = 8, rather than 0.8. Thus, even though v2(t) has a very different shape than r(t), a decision based solely on the size of the correlation would be the wrong decision. We can remedy this potential shortcoming by normalizing the correlation. That is, it is better to make a decision based on normalized correlation, which is defined by

    C_N(x,y) = C(x,y)/sqrt(E(x) E(y)) = (1/sqrt(E(x) E(y))) ∫_{t1}^{t2} x(t) y(t) dt,

where E(x) and E(y) are the energies over the interval (t1, t2) of x and y, respectively. If the energies of the vi(t)'s are the same, then the signal vi(t) that has the largest correlation C(r,vi) also has the largest normalized correlation C_N(r,vi). However, when the vi(t)'s have different energies, the normalized correlation properly accounts for the differences and permits a properly based decision.

Having discussed correlation, we can now completely describe a typical correlating detector. Suppose we must decide which of the signals v1(t), v2(t), ..., vM(t) is contained in r(t). The decision statistic calculator computes and outputs C_N(r,v1), C_N(r,v2), ..., C_N(r,vM). The decision maker finds the largest of these, and outputs the corresponding decision.

Comparison of Energy and Correlating Detectors: There are some situations where energy detectors cannot be used and some where correlating detectors cannot be used. For example, energy detectors cannot be used for the "which signal" problem when the signals have the same energy, which is often the case in digital communications. On the other hand, correlating detectors cannot be used when the precise shape of the signals is not known.
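The complete correlating detector just described can be sketched in a few lines of Python. This is an illustration under assumed names and made-up signals, not the notes' own implementation; note that scaling a candidate signal does not change its normalized correlation, which is exactly the remedy discussed above.

```python
import math

# Correlating-detector sketch (names and example values are made up).

def energy(x):
    return sum(v * v for v in x)

def normalized_correlation(x, y):
    """C_N(x, y) = C(x, y) / sqrt(E(x) * E(y)); always lies in [-1, 1]."""
    c = sum(a * b for a, b in zip(x, y))
    return c / math.sqrt(energy(x) * energy(y))

def correlating_detector(r, candidates):
    """Decide which candidate is contained in r: pick the index with
    the largest normalized correlation with r."""
    return max(range(len(candidates)),
               key=lambda i: normalized_correlation(r, candidates[i]))
```

For instance, a noisy copy of one candidate correlates near 1 with that candidate, even if another candidate has far greater energy.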
For example, in Marconi's original transatlantic radio transmission, the transmitted signal was generated by a spark, with no known signal shape. Clearly, a correlating detector was out of the question! In situations where both energy and correlating detectors can be used, it is usually found that the latter performs significantly better than the former, i.e. it makes fewer errors.

C. Signal Digitization for Data Storage and Transmission

In today's world, where signal processing is increasingly done by general or special purpose computers, it is necessary to convert signals into digital form. Moreover, signal storage and transmission are increasingly done in digital fashion. Again, this necessitates conversion to digital form. Such conversion involves two steps: (1) sampling, and (2) representing each sample as a binary number. Both of these steps generally involve losses, i.e. changes to the signal. Sampling is the topic of Chapter 4 and will be extensively discussed there. Converting to bits will be the subject of one of our lab assignments. However, let us describe here the most elementary method of converting samples to bits, called uniform scalar quantization.

With uniform scalar quantization, if we wish to represent a sample value x[n] with b bits, then as illustrated below for the case that b = 3, we divide the range of sample values, (x_min, x_max), into 2^b nonoverlapping bins of width Δ = (x_max - x_min)/2^b. These bins are indexed from left to right by the integers 0, 1, 2, ..., 2^b - 1, and each of these integers is represented as a b-bit binary number. For example, if b = 3, then 5 is represented as 101. Let c_i = x_min + Δ/2 + iΔ denote the center of the ith bin. Now, if the sample x[n] to be quantized lies in the ith bin, then we represent it by the binary representation of i, and we consider x[n] to have been quantized to the value c_i. Note that when using this binary number in a processing task, we consider it to represent the value c_i,
and must act accordingly. Actually, if the processing is done in a general purpose computer, we might convert it to binary using one of the standard conventions that are convenient for doing arithmetic, such as "two's complement".

[Figure: the 2^b = 8 bins spanning (x_min, x_max), with bin centers c_0, c_1, ..., c_7, bin indices 0 through 7, and their 3-bit binary representations 000 through 111.]

A system that does both sampling and uniform scalar quantization is called an analog-to-digital converter.

There are more sophisticated methods for converting samples to bits that produce many fewer bits. These are generally called data compression methods. Examples include JPEG image compression, MP3 audio compression, and CELP speech compression, which is the system used in digital cellular telephones, digital answering machines, and the like. A simplified version of a JPEG-like image compression system is included in one of the lab assignments. Generally speaking, data compression is done in order to reduce the amount of memory needed to store a signal or the amount of time needed to transmit a signal. When the signal actually needs to be processed or played, the compressed representation must ordinarily be changed back into a representation like the one produced by a uniform scalar quantizer. This is called decompression.

Concluding Remarks

Having discussed several basic signal processing tasks, it should be mentioned that from now on, we will not focus on them in lectures. Instead we will focus on developing the tools and techniques that enable systems to perform these tasks well. In particular, we will discuss sampling (Chapter 4 of our text), spectra (Chapter 3 and handouts) and linear filters (Chapters 5-8). Although these signal processing tasks will not be the focus of the lectures, from time to time we will discuss how the techniques being developed in lecture apply to them. On the other hand, these basic signal processing tasks will be the focus of a number of the lab assignments in this course.
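The uniform scalar quantizer described in Section C can be sketched as follows. This is a minimal illustration with assumed names; the handling of samples exactly at x_max (clamping to the top bin) is my own choice, since the notes do not specify it.

```python
# Uniform scalar quantizer sketch (names are mine, not from the notes).

def quantize(sample, x_min, x_max, b):
    """Quantize one sample with b bits over the range (x_min, x_max).
    Returns (bin index i, quantized value c_i = x_min + delta/2 + i*delta)."""
    n_bins = 2 ** b
    delta = (x_max - x_min) / n_bins        # bin width
    i = int((sample - x_min) / delta)       # bin index, 0 .. 2^b - 1
    i = max(0, min(n_bins - 1, i))          # clamp edge samples (my choice)
    c_i = x_min + delta / 2 + i * delta     # bin center
    return i, c_i
```

With b = 3 over (-1, 1), the bin width is 0.25, a sample of 0.0 falls in bin 4 (binary 100) and is quantized to the bin center 0.125.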
Appendix A: Complex-Valued Signals

Complex-valued signals will be introduced in Chapter 2 as a way to simplify certain calculations involving sinusoidal signals. This appendix briefly summarizes the properties and statistics of complex signals and the elementary operations on them. It should be read after complex signals are introduced in lecture.

Definition: A complex-valued signal is simply a signal whose value at each time is complex. As such, it has a real part and an imaginary part, a magnitude and a phase. For example, if

    z(t) = x(t) + j y(t) = r(t) e^{jφ(t)},

then x(t) is the real part, y(t) is the imaginary part, r(t) is the amplitude and φ(t) is the angle or phase.

Signal Characteristics and Statistics: The following table shows the definitions of the signal characteristics mentioned previously for real-valued signals, with the exception of signal value distribution, which is not easily summarized in table form. The continuous-time signal is z(t) = x(t) + j y(t); the discrete-time signal is z[n] = x[n] + j y[n].

  support interval:
    continuous:  [t1, t2]
    discrete:    {n1, n1+1, ..., n2}
  duration:
    continuous:  t2 - t1
    discrete:    n2 - n1 + 1
  mean value:
    continuous:  M(z) = 1/(t2-t1) ∫_{t1}^{t2} z(t) dt = M(x) + j M(y)
    discrete:    M(z) = 1/(n2-n1+1) Σ_{n=n1}^{n2} z[n] = M(x) + j M(y)
  magnitude:
    continuous:  |z(t)| = sqrt(x²(t) + y²(t))
    discrete:    |z[n]| = sqrt(x²[n] + y²[n])
  squared value, aka instantaneous power:
    continuous:  |z(t)|² = x²(t) + y²(t)
    discrete:    |z[n]|² = x²[n] + y²[n]
  mean-squared value, aka average power:
    continuous:  MS(z) = 1/(t2-t1) ∫_{t1}^{t2} |z(t)|² dt = MS(x) + MS(y)
    discrete:    MS(z) = 1/(n2-n1+1) Σ_{n=n1}^{n2} |z[n]|² = MS(x) + MS(y)
  RMS value:
    both:        RMS(z) = sqrt(MS(z))
  energy:
    continuous:  E(z) = ∫_{t1}^{t2} |z(t)|² dt = E(x) + E(y)
    discrete:    E(z) = Σ_{n=n1}^{n2} |z[n]|² = E(x) + E(y)

Periodicity of complex continuous-time signals: A complex continuous-time signal z(t) is said to be periodic with period T if z(t+T) = z(t) for all values of t. This is equivalent to saying that both x(t) and y(t) are periodic with period T.

1. A continuous-time signal z(t) with period T is also periodic with period nT for any positive integer n.
2. The fundamental period T_o is the smallest period. The reciprocal of T_o is called the fundamental frequency f_o of the signal. That is, f_o = 1/T_o.
3. z(t) is periodic with period T if and only if T is an integer multiple of T_o.
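The table's decompositions E(z) = E(x) + E(y) and MS(z) = MS(x) + MS(y) are easy to check numerically. Below is a short Python spot-check with made-up sample values (the names are mine):

```python
# Numeric check of the complex-signal statistics above:
# energy and mean-squared value split into real and imaginary parts.

def energy(s):
    """E(s) = sum of |s[n]|^2 over the support."""
    return sum(abs(v) ** 2 for v in s)

def mean_square(s):
    """MS(s) = E(s) / duration."""
    return energy(s) / len(s)

z = [1 + 2j, -3 + 0.5j, 0 - 1j]       # made-up complex samples
x = [v.real for v in z]               # real part
y = [v.imag for v in z]               # imaginary part
```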
4. If signals z(t) and z'(t) are both periodic with period T, then the sum of these two signals, w(t) = z(t) + z'(t), is also periodic with period T. This same property holds when three or more signals are summed.
5. The sum of two signals with fundamental period T_o is periodic with period T_o, but its fundamental period might be less than T_o.
6. The sum of two signals with differing fundamental periods, T1 and T2, will be periodic when and only when the ratio of their fundamental periods equals the ratio of two integers. The fundamental period of the sum is the least common multiple of T1 and T2. The fundamental frequency of the sum is the greatest common divisor of the fundamental frequencies of the two signals.

Periodicity of complex discrete-time signals: A complex discrete-time signal z[n] is said to be periodic with period N if z[n+N] = z[n] for all integers n. This is equivalent to saying that both x[n] and y[n] are periodic with period N.

1. A discrete-time signal with period N is also periodic with period mN for any positive integer m.
2. The fundamental period, denoted N_o, is the smallest period. The reciprocal of N_o is called the fundamental frequency f_o of the signal. That is, f_o = 1/N_o.
3. z[n] is periodic with period N if and only if N is an integer multiple of N_o.
4. If signals z[n] and z'[n] are both periodic with period N, then the sum of these two signals, w[n] = z[n] + z'[n], is also periodic with period N. This same property holds when three or more signals are summed.
5. The sum of two signals with fundamental period N_o is periodic with period N_o, but its fundamental period might be less than N_o.
6. The sum of two signals with differing fundamental periods, N1 and N2, is periodic with fundamental period equal to the least common multiple of N1 and N2, and fundamental frequency equal to the greatest common divisor of their fundamental frequencies f1 and f2. Note that unlike the continuous-time case, the ratio of the fundamental periods of discrete-time periodic signals is always the ratio of two integers. Therefore, the sum is always periodic.
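The least-common-multiple rule for sums of discrete-time periodic signals can be sketched numerically. The helper names below are mine; as the notes caution, in special cases cancellation can make the fundamental period of the sum smaller, so this sketch covers the generic case.

```python
from math import gcd

# Sketch of the lcm rule for the period of a sum of two discrete-time
# periodic signals (generic case; cancellations can shrink it further).

def lcm(a, b):
    """Least common multiple of two positive integers."""
    return a * b // gcd(a, b)

def period_of_sum(n1, n2):
    """Fundamental period of the sum of two discrete-time periodic
    signals with fundamental periods n1 and n2, in the generic case."""
    return lcm(n1, n2)
```

For example, signals with fundamental periods 4 and 6 generically sum to a signal with fundamental period 12.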
Elementary Operations on One Complex Signal

These are illustrated for continuous-time signals, but apply equally to discrete-time signals.

Adding a constant: z'(t) = z(t) + c, where c is a real or complex number.

Amplitude scaling: z'(t) = c z(t), where c is a real or complex number. This has the effect of scaling both the mean and the mean-squared values. Specifically, M(z') = c M(z) and MS(z') = |c|² MS(z).

Time shifting: If z(t) is a signal and T is some number, then

    z'(t) = z(t-T) = x(t-T) + j y(t-T)

is a time-shifted version of z(t).

Time reflection/reversal: The time-reflected or time-reversed version of a signal z(t) is z'(t) = z(-t).

Time scaling: The operation of time-scaling a signal z(t) produces a signal

    z'(t) = z(ct)
where c is some positive real-valued constant.

Combinations of the above operations: In the future we will frequently encounter signals obtained by combining several of the operations introduced above, for example, z'(t) = 3 z(-(t - T)).

Elementary Operations on Two or More Complex Signals

These are illustrated for continuous-time signals, but apply equally to discrete-time signals.

Summing: w(t) = z(t) + ẑ(t).

Linear combining: w(t) = c1 z1(t) + c2 z2(t) + c3 z3(t), where c1, c2, c3 are real or complex numbers.

Multiplying: w(t) = z(t) ẑ(t).

Concatenating: Concatenation is the process of appending one signal to the end of another.

Correlation

The correlation between continuous-time complex signals z(t) and ẑ(t) is

    C(z,ẑ) = ∫_{t1}^{t2} z(t) ẑ*(t) dt,

where (t1, t2) is the time interval of interest. Similarly, the correlation between discrete-time complex signals z[n] and ẑ[n] is defined to be

    C(z,ẑ) = Σ_{n=n1}^{n2} z[n] ẑ*[n].

Why the complex conjugate? The reason is that it enables the relation E(z) = C(z,z) to continue to be valid. Specifically,

    C(z,z) = ∫_{t1}^{t2} z(t) z*(t) dt = ∫_{t1}^{t2} |z(t)|² dt = E(z).

Unfortunately, correlation for complex-valued signals is not symmetric, i.e. C(z,ẑ) ≠ C(ẑ,z) in general. However, there is a close relation between C(z,ẑ) and C(ẑ,z), namely, C(ẑ,z) = C*(z,ẑ). This is because

    C(ẑ,z) = ∫_{t1}^{t2} ẑ(t) z*(t) dt = [∫_{t1}^{t2} z(t) ẑ*(t) dt]* = C*(z,ẑ).

The normalized correlation between signals z and ẑ is

    C_N(z,ẑ) = C(z,ẑ)/sqrt(E(z) E(ẑ)).

The Cauchy-Schwarz inequality continues to hold for complex signals. That is, |C_N(z,ẑ)| ≤ 1, with equality if and only if one signal is an amplitude scaling of the other, i.e. ẑ(t) = c z(t) for some real or complex constant c.
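The two conjugate facts above, C(z,z) = E(z) and C(ẑ,z) = C*(z,ẑ), can be spot-checked numerically. The sample values below are made up for illustration:

```python
# Numeric check of the complex-correlation facts above.

def corr(z, zh):
    """C(z, zh) = sum over n of z[n] * conj(zh[n])."""
    return sum(a * b.conjugate() for a, b in zip(z, zh))

z  = [1 + 1j, 2 - 1j, -1 + 0.5j]    # made-up complex samples
zh = [0.5 - 1j, 1 + 2j, 3 + 0j]

self_corr = corr(z, z)              # should equal E(z), a real number
swapped   = corr(zh, z)             # should equal conj(corr(z, zh))
```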
Appendix B: Trigonometric Identities and Facts About Complex Exponentials

Trigonometric Identities

We will not use these much, but nevertheless it is nice to have a table. The first five comprise the table on p. 4 of DSP First.

1. sin²θ + cos²θ = 1
2. cos 2θ = cos²θ - sin²θ
3. sin 2θ = 2 sin θ cos θ
4. sin(α ± β) = sin α cos β ± cos α sin β
5. cos(α ± β) = cos α cos β ∓ sin α sin β
6. sin α sin β = (1/2)[cos(α-β) - cos(α+β)]
7. cos α cos β = (1/2)[cos(α-β) + cos(α+β)]
8. sin α cos β = (1/2)[sin(α+β) + sin(α-β)]
9. cos α sin β = (1/2)[sin(α+β) - sin(α-β)]
10. sin α + sin β = 2 sin((α+β)/2) cos((α-β)/2)
11. sin α - sin β = 2 cos((α+β)/2) sin((α-β)/2)
12. cos α + cos β = 2 cos((α+β)/2) cos((α-β)/2)
13. cos α - cos β = -2 sin((α+β)/2) sin((α-β)/2)
14. sin²θ = (1/2)(1 - cos 2θ)
15. cos²θ = (1/2)(1 + cos 2θ)
16. sin θ = cos(θ - π/2)
17. cos θ = sin(θ + π/2)
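A quick numeric spot-check of two entries in the table, the product-to-sum identity sin α sin β = (1/2)[cos(α-β) - cos(α+β)] and the half-angle identity sin²θ = (1/2)(1 - cos 2θ), at arbitrarily chosen angles:

```python
import math

# Spot-check two trigonometric identities at arbitrary angles.
a, b = 0.7, 1.9
lhs_prod = math.sin(a) * math.sin(b)
rhs_prod = 0.5 * (math.cos(a - b) - math.cos(a + b))

t = 0.3
lhs_half = math.sin(t) ** 2
rhs_half = 0.5 * (1 - math.cos(2 * t))
```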
Useful Facts About Complex Exponentials

1. e^{jθ} = cos θ + j sin θ (Euler's formula)
2. cos θ = (e^{jθ} + e^{-jθ})/2 (inverse Euler formula)
3. sin θ = (e^{jθ} - e^{-jθ})/(2j) (another inverse Euler formula)
4. 1 = e^{j0} = e^{j2πn} for any integer n
5. -1 = e^{jπ} = e^{-jπ}
6. (-1)^n = e^{jπn}
7. j = e^{jπ/2}
8. -j = 1/j = e^{-jπ/2}
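Euler's formula and the inverse Euler formulas are easy to verify numerically with Python's complex arithmetic (the variable names are mine):

```python
import cmath
import math

# Numeric check of Euler's formula and the inverse Euler formulas.
theta = 1.1
euler = cmath.exp(1j * theta)                              # cos θ + j sin θ
cos_from_exp = (cmath.exp(1j * theta) + cmath.exp(-1j * theta)) / 2
sin_from_exp = (cmath.exp(1j * theta) - cmath.exp(-1j * theta)) / (2j)
```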
Problems

Elementary Signal Characteristics

1. State the defining formula for:
(a) The support interval of a continuous-time signal x(t).
(b) The duration of a continuous-time signal x(t).
(c) The mean value of the continuous-time signal x(t) over the time interval [t1, t2].
(d) The average power of the continuous-time signal x(t) over the time interval [t1, t2].
(e) The energy of the continuous-time signal x(t) over the time interval [t1, t2].

2. The continuous-time signal x(t) = 3 cos(πt) is sampled with sampling interval Ts = .5 msec, creating the discrete-time signal x[n]. Find a simple formula for x[n] that does not include a cosine or any other trigonometric function.

3. Consider the two continuous-time signals shown below:

[Figure: two continuous-time signals, x1(t) and x2(t).]

From which of the above signals could each of the following discrete-time signals be obtained by sampling? Find the sampling interval in each case. If more than one sampling interval is possible, find the smallest among those that are possible.

[Figure: three discrete-time signals, (a) y1[n], (b) y2[n], (c) y3[n].]

4. Find the support interval, the mean value, the mean-squared value, and the energy of each of the following signals, with the last three values computed over the support interval of the signal.

[Figure: (a) a signal x(t) plotted over the time axis from 0 to 8.]
[Figure: (b) a second signal x(t) plotted over the time axis from 0 to 8.]

5. Derive the relationship between the mean-squared value, the variance and the mean value:

    MS(x) = σ²(x) + M²(x)

6. Match each signal below with its signal value distribution.

[Figure: four signals, (a) through (d), and seven candidate signal value distributions, (i) through (vii).]

(Tip: You can work this problem from both ends. For each signal, you can look at the range of signal values and see what you can deduce about which values occur more frequently than others. Also, look at each signal value distribution and see what you can deduce about the signal from which it came.)

7. The function below is the signal value distribution of which of the following signals?

[Figure: a signal value distribution and three candidate signals, x1(t), x2(t), x3(t).]
8. A discrete-time signal x[n] has the histogram shown below.

[Figure: histogram of the values of x[n], with values 1, 3, 5, 7 on the horizontal axis.]

(a) Find, approximately, the mean value of x[n].
(b) Find, approximately, the mean-squared value of x[n].

Periodicity

9. (a) State the condition defining the periodicity of a signal x(t).
(b) State the definition of the fundamental period of a periodic signal x(t).

10. Which of the signals shown below are periodic? For those that are periodic, find their fundamental period.
(a) xa(t) = 3 sin(t)
(b) xb(t) = 4 sin(e^t)
(c) xc(t) = cos(t² + t + 1)
(d) xd(t) = 4(-1)^floor(t/3), where floor(z) = largest integer ≤ z

11. Let s(t) = A cos(ωt + φ).
(a) Show that s(t) is periodic with fundamental period 2π/ω.
(b) Find the mean value of s(t) over one period.
(c) Show that the average power of this signal over one period is A²/2.

12. Show that if x(t) and y(t) are periodic with period T, and a and b are arbitrary numbers, then z(t) = a x(t) + b y(t) is also periodic with period T.

13. (a) Show that if x(t) is periodic with period T and a is a positive number, then y(t) = x(at) is periodic with period T/a.
(b) Repeat Part (a) with the word "period" replaced by "fundamental period".

14. Which of the signals below are periodic? For those that are periodic, find their fundamental period.
(a) xa(t) = cos(t) + sin(3t)
(b) xb(t) = cos(πt) + sin(6πt)
(c) xc(t) = cos(t) + sin(6πt)

15. Let x(t) = 3 cos(t). Is y(t) = 4 x(t-3) periodic? If so, find its fundamental period.

Envelope

16. Find a formula for the envelope of the signal x(t) = sin(t) sin(3t).
Elementary Operations on One Signal

17. Let y(t) = x(t) + c. Let (t1, t2) be the time interval of interest.
(a) Derive a formula for the mean M(y) of y(t) in terms of c and the mean M(x) of x. (Hint: Start by writing the defining formula for what you need to find, namely, for M(y).)
(b) Derive a formula for the mean-squared value MS(y) of y(t) in terms of c, the mean-squared value MS(x) of x, and the mean value M(x). (Hint: Start by writing the defining formula for what you need to find, namely, for MS(y).)

18. Let y(t) = c x(t). Let (t1, t2) be the time interval of interest.
(a) Derive a formula for the mean M(y) of y(t) in terms of c and the mean M(x) of x.
(b) Derive a formula for the mean-squared value MS(y) of y(t) in terms of c, the mean-squared value MS(x) of x, and the mean value M(x).

19. Let y(t) = a x(t) + b. Let (t1, t2) be the time interval of interest.
(a) Derive a formula for the mean M(y) of y(t) in terms of a, b, and the mean M(x) of x.
(b) Derive a formula for the mean-squared value MS(y) of y(t) in terms of a, b, the mean-squared value MS(x) of x, and the mean value M(x).

20. Let y(t) = x(at), where x(t) is a signal with support interval (t1, t2).
(a) Find the support interval of y(t).
(b) Derive a formula for the mean M(y) of y(t), over its support interval, in terms of a and the mean M(x) of x.
(c) Derive a formula for the mean-squared value MS(y) of y(t), over its support interval, in terms of a and the mean-squared value MS(x) of x.

21. Let x(t) = t. Plot the following signals:
(a) y1(t) = -x(3-t)
(b) y2(t) = 3 x(-t+6)

22. Let x(t) and y(t) be as shown below. Find numbers a and T such that y(t) = a x(t-T).

[Figure: the signals x(t) and y(t).]

(No systematic procedure has been developed to solve this problem. Use your creativity.)