the codephaser

Add a new dimension of CW perception to your receiver by incorporating this simple audio device

By Karl Fischer, DJ5IL - Friedenstr. 42, 75173 Pforzheim, Germany - www.cq-cq.eu - DJ5IL@cq-cq.eu

Pseudo-stereo reception of radio telegraphy or CW signals has been taken up repeatedly by amateur radio magazines. I will show that false assumptions concerning the process of sound localization and a misconception concerning the term group delay render most of the published circuits inferior or even counterproductive. Finally, I will present a superior, easy-to-build and well-tried circuit design.

sound localization

Today we know that our auditory system uses two strong cues for estimating the azimuthal location of sound sources, namely the interaural time delay (ITD), caused by the longer sound path around the head to the far ear, and the interaural level difference (ILD), caused by the shadowing effect of the head. ILD is minimal at low frequencies, whose wavelengths are greater than the width of the head, because diffraction overcomes the shadowing effect; it has been shown that low-frequency sounds can be localized by the interaural phase delay (IPD) alone, which is the time delay between the fine structures of the signals at the two ears produced by the ITD.

For many mammals, azimuthal localization of sound sources is achieved by an exquisite sensitivity to the IPD of low-frequency components (<1500 Hz). Initial processing of IPD occurs in the medial superior olive (MSO), part of the auditory system, where neurons tuned to low frequencies are relatively over-represented. All evidence suggests that these MSO neurons perform coincidence detection between excitatory inputs from the two ears, that they are largely insensitive to ILD, and that they exhibit a robust sensitivity to the interaural relative phase even of pure and steady tones - not simply to onset or envelope delay. Moreover, they are sensitive not only to the IPD of pure tones but also to the ITD of broadband noise, which suggests that cross-correlation provides a more general description of their response mechanism.

With increasing frequency the number of tuned neurons in the MSO and the ability to detect IPD diminish. At the same time ILD increases and is mainly useful above 1500 Hz, where the acoustic shadow produced by the head becomes more and more effective. For these high frequencies IPD presents an ambiguous cue, since the phase difference produced by the ITD exceeds 180°. Initial processing of ILD occurs in the lateral superior olive (LSO), another part of the auditory system, where - contrary to the MSO - high-frequency neurons are relatively over-represented. These ILD-sensitive cells are also sensitive to envelope delay, presumably mediating our ability to localize modulated high-frequency sounds on the basis of the time delay of their envelopes. Interestingly, humans can reliably discriminate ITDs of high-frequency signals at thresholds approaching those for low-frequency signals, provided that the signals are not steady tones but have a time-varying envelope, which is the case for most natural acoustic signals including the human voice.

It follows that 1) for the low frequencies involved in the reception of CW signals (<1000 Hz) the interaural level difference is negligible in natural hearing, and 2) not the interaural envelope delay but the phase delay is definitely the dominant cue for sound localization.
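
The cross-correlation view mentioned above is easy to illustrate numerically. The following Python sketch is a minimal illustration, not part of the original article: it imposes a 0.5 ms interaural delay on an arbitrary broadband noise signal and recovers that delay as the lag of the peak of the interaural cross-correlation (sample rate, duration and the random source are arbitrary choices).

# Recovering an interaural time delay as the peak lag of the
# interaural cross-correlation (illustration only).
import numpy as np

fs = 48000                                   # sample rate in Hz (arbitrary)
delay_samples = 24                           # 24 / 48000 s = 0.5 ms, a plausible ITD
rng = np.random.default_rng(1)

source = rng.standard_normal(fs // 10)       # 100 ms of broadband noise
left = source
right = np.concatenate([np.zeros(delay_samples), source[:-delay_samples]])

n = source.size
xcorr = np.correlate(right, left, mode="full")
lags = np.arange(-(n - 1), n)                # lag axis in samples
best = lags[np.argmax(xcorr)]
print(f"imposed ITD = {delay_samples / fs * 1e3:.2f} ms, "
      f"recovered = {best / fs * 1e3:.2f} ms")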

ITD has a maximum value of about 0.63 ms for a lateral sound source at an azimuth of ψ = ±90°. With the speed of sound being roughly 343 m/s, the effective ear distance is D = 0.63 ms × 343 m/s = 0.216 m, and we arrive at the following equations:

interaural phase delay: IPD [s] = 0.216 m × sin ψ / 343 m/s
interaural phase angle: φ [°] = 360° × f [Hz] × IPD [s]
azimuthal angle: ψ [°] = arcsin (IPD [s] × 1588 Hz) = arcsin (4.41 × φ [°] / f [Hz])

The wavelength λ = 0.216 m corresponds to a frequency of 1588 Hz, so the lowest frequency at which an interaural phase angle of 180° is possible is about 800 Hz, which means that above 800 Hz IPD alone becomes an ambiguous cue. It has been shown that our sound localization abilities are indeed degraded in the frequency interval between 800 and 1600 Hz.

envelope delay = group delay?

The phase delay of a network is the time delay in seconds between its output and input for a pure and steady sinusoidal signal of a given frequency. We can, for example, calculate or measure the time it takes for the positive peak of a sine wave at the input to arrive at the output and thus obtain the phase delay of the network for that specific frequency. The phase response of a network describes its phase shift in radians, as a function of the radian frequency ω = 2πf, experienced by each sinusoidal component of the input signal. So while phase delay is a single time value for a specified frequency, the phase response is a graph of phase values over a certain frequency interval.

The CW signals from our amateur radio receivers sound like pure sinusoidal tones, on-off keyed in the rhythm of telegraphy. However, their representation in the frequency domain reveals that they are a mixture of multiple tones: a predominant carrier plus upper and lower sideband signals. These sidebands are located at odd multiples of the keying frequency above and below the carrier, and their amplitudes depend on the keying waveform. The bandwidth of a CW signal increases with the keying speed and the steepness of the keying envelope. For example, the average 99% power bandwidth of a CW signal keyed at a speed of 30 wpm (words per minute) with a square-wave envelope is about 525 Hz, which means that if we listen to that CW signal adjusted for a pitch of 800 Hz, then 99% of the signal power is distributed within the frequency spectrum between 537.5 Hz and 1062.5 Hz. If the amplitudes and phase relations of all these signal components are not preserved, the keying envelope will be distorted and intelligibility will be degraded.
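
These bandwidth figures are easy to check. The following Python sketch is a minimal illustration assuming a hard square-wave keying envelope at 30 wpm (PARIS timing, 40 ms dots) on an 800 Hz tone; the sample rate and signal length are arbitrary choices. It estimates the 99% power bandwidth from the spectrum and lands close to the 525 Hz quoted above.

# Spectrum of a hard-keyed CW signal: sidebands at odd multiples of the
# keying frequency, 99% of the power within roughly 525 Hz.
import numpy as np

fs = 8000                               # sample rate in Hz (arbitrary)
f_carrier = 800                         # CW pitch used in the text
dot = 0.040                             # 30 wpm -> dot length 1.2 / 30 s = 40 ms
t = np.arange(0, 0.8, 1 / fs)           # exactly 10 dot/space periods

keying = ((t // dot) % 2 == 0).astype(float)        # 40 ms on, 40 ms off
signal = keying * np.sin(2 * np.pi * f_carrier * t)

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Smallest symmetric band around the carrier containing 99% of the power.
order = np.argsort(np.abs(freqs - f_carrier))
cum = np.cumsum(spectrum[order]) / spectrum.sum()
half_bw = np.abs(freqs[order] - f_carrier)[np.searchsorted(cum, 0.99)]
print(f"99% power bandwidth ≈ {2 * half_bw:.0f} Hz")
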
For the following discussion we will assume a network with a flat amplitude response that only affects phase - active networks with this property are called all-pass filters. If we wanted to know how much a pure and steady sine wave is delayed by such a network, we could use the phase delay. However, since we are interested in the delay of a CW signal, the so-called group delay seems to be our weapon of choice. It is often said that group delay is the time delay of a narrow group of frequencies around a carrier frequency and thus equals the true time delay of the modulated information, the envelope delay. Consequently the synonyms carrier delay for phase delay and envelope delay or signal delay for group delay are often used.

However, though this interpretation is widespread, the term "narrow group" sounds suspiciously vague - and indeed it can be so far from the truth that it often leads to misconceptions in circuit design and analysis. Group delay is defined as the negative of the rate of change, i.e. the derivative, of the phase response, and it has the dimension of time. It is a theoretical concept and in practice cannot be measured directly. What is actually measured is a differential radian phase over a differential radian frequency, whose ratio can be assumed to give the approximate group delay. It is the variation of delay with frequency that causes problems and is referred to as group delay distortion.

Ideal non-dispersive delay devices, such as a hypothetical ideal transmission line, exhibit a linear phase response across the whole infinite frequency spectrum. Their group delay is constant for all frequencies and may be interpreted as the envelope delay or true time delay of any information signal, without restrictions. But real analog networks do not behave like this: their phase response curves are more or less non-linear, which translates into deviations from constant group delay and causes signal distortion. So for the interpretation envelope delay = group delay to be valid, essentially all the signal power must be contained within a frequency interval over which the phase response is linear and hence the group delay is constant. In that case the output signal envelope is delayed by the group delay but is otherwise an exact replica of the input envelope. However, if the phase response is non-linear and the signal envelope is thereby distorted, measurement and even definition of the envelope delay become more or less arbitrary, because they depend on the reference points on the envelope chosen for the measurement, and corresponding points cannot easily be defined. It follows that if the signal falls within a highly non-linear portion of the phase response curve, the signal envelope undergoes a significant change of shape and its delay can neither be easily defined and measured nor directly related to the nominal group delay value at the carrier frequency.
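
To make the definition concrete, here is a minimal Python sketch for a single first-order all-pass section - a generic textbook example with an arbitrary 750 Hz corner frequency, not one of the circuits discussed in this article. The group delay is taken as the negative numerical derivative of the phase response, and it is clearly not constant over the few hundred hertz occupied by a keyed CW signal.

# Phase delay vs. group delay of a first-order all-pass H(jw) = (a - jw)/(a + jw).
import numpy as np

f0 = 750.0                              # all-pass corner frequency in Hz (arbitrary)
a = 2 * np.pi * f0
f = np.linspace(100, 2000, 2000)        # audio frequencies of interest
w = 2 * np.pi * f

phase = -2 * np.arctan(w / a)           # phase response in radians
phase_delay = -phase / w                # seconds, for a steady sine at each frequency
group_delay = -np.gradient(phase, w)    # seconds, numerical -d(phase)/d(omega)

for probe in (500, 800, 1100):
    i = np.argmin(np.abs(f - probe))
    print(f"{probe:5d} Hz: phase delay {phase_delay[i] * 1e3:.3f} ms, "
          f"group delay {group_delay[i] * 1e3:.3f} ms")
# The group delay changes by roughly a factor of two between 500 Hz and 1100 Hz,
# so it cannot be read as a single envelope delay for a wideband keyed signal.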

practical circuits

Several audio circuits for pseudo-stereo reception of CW signals have been published up to now. Most of them are quite simple, using a low-pass filter in one channel and a high-pass filter in the other in order to produce some sort of amplitude dispersion. The anticipated interaural level difference ILD only puzzles our brains, because at the low frequencies involved we neither need it nor are used to it; and together with the ever-present radio noise - separated into hissing noise in one ear and dull noise in the other - the result is a very strange, unpleasant and unnatural sound sensation.

A more elaborate design has been published in the RSGB's amateur radio magazine RadCom [3]. I have extensively studied the circuit and the underlying work by F. Charman, G6CJ - his patent Improvements relating to Radio Telegraph Receivers (GB 916843) as well as the stereocode processor that emerged from his co-operation with R. Harris, G3OTK. G6CJ was convinced that not the interaural phase delay but the envelope delay is the dominant cue for sound localization. The stereocode processor is designed for a group delay that decreases with frequency in one channel and increases in the other, so that the two curves form an "X" with a crossover at about 750 Hz and a nominal difference of ±0.5 ms - about the time it takes for sound from a lateral source to reach the far ear - at the two corner frequencies of about 500 Hz and 1000 Hz respectively. According to the concept, this differential group delay, together with the applied amplitude dispersion, should give a spatial hearing sensation with low and high tones progressively displaced to either side of the center frequency.

If we supply the unit with an audio carrier around 750 Hz modulated with a very low-frequency sine wave, the envelope of the output signal is an almost perfect copy of the input envelope. We can measure the signal delay between, for example, the peak amplitude points on the envelopes of the output and input signals, and it will be almost equal to the nominal group delay at the carrier frequency. That is because, although the group delay of the device is nowhere constant over a substantial frequency interval, the bandwidth of the input signal is almost zero, so it is not distorted. However, if we supply it with a square-wave keyed carrier - for example a string of CW dots from the receiver - the envelope of the output signal is heavily distorted: its edges no longer resemble a square wave but slowly rise and fall in exponential fashion. That is because the square-wave keying envelope contains many harmonics of the keying frequency and thus occupies a large bandwidth. Now, which reference points on the envelopes should we choose for the measurement of the signal delay? (A qualitative simulation of this experiment is sketched below.)

While the stereocode processor circuit perfectly meets its design goals for the nominal group delay, it fails to realize the anticipated CW signal envelope delay for the reasons explained. The differential envelope delay can neither be defined nor measured, nor simply related to the nominal differential group delay. These facts can easily be verified by SPICE simulation. The same applies to the circuits in G6CJ's patent. But even if the design did realize the anticipated envelope delay, it would still be inferior, because envelope delay is not the dominant cue for CW sound localization, as already explained.
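
The dot-string experiment described above can be reproduced qualitatively in software. The following Python sketch assumes a generic cascade of first-order digital all-pass sections with arbitrarily chosen corner frequencies - not the actual stereocode processor network - and compares the envelope rise time of a hard-keyed 800 Hz carrier before and after the cascade; the strongly frequency-dependent group delay smears the keying edges out.

# A hard-keyed carrier through a dispersive all-pass cascade: the envelope
# edges no longer resemble the keying square wave (qualitative illustration).
import numpy as np
from scipy.signal import hilbert, lfilter

fs = 8000
dot = 0.040                                        # ~30 wpm dot length
t = np.arange(0, 0.4, 1 / fs)
keying = ((t // dot) % 2 == 0).astype(float)
x = keying * np.sin(2 * np.pi * 800 * t)           # square-keyed 800 Hz carrier

def allpass1(sig, f_break, fs):
    """First-order digital all-pass with -90 degrees phase shift at f_break."""
    c = (np.tan(np.pi * f_break / fs) - 1) / (np.tan(np.pi * f_break / fs) + 1)
    return lfilter([c, 1.0], [1.0, c], sig)

y = x
for fb in np.linspace(300, 1500, 40):              # arbitrary, heavily dispersive cascade
    y = allpass1(y, fb, fs)

def rise_time(sig):
    """10%-90% rise time (s) of the Hilbert envelope around the key-down edge at 80 ms."""
    env = np.abs(hilbert(sig))[int(0.075 * fs):int(0.120 * fs)]
    lo = np.argmax(env > 0.1 * env.max())
    hi = np.argmax(env > 0.9 * env.max())
    return (hi - lo) / fs

print(f"input  envelope rise time ≈ {rise_time(x) * 1e3:.2f} ms")
print(f"output envelope rise time ≈ {rise_time(y) * 1e3:.2f} ms")
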
The stereocode processor does indeed generate some sort of frequency-dependent spatial hearing effect, but it is caused by an arbitrary interaural phase delay instead of the intended envelope delay, together with the amplitude dispersion. This amplitude dispersion is not only useless and unnatural at the low frequencies involved, it also contradicts the information derived from the interaural phase delay IPD, which is the most important localization cue but ironically the only one the circuit is not designed for. In fact, according to SPICE simulation its IPD alone would yield the impression of a signal centered at 250 Hz, slowly moving to one side only, with a maximum azimuth of 37° at 550 Hz, and coming back to the center at 1200 Hz.

a superior circuit design

Shown in fig. 1 is the codephaser circuit, which I designed for a proper interaural phase delay IPD as the only localization cue, giving a very natural and pleasant spatial sound sensation. The nominal differential phase response is 0° at the crossover frequencies of 580 Hz and 1100 Hz, and +90° / -180° at 400 Hz / 800 Hz, giving azimuthal angles of +90° / -90° respectively (a numeric check of these figures follows below). Fig. 2 shows the circuit-board etching pattern together with the parts list. Construction is straightforward, but do not try to use common-tolerance resistors and capacitors at the places marked with an asterisk (*): high-quality 1% resistors and 2.5% capacitors are mandatory here, otherwise you will not get the predicted results! The tolerance of all other resistors and capacitors is not critical, but do not change their values, since this could upset the phase response. Fig. 3 depicts the parts placement on the board and its wiring with the switches and jacks.

Align the unit as follows:

1) Set the LEVEL trimpot to its minimum and the BALANCE trimpot to mid position, connect a receiver and stereo headphones, but no power supply yet.
2) Identify and mark the IN and OUT positions of the IN/OUT switch: with the codephaser IN there should be no receive audio in the headphones, with the codephaser OUT the original receiver audio is heard.
3) Switch the codephaser OUT and connect a power supply, then tune the receiver to a steady carrier at a comfortable pitch and volume.
4) Switch IN/OUT back and forth, adjusting the LEVEL trimpot for equal volume in both positions.
5) Switch the codephaser IN and adjust the BALANCE trimpot for equal volume on both channels.

Go through steps 4) and 5) several times.
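
As a rough check of the nominal phase figures quoted above, the following Python sketch converts the codephaser's differential interaural phase at its design frequencies into the azimuth implied by the relations from the sound localization section; the arcsine argument is clipped where the implied phase delay exceeds the maximum natural ITD of 0.63 ms. The 400 Hz and 800 Hz points come out essentially fully lateral, the crossover frequencies come out centered.

# Azimuth implied by the codephaser's nominal differential interaural phase,
# using psi = arcsin(IPD x 1588 Hz) from the sound localization section.
import numpy as np

def azimuth_deg(phi_deg, f_hz, max_itd=0.63e-3):
    """Azimuth implied by an interaural phase angle phi at frequency f."""
    ipd = phi_deg / (360.0 * f_hz)            # interaural phase delay in seconds
    s = np.clip(ipd / max_itd, -1.0, 1.0)     # delays beyond 0.63 ms clip to +/-90 degrees
    return np.degrees(np.arcsin(s))

# Nominal differential phase of the codephaser at its design frequencies:
for f, phi in [(400, +90), (580, 0), (800, -180), (1100, 0)]:
    print(f"{f:5d} Hz, {phi:+4d} deg -> azimuth ≈ {azimuth_deg(phi, f):+6.1f} deg")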

fig. 1. Schematic diagram of the codephaser.

Parts list:
2x 2.7 Ω resistor
3x 16 Ω resistor
2x 1 kΩ resistor
2x 2.2 kΩ 1% resistor
2x 4.7 kΩ resistor
4x 10 kΩ resistor
2x 10 kΩ 1% resistor
1x 15 kΩ resistor
2x 20 kΩ resistor
2x 20 kΩ 1% resistor
4x 22 kΩ 1% resistor
2x 27 kΩ 1% resistor
1x 1 kΩ trimpot
1x 10 kΩ trimpot
1x 270 pF capacitor
2x 6.8 nF 2.5% capacitor
4x 10 nF capacitor
6x 10 nF 2.5% capacitor
4x 100 nF capacitor
8x 10 µF 16 V electrolytic capacitor
3x 100 µF 16 V electrolytic capacitor
3x TL081 op-amp IC
2x LM380 amp IC
1x 78L08 regulator IC
1x DPDT (double-pole/double-throw) switch
1x TPDT (triple-pole/double-throw) switch
1x cinch jack (receiver audio)
1x 1/4" phone jack (stereo headphones)
1x coaxial DC jack (+12 V)

fig. 2. Circuit-board etching pattern (left) and parts list (right) for the codephaser. The circuit board measures 55 x 79 mm. The etching pattern is shown in X-ray view from the component (non-foil) side of the board; black areas represent unetched copper foil. Caution: when printing this page as a film, you must select Page Scaling: None from the drop-down list in the Adobe Reader print dialog box, otherwise the pattern will not be printed at the correct size!

fig. 3. Magnified view of the codephaser circuit board showing the parts placement and its wiring with the switches and jacks. The board is shown in X-ray view from the component (non-foil) side; grey areas represent unetched copper foil on the soldering (foil) side.

The amplitudes of both channels should be equal, and the phase relation should vary with frequency, being in phase at 580 Hz / 1100 Hz and in anti-phase at 800 Hz. Don't expect astounding but rather subtle effects, since spatial sound sensations are the normal condition for our auditory system. And it will certainly take some time and several listening sessions before you can fully appreciate this new dimension of CW reception.

Close your eyes and play with the switches while tuning through CW signals; pile-ups are especially impressive! With increasing frequency, a signal moves from one side at 400 Hz to the center at 580 Hz and on to the other side at 800 Hz, where it changes direction to come back again to the center. The direction can be reversed with the NORM/REV switch. The sound does not seem to originate from the earphones as usual, but from the inside of your head. Multiple CW signals in the receiver passband are spread out, each holding a specific position, while static crashes appear randomly distributed throughout the entire space - a real stereo sensation with depth and presence that makes CW copy unique. Use a receiver bandwidth as broad as possible, since narrow filters degrade the spatial sound sensation. Subjective selectivity should now also be improved by the cocktail party effect [1, 2, 4], which is made possible by the exquisite localization abilities of our auditory system.

references

1. P. Hawker, G3VA, "The cocktail party effect", Radio Communication, August 1973, p. 548.
2. P. Hawker, G3VA, "Subjective selectivity - or more cocktail parties", Radio Communication, October 1973, p. 694.
3. F. Charman, G6CJ, and R. Harris, G3OTK, "Subjective selectivity and stereocode", Radio Communication, September 1975, p. 674.
4. P. Hawker, G3VA, "Binaural cocktail parties", RadCom, September 2005, p. 71.

File: DJ5IL_rt001.pdf - Original version: August 2006 - Revisions: 8.4.2010, 23.11.2010.