Part 1: Image-Mapped Tutorial
The vibrations of molecules in the environment generate waveforms that are transmitted through physical media such as air or water. The ear of the auditory system is responsive to these "sound" waveforms. Sound waves are described by their amplitude, wavelength, and purity; in general, these characteristics determine the perception of loudness, pitch, and timbre, respectively.
The wavelength of sound is described in terms of frequency, measured in cycles per second, or Hertz (Hz). In general, the higher the frequency of a sound wave, the higher the pitch, although wave amplitude also affects our perception of pitch. The human auditory system is responsive to waveforms in the frequency range of 20 Hz to approximately 20,000 Hz, with sensitivity falling off at the edges of this range. The frequency ranges of other organisms differ from the human range, which is why our canine pets often respond, mysteriously, to events of which we are unaware!
Increasing the amplitude of a sound wave, measured in decibels (dB), generally increases the perception of loudness; in general, perceived loudness doubles with about every 10 dB increase. For most humans, zero dB marks the threshold for the perception of sound, although loudness ultimately depends on both amplitude and frequency; the human ear is most sensitive to sounds between 1000 and 5000 Hz. A soft whisper is approximately 30 dB. Heavy traffic and factory noise typically register at 80 dB. Ear damage is likely following prolonged exposure (2 hours) to sounds of 100 dB, such as those made by a chainsaw, whereas the sound level at a rocket launch pad (170 dB) will inevitably cause some immediate hearing loss.
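The doubling-per-10-dB rule above can be put into a minimal sketch (a hypothetical helper, not part of the tutorial, and only an approximation of real psychoacoustics):

```python
def loudness_ratio(db_a: float, db_b: float) -> float:
    """Approximate ratio of perceived loudness of sound A to sound B,
    assuming perceived loudness doubles for every 10 dB increase."""
    return 2 ** ((db_a - db_b) / 10)

# Heavy traffic (80 dB) vs. a soft whisper (30 dB): a 50 dB difference,
# so traffic sounds roughly 2**5 = 32 times louder.
print(loudness_ratio(80, 30))  # → 32.0
```

Note that the rule is a rough perceptual generalization, not a physical law; actual loudness judgments also depend on frequency, as the text points out.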
The human ear is also sensitive to sound purity. A pure sound has a single frequency, such as that generated by a tuning fork. In most instances, however, sounds are complex mixtures of multiple frequencies, resulting in the perceptual quality called timbre. A tone of identical loudness and pitch sounds different when generated by a violin versus a flute; this difference is timbre.
Knowledge of how the perceptual qualities of hearing are derived from neurophysiological events is weaker than our knowledge of visual perception. Among the three perceptual qualities of pitch, loudness, and timbre, pitch has been studied the most. Two theories have been offered to explain the perception of pitch: the place theory and the frequency theory. Place theory (Hermann von Helmholtz, 1863) maintains that the perception of pitch depends on the vibration of different portions of the membrane formed by the receptive cells of the inner ear; that is, receptive cells in each region of the membrane are specialized for the detection of specific sound frequencies. Frequency theory (Rutherford, 1886) holds that pitch perception corresponds to the rate of vibration of all receptive cells along the inner ear membrane. For example, a sound of 2000 Hz would cause the whole membrane to vibrate at a rate of 2000 Hz; the brain then detects the frequency from the rate of neuronal firing, which matches the rate of vibration.
After many years of debate between the proponents of each theory, the findings suggest that both are partly valid explanations of the mechanism underlying pitch perception. Place theory is accurate, except that receptive cells along the inner membrane do not respond independently; they vibrate together, as the frequency theory suggests. Sound waves travel along the membrane, peaking at a given region depending on the frequency. Likewise, the frequency theory was weakened when it was discovered that receptor cells cannot fire at rates matching the higher frequency range of hearing. The volley principle was offered to deal with this weakness: different groups of receptive cells may fire in rapid succession, and this volley of impulses can signal the high frequencies that single receptive cells are incapable of generating. Current thinking maintains that sounds under 1000 Hz are translated into pitch through frequency coding; sounds between 1000 and 5000 Hz are coded through a combination of frequency and place coding; and sounds over 5000 Hz are coded by place alone. As is often the case, competing theories prove complementary.
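The three coding regimes in the current view can be summarized in a short sketch (a hypothetical helper; the boundary values are the approximate ones given in the text):

```python
def pitch_coding(freq_hz: float) -> str:
    """Which neural code is thought to carry pitch at a given frequency.
    Boundaries (1000 Hz, 5000 Hz) are approximate, per the current view."""
    if freq_hz < 1000:
        return "frequency coding"        # firing rate tracks the sound wave
    elif freq_hz <= 5000:
        return "frequency + place coding"
    else:
        return "place coding"            # location of peak membrane vibration

print(pitch_coding(440))   # → frequency coding
print(pitch_coding(3000))  # → frequency + place coding
print(pitch_coding(8000))  # → place coding
```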
The human ear is divided into three main regions that differ in their contribution to auditory processing: the outer ear, middle ear, and inner ear. The outer ear is structured to collect and channel sound energy, in the form of vibrating air molecules, to the neuronal tissue within that is specialized to encode the information. The sound-collecting compartment of the outer ear is called the pinna. This cone functions poorly in humans, which is why the elderly may cup a hand to the ear when trying to hear better. The middle ear specializes in transmitting sound from the outer ear to the oval window opening of the inner ear via the vibration of movable bones called ossicles. The inner ear conducts the same information to the receptor neurons via waves in a fluid.
This figure illustrates some of the primary structures of the middle and inner ear, and describes the basic process contributed by each to audition.
Whereas the eye is an organ that synthesizes or "puts together" sensory information, the ear is an organ of analysis or breakdown. When light composed of two different wavelengths is processed by the visual system, the wavelengths are mixed and we perceive a single color. When a tone composed of two different frequencies is processed by the auditory system, we hear both tones, not a blend of the two. It is this quality of auditory processing that underlies our ability to differentiate identical tones generated by different instruments; to distinguish the talent of Placido Domingo from that of Luciano Pavarotti, the sound of the oboe from the bassoon. This ability to analyze a complex waveform, breaking it down into component sine waves, is called Fourier analysis after its discoverer. Jean Baptiste Joseph de Fourier (1768-1830), the French mathematician and physicist, formulated the Fourier theorem in 1826: a series of mathematical formulae that describe complex waveforms in terms of individual sine waves of unique amplitude and frequency. A plot of the component sine waves provides the spectrum of the sound, a principle now employed by computers for speech recognition. Recent evidence suggests that Fourier analysis may also be employed by the right hemisphere for the recognition of differing visual patterns. This discovery by F.W. Campbell of the University of Cambridge has led to an enhanced understanding of the unique processing capabilities of the right hemisphere cortex.
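As a minimal illustration of Fourier analysis (a sketch in Python assuming NumPy; none of this code comes from the tutorial), a complex waveform built from two sine waves can be decomposed back into its component frequencies, much as the ear analyzes a two-frequency tone into both tones:

```python
import numpy as np

rate = 8000                     # samples per second
t = np.arange(rate) / rate      # one second of time points

# A complex waveform: a 440 Hz sine plus a quieter 1200 Hz sine.
wave = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)

# The discrete Fourier transform gives the spectrum: the amplitude
# present at each frequency.
spectrum = np.abs(np.fft.rfft(wave))
freqs = np.fft.rfftfreq(wave.size, d=1 / rate)

# The two largest spectral peaks recover the component frequencies.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks.tolist()))  # → [440.0, 1200.0]
```

The spectrum produced here is exactly the "plot of the component sine waves" described above, and the same transform underlies computer speech recognition.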
Hermann Helmholtz (1821-1894), the originator of the place theory of auditory perception, is a scientist of particular notability. Among the contributions of this German physiologist and physicist are the first measurement of the rate of conduction of neuronal signals, the invention of the ophthalmoscope for examining the retina, the invention of the ophthalmometer for measuring the curvature of the cornea, and the founding of the experimental study of perception. His study of auditory perception led to the resonance theory of pitch discrimination. Impressively, he achieved these results before the development of electronic instruments for producing and measuring sound waves. Helmholtz was adept at pairing penetrating psychological insight with exceptionally effective physical measurement in ways that enhanced our understanding of fundamental aspects of brain function.
Suggestions for further study
Beranek, L.L. (1966, December). Noise. Scientific American, 215(6), 66-74.
Borg, E., & Counter, S.A. (1989, August). The middle-ear muscles. Scientific American, 261(2), 74-80.
Carlson, S. (1996, December). Dissecting the brain with sound. Scientific American, 112-115.
Gordon, B. (1972, December). The superior colliculus of the brain. Scientific American, 227(6), 72-82.
Hudspeth, A.J. (1983, January). The hair cells of the inner ear. Scientific American, 248(1), 54-64.
Konishi, M. (1993, April). Listening with two ears. Scientific American, 268(4), 66-73.
Loeb, G.E. (1985, February). The functional replacement of the ear. Scientific American, 252(2), 104-111.
Oster, G. (1973, October). Auditory beats in the brain. Scientific American, 229(4), 94-102.
Parker, D.E. (1980, November). The vestibular apparatus. Scientific American, 243(5), 118-135.
Rennie, J. (1993, July). Healing hearing. Scientific American, 269(1), 26-27.
Warren, R.M., & Warren, R.P. (1970, December). Auditory illusions and confusions. Scientific American, 223(6), 30-36.
Yin, T.P. (1969, January). The control of vibration and noise. Scientific American, 220(1), 98-106.
(Auditory analysis and speech communication)
Hideki Kawahara - A summary of research conducted from 1993 to 1996 by the ASC group (see below) concerning (1) auditory information representation; (2) auditory analysis in the perception of spoken language; (3) interactions between speech perception and production.
Home page for the Auditory Analysis and Speech Communications (ASC) Group:
(The Skinny on Deaf People's Inner Voice)
Hannah Holmes, Discovery Channel Online - The inner voice of people with profound hearing loss is described.
(Tinnitus Location Found in the Brain)
N. Seppa, Science News Online, January 1998 - The neuropsychology of 'ringing in the ears' is discussed.
(Processing Deficits in Learning Disorders)
Sponsored by LD OnLine, a service of The Learning Project at WETA, Washington, D.C., in association with The Coordinated Campaign for Learning Disabilities. LD OnLine is made possible in part by support from The Emily Hall Tremaine Foundation. This site contains a variety of links to information on central auditory processing disorders and their relationship to learning disabilities.