Basis of Processing Sound Strategies

Essay  •  September 27, 2010  •  2,299 Words (10 Pages)


Introduction to Coding Strategies:

D.J. Allum

Coding strategies define the way in which acoustic sounds in our world are transformed into electrical signals that we can understand in our brain. The normal-hearing person already has a way to code acoustic sounds when the inner ear (the cochlea) is functioning: the cochlea is the sensory organ that transforms acoustic signals into electrical signals. A deaf person, however, does not have a functioning cochlea, and the cochlear implant takes over its function. Technically, it is relatively easy to send electrical current through implanted electrodes. The more difficult part is to make the electrical signals carry the appropriate information about speech and other sounds. That responsibility falls to coding strategies: the more efficient the coding strategy, the better the chance that the brain will interpret the information as having meaning. Without meaning, sound is only unwanted noise.

Some basic vocabulary is useful in understanding coding strategies:

Frequency. Speech is composed of a range of frequencies, from high-frequency sounds (sss, piii) to low-frequency sounds (ah). These frequencies also occur in the sounds of our environment. The speech-frequency range runs from about 250 to 6,000 hertz (Hz).

Amplitude. The amplitude, or intensity, of a sound determines how loud it is heard. The usual range from the softest to the loudest speech sound is about 30 dB; the normal range of human hearing is around 120 dB.

Tonotopic. A special characteristic of the cochlea and the auditory nerve: the apical region of the cochlea (and the nerve near this region) is more sensitive to low frequencies, while the basal region is more sensitive to high frequencies. Moving from the most basal to the most apical region is thus a progression from high-to-low frequency sensitivity.

Filters. Filters are used to divide acoustic signals electronically into different frequency ranges. For instance, a speech-frequency range of 4,000 Hz could be divided among 10 filters, each covering 400 Hz.

Stimulation Rate. The number of times per second an electrode is turned on and off, i.e., activated with electrical stimulation.
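The even division described under Filters can be sketched in a few lines of Python. This is only an illustration of the arithmetic; the `filter_bands` helper and its parameters are hypothetical, not part of any implant system described in this essay.

```python
# Hypothetical sketch: dividing a speech-frequency range evenly among filters.
def filter_bands(total_range_hz, n_filters, low_edge_hz=0):
    """Split a frequency range into n_filters equal-width (low, high) bands."""
    width = total_range_hz / n_filters
    return [(low_edge_hz + i * width, low_edge_hz + (i + 1) * width)
            for i in range(n_filters)]

# A 4,000 Hz range divided by 10 gives ten filters of 400 Hz each:
bands = filter_bands(4000, 10)
print(bands[0])  # -> (0.0, 400.0)
```

Real implant processors use unequal (roughly logarithmic) band widths, but the equal-width case matches the example in the definition above.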

The normal cochlea is like a series of filters. Sounds with high frequencies fall into filters at the basal end of the cochlea, and those with low frequencies fall into filters at the apical end, i.e., in a tonotopic arrangement. Since the cochlea cannot accomplish this for a deaf person, the cochlear implant takes its place. It is important to remember that the auditory nerve remains tonotopic even when the cochlea cannot transmit information because of deafness: the nerve lies waiting for stimulation to arrive at a certain place in the cochlea. Thus, a series of electrodes is placed in the cochlea, each electrode associated with a place (basal or apical) and with a filter (high-frequency to low-frequency). This is how the auditory nerve receives information. Because speech is composed of different frequencies and is therefore normally analyzed in different parts of the cochlea (in tonotopic order), a coding strategy needs to divide speech electronically into different frequency bands and then send the information to different places along the cochlea.
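The band-to-place assignment just described can be sketched as a simple mapping. This is a hypothetical toy (real implants use clinically fitted frequency-allocation tables); electrode 1 here stands for the most apical contact and the highest-numbered electrode for the most basal.

```python
# Hypothetical sketch: assigning frequency bands to electrodes in tonotopic
# order. Low-frequency bands go to apical electrodes, high-frequency to basal.
def tonotopic_assignment(bands):
    """bands: list of (low, high) tuples in Hz.
    Returns {electrode_number: band}, electrode 1 being the most apical."""
    ordered = sorted(bands)  # ascending frequency
    return {i + 1: band for i, band in enumerate(ordered)}

bands = [(0, 400), (400, 800), (800, 1200), (1200, 1600)]
assignment = tonotopic_assignment(bands)
# electrode 1 (apical) -> (0, 400); electrode 4 (basal) -> (1200, 1600)
```

Sorting the bands first means the mapping is tonotopic regardless of the order in which the filters are listed.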

The normal cochlea is also an amplitude analyzer. The relative amplitude (loudness) of sounds is very important: the s-sound, for example, is always softer than ah, and if we changed that relationship within a word, it would no longer be the same word. The speech coding strategy must therefore be able to analyze the different amplitudes of speech and other sounds.
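These amplitude relationships are conventionally expressed in decibels. A minimal sketch of the conversion (the `relative_db` helper is illustrative, not taken from the essay):

```python
import math

# Hypothetical sketch: the level difference between two sounds, in decibels,
# computed from the ratio of their amplitudes (20 * log10 for amplitude).
def relative_db(amplitude, reference):
    return 20 * math.log10(amplitude / reference)

# A soft /s/ at one tenth the amplitude of a vowel is 20 dB softer:
print(relative_db(0.1, 1.0))  # -> -20.0
```

A coding strategy must preserve such level differences when it compresses the roughly 30 dB range of speech into the much narrower electrical dynamic range of the stimulated nerve.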

The next step is to send the information to the brain. This is determined by the firing of a nerve, or of a group of nerves working together. The cochlear implant activates the nerve with its stimulation rate. It is possible to stimulate the nerve so that it fires with every stimulation pulse, or to over-drive it so that it is forced to share the information with a group of nerves: while one nerve is resting, another fires, and so forth, and in this way the group begins to respond as the normal-hearing ear does.
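The sharing behavior described above can be sketched with a toy simulation. This is a deliberate simplification (real neural refractoriness is stochastic and continuous in time); it only shows how a rest requirement forces a group of nerves to take turns.

```python
# Hypothetical sketch: nerves with a refractory period sharing a pulse train.
def simulate_firing(n_pulses, n_nerves, refractory=1):
    """Return which nerve fires on each pulse. A nerve must rest `refractory`
    pulses between firings; None means every nerve was still resting."""
    last_fired = [-refractory - 1] * n_nerves  # all nerves start rested
    fired = []
    for t in range(n_pulses):
        for n in range(n_nerves):
            if t - last_fired[n] > refractory:
                last_fired[n] = t
                fired.append(n)
                break
        else:
            fired.append(None)
    return fired

print(simulate_firing(4, 2))  # -> [0, 1, 0, 1]  (two nerves alternate)
```

With `refractory=0` a single nerve fires on every pulse; with a nonzero refractory period and a fast pulse train, the nerves alternate, which is the over-driven, shared response the paragraph describes.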

Let us summarize what is needed from a coding strategy:

Analysis

Divide speech into different frequency bands defined by filters.

Determine the amplitude relationships of the sounds within the filters (i.e., /sss/ will always be softer than /uuu/).

Transform

Define where to send stimulation (tonotopic location).

Define how ...
