What is the difference between timbre and pitch?

In the past, tuning varied with both time and geographical location. In orchestras, there has been a tendency for the frequency of A to rise. Most of us agree that changing the frequency by a given proportion produces the same pitch change, no matter what the starting frequency is. For more about pitch, frequency and wavelengths, go to Frequency and pitch of sound.
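That proportional rule is why musical intervals are measured on a logarithmic scale. A minimal sketch in Python (the function name and the example frequencies are mine, chosen for illustration):

```python
import math

def interval_in_cents(f1, f2):
    """Musical interval between two frequencies, in cents
    (100 cents = 1 equal-tempered semitone, 1200 cents = 1 octave)."""
    return 1200 * math.log2(f2 / f1)

# Doubling the frequency is always an octave, whatever the start frequency.
print(interval_in_cents(220, 440))
print(interval_in_cents(440, 880))
# A ratio of 2**(1/12) is always one semitone (100 cents).
print(interval_in_cents(440, 440 * 2 ** (1 / 12)))
```

Because only the ratio f2/f1 enters the formula, the same frequency ratio always gives the same interval.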

Amplitude, intensity and loudness

The top graph is the microphone signal, which is proportional to the sound pressure, as a function of time.

Notice the substantial differences here. The orchestral bell and the guitar share the property that they reach their maximum amplitude almost immediately when struck or plucked, respectively. After that, no more energy is put into them; energy is continuously lost as sound is radiated, and much more is lost to internal losses, so the amplitude decreases with time.

Of the others, the bassoon has the next fastest attack: it reaches its maximum amplitude rather quickly. We should add that, for the wind instruments and violin, the rapidity of the start depends on the details of how one plays a note. Nevertheless, the faster start of the bassoon is typical. This series shows us that the envelope is very important in determining the timbre. Another contribution to timbre comes from the spectrum, which is the distribution of amplitude or power or intensity as a function of frequency.
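The fast-attack-then-decay envelope described above can be sketched numerically. A toy example in Python using numpy; the decay rate, frequency, sample rate, and frame length are made-up illustration values, not measurements from the recordings discussed:

```python
import numpy as np

# A plucked-string-like tone: fixed frequency, exponentially decaying
# amplitude. Decay rate, frequency, and sample rate are made-up values.
fs = 8000
t = np.arange(fs) / fs                 # one second of samples
tone = np.exp(-3 * t) * np.sin(2 * np.pi * 220 * t)

# Crude envelope follower: rectify, then take the peak in 50 ms frames.
frame = 400
envelope = np.abs(tone).reshape(-1, frame).max(axis=1)

# The envelope is at its maximum almost immediately and then decays,
# like the guitar and bell described above.
print(envelope[0] > envelope[5] > envelope[-1])
```

A wind-instrument note would instead show a gradual rise to maximum amplitude over the first frames.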

This is shown in the lower graph. With the exception of the bell, the spectra all show a series of equally spaced narrow peaks, which we call harmonics, about which more later. Note that the spectra are different, particularly at high frequencies. The spectrum contributes to the timbre, but usually to a lesser extent than the envelope.
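The equally spaced harmonic peaks can be illustrated by synthesizing a tone with a few harmonics and taking its Fourier transform. A sketch in Python with numpy; the fundamental, amplitudes, and sample rate are arbitrary choices, not values from the instruments shown:

```python
import numpy as np

# Synthesize one second of a 220 Hz tone with three harmonics.
# Sample rate, fundamental, and amplitudes are arbitrary illustration values.
fs = 8000
t = np.arange(fs) / fs
f0 = 220
signal = sum(a * np.sin(2 * np.pi * k * f0 * t)
             for k, a in enumerate([1.0, 0.5, 0.25], start=1))

# Magnitude spectrum: for a harmonic tone it shows equally spaced
# narrow peaks at integer multiples of the fundamental.
spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The three strongest components sit at the first three harmonics.
peak_freqs = sorted(float(f) for f in freqs[np.argsort(spectrum)[-3:]])
print(peak_freqs)
```

A bell's spectrum, by contrast, would show peaks that are not integer multiples of a single fundamental.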

These vibrations themselves can also be called sound. Acoustic instruments generally produce sound when some part of the instrument is struck, plucked, bowed, or blown. Electronic instruments produce sound indirectly: they produce variations in electrical current which are amplified and sent through a speaker. The three qualities of sound are pitch, timbre (tone color), and loudness.

Pitch

Pitch is the quality of sound which makes some sounds seem "higher" or "lower" than others. Pitch is determined by the number of vibrations produced during a given time period. The vibration rate of a sound is called its frequency; the higher the frequency, the higher the pitch.

Frequency is often measured in units called hertz (Hz). A sound source that vibrates a given number of times per second has a frequency of that many hertz.

The average person can hear sounds from about 20 Hz to about 20,000 Hz. The upper frequency limit drops with age. The human ear is very adept at filling gaps: there is a body of evidence to show that if the lowest-frequency partial is missing from a complex tone, the ear will attempt to fill it in. Listening to speech or music on a poor-quality transistor radio that cannot reproduce low frequencies provides another example.

The programme is rendered intelligible by the ear filling in the lowest partial tones. That rapid speech is intelligible suggests that the synthesis in the ear is practically instantaneous.

Timbre

Timbre is a French word that means "tone color"; it is pronounced tam' ber. Timbre is the quality of sound which allows us to distinguish between different sound sources producing sound at the same pitch and loudness. The vibration of sound waves is quite complex; most sounds vibrate at several frequencies simultaneously.

The association of simple-ratio tunings with the perceived smoothness of chords was first made by Persian musicians in the 13th century AD (Partch). For the remainder of the second millennium AD, music theorists debated the benefits of various tuning strategies in terms of the relative roughness of intervals created at different degrees of musical scales, and in terms of the possibilities that musical scales afforded for musical modulation, that is, for shifting melodic patterns across different starting notes (Chalmers; Partch). Dissonance is a strong negative emotional response to stimuli, and during the 20th century many researchers produced behavioral and neurophysiological evidence supporting the Helmholtz roughness model of dissonance.

A particularly influential paper by Plomp and Levelt showed that dissonance ratings for pairs of pure tones were greater for intervals of less than a critical bandwidth. This finding was taken as evidence that beating, or comodulation, of tones that are not resolved by auditory filter channels causes dissonance. However, the same data could also be interpreted as showing that failure to spectrally resolve auditory information causes dissonance, regardless of whether it also causes roughness (McLachlan et al.).
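Plomp and Levelt's finding is often summarized by a smooth dissonance curve. One common curve-fit parameterization is due to Sethares, whose constants are used in the sketch below; this is an illustrative approximation, not the model used in the studies cited here:

```python
import math

def pair_dissonance(f1, f2, a1=1.0, a2=1.0):
    """Dissonance of two simultaneous pure tones, after Plomp and
    Levelt's curve as parameterized by Sethares. Near zero at unison,
    maximal at roughly a quarter of a critical bandwidth, and small
    again for intervals wider than a critical bandwidth."""
    s = 0.24 / (0.021 * min(f1, f2) + 19)   # scales the curve with register
    x = s * abs(f2 - f1)
    return a1 * a2 * (math.exp(-3.5 * x) - math.exp(-5.75 * x))

print(pair_dissonance(440, 441))   # near-unison: small
print(pair_dissonance(440, 470))   # narrow interval: largest of the three
print(pair_dissonance(440, 880))   # octave: small
```

The curve peaks for narrow separations well inside a critical bandwidth and falls toward zero for wide intervals, matching the rating pattern described above.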

However, rather than showing a preference for simple integer tunings, these studies could also be interpreted as showing that dissonance is greater for the stimuli with spectrally unresolved, one-semitone intervals.

Early in the 20th century, Guernsey performed a series of experiments that presented substantial challenges for the roughness theory of dissonance. The roughness theory proposes that consonance is the absence of dissonance, and so consonance should increase when there are fewer mistuned harmonics in the stimulus.

Guernsey found no evidence that reducing the number of harmonics in the stimulus increased consonance; instead she found strong effects of music training, leading her to suggest that consonance was associated with the familiarity of commonly used musical chords. According to the object-attribute model (OAM), recognition mechanisms start early in auditory processing pathways and prime fine pitch processing based on waveform periodicity.

Music training leads to increased familiarity with common chords, so that successful recognition occurs more often and activates more accurate long-term memory representations that enhance pitch processing (McLachlan). They found that dissonance ratings were inversely proportional to pitch-matching accuracy, moderated by the level of music training.

This result led McLachlan et al. to propose a cognitive incongruence model of dissonance. In this model, failure of spectral recognition mechanisms for unfamiliar chords causes incongruence between spectral and periodicity-based pitch estimates, leading to strong negative affect, or dissonance, in musicians. In contrast, nonmusicians with poor pitch-matching accuracy, who presumably had not learned to associate periodicity cues with pitch, reported no differences in the dissonance of chords with intervals greater than a critical bandwidth (McLachlan et al.).

In the Western music tradition, melodic and harmonic expectations are based on the use of functional harmony, a system of hierarchical tonal relationships based on Pythagorean tuning (Piston). Deutsch and Feroe proposed that long-term memory templates for music scales are hierarchically encoded, with greater emphasis placed on more common musical intervals; McLachlan and Wilson subsequently proposed that memory templates for scales could prime pitch processing in primary auditory cortex to create musical expectancies.

So, in contrast to Terhardt, the cognitive incongruence model of dissonance suggests that sensory and musical dissonance may both arise from negative affect generated by the failure of pitch-priming mechanisms. Under this model, sensory dissonance is generated by the failure of recognition mechanisms for stimulus timbre, whereas musical dissonance is generated by the failure of recognition mechanisms for music melody and scales (McLachlan et al.).

The pons and the deep nucleus of the cerebellum project to the thalamus, which in turn projects to the amygdala (Figure 2; Ramnani; Strick et al.). The amygdala regulates autonomic arousal and, in conjunction with the hippocampal and parahippocampal cortices, the integration of sensory, semantic, and mnemonic operations (Critchley). Neuroimaging research has shown increased amygdala, hippocampal, and parahippocampal activity associated with the experience of dissonance (Blood et al.). This is consistent with autonomic arousal occurring in association with increased stimulus ambiguity due to the failure of recognition mechanisms in the pons and cerebellum.

Zatorre and colleagues reported activation of the dopaminergic brainstem pathways when listeners reported feeling pleasure while listening to music (Blood et al.). Furthermore, Mitterschiffthaler et al. Naturally, consonance will occur only in the absence of dissonance, since dissonance is associated with failure of recognition mechanisms. Note that this definition of consonance could apply to the successful prediction of any musical pattern, such as rhythm or orchestration.

So the experience of consonance could occur within music traditions that do not use hierarchically defined pitch scales such as those found in Western music. However, primitive vertebrates without neocortex display behaviors very similar to human responses to music. Primitive animals such as frogs respond preferentially to specific spectrotemporal structures in mating calls (Castellano et al.).

Human pleasure in musical patterns involves dopaminergic neural pathways (Blood et al.). Finally, marine iguanas are capable of distinguishing the predator alarm calls of mockingbirds from other mockingbird songs for the purposes of initiating escape and alert behaviors (Vitousek et al.).

Such behaviors are generally associated with autonomic arousal to familiar aversive stimuli, such as the fear displayed by rats when they hear a tone that has been paired with a subsequent electrical shock (Quirk et al.).

Berridge and Kringelbach describe the affective keyboard, a neural mechanism by which modulation of sensitivity over an array of sites in the nucleus accumbens by the frontal cortex may initiate either intense dread of or desire for the same stimulus. Aversive responses to familiar or recognized sounds likely engage the nucleus accumbens and amygdala. This brain network is different from that for dissonance, which, as described above, involves failure to recognize the stimulus and likely activates amygdala and hippocampal brain regions via the thalamus (Blood et al.).

Autonomic arousal coupled with feelings of empowerment also occurs in animals when they make aggressive territorial displays and vocalizations, which may explain why some people enjoy making loud and aggressive music (Hsu et al.).

Taken together, the findings discussed above suggest that emotional responses to music in humans arise primarily from ancient brain networks that link the limbic system with recognition mechanisms in the brainstem and cerebellum and with prefrontal cortical regions associated with emotional regulation (Figure 2). Birds, frogs, and reptiles lack neocortex and the higher cerebral auditory processing centers found in humans.

Brainstem and cerebellar networks can learn implicitly by forming neural templates for stimuli (Fiez et al.). Templates for stimuli from one sensory modality may be paired with templates for stimuli from other modalities that co-occur with them with high statistical reliability; in other words, multiple sensory inputs become associated with the same object, event, or behavior. In particular, people who learn musical instruments have better pitch perception (McLachlan et al.).

In recent human evolution, speech has likely driven the rapid enlargement of the ventrolateral portion of the cerebellum in conjunction with the inferior frontal region of the cortex (Leiner et al.).

By enabling humans to process complex patterns of auditory stimuli in cerebral working memory in conjunction with primitive brainstem networks, this circuitry leads to diverse and dynamic music cultures that evoke powerful emotional responses. Similarly, cortico-cerebellar pathways to and from parietal cerebral regions can support associations of implicitly learned templates with visuospatial information (McLachlan et al.). These associations can support spatial representations of pitch height, as suggested by the ANSI pitch definition (American National Standards Institute). In Western music, the frequency of the starting or reference pitch in these spatial scales can be shifted, or modulated, leading to a complex spatial language of pitch relationships known as relative pitch, which generates strong musical expectancies.

Relative pitch is quite distinct from the absolute pitch or fixed frequencies that are used to convey meaning in animal communications. So indeed music is a high art, in which complex expressions are shaped in the cerebrum in response to implicitly learned grammars that are likely stored in cortico-ponto-cerebellar pathways.

Successful prediction of musical gestures results in activation of reward networks in the limbic system, reinforcing learning (Blood et al.). In contrast, music can also generate high autonomic arousal by transgressing implicitly learned musical grammars (Blood et al.). Music appears to be common to all human societies, but humans are the only animals that make music; that is, humans are the only animals that spontaneously exhibit fine pitch discrimination and rhythmic synchronization (Merker et al.).

For example, people who learn to play inharmonic tuned percussion instruments learn to accurately associate pitch with the lowest-frequency partials of those instruments (McLachlan et al.).

The development of musical skill through extensive and repetitive training is consistent with cerebellar function (McPherson), in that the cerebellum implicitly learns to automate perceptual and motor skills, thereby reducing the conscious effort required to understand complex information and perform complex tasks (Ramnani). Finally, the arousing, pleasurable, and often prosocial experiences that music affords, in addition to exercising and sharpening auditory, cognitive, and motor acuity, may have conferred an evolutionary advantage for musicality in humans that has contributed to its widespread distribution across human societies.

I would like to acknowledge the substantial intellectual input of my colleague Professor Sarah Wilson in the development of the new cerebellar model of auditory processing outlined in this chapter.

References

Abdul-Kareem, I. Plasticity of the superior and middle cerebellar peduncles in musicians revealed by quantitative analysis of volume and number of streamlines based on diffusion tensor tractography.

Cerebellum, 10, —.
Ackermann, H. The contribution of the cerebellum to speech production and speech perception: Clinical and functional imaging data. Cerebellum, 6, —.
Aitkin, L. Responses of single units in cerebellar vermis of the cat to monaural and binaural stimuli. Journal of Neurophysiology, 38, —.
Acoustic input to the lateral pontine nuclei. Hearing Research, 1, 67—.
American National Standards Institute. American National Psychoacoustical Terminology.
Antovic, M. Musical metaphors in Serbian and Romani children: An empirical study. Metaphor and Symbol, 24, —.
Arnott, S. Assessing the auditory dual-pathway model in humans. NeuroImage, 22, —.
Assmann, P. Pitches of concurrent vowels. Journal of the Acoustical Society of America, —.
Modeling the perception of concurrent vowels: Vowels with different fundamental frequencies. Journal of the Acoustical Society of America, 88, —.
Beecham, R. Spatial representations are specific to different domains of knowledge.
Berridge, K. Neuroscience of affect: Brain mechanisms of pleasure and displeasure. Current Opinion in Neurobiology, 23, —.
Blackburn, C. Regularity analysis in a compartmental model of chopper units in the anteroventral cochlear nucleus. Journal of Neurophysiology, 65, —.
Blood, A. Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic brain regions. Nature Neuroscience, 2, —.
Cariani, P. Temporal coding of periodicity pitch in the auditory system: An overview. Neural Plasticity, 6, —.
Carlyon, R. Comparing the fundamental frequencies of resolved and unresolved harmonics: Evidence for two pitch mechanisms? Journal of the Acoustical Society of America, 95, —.
Castellano, S. The mechanisms of sexual selection in a lek-breeding anuran, Hyla intermedia. Animal Behavior, 77, —.
Chalmers, J. Divisions of the tetrachord.
Chiandetti, C. Chicks like consonant music. Psychological Science, 22, —.
Covey, E. The monaural nuclei of the lateral lemniscus in an echolocating bat: Parallel pathways for analyzing temporal features of sound. Journal of Neuroscience, 11, —.
Critchley, H. Neural mechanisms of autonomic, affective, and cognitive integration. Journal of Comparative Neurology, —.
Cupchik, G. Shared processes in spatial rotation and music permutation. Brain and Cognition, 46, —.
Pitch perception models. In Plack, Oxenham, & Popper (Eds.). New York: Springer-Verlag.
Multiple period estimation and pitch perception model. Speech Communication, 27, —.
Delgutte, B. Speech coding in the auditory nerve: II. Journal of the Acoustical Society of America, 75, —.
Deterding, D.
Deutsch, D. The internal representation of pitch sequences in tonal music. Psychological Review, 88, —.
Ehret, G. Spectral and intensity coding. In Schreiner (Eds.). New York: Springer.
Fastl, H. Pitch strength of pure tones. In Proceedings of the 13th International Congress on Acoustics (pp. —).
Frequency discrimination of pure tones at short durations. Acoustica, 56, 41—.
Ferragamo, M. Octopus cells of the mammalian ventral cochlear nucleus sense the rate of depolarization. Journal of Neurophysiology, 87, —.
Fiez, J. Impaired non-motor learning and error detection associated with cerebellar damage. Brain, —.
Fleshler, M. Adequate acoustic stimulus for startle reaction in the rat. Journal of Comparative and Physiological Psychology, 60, —.
Fletcher, H. Relation between loudness and masking. Journal of the Acoustical Society of America, 9, 1—.
Fletcher, N. The physics of musical instruments (chapter 3).
Gebhart, A. The role of the cerebellum in language. In Thach (Eds.).
Goldstein, J. An optimum processor theory for the central formation of the pitch of complex tones. Journal of the Acoustical Society of America, 54, —.
Guernsey, M. The role of consonance and dissonance in music. American Journal of Psychology, 40, —.
Hall, J. Central processing of communication sounds in the anuran auditory system. American Zoologist, 34, —.
Handel, S. Timbre perception and auditory object identification. In Moore (Ed.). New York: Academic Press.
Sound source identification: The possible role of timbre transformations. Music Perception, 21, —.
Hannon, E. Familiarity overrides complexity in rhythm perception: A cross-cultural comparison of American and Turkish listeners.
Holcomb, H. Cerebral blood flow relationships associated with a difficult tone recognition task in trained normal volunteers. Cerebral Cortex, 8, —.
Hsu, D. The music of power: Perceptual and behavioral consequences of powerful music. Social Psychological and Personality Science, 6, 75—.
Huang, C. Projections from the cochlear nucleus to the cerebellum. Brain Research, 1—8.
Hutchinson, S. Cerebellar volume of musicians. Cerebral Cortex, 13, —.
Helmholtz, H. On the sensation of tones as a physiological basis for the theory of music (2nd ed.; Ellis, Trans.). London: Dover.
Izumi, A. Japanese monkeys perceive sensory consonance of chords. Journal of the Acoustical Society of America, —.
Johnsrude, I. Functional specificity in the right human auditory cortex for perceiving pitch direction.
Kidd, G.


