The concept of sound and its physical characteristics. Basic sound characteristics. Frequency ν of oscillations of various sound sources

Laboratory work No. 5

Audiometry

The student should know: what sound is and its nature; sources of sound; the physical characteristics of sound (frequency, amplitude, speed, intensity, intensity level, pressure, acoustic spectrum); the physiological characteristics of sound (pitch, loudness, timbre, the minimum and maximum vibration frequencies perceived by a given person, the threshold of hearing, the threshold of pain) and their relationship with the physical characteristics of sound; the human hearing system and theories of sound perception; the sound insulation coefficient; acoustic impedance, absorption and reflection of sound, the reflection and penetration coefficients of sound waves, reverberation; the physical foundations of sound-based examination methods in the clinic and the concept of audiometry.

The student must be able to: use a sound generator to record the dependence of the hearing threshold on frequency; determine the minimum and maximum vibration frequencies they can perceive; take an audiogram using an audiometer.

Brief theory

Sound. Physical characteristics of sound

Sound is the name given to mechanical waves in which the particles of an elastic medium vibrate with frequencies from 20 Hz to 20,000 Hz, perceived by the human ear.



Physical characteristics of sound are those that exist objectively and are not related to the peculiarities of human perception of sound vibrations. They include frequency, amplitude of vibration, intensity, intensity level, speed of propagation, sound pressure, the acoustic spectrum, the reflection and penetration coefficients of sound vibrations, etc. Let us briefly consider them.

1. Oscillation frequency. The frequency of sound vibrations is the number of vibrations of the particles of the elastic medium (in which the sound propagates) per unit time. The frequency of sound vibrations lies in the range 20 - 20,000 Hz. Each individual perceives a certain range of frequencies, usually somewhat narrower than this (beginning slightly above 20 Hz and ending below 20,000 Hz).

2. Amplitude. The amplitude of a sound vibration is the greatest deviation of the oscillating particles of the medium (in which the sound propagates) from the equilibrium position.

3. Intensity of a sound wave (or sound power) is a physical quantity numerically equal to the ratio of the energy transferred by the sound wave per unit time through a unit surface area oriented perpendicular to the velocity vector of the wave, that is:

I = W/(S·t), (1)

where W is the wave energy and t is the time of energy transfer through an area S.

Unit of intensity: [I] = 1 J/(m²·s) = 1 W/m².

Note that the energy, and hence the intensity, of a sound wave is directly proportional to the square of the amplitude A and to the square of the frequency ω of the sound vibrations:

W ∝ A² and I ∝ A²; W ∝ ω² and I ∝ ω².

4. Speed of sound is the speed at which the energy of sound vibrations propagates. For a plane harmonic wave, the phase velocity (the speed of propagation of a given phase of the oscillation (the wave front), for example a maximum or minimum, i.e. a compression or rarefaction of the medium) equals the wave speed. For a complex oscillation (which, by Fourier's theorem, can be represented as a sum of harmonic oscillations), the concept of group velocity is introduced: the speed of propagation of the group of waves, at which the wave transfers energy.

The speed of sound in any medium can be found from the formula:

υ = √(E/ρ), (2)

where E is the modulus of elasticity of the medium (Young's modulus) and ρ is the density of the medium.

When the density of the medium increases (say, 2-fold), the elastic modulus E increases to a greater extent (more than 2-fold); therefore the speed of sound grows with the density of the medium. For example, the speed of sound in water is ≈ 1500 m/s, and in steel 8000 m/s.

For gases, formula (2) can be transformed into the following form:

υ = √(γRT/M), (3)

where γ = C_P/C_V is the ratio of the molar (or specific) heat capacities of the gas at constant pressure (C_P) and at constant volume (C_V);

R is the universal gas constant (R = 8.31 J/(mol·K));

T is the absolute temperature on the Kelvin scale (T = t °C + 273);

M is the molar mass of the gas (for the normal mixture of gases in air, M = 29×10⁻³ kg/mol).
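As a quick check of formula (3): for air (γ ≈ 1.4) at T = 273 K,

υ = √(1.4 × 8.31 × 273 / (29×10⁻³)) ≈ √(1.1×10⁵ m²/s²) ≈ 331 m/s,

in agreement with the value quoted below.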

For air at T = 273 K and normal atmospheric pressure, the speed of sound is υ = 331.5 ≈ 332 m/s. It should be noted that the wave intensity (a vector quantity) is often expressed in terms of the wave speed:

I = W/(S·t) = (W/(S·l))·υ, or I = u·υ, (4)

where S·l is the volume occupied by the wave (l = υt) and u = W/(S·l) is the volumetric energy density. The vector quantity in equation (4) is called the Umov vector.

5. Sound pressure is a physical quantity numerically equal to the ratio of the magnitude F of the pressure force exerted by the vibrating particles of the medium in which the sound propagates to the area S oriented perpendicular to the pressure force vector:

P = F/S, [P] = 1 N/m² = 1 Pa. (5)

The intensity of a sound wave is directly proportional to the square of the sound pressure:

I = P²/(2ρυ), (7)

where P is the sound pressure, ρ is the density of the medium, and υ is the speed of sound in the medium.

6. Intensity level. The intensity level (sound intensity level) is a physical quantity numerically equal to:

L = log(I/I₀), (8)

where I is the sound intensity and I₀ = 10⁻¹² W/m² is the lowest intensity perceived by the human ear at a frequency of 1000 Hz.

According to formula (8), the intensity level L is measured in bels (B): L = 1 B if I = 10 I₀.

The maximum intensity perceived by the human ear is I_max = 10 W/m², i.e. I_max/I₀ = 10¹³, or L_max = 13 B.

More often the intensity level is measured in decibels (dB):

L_dB = 10 log(I/I₀); L = 1 dB at I = 1.26 I₀.

The sound intensity level can also be found from the sound pressure. Since I ∝ P²,

L (dB) = 10 log(I/I₀) = 10 log(P/P₀)² = 20 log(P/P₀), where P₀ = 2×10⁻⁵ Pa (corresponding to I₀ = 10⁻¹² W/m²).
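These relationships are easy to check numerically. Below is a minimal Python sketch; the only fixed inputs are the reference values I₀ and P₀ given above, and the sample arguments are illustrative:

```python
import math

I0 = 1e-12   # threshold intensity, W/m^2
P0 = 2e-5    # threshold sound pressure, Pa

def level_from_intensity(I):
    """Intensity level in dB: formula (8) scaled to decibels."""
    return 10 * math.log10(I / I0)

def level_from_pressure(P):
    """The same level via sound pressure: L = 20 log(P/P0)."""
    return 20 * math.log10(P / P0)

print(level_from_intensity(10))        # pain threshold: 130 dB (13 B)
print(level_from_intensity(1.26e-12))  # ~1 dB, as stated above
print(level_from_pressure(2e-1))       # 80 dB (pressure 10^4 times P0)
```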

7. A tone is a sound that is a periodic process (the periodic oscillations of the sound source need not follow a harmonic law). If the sound source performs a harmonic oscillation x = A sin ωt, the sound is called a simple, or pure, tone. A non-harmonic periodic oscillation corresponds to a complex tone, which, by Fourier's theorem, can be represented as a set of simple tones with frequencies ν₀ (the fundamental tone) and 2ν₀, 3ν₀, etc. (called overtones) with corresponding amplitudes.

8. The acoustic spectrum of a sound is the set of harmonic vibrations, with their frequencies and amplitudes, into which the given complex tone can be decomposed. The spectrum of a complex tone is a line spectrum: it contains the frequencies ν₀, 2ν₀, etc.

9. Noise (audible noise) is a sound consisting of complex, non-repeating vibrations of the particles of an elastic medium. Noise is a combination of randomly changing complex tones. The acoustic spectrum of noise contains practically every frequency in the audible range, i.e. the acoustic spectrum of noise is continuous.

Sound can also take the form of a sonic boom: a short-term (usually intense) sound impact (a clap, an explosion, etc.).

10. Penetration and reflection coefficients of sound waves. An important characteristic of a medium that determines the reflection and penetration of sound is the wave impedance (acoustic impedance) Z = ρυ, where ρ is the density of the medium and υ is the speed of sound in it.

If a plane wave is incident, for example, normally on the interface between two media, the sound partially passes into the second medium and is partially reflected. If the incident sound intensity is I₁, the transmitted intensity is I₂, and the reflected intensity is I₃ = I₁ - I₂, then:

1) the penetration coefficient of the sound wave is β = I₂/I₁;

2) the reflection coefficient is:

α = I₃/I₁ = (I₁ - I₂)/I₁ = 1 - I₂/I₁ = 1 - β.

Rayleigh showed that

β = 4υ₁ρ₁υ₂ρ₂/(υ₁ρ₁ + υ₂ρ₂)².

If υ₁ρ₁ = υ₂ρ₂, then β = 1 (its maximum value) and α = 0, i.e. there is no reflected wave.

If Z₂ >> Z₁ (i.e. υ₂ρ₂ >> υ₁ρ₁), then β ≈ 4υ₁ρ₁/(υ₂ρ₂). For example, if sound falls from air onto water, β = 4(440/1,440,000) ≈ 0.00122, i.e. only 0.122% of the intensity of the incident sound penetrates from the air into the water.
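The same calculation can be sketched in a few lines of Python; the rounded impedances of air and water are the ones used in the example above:

```python
def penetration(z1, z2):
    """Rayleigh penetration coefficient for normal incidence."""
    return 4 * z1 * z2 / (z1 + z2) ** 2

def reflection(z1, z2):
    return 1 - penetration(z1, z2)

z_air = 440          # rho * v for air, kg/(m^2 * s)
z_water = 1_440_000  # rho * v for water

print(penetration(z_air, z_water))  # ~0.00122 -> 0.122 %
print(reflection(z_air, z_water))   # ~0.99878: almost all is reflected
```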

11. The concept of reverberation. What is reverberation? In an enclosed space, sound is reflected repeatedly from the ceiling, walls, floor, etc., with gradually decreasing intensity. Therefore, after the sound source stops, sound is still heard for some time because of these multiple reflections (a hum).

Reverberation is the process of gradual attenuation of sound in an enclosed space after the source of the sound waves stops radiating. The reverberation time is the time during which the sound intensity decreases by a factor of 10⁶. When designing lecture halls, concert halls, etc., the need to obtain a certain reverberation time is taken into account. For example, for the Column Hall of the House of Unions and the Bolshoi Theatre in Moscow, the reverberation times of the empty halls are 4.55 s and 2.05 s respectively; of the filled halls, 1.70 s and 1.55 s.

Partner Material

Introduction

One of the five senses available to humans is hearing. With its help we hear the world around us.

Most of us have sounds that we remember from childhood. For some, it’s the voices of family and friends, or the creaking of wooden floorboards in grandma’s house, or maybe it’s the sound of train wheels on the railway that was nearby. Everyone will have their own.

How do you feel when you hear or remember sounds familiar from childhood? Joy, nostalgia, sadness, warmth? Sound can convey emotions, mood, encourage action or, conversely, calm and relax.

In addition, sound is used in a variety of spheres of human life - in medicine, in the processing of materials, in the exploration of the deep sea and many, many others.

Moreover, from the point of view of physics, this is just a natural phenomenon - vibrations of an elastic medium, which means, like any natural phenomenon, sound has characteristics, some of which can be measured, others can only be heard.

When choosing musical equipment, reading reviews and descriptions, we often come across a large number of these same characteristics and terms that authors use without appropriate clarification and explanation. And if some of them are clear and obvious to everyone, then others do not make any sense to an unprepared person. Therefore, we decided to tell you in simple language about these incomprehensible and complex, at first glance, words.

My own acquaintance with portable sound began quite a long time ago, with a cassette player my parents gave me for the New Year.

It sometimes chewed the tape, which then had to be untangled with paper clips and strong words. It devoured batteries with an appetite that Robin Bobin Barabek (who devoured forty people) would have envied, and with them the very meager savings of an ordinary schoolboy. But all the inconveniences paled in comparison with the main advantage: the player gave an indescribable feeling of freedom and joy! That is how I became "sick" with a sound I could take with me.

However, I would sin against the truth if I said that since then I have been inseparable from music. There were periods when there was no time for music, when priorities were completely different. Still, all this time I tried to keep abreast of what was happening in the world of portable audio and, so to speak, keep my finger on the pulse.

When smartphones appeared, it turned out that these multimedia processors could not only make calls and process huge amounts of data, but, what was much more important for me, store and play huge amounts of music.

I first got hooked on "telephone" sound when I listened to one of the music smartphones that used the most advanced sound-processing components of its time (before that, I admit, I did not take the smartphone seriously as a device for listening to music). I really wanted that phone, but I could not afford it. I began to follow the model range of this company, which had established itself in my eyes as a manufacturer of high-quality sound, but it turned out that our paths constantly diverged. Since then I have owned various pieces of musical equipment, but I never stop looking for a truly musical smartphone that could rightfully bear that name.

Characteristics

Among all the characteristics of sound, a professional can immediately stun you with a dozen definitions and parameters which, in his opinion, you absolutely must pay attention to, and God forbid some parameter goes unaccounted for: trouble...

I will say right away that I am not a supporter of this approach. After all, we usually choose equipment not for an “international audiophile competition,” but for our loved ones, for the soul.

We are all different, and we all value different things in sound. Some people like their sound bassier; others, on the contrary, prefer it clean and transparent; for some, certain parameters will be important, and for others, completely different ones. Are all parameters equally important, and what are they? Let's figure it out.

Have you ever noticed that some headphones play so loudly on your phone that you have to turn the volume down, while others force you to turn it up to full and it is still not enough?

In portable technology, resistance plays an important role in this. Often, it is by the value of this parameter that you can understand whether the volume will be enough for you.

Resistance

Measured in ohms (Ω).

Georg Simon Ohm was a German physicist who derived and experimentally confirmed the law expressing the relationship between the current in a circuit, the voltage, and the resistance (known as Ohm's law).

This parameter is also called impedance.

The value is almost always indicated on the box or in the instructions for the equipment.

There is an opinion that high-impedance headphones play quietly and low-impedance headphones play loudly, that high-impedance headphones need a more powerful sound source, while for low-impedance ones a smartphone is enough. You can also often hear the expression: not every player will be able to "drive" these headphones.

Remember: low-impedance headphones will sound louder on the same source. From a physics point of view this is not the whole story and there are nuances, but it is the simplest way to describe the meaning of this parameter.

For portable equipment (portable players, smartphones), headphones with an impedance of 32 Ohms and lower are most often produced, but it should be kept in mind that for different types of headphones, different impedances will be considered low. So, for full-size headphones, an impedance of up to 100 Ohms is considered low-impedance, and above 100 Ohms is considered high-impedance. For in-ear headphones (plugs or earbuds), a resistance value of up to 32 ohms is considered low-impedance, and above 32 ohms is considered high-impedance. Therefore, when choosing headphones, pay attention not only to the resistance value itself, but also to the type of headphones.

Important: the higher the impedance of the headphones, the cleaner the sound will be and the longer the player or smartphone will run in playback mode, because high-impedance headphones draw less current, which in turn means less signal distortion.
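A rough numerical illustration of why impedance matters for loudness. This is a Python sketch under an idealized assumption: the source has a fixed output-voltage limit (the 0.5 V figure is illustrative), and differences in headphone sensitivity are ignored:

```python
def power_mw(v_rms, impedance_ohm):
    """Power delivered into the headphone load, P = U^2 / R, in milliwatts."""
    return (v_rms ** 2) / impedance_ohm * 1000

V_PHONE = 0.5  # assumed smartphone output limit, volts RMS (illustrative)

for z in (16, 32, 100, 300):
    print(f"{z:>3} ohm -> {power_mw(V_PHONE, z):6.2f} mW")
# The same source voltage delivers far less power into 300-ohm headphones
# than into 16-ohm ones, hence "quieter" on a phone.
```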

Frequency response (amplitude-frequency response)

Often in a discussion of a particular device, be it headphones, speakers or a car subwoofer, you can hear the characterization "pumps/doesn't pump". You can find out whether a device will "pump" or is better suited for vocal lovers without even listening to it.

To do this, just find its frequency response in the description of the device.

The graph allows you to understand how the device reproduces different frequencies. The smaller the deviations, the more accurately the equipment conveys the original sound, and the closer the playback will be to the original.

If there are no pronounced "humps" in the first third of the graph, the headphones are not very "bassy"; if there are, they will "pump". The same applies to the other parts of the frequency response.

Thus, looking at the frequency response, we can understand what timbral/tonal balance the equipment has. On the one hand, you might think that a straight line would be considered the ideal balance, but is that true?

Let's try to figure it out in more detail. It just so happens that a person mainly uses medium frequencies (MF) to communicate and, accordingly, is best able to distinguish precisely this frequency band. If you make a device with a “perfect” balance in the form of a straight line, I am afraid that you will not like listening to music on such equipment very much, since most likely the high and low frequencies will not sound as good as the mids. The solution is to find your balance, taking into account the physiological characteristics of hearing and the purpose of the equipment. There is one balance for voice, another for classical music, and a third for dance music.

The graph above shows the balance of these headphones: the low and high frequencies are more pronounced than the mids, which is typical of most products. However, a "hump" at the low end does not guarantee the quality of those low frequencies: they may be present in large quantities yet of poor quality, mumbling and booming.

The final result will be influenced by many parameters, starting from how well the geometry of the case was calculated, and ending with what materials the structural elements are made of, and you can often find out only by listening to the headphones.

To get an approximate idea of how good the sound will be before listening, after the frequency response you should pay attention to a parameter such as the harmonic distortion coefficient.

Harmonic Distortion Factor


In fact, this is the main parameter determining sound quality. The only question is what quality means to you. For example, the well-known Beats by Dr. Dre headphones have a harmonic distortion coefficient of almost 1.5% at 1 kHz (above 1.0% is considered a rather mediocre result). Oddly enough, these headphones are nevertheless popular among consumers.

It is advisable to know this parameter for each specific frequency group, because the permissible values ​​differ for different frequencies. For example, for low frequencies 10% can be considered an acceptable value, but for high frequencies no more than 1%.

Not all manufacturers like to state this parameter for their products because, unlike volume, it is quite difficult to keep low. So if the device you have chosen comes with such a graph and the value on it does not exceed 0.5%, take a closer look at the device: that is a very good result.
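For the curious, the harmonic distortion coefficient can be estimated from a recording of a test tone. A minimal sketch (Python with NumPy); the "device output" here is synthetic, with second and third harmonics added artificially so the arithmetic is visible:

```python
import numpy as np

fs = 48_000                  # sample rate, Hz
t = np.arange(fs) / fs       # one second of samples
f0 = 1_000                   # 1 kHz test tone

# Synthetic "device output": fundamental plus small 2nd and 3rd harmonics.
x = (np.sin(2*np.pi*f0*t)
     + 0.01 * np.sin(2*np.pi*2*f0*t)
     + 0.005 * np.sin(2*np.pi*3*f0*t))

spec = np.abs(np.fft.rfft(x)) / len(x) * 2   # amplitude spectrum
bin0 = int(f0 * len(x) / fs)                 # FFT bin of the fundamental

fundamental = spec[bin0]
harmonics = [spec[bin0 * k] for k in range(2, 6)]  # 2nd..5th harmonics
thd = np.sqrt(sum(h**2 for h in harmonics)) / fundamental

print(f"THD = {thd * 100:.2f} %")  # ~1.12 % for the amplitudes above
```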

We already know how to choose headphones/speakers that will play louder on your device. But how do you know how loud they will play?

There is a parameter for this, one you have most likely heard of more than once: nightclubs love to quote it in their promotional materials to show how loud the party will be. It is measured in decibels.

Sensitivity (volume, noise level)

The decibel (dB), a unit of sound intensity level, is named after Alexander Graham Bell.

Alexander Graham Bell was a scientist, inventor and businessman of Scottish origin, one of the founders of telephony and the founder of Bell Labs (formerly the Bell Telephone Company), which shaped the entire further development of the telecommunications industry in the United States.

This parameter is inextricably linked with resistance. A level of 95-100 dB is considered sufficient (in fact, this is a lot).

For example, the loudness record was set by Kiss on July 15, 2009 at a concert in Ottawa. The sound volume was 136 dB. According to this parameter, the Kiss group surpassed a number of famous competitors, including such groups as The Who, Metallica and Manowar.

The unofficial record belongs to the American team The Swans. According to unconfirmed reports, at several concerts of this group the sound reached a volume of 140 dB.

If you want to match or beat this record, remember that loud sound may be treated as a violation of public order: in Moscow, for example, the standards allow an equivalent sound level of 30 dBA at night and 40 dBA during the day, with maxima of 45 dBA at night and 55 dBA during the day.

And if the volume is more or less clear, then the next parameter is not as easy to understand and track as the previous ones. It's about dynamic range.

Dynamic range

Essentially, it is the difference between the loudest and softest sounds without clipping (overloading).

Anyone who has ever been to a modern cinema has experienced what wide dynamic range is. This is the very parameter thanks to which you hear, for example, the sound of a shot in all its glory, and the rustle of the boots of the sniper creeping on the roof who fired this shot.

The wider the dynamic range of your equipment, the more sounds your device can convey without loss.

It turns out that it is not enough to convey the widest possible dynamic range; you need to manage to do it in such a way that each frequency is not just audible, but audible with high quality. This is responsible for one of those parameters that almost everyone can easily evaluate when listening to a high-quality recording on the equipment they are interested in. It's about detail.

Detailing

This is the ability of the equipment to separate sound by frequency - low, medium, high (LF, MF, HF).


It is this parameter that determines how clearly individual instruments will be heard, how detailed the music will be, and whether it will turn into just a jumble of sounds.

However, even with the best detail, different equipment can provide completely different listening experiences.

It depends on the equipment's ability to localize sound sources.

In reviews of musical equipment, this parameter is often divided into two components - stereo panorama and depth.

Stereo panorama

In reviews, this setting is usually described as wide or narrow. Let's figure out what it is.

From the name it is clear that we are talking about the width of something, but what?

Imagine that you are sitting (standing) at a concert of your favorite band or performer. And the instruments are placed in a certain order on the stage in front of you. Some are closer to the center, others further away.


Pictured it? Now let them start playing.

Now close your eyes and try to distinguish where this or that instrument is located. I think you can do this without difficulty.

What if the instruments are placed in front of you in one line, one after the other?

Let's take the situation to the point of absurdity and move the instruments close to each other. And... let's put the trumpeter on the piano.

Do you think you'll like this sound? Will you be able to figure out which instrument is where?

The last two options are what you most often hear in low-quality equipment whose manufacturer does not care what sound the product produces (and, as practice shows, price is no indicator at all).

High-quality headphones, speakers, and music systems should be able to build the correct stereo panorama in your head. Thanks to this, when listening to music through good equipment, you can hear where each instrument is located.

However, even if the equipment can create a magnificent stereo panorama, the sound will still feel unnatural and flat, because in life we perceive sound not only in the horizontal plane. That is why an equally important parameter is sound depth.

Sound depth

Let's go back to our fictional concert. We will move the pianist and violinist a little deeper into our stage, and we will place the guitarist and saxophonist a little forward. The vocalist will take his rightful place in front of all the instruments.


Did you hear this on your music equipment?

Congratulations, your device can create a spatial sound effect through the synthesis of a panorama of imaginary sound sources. To put it simply, your equipment has good sound localization.

If we are not talking about headphones, this problem is solved quite simply: several emitters are placed around the listener, allowing the sound sources to be separated. If we are talking about headphones and you can hear this in them, congratulations a second time: by this parameter you have very good headphones.

Your equipment has a wide dynamic range, is perfectly balanced and successfully localizes sound, but is it ready for sudden changes in sound and the rapid rise and fall of impulses?

How is its attack?

Attack

From the name, in theory, it is clear that this is something swift and inevitable, like the impact of a Katyusha battery.

But seriously, here's what Wikipedia tells us about this: Sound attack is the initial impulse of sound production necessary for the formation of sounds when playing any musical instrument or when singing vocal parts; some nuanced characteristics of various methods of sound production, performance strokes, articulation and phrasing.

If we translate this into plain language, it is the rate at which the sound's amplitude rises until it reaches a given value. To make it even clearer: if your equipment has poor attack, then bright compositions with guitars, live drums and rapid changes in sound will come out flat and dull, and that means goodbye to good hard rock and the like...

Among other things, in articles you can often find such a term as sibilants.

Sibilants

Literally, whistling sounds: consonant sounds produced when a stream of air passes quickly between the teeth.

Remember this guy from the Disney cartoon about Robin Hood?

There are very, very many sibilants in his speech. And if your equipment also whistles and hisses, then, alas, this is not a very good sound.

Remark: by the way, Robin Hood himself from this cartoon looks suspiciously like the Fox from the recently released Disney cartoon Zootopia. Disney, you're repeating yourself :)

Sand

Another subjective parameter that cannot be measured, only heard.


In essence it is close to sibilance: at high volumes, under overload, the high frequencies begin to fall apart, and an effect of pouring sand appears, sometimes a high-frequency rattle. The sound becomes rough and at the same time loose. The earlier this happens, the worse; and vice versa.

Try it at home, from a height of a few centimeters, slowly pour a handful of granulated sugar onto a metal pan lid. Did you hear? This is it.

Look for a sound that doesn't have sand in it.

Frequency range

One of the last direct parameters of sound that I would like to consider is the frequency range.

Measured in Hertz (Hz).

Heinrich Rudolf Hertz's main achievement was the experimental confirmation of James Maxwell's electromagnetic theory of light: Hertz proved the existence of electromagnetic waves. Since 1933, the unit of frequency in the international metric system of units (SI) has borne his name.

This is the parameter that you are 99% likely to find in the description of almost any musical equipment. Why did I leave it for later?

Let's start with the fact that a person hears sounds lying in a certain frequency range, namely from 20 Hz to 20,000 Hz. Anything above this range is ultrasound; anything below is infrasound. They are inaccessible to human hearing but accessible to our smaller brethren. This is familiar from school physics and biology courses.


In fact, for most people the actually audible range is much more modest; moreover, in women the audible range is shifted upward relative to men's, so men are better at distinguishing low frequencies and women high ones.

Why then do manufacturers indicate on their products a range that goes beyond our perception? Maybe it's just marketing?

Yes and no. A person not only hears, but also feels and senses sound.

Have you ever stood close to a large speaker or subwoofer playing? Remember your feelings. The sound is not only heard, it is also felt by the whole body, it has pressure and strength. Therefore, the larger the range indicated on your equipment, the better.


However, you should not attach too much importance to this indicator - you rarely find equipment whose frequency range is narrower than the limits of human perception.

Additional characteristics

All of the above characteristics directly relate to the quality of the reproduced sound. However, the final result, and therefore the pleasure of watching/listening, is also affected by the quality of your source file and what sound source you use.

Formats

This information is on everyone’s lips, and most already know about it, but just in case, let’s remind you.

There are three main groups of audio file formats:

  • Uncompressed audio formats (WAV, AIFF)
  • Losslessly compressed audio formats (APE, FLAC)
  • Lossy compressed audio formats (MP3, Ogg)

We recommend reading about this in more detail by referring to Wikipedia.

Let us note that using the APE and FLAC formats makes sense if you have professional or semi-professional equipment. In other cases the capabilities of MP3 compressed from a high-quality source at a bitrate of 256 kbit/s or higher are usually sufficient (the higher the bitrate, the smaller the losses during compression). However, this is rather a matter of taste, hearing and individual preference.
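For a sense of scale, the raw data rate of uncompressed CD-quality audio is easy to compute. A tiny Python sketch using the standard CD parameters (44.1 kHz, 16 bit, stereo):

```python
sample_rate = 44_100   # samples per second
bit_depth = 16         # bits per sample
channels = 2           # stereo

raw_kbps = sample_rate * bit_depth * channels / 1000
print(raw_kbps)        # 1411.2 kbit/s for uncompressed CD audio

# A 256 kbit/s MP3 therefore keeps roughly 256/1411 ~ 18% of the raw
# data rate; the rest is removed by lossy, psychoacoustic compression.
print(256 / raw_kbps)
```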

Source

Equally important is the quality of the sound source.

Since we were initially talking about music on smartphones, let’s look at this option.

Not so long ago, sound was analog. Remember reels, cassettes? This is analog sound.


And in your headphones you hear analog sound that has gone through two stages of conversion. First, it was converted from analog to digital, and then converted back to analog before being sent to the headphone/speaker. And the result – the sound quality – will ultimately depend on the quality of this transformation.

In a smartphone, a DAC (digital-to-analog converter) is responsible for this process.

The better the DAC, the better the sound you will hear. And vice versa. If the DAC in the device is mediocre, then no matter what your speakers or headphones are, you can forget about high sound quality.

All smartphones can be divided into two main categories:

  1. Smartphones with dedicated DAC
  2. Smartphones with built-in DAC

At the moment, a large number of manufacturers produce DACs for smartphones. You can decide what to choose by searching and reading the description of a particular device. But do not forget that both among smartphones with a built-in DAC and among those with a dedicated DAC there are samples with very good sound and with not-so-good sound, because the optimization of the operating system, the firmware version and the application you use to listen to music all play an important role. There are also kernel-level software audio mods that can improve the final sound quality. And when the engineers and programmers in a company each do their job competently, the result turns out to be worthy of attention.

It is important to know that in a direct comparison of two devices, one with a high-quality built-in DAC and the other with a good dedicated DAC, the latter will invariably win.

Conclusion

Sound is an inexhaustible topic.

I hope that thanks to this material, many things in music reviews and texts have become clearer and simpler for you, and previously unfamiliar terminology has acquired additional meaning and significance, because everything is easy when you know it.

Both parts of our educational program about sound were written with the support of Meizu. Instead of the usual praise of devices, we decided to make useful and interesting articles for you and draw attention to the importance of the playback source in obtaining high-quality sound.

Why is this needed for Meizu? The other day, pre-orders for the new music flagship Meizu Pro 6 Plus began, so it is important for the company that the average user knows about the nuances of high-quality sound and the key role of the playback source. By the way, if you place a paid pre-order before the end of the year, you will receive a Meizu HD50 headset as a gift for your smartphone.

We have also prepared a music quiz for you, with detailed comments on each question; we recommend you try your hand.

Basic sound characteristics. Transmitting sound over long distances

Main sound characteristics:

1. Pitch (tone) of sound: the number of oscillations per second. There are low-pitched sounds (such as a bass drum) and high-pitched sounds (such as a whistle), and the ear easily distinguishes them. Simple measurements (sweeping out the oscillation) show that low tones correspond to low-frequency oscillations in the sound wave, and a high-pitched sound to a high oscillation frequency. The frequency of oscillation in a sound wave determines the tone of the sound.

2. Loudness of sound (amplitude). The loudness of a sound, judged by its effect on the ear, is a subjective assessment: the greater the flow of energy reaching the ear, the greater the loudness. A convenient measure is the sound intensity: the energy carried by the wave per unit time through a unit area perpendicular to the direction of propagation. The intensity of sound grows with the amplitude of the oscillations and with the area of the vibrating body. Decibels (dB) are also used to measure loudness: for example, the rustle of leaves is about 10 dB, a whisper 20 dB, street noise 70 dB, the pain threshold 120 dB, and the lethal level 180 dB.

3. Timbre of sound. A second subjective assessment. The timbre of a sound is determined by its set of overtones: the particular number of overtones inherent in a sound gives it its special coloring, the timbre. The difference between one timbre and another is determined not only by the number but also by the intensity of the overtones accompanying the fundamental tone. By timbre you can easily distinguish the sounds of different musical instruments and people's voices.

The human ear cannot perceive sound vibrations with a frequency of less than 20 Hz.

The sound range of the ear is 20 Hz – 20 thousand Hz.

Transmitting sound over long distances

The problem of transmitting sound over a distance was successfully solved with the creation of the telephone and radio. Using a microphone, which imitates the human ear, acoustic vibrations of the air (sound) at a given point are converted into synchronous changes in the amplitude of an electric current (an electrical signal); the signal is delivered by wire, or by electromagnetic waves (radio waves), to the desired location and converted back into acoustic vibrations similar to the original ones.

Scheme of sound transmission over a distance

1. Converter “sound - electrical signal” (microphone)

2. Electrical signal amplifier and electrical communication line (wires or radio waves)

3. Electrical signal-sound converter (loudspeaker)

A person perceives volumetric acoustic vibrations at a single point, so the source can be represented as a point signal source. The signal has two parameters linked by a function of time: the oscillation frequency (tone) and the oscillation amplitude (loudness). The amplitude of the acoustic signal must be converted proportionally into the amplitude of the electric current while preserving the oscillation frequency.

Sound sources are any phenomena that cause local changes of pressure or mechanical stress. Oscillating solid bodies are widespread sources of sound. Vibrations of limited volumes of the medium itself can also serve as sources (for example, in organ pipes, wind instruments, whistles, etc.). The vocal apparatus of humans and animals is a complex oscillatory system. An extensive class of sound sources are electroacoustic transducers, in which mechanical vibrations are created by converting oscillations of an electric current of the same frequency. In nature, sound is excited when air flows around solid bodies owing to the formation and shedding of vortices, for example when the wind blows over wires, pipes or the crests of sea waves. Sounds of low and infra-low frequencies arise during explosions and collapses. Sources of acoustic noise are varied; they include the machines and mechanisms used in technology, and gas and water jets. Much attention is paid to the study of industrial and transport noise and noise of aerodynamic origin because of their harmful effects on the human body and on technical equipment.

Sound receivers serve to perceive sound energy and convert it into other forms. Sound receivers include, in particular, the hearing apparatus of humans and animals. In sound-reception technology, electroacoustic transducers such as the microphone are mainly used.
The propagation of sound waves is characterized primarily by the speed of sound. In a number of cases sound dispersion is observed, i.e. a dependence of the propagation speed on frequency. Dispersion of sound changes the shape of complex acoustic signals containing a number of harmonic components and, in particular, distorts sound pulses. When sound waves propagate, the interference and diffraction phenomena common to all types of waves occur. When the size of obstacles and inhomogeneities in the medium is large compared with the wavelength, sound propagation obeys the usual laws of wave reflection and refraction and can be treated from the standpoint of geometric acoustics.

As a sound wave propagates in a given direction it gradually attenuates: its intensity and amplitude decrease. Knowing the laws of attenuation is of practical importance for determining the maximum propagation range of an audio signal.

Communication methods:

· Images

The coding system must be understandable to the recipient.

Sound communications came first.

Sound (carrier – air)

Sound wave – differences in air pressure

Encoded information – the eardrums

Hearing sensitivity

Decibel – a relative logarithmic unit

Sound properties:

Loudness (dB)

Pitch

0 dB corresponds to a sound pressure of 2×10⁻⁵ Pa

From the threshold of hearing to the threshold of pain

Dynamic range – the ratio of the loudest sound to the quietest

Pain threshold = 120 dB

Frequency (Hz)

Parameters and spectrum of the sound signal: speech, music. Reverberation.

Sound – a vibration having its own frequency and amplitude

The sensitivity of our ear to different frequencies is different.

Hz – one oscillation per second

From 20 Hz to 20,000 Hz – the audio range

Infrasound – sounds below 20 Hz

Sounds above 20,000 Hz and below 20 Hz are not perceived

Intermediate encoding and decoding system

Any process can be described by a set of harmonic oscillations

Sound signal spectrum – the set of harmonic oscillations of the corresponding frequencies and amplitudes

The amplitude changes

The frequency is constant

Sound vibration – a change of amplitude over time

The ratio of the component amplitudes to one another

Amplitude-frequency response – the dependence of amplitude on frequency

Our ear has its own amplitude-frequency response

No device is perfect: each has its own frequency response

Frequency response concerns everything involved in converting and transmitting sound

The equalizer adjusts the frequency response

340 m/s – speed of sound in air

Reverberation – the blurring of sound

Reverberation time – the time during which the signal level falls by 60 dB

Compression – a sound-processing technique in which loud sounds are attenuated and quiet sounds amplified

Reverberation – a characteristic of the room in which the sound propagates

Sampling frequency – the number of samples per second

Phonetic coding

Fragments of an information image – coding – phonetic apparatus – human hearing

Waves cannot travel far

You can increase the sound power

Electricity

Wavelength – distance

Sound = a function A(t)

Converting the amplitude A of the sound vibrations into the amplitude A of an electric current = secondary encoding

Phase – the delay, in angular measure, of one oscillation relative to another in time

Amplitude modulation – the information is contained in changes of the amplitude

Frequency modulation – in changes of the frequency

Phase modulation – in changes of the phase
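A toy Python sketch of the difference between the first two; the message and carrier frequencies are arbitrary illustrative values, not broadcast ones:

```python
import numpy as np

fs = 100_000                     # sample rate, Hz
t = np.arange(fs) / fs           # one second of time stamps
m = np.sin(2*np.pi*440*t)        # message: a 440 Hz tone
fc = 10_000                      # carrier frequency, Hz

# Amplitude modulation: the information sits in the envelope of the carrier.
am = (1 + 0.5*m) * np.cos(2*np.pi*fc*t)

# Frequency modulation: the information sits in the instantaneous frequency.
dev = 1_000                      # peak frequency deviation, Hz
phase = 2*np.pi * np.cumsum(fc + dev*m) / fs   # integrate instantaneous frequency
fm = np.cos(phase)
```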

Electromagnetic oscillations propagate without a medium

The Earth's circumference is 40 thousand km and its radius 6.4 thousand km; electromagnetic waves cover such distances practically instantly!

Frequency (linear) distortions arise at every stage of information transmission

Amplitude transfer coefficient

Linear distortions – the signal is transmitted with some loss of information; they can be compensated

Nonlinear distortions – cannot be prevented; they are associated with irreversible distortion of the amplitude

By 1895, following Oersted and Maxwell, it was known that electromagnetic oscillations can propagate and carry energy

In 1895 Popov invented radio

In 1896 Marconi bought a patent abroad and the right to use Tesla's work

Real use began at the start of the twentieth century

Oscillations of an electric current are easy to superimpose on electromagnetic oscillations

The carrier frequency must be higher than the frequency of the information signal

In the early 1920s: signal transmission by amplitude modulation of radio waves

Audio range up to 7,000 Hz

AM broadcasting on long waves

Ultrashort waves have frequencies above 26 MHz

Short waves: from 2.5 MHz to 26 MHz

No fixed limits to their propagation (short waves can be received far from the transmitter)

Ultrashort waves (frequency modulation), stereo broadcasting (2 channels)

FM – frequency modulation

Phase modulation is not used in broadcasting

Radio carrier frequency

Broadcast range

Carrier frequency

Reliable reception area – the territory within which radio waves propagate with enough energy for high-quality reception of information

D (km) = 3.57(√H + √h)

where H is the height of the transmitting antenna (m),

h is the height of the receiving antenna (m);

i.e., the area depends on the antenna heights, provided the power is sufficient
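For example, under the same assumptions, a transmitting antenna of height H = 225 m and a receiving antenna at h = 9 m give D ≈ 3.57(√225 + √9) = 3.57(15 + 3) ≈ 64 km of reliable reception.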

Radio transmitter – carrier frequency, power and the height of the transmitting antenna

Licensed

A license is required to broadcast radio waves

Broadcasting network:

Source of sound content

Connection lines

Transmitters (on Lunacharsky Street, near the circus, in Asbest)

Radio

Power redundancy

Radio program – a set of audio messages

Radio station – the source of a broadcast radio program

· Traditional: the radio editorial office (the creative team) and the radio house (the set of technical and technological means)

Radio house

Radio studio – a soundproofed room with suitable acoustic parameters

Sampling (discretization)

The analog signal is divided into time intervals; the sampling rate is measured in hertz. The amplitude is measured once per interval.

Quantization bit depth. Sampling frequency – the division of the signal in time into equal segments, in accordance with Kotelnikov's theorem:

For undistorted transmission of a continuous signal occupying a certain frequency band, the sampling frequency must be at least twice the upper frequency of the reproduced range.

Broadcast audio band: 30 Hz to 15 kHz

CD: 44,100 Hz (44.1 kHz)
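Kotelnikov's (Nyquist's) criterion can be demonstrated numerically. A small Python sketch; the frequencies are illustrative, and fs_bad is deliberately chosen below twice the signal frequency:

```python
import numpy as np

f_signal = 15_000          # highest audio frequency to keep, Hz
fs_good = 44_100           # CD sampling rate: above 2 * 15 kHz -> OK
fs_bad = 20_000            # below 2 * 15 kHz -> aliasing

def sample(fs, f, n=8):
    """First n samples of a sine of frequency f taken at rate fs."""
    t = np.arange(n) / fs
    return np.sin(2 * np.pi * f * t)

print(sample(fs_good, f_signal))  # faithfully representable
# At fs_bad, the 15 kHz tone is indistinguishable from its 5 kHz alias,
# since |fs_bad - f_signal| = 5 kHz:
print(np.allclose(sample(fs_bad, f_signal), -sample(fs_bad, 5_000)))  # True
```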

Digital information compression

Compression – the ultimate goal is to remove redundant information from the digital stream.

A sound signal is a random process; its levels are related within the correlation time.

Correlation – relations that tie events across time periods: past, present and future.

Long-term – spring, summer, autumn

Short-term

Extrapolation method: the sine wave is reconstructed from the digital samples.

Only the difference between the next sample and the previous one is transmitted.
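That idea is plain delta encoding; a minimal Python sketch (the sample values are made up):

```python
def delta_encode(samples):
    """Send the first sample, then only differences to the previous one."""
    out = [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        out.append(cur - prev)
    return out

def delta_decode(deltas):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

samples = [100, 102, 105, 105, 103, 98]
encoded = delta_encode(samples)     # [100, 2, 3, 0, -2, -5]
assert delta_decode(encoded) == samples
# Correlated samples yield small differences, which take fewer bits to send.
```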

The psychophysical properties of sound allow the ear to select signals

Specific weight in signal volume

Real\impulsive

The system is noise-resistant: nothing depends on the pulse shape, and a pulse is easy to restore

Frequency response – dependence of amplitude on frequency

Frequency response regulates sound timbre

Equalizer – frequency response corrector

Low, mid, high frequencies

Bass, mids, treble

Equalizers with 10, 20, 40 or 256 bands

Spectrum analyzer – used, for example, for noise removal and voice recognition

Psychoacoustic devices

Forces - process

Frequency-processing devices – plugins: modules that, when the program's source is open, can be modified and shared

Dynamic signal processing

Applications – devices that regulate the signal's dynamics

Volume – the signal level

Level regulators

Faders\mixers

Fade in \ Fade out

Noise reduction

Peak limiter ("peak cutter")

Compressor

Noise suppressor
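Of the dynamics processors listed above, the compressor is the easiest to sketch. A minimal per-sample Python illustration; the threshold and ratio are arbitrary settings, and the signal is assumed normalized to ±1:

```python
def compress(samples, threshold=0.5, ratio=4.0):
    """Reduce the level above the threshold by the given ratio."""
    out = []
    for x in samples:
        a = abs(x)
        if a > threshold:
            a = threshold + (a - threshold) / ratio  # gain reduction
        out.append(a if x >= 0 else -a)
    return out

loud_and_quiet = [0.9, -0.8, 0.2, 0.05, -0.95]
print(compress(loud_and_quiet))
# [0.6, -0.575, 0.2, 0.05, -0.6125]: peaks are tamed, quiet parts untouched.
```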

Color vision

The human eye contains two types of light-sensitive cells (photoreceptors): highly sensitive rods, responsible for night vision, and less sensitive cones, responsible for color vision.

In the human retina there are three types of cones, the maximum sensitivity of which occurs in the red, green and blue parts of the spectrum.

Binocular

The human visual analyzer under normal conditions provides binocular vision, that is, vision with two eyes with a single visual perception.

Frequency ranges of AM (LW, MW, SW) and FM (VHF) radio broadcasting.

Radio is a type of wireless communication in which radio waves, propagating freely in space, serve as the signal carrier.

The transmission proceeds as follows: on the transmitting side a signal with the required characteristics (frequency and amplitude) is generated. The transmitted signal then modulates a higher-frequency oscillation (the carrier), and the resulting modulated signal is radiated into space by the antenna. On the receiving side the radio wave induces the modulated signal in the antenna, after which it is demodulated (detected) and filtered by a low-pass filter (thus removing the high-frequency component, the carrier). The useful signal is thereby extracted. The received signal may differ slightly from the transmitted one (distortions due to interference and noise).

In radio and television practice, a simplified classification of radio bands is used:

Ultra-long waves (ULW) – myriameter waves

Long waves (LW) – kilometer waves

Medium waves (MW) – hectometer waves

Short waves (SW, HF) – decameter waves

Ultrashort waves (VHF) – high-frequency waves with wavelengths shorter than 10 m.

Depending on the range, radio waves have their own characteristics and propagation laws:

LW are strongly absorbed by the ionosphere; the main role is played by ground waves, which propagate around the Earth. Their intensity decreases relatively quickly with distance from the transmitter.

MW are strongly absorbed by the ionosphere during the day, and the service area is determined by the ground wave; in the evening they are well reflected from the ionosphere, and the service area is determined by the reflected wave.

HF propagate exclusively by reflection from the ionosphere, so around the transmitter there is a so-called zone of silence. During the day shorter waves (towards 30 MHz) propagate better; at night, longer ones (towards 3 MHz). Short waves can travel long distances at low transmitter power.

VHF propagate in a straight line and, as a rule, are not reflected by the ionosphere, but under certain conditions they can circle the globe because of the difference in air density between the layers of the atmosphere. They easily bend around obstacles and have high penetrating ability.

Radio waves propagate in vacuum and in the atmosphere; the earth's surface and water are opaque to them. However, due to the effects of diffraction and reflection, communication is possible between points on the earth's surface that do not have a direct line of sight (in particular, those located at a great distance).

New TV broadcasting bands

· MMDS band, 2500-2700 MHz: 24 channels for analog TV broadcasting; used in cable-television systems

· LMDS: 27.5-29.5 GHz, 124 analog TV channels. Taken up by cellular operators since the digital revolution.

· MWS – MWDS: 40.5-42.4 GHz, a cellular television broadcasting system. Such high frequencies are quickly absorbed.

2. Decompose the image into pixels

256 levels

Key frame, then its changes

Analog-to-digital converter

The input is analog, the output is digital. Digital compression formats

Uncompressed video – three colors per pixel, 25 fps, 256 Mbit/s

DVD, AVI – a stream of 25 Mbit/s

MPEG-2 – additional compression by a factor of 3-4 in satellite broadcasting

Digital TV

1. Simplify, reduce the number of points

2. Simplify color selection

3. Apply compression

256 levels – dynamic brightness range

Digital is 4 times larger horizontally and vertically

Flaws

· A sharply limited signal coverage area within which reception is possible. But this territory, with equal transmitter power, is larger than that of an analog system.

· Freezing and scattering of the picture into “squares” when the level of the received signal is insufficient.

· Both "disadvantages" are a consequence of the advantages of digital data transmission: the data is either received with 100% quality (or fully restored), or received so poorly that restoration is impossible.

Digital radio – a technology for wireless transmission of a digital signal by means of electromagnetic radio waves.

Advantages:

· Higher sound quality compared to FM radio broadcasts. Currently not implemented due to low bit rate (typically 96 kbit/s).

· In addition to sound, texts, pictures and other data can be transmitted. (More than RDS)

· Mild radio interference does not change the sound in any way.

· More economical use of frequency space through signal transmission.

· Transmitter power can be reduced by 10 - 100 times.

Flaws:

· If the signal strength is insufficient, analog broadcasting suffers interference, while digital broadcasting disappears completely.

· Audio delay due to the time required to process the digital signal.

· Currently, “field trials” are being carried out in many countries around the world.

· The transition to digital is now gradually beginning around the world, but much more slowly than for television, because of these shortcomings. There are as yet no mass shutdowns of analog radio stations, although their number in the AM band is decreasing in favor of the more efficient FM.

In 2012, the SCRF signed a protocol allocating the 148.5-283.5 kHz radio-frequency band for creating DRM-standard digital radio broadcasting networks on the territory of the Russian Federation. Also, in accordance with paragraph 5.2 of the minutes of the SCRF meeting of January 20, 2009, No. 09-01, research work was carried out on "the possibility and conditions of using DRM-standard digital radio broadcasting in the Russian Federation in the 0.1485-0.2835 MHz frequency band (long waves)".

Thus, for an indefinite period, FM broadcasting will continue in analog format.

In Russia, the first multiplex of digital terrestrial television DVB-T2 broadcasts federal radio stations Radio Russia, Mayak and Vesti FM.

Internet radio, or web radio – a group of technologies for transmitting streaming audio data over the Internet. The term may also denote a radio station that uses Internet streaming technology for broadcasting.

The technological basis of the system consists of three elements:

Station – generates an audio stream (from a list of audio files, by direct digitization from a sound card, or by copying an existing stream on the network) and sends it to the server. (The station consumes minimal traffic, because it creates a single stream.)

Server (stream repeater) – receives the audio stream from the station and redirects copies of it to all clients connected to the server; in essence, a data replicator. (Server traffic is proportional to the number of listeners + 1.)

Client – receives the audio stream from the server and converts it into the audio signal heard by the listener of the Internet radio station. Cascaded broadcasting systems can be organized by using a stream repeater as a client. (The client, like the station, consumes minimal traffic; the traffic of a cascade system's client-server depends on the number of listeners of that client.)

In addition to the audio data stream, text data is usually also transmitted so that the player displays information about the station and the current song.

The station can be an ordinary audio-player program with a special codec plug-in, a specialized program (for example, ICes, EzStream or SAM Broadcaster), or a hardware device that converts an analog audio stream into a digital one.

As a client, you can use any media player that supports streaming audio and is capable of decoding the format in which the radio is broadcast.
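To make the client's role concrete, here is a minimal Python sketch of connecting to an Icecast-style stream (standard library only; the URL is a placeholder, not a real station, and the "Icy-MetaData" request header follows the Shoutcast/Icecast convention):

```python
import urllib.request

STREAM_URL = "http://example.com:8000/stream"  # hypothetical Icecast mount

# Ask the server to interleave metadata (station/song titles) with audio.
req = urllib.request.Request(STREAM_URL, headers={"Icy-MetaData": "1"})

with urllib.request.urlopen(req) as resp:
    # icy-metaint says how many audio bytes separate the metadata blocks.
    print(resp.headers.get("icy-name"), resp.headers.get("icy-metaint"))
    audio = resp.read(64 * 1024)   # grab 64 KiB of the compressed stream

# A real client would keep reading and hand the bytes to an audio decoder.
print(len(audio))
```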

It should be noted that Internet radio, as a rule, has nothing to do with over-the-air broadcasting, although rare exceptions exist (they are not common in the CIS).

Internet Protocol Television (Internet television, or online TV) is a system based on two-way digital transmission of a television signal over a broadband Internet connection.

The Internet television system allows you to implement:

· Managing each user's subscription package

· Broadcasting channels in the MPEG-2 and MPEG-4 formats

· Presentation of television programs

· A TV-recording function

· Searching past TV shows for viewing

· A real-time pause function for a TV channel

· An individual package of TV channels for each user

New media – a term that came into use at the end of the 20th century for interactive electronic publications and new forms of communication between content producers and consumers, denoting the differences from traditional media such as newspapers; that is, the term covers the development of digital, networked technologies and communications. Convergence and multimedia newsrooms have become commonplace in today's journalism.

We are talking primarily about digital technologies; these trends are connected with the computerization of society, since until the 1980s the media relied on analog carriers.

It should be noted that, according to Riepl's law, more highly developed media do not replace earlier ones; the task of new media is therefore to recruit its own consumers and to find other areas of application: "an online version of a printed publication is unlikely to replace the printed publication itself."

The concepts of "new media" and "digital media" must be distinguished, even though both employ digital means of encoding information.

In terms of production technology, anyone can become a publisher of "new media". Vin Crosbie, who describes "mass media" as a tool for broadcasting "one to many", regards new media as "many to many" communication.

The digital era is creating a different media environment. Reporters are getting used to working in cyberspace. As noted, previously “covering international events was a simple matter”

Speaking about the relationship between the information society and new media, Yasen Zasursky focuses on three aspects, one of which is new media itself:

· Media opportunities at the present stage of development of information and communication technologies and the Internet.

· Traditional media in the context of “internetization”

· New media.

Radio studio. Structure.

How to organize a faculty radio?

Content

What do you need to have and to be able to do? Broadcasting zones, the composition of the equipment, the number of people

No license required

(The territorial body of Roskomnadzor; a registration fee; secure a frequency; broadcast at least once a year; a certificate issued to a legal entity; the radio program is registered)

Creative team

Chief editor and legal entity

Fewer than 10 people – an agreement; more than 10 – a charter

The technical basis for producing radio products is the set of equipment on which radio programs are recorded, processed and then broadcast. The main technical task of a radio station is to ensure clear, uninterrupted, high-quality operation of the technological equipment for radio broadcasting and sound recording.

Radio houses and television centers are the organizational form of the program-production chain. The employees of radio and television centers are divided into creative specialists (journalists, sound and video directors, workers of the production and coordination departments, etc.) and technical specialists of the hardware-studio complex (workers of the studios, control rooms and some support services).

Hardware-studio complex – interconnected units and services, united by technical means, with whose help audio and television broadcast programs are formed and released. It includes hardware-studio units (for creating parts of programs), a broadcasting unit (for radio) and a hardware-program unit (for TV). In turn, a hardware-studio unit consists of studios and of technical and director's control rooms, reflecting the different technologies of live broadcasting and recording.

Radio studios- these are special rooms for radio broadcasts that meet a number of acoustic treatment requirements in order to maintain a low noise level from external sound sources and create a uniform sound field throughout the room. With the advent of electronic devices for controlling phase and timing characteristics, small, completely “silenced” studios are increasingly used.

Depending on their purpose, studios are divided into small (on-air) studios (8–25 sq. m), medium studios (60–120 sq. m) and large studios (200–300 sq. m).

In accordance with the sound engineer’s plans, microphones are installed in the studio and their optimal characteristics (type, polar pattern, output signal level) are selected.

Editing control rooms are intended for preparing parts of future programs, from simple editing of music and speech phonograms after the initial recording to mixing multi-channel sound down to mono or stereo. Next, in the program-preparation rooms, parts of the future broadcast are assembled from recordings of individual works; in this way a library of ready-made phonograms is built up. The complete program is assembled from individual broadcasts and sent to the central control room. The production and coordination departments coordinate the actions of the editorial staff. In large radio houses and television centers, phonogram-restoration rooms bring old recordings up to modern technical broadcasting requirements by reducing noise and various distortions.

After the program is completely formed, the electrical signals enter the broadcasting room.

The hardware-studio block is equipped with a director's console, a monitoring loudspeaker unit, tape recorders and sound-effects devices. Illuminated signs are installed in front of the studio entrance: "Rehearsal", "Get ready", "Microphone on". The studios are equipped with microphones and an announcer's console with microphone on/off buttons, signal lamps, and telephone sets with a light-based ringing signal. Announcers can contact the control room, the production department, the editorial office and some other services.

The main device of the director's control room is the sound engineer's console, with which technical and creative tasks (editing, signal processing) are solved simultaneously.

In the broadcast control room of a radio house, the day's program is assembled from individual productions. Parts of the program that have already undergone sound editing do not require additional technical control, but they do require the combining of various signals (speech, musical accompaniment, sound cues, etc.). In addition, modern broadcast control rooms are equipped with automated playout equipment.

The final control of programs takes place in the central control room, where the sound engineer's console provides additional adjustment of the electrical signals and their distribution to consumers. Here the signal undergoes frequency processing and amplification to the required level, compression or expansion, and the insertion of station call signs and exact time signals.

Composition of the radio station hardware complex.

The main expressive means of radio broadcasting are music, speech and service signals. To bring all sound signals together in the correct balance (mixing), the central element of the radio broadcasting hardware complex is used: the mixer (mixing console). The signal formed on the console passes from its output through a number of special signal-processing devices (compressor, modulator, etc.) and is fed (via a communication line or directly) to the transmitter. The console inputs receive signals from all sources: microphones transmitting the speech of presenters and guests on air; sound-reproduction devices; signal-playback devices. In a modern radio studio the number of microphones can vary from 1 to 6 or even more, although for most purposes 2–3 are enough. Microphones of many different types are used.
Before being fed to the console input, the microphone signal can be subjected to various processing (compression, frequency correction, in some special cases - reverberation, tonal shift, etc.) in order to increase speech intelligibility, level the signal level, etc.
The sound-reproduction devices at most stations are CD players and tape recorders. The range of tape recorders used depends on the specifics of the station: they may be digital (DAT digital cassette recorders; MD minidisc recorders and players) or analogue devices (reel-to-reel studio tape recorders, as well as professional cassette decks). Some stations also play vinyl discs, using either professional broadcast turntables or, more often, simply high-quality players, and sometimes special "DJ" turntables like those used in discotheques.
Some stations that rely heavily on song rotation play music directly from the computer's hard drive, where the set of songs in that week's rotation is pre-recorded as audio files (usually in WAV format). Devices for reproducing service signals come in a variety of types. As in foreign radio broadcasting, analogue cartridge ("jingle") machines are widely used, in which the sound carrier is a special cassette of looped tape; as a rule, one signal (intro, jingle, beat, backing, etc.) is recorded on each cassette, and because the tape is looped it is ready for playback again immediately after use. At many radio stations with a traditional broadcasting organization, signals are reproduced from reel-to-reel tape recorders. Digital devices either use floppy disks or special cartridges as the carrier of each individual signal, or play the signals directly from the computer's hard drive.
The radio broadcasting hardware complex also uses various recording devices, both analogue and digital tape recorders. They serve both for recording individual fragments of the broadcast for the station's archive or for subsequent repetition, and for continuous control recording of the entire broadcast (the so-called police tape). In addition, the complex includes monitor speaker systems for listening to the program signal (the mix at the console output) and for pre-listening ("cueing") signals from various media before they go on air, as well as headphones carrying the program signal, etc. The complex may also include an RDS (Radio Data System) device, which allows a listener with a suitably equipped receiver to receive not only the audio signal but also text (the station name, sometimes the title and performer of the current piece, and other information) shown on a special display.
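Functionally, the mixing described above is a weighted sum of the source signals followed by protection against overload. Below is a minimal illustrative sketch of that idea (the signal names, gain values and the simple clipping stage are assumptions for the example, not a description of any particular console):

    import numpy as np

    def mix(signals, gains):
        """Weighted sum of equal-length mono signals, clipped to [-1, 1]."""
        out = sum(g * s for g, s in zip(gains, signals))
        return np.clip(out, -1.0, 1.0)  # crude overload protection

    # Stand-ins for real sources: presenter microphone, music bed, jingle
    fs = 48000                                # sample rate, Hz
    t = np.arange(fs) / fs                    # one second of samples
    mic = 0.5 * np.sin(2 * np.pi * 220 * t)
    music = 0.5 * np.sin(2 * np.pi * 440 * t)
    jingle = 0.5 * np.sin(2 * np.pi * 880 * t)

    # Speech dominant, music bed and jingle mixed in at lower levels
    program = mix([mic, music, jingle], gains=[1.0, 0.4, 0.6])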

Classification

By sensitivity

· Highly sensitive

· Medium sensitive

· Low sensitive (contact)

By dynamic range

· Speech

· Service communications

By directionality (each microphone also has its own frequency response)

· Omnidirectional

· Unidirectional

· Stationary

· Friday

TV studio

· Special light – studio lighting

· Sound-absorbing floor covering

· Scenery

· Means of communication

· Soundproof room for the sound engineer

· Director

· Video monitors

· Sound control: 1 mono, 2 stereo

· Technical staff

Mobile TV station

Mobile reporting station

Video recorder

Sound path

Camcorder

TS time code

Color – the brightness of three points: red, green and blue

Clarity, or resolution

Bitrate – the digital stream

· Sampling (2200 lines)

· Quantization

TVL (TV lines)

Broadcast

Line – the unit of measurement of resolution

A/D converter – analogue-to-digital

VHS up to 300 TVL

Broadcast over 400 TVL

DPI – dots per inch

Glossy print = 600 DPI

Photos, portraits=1200 DPI

TV image=72 DPI

Camera resolution

Lens – megapixels – quality of the electronics block

720 by 576 (standard-definition frame)

Digital video DV

HD (High Definition): 1920×1080 – 25 MB/s
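For orientation, these note figures can be checked against the standard relation bitrate = width × height × bits per pixel × frame rate; the sketch below uses the usual PAL parameters (720×576, 25 fps) and an assumed 12 bits per pixel (8-bit 4:2:0 sampling). Real DV and HD streams are compressed, which is why the quoted rates are far below the raw values:

    def raw_video_bitrate_mbit(width, height, bits_per_pixel, fps):
        """Uncompressed video bitrate in megabits per second."""
        return width * height * bits_per_pixel * fps / 1e6

    print(raw_video_bitrate_mbit(720, 576, 12, 25))    # ~124 Mbit/s raw SD
    print(raw_video_bitrate_mbit(1920, 1080, 12, 25))  # ~622 Mbit/s raw HD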

The main physical characteristics of sound are the frequency and intensity of vibrations. They influence people's auditory perception.

The period of oscillation is the time of one complete oscillation. An example is a swinging pendulum: it moves from the extreme left position to the extreme right and returns to its original position.

Oscillation frequency is the number of complete oscillations (periods) per second. Its unit is called the hertz (Hz). The higher the vibration frequency, the higher the sound we hear, that is, the higher its pitch. In the international system of units, 1000 Hz is called a kilohertz (kHz) and 1,000,000 Hz a megahertz (MHz).

Frequency ranges: audible sounds lie within 15 Hz – 20 kHz; infrasounds below 15 Hz; ultrasounds within 1.5·10⁴ – 10⁹ Hz; hypersounds within 10⁹ – 10¹³ Hz.

The human ear is most sensitive to sounds with frequencies between 2000 and 5000 Hz. Hearing acuity is greatest at the age of 15–20 years; with age, hearing deteriorates.

The concept of wavelength is associated with the period and frequency of oscillations. The sound wavelength is the distance between two successive condensations or rarefactions of the medium. Using the example of waves propagating on the surface of water, this is the distance between two crests.
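The quantities introduced above are linked by f = 1/T and λ = v/f. A quick numerical check (taking the speed of sound in air at room temperature as roughly 343 m/s):

    V_AIR = 343.0  # m/s, speed of sound in air at about 20 °C

    def period_s(freq_hz):
        return 1.0 / freq_hz        # T = 1 / f

    def wavelength_m(freq_hz, v=V_AIR):
        return v / freq_hz          # lambda = v / f

    print(period_s(440))            # ~0.00227 s for the tone A4
    print(wavelength_m(20))         # ~17 m, lowest audible tone
    print(wavelength_m(20000))      # ~0.017 m, highest audible tone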

Sounds also differ in timbre. The main tone of the sound is accompanied by secondary tones, which are always higher in frequency (overtones). Timbre is a qualitative characteristic of sound. The more overtones are superimposed on the main tone, the “juicier” the sound is musically.

The second main characteristic is the amplitude of oscillations: the greatest deviation from the equilibrium position during harmonic vibrations. For a pendulum this is its maximum deviation to the extreme left or extreme right position. The amplitude of the vibrations determines the intensity (strength) of the sound.

The strength of sound, or its intensity, is determined by the amount of acoustic energy flowing in one second through an area of ​​one square centimeter. Consequently, the intensity of acoustic waves depends on the magnitude of the acoustic pressure created by the source in the medium.

Loudness is in turn related to the intensity of sound. The greater the intensity of the sound, the louder it is. However, these concepts are not equivalent. Loudness is a measure of the strength of the auditory sensation caused by a sound. A sound of the same intensity can create auditory perceptions of different volumes in different people. Each person has his own hearing threshold.

A person stops hearing sounds of very high intensity and perceives them as a feeling of pressure and even pain. This sound intensity is called the pain threshold.


53. Sound wave path. Sound conduction. Sound perception.

The function of sound conduction is the transmission of sound vibrations by the constituent elements of the outer, middle and inner ear to the auditory receptors.

The auricle, external auditory canal, tympanic membrane, auditory ossicles, annular ligament of the oval window, secondary tympanic membrane, perilymph, and basal membrane take part in sound conduction.

When the hair cells of the organ of Corti are irritated, the physical energy of sound vibrations is converted into the physiological process of nervous excitation. This is the beginning of the process of auditory perception.

The area of ​​auditory perception is 16-20000 Hz.

54. Area of ​​sound perception. Sensitivity of the hearing organ.

AREA OF AUDITORY PERCEPTION

16 – 20,000 Hz

Sounds with a frequency below 16 Hz are infrasounds

Sounds with frequencies above 20,000 Hz – ultrasounds

The peripheral section of the auditory analyzer performs the primary analysis and converts the physical energy of sound into the electrical energy of a nerve impulse. The pathways transmit impulses to the brain centers. In the cerebral cortex, the energy of nervous excitation is converted into sensation. The cortex plays a leading role in the functioning of the auditory analyzer.

The human ear is most sensitive to sounds from 500 to 4000 Hz - this is the speech frequency range (1000-3000 Hz).

The minimum sound intensity that can cause the sensation of a barely audible sound is the threshold of audibility.

The lower the hearing threshold, the higher the sensitivity of the ear to a given sound. With normal hearing, the threshold of auditory sensation is 0 dB. As the sound intensity increases, the sensation of sound volume increases, but when a certain value is reached, the increase in volume stops and a sensation of pain appears—the pain threshold. The distance between the threshold of audibility and the threshold of unpleasant sensations in the mid-frequency region is 130 dB.
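These decibel values follow from the standard definition of intensity level, L = 10·lg(I/I₀), with the reference intensity I₀ = 10⁻¹² W/m² at the threshold of audibility. A minimal check:

    import math

    I0 = 1e-12  # W/m^2, reference intensity (threshold of audibility)

    def intensity_level_db(intensity_w_m2):
        """Intensity level L = 10 * log10(I / I0) in decibels."""
        return 10 * math.log10(intensity_w_m2 / I0)

    print(intensity_level_db(1e-12))  # 0 dB, threshold of audibility
    print(intensity_level_db(10.0))   # 130 dB, near the pain threshold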

· The frequency difference threshold is the minimum perceptible increase in frequency relative to the original frequency, about 3 Hz.

· The intensity difference threshold is the minimum increase in sound intensity that gives a perceptible increase in loudness, about 1 dB.

Thus, the area of ​​human auditory perception is limited in the height and strength of sound.

55. Theories of sound perception.

According to Helmholtz's resonance theory, the perception of sounds of different pitch (frequency) is due to the fact that each fiber of the basilar membrane is tuned to a sound of a particular frequency. Thus, low-frequency sounds are perceived by the long fibers of the basilar membrane located closer to the apex of the cochlea, and high-frequency sounds by the short fibers located closer to its base. A complex sound sets fibers of various lengths vibrating.

In the modern interpretation, the resonance mechanism underlies the place theory, according to which the entire membrane enters a state of vibration; however, the maximum deflection of the basilar membrane occurs only at a specific place. As the frequency of sound vibrations increases, the point of maximum deflection shifts toward the base of the cochlea, where the shorter fibers of the basilar membrane lie: the shorter the fiber, the higher the vibration frequency. Excitation of the hair cells of precisely this section of the membrane is transmitted via a mediator to the fibers of the auditory nerve in the form of a certain number of impulses, whose repetition rate is lower than the frequency of the sound waves (the lability of nerve fibers does not exceed 800–1000 Hz), while the frequency of perceived sound waves reaches 20,000 Hz. In this way a spatial (place) coding of the pitch and frequency of sound signals is carried out.

For tones up to approximately 800 Hz, in addition to spatial coding there is also temporal (frequency) coding, in which information is likewise transmitted along certain fibers of the auditory nerve, but in the form of volleys of impulses whose repetition rate reproduces the frequency of the sound vibrations. Individual neurons at different levels of the auditory sensory system are tuned to a specific sound frequency; that is, each neuron has its own frequency threshold and its own specific sound frequency to which its response is maximal. Thus each neuron perceives only a certain, rather narrow, section of the frequency range, and these sections do not coincide; together, populations of neurons cover the entire frequency range of audible sounds, which ensures full auditory perception.

The validity of this view is confirmed by the results of human hearing prosthetics, in which electrodes were implanted into the auditory nerve and its fibers were stimulated with electrical impulses of different frequencies corresponding to the sound combinations of certain words and phrases, providing semantic perception of speech.

The first such theory was created by the British physiologist William Rutherford in 1886. He suggested that (a) a sound wave causes the entire basilar membrane to vibrate, with the vibration frequency matching the frequency of the sound, and (b) the vibration frequency of the membrane sets the frequency of the nerve impulses transmitted along the auditory nerve. Thus a 1000-hertz tone causes the basilar membrane to vibrate 1000 times per second, the auditory nerve fibers discharge at 1000 impulses per second, and the brain interprets this as a certain pitch. Because this theory makes pitch depend on how the sound changes over time, it is called the temporal theory (also the frequency theory).

Rutherford's hypothesis soon ran into serious problems. It was shown that nerve fibers can transmit no more than 1000 impulses per second, and it is then unclear how a person perceives pitches above 1000 hertz. Wever (1949) proposed a way to save the temporal theory: frequencies above 1000 hertz could be encoded by different groups of nerve fibers, each firing at slightly different moments. If, for example, one group of neurons fires 1000 spikes per second, and half a millisecond later a second group also begins firing 1000 spikes per second, the combined volley of the two groups amounts to 2000 spikes per second. This version of the temporal theory was supported by the discovery that the pattern of nerve impulses in the auditory nerve follows the waveform of the stimulus tone even though individual cells do not respond to every vibration (Rose et al., 1967).
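Wever's volley idea is easy to check numerically: two fiber groups, each limited to 1000 impulses per second but offset by half a period, together carry 2000 impulses per second. A toy sketch (all numbers illustrative):

    # Two fiber groups, each firing 1000 spikes/s, offset by 0.5 ms
    group_a = [n * 0.001 for n in range(1000)]   # spike times in seconds
    group_b = [t + 0.0005 for t in group_a]      # same rate, shifted

    combined = sorted(group_a + group_b)
    rate = len(combined) / 1.0                   # spikes per one second
    print(rate)  # 2000.0 -> the pooled volley can follow a 2000 Hz tone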

However, the ability of nerve fibers to track waveforms stops at about 4000 hertz; however, we can hear the pitch of sounds containing much higher frequencies. It follows that there must be another means of encoding the pitch quality of sound, at least at high frequencies.

Another theory of pitch perception dates back to 1683, when the French anatomist Joseph-Guichard Duverney proposed that the ear encodes frequency mechanically, through resonance (Green & Wier, 1984). To understand this proposal, it helps to consider resonance first: when a tuning fork is struck next to a piano, the piano string tuned to the tuning fork's frequency begins to vibrate. Saying that the ear works on the same principle means that it contains some structure resembling a stringed instrument, different parts of which are tuned to different frequencies, so that when a given frequency is presented to the ear, the corresponding part of that structure begins to vibrate. This idea turned out to be correct in general terms: the basilar membrane proved to be such a structure.

Exactly how the basilar membrane vibrates was not known until about 1940, when Georg von Békésy measured its movements through small holes drilled into the cochleas of guinea pigs and human cadavers. In the light of Békésy's results the place theory had to be modified: the basilar membrane behaves not like a piano with separate strings but like a sheet being shaken at one end. In particular, Békésy showed that at most frequencies the entire basilar membrane moves, but the location of the most intense movement depends on the frequency of the sound: high frequencies cause vibration at the near end of the membrane, and as the frequency increases the vibration pattern shifts toward the oval window (Békésy, 1960). For this and other research on hearing, Békésy received the Nobel Prize in 1961.

Like temporal theories, locality theory explains many, but not all, phenomena of pitch perception. The main difficulties with locality theory concern low-frequency tones: at frequencies below 50 hertz, all parts of the basilar membrane vibrate approximately equally, so all receptors are activated equally, which would mean we have no way of distinguishing frequencies below 50 hertz. In fact, we can distinguish frequencies down to 20 hertz.

Thus, local theories find it difficult to explain the perception of low-frequency sounds, and temporal theories find it difficult to explain the perception of high frequencies. All this led to the idea that pitch perception is determined by both temporal patterns and localization patterns, with the temporal theory explaining the perception of low frequencies, and the local theory explaining the perception of high frequencies. It is clear, however, that where one mechanism recedes, another begins to predominate. In fact, it is possible that frequencies between 1000 and 5000 hertz are served by both mechanisms (Coren, Ward & Enns, 1999).

Since our ears and eyes play such an important role in our daily lives, significant efforts have been made to replace them with artificial ones in individuals suffering from incurable defects of these organs. Some of these efforts are described in the section "On the Cutting Edge of Psychological Research."

56. Stages of sleep. EEG rhythms at different stages of sleep. Types of sleep. The need for sleep in different periods of ontogenesis. Sleep disorders.

General characteristics. Sleep is a special brain activity in which consciousness and the mechanisms for maintaining natural posture are switched off, and the sensitivity of the analyzers is reduced. A number of factors promote falling asleep: keeping to a sleep schedule, i.e. sleeping at the same time each day (the circadian biorhythm); fatigue of nerve cells; weakened activity of the analyzers (closed eyes, silence); and a comfortable position. A person can sleep even amid noise (street traffic, a radio left on, etc.), but it should be remembered that noise affects sleep adversely, disturbing its depth and the sequence of its phases and thereby worsening overall well-being; the bedroom should therefore be isolated from external stimuli as much as possible.

Signs of sleep: (1) a reduced level of consciousness; (2) yawning; (3) reduced sensitivity of the analyzers; (4) slowed heartbeat and breathing and reduced secretory activity of the glands (salivary: dryness of the oral mucosa; lacrimal: burning eyes, sticking eyelids).

The duration of sleep for adults is 7–8 hours per day, though there are cases of people who for long periods slept considerably less while maintaining high performance; Napoleon I and T. Edison, for example, are said to have slept about 2 hours. It is now known that, other things being equal, people who sleep 7–8 hours a day live longer than others. The duration of sleep in children depends on age: a newborn sleeps about 20 hours a day, and at 6 months about 15 hours; the natural need for sleep decreases over the years, so that by the end of the first year of life sleep is reduced to 13 hours per day. The average duration of sleep is 12 hours in 2-year-olds, 10 hours at 9 years, 9 hours at 13–15 years, and 8 hours at 16–19 years.

Sleep structure. The sleep period as a whole is divided into two phases: slow-wave and fast (REM) sleep. The sleeping brain is characterized by the appearance in the EEG of "sleep spindles" (12–16 oscillations per second) and synchronized large slow waves in the δ-band. This phase is called slow-wave (orthodox) sleep. Periodically throughout the night this brain state is replaced by fast, low-amplitude desynchronized activity (up to 30 oscillations per second) resembling the EEG of humans and animals during wakefulness. Since sleep is not interrupted at this point, and by some indicators even deepens, this phase, in contrast to the previous one, is called paradoxical (rapid-eye-movement) sleep. Fast and slow sleep alternate at regular intervals, one cycle lasting on average about 90 minutes; slow-wave sleep accounts for about 80% and fast sleep for 20% of the entire sleep period.

Characteristic features of REM sleep are rapid eye movements and a more pronounced drop in muscle tone. Against this background animals show various movements: of the whiskers, ears and tail, twitching of the paws, licking and sucking movements; breathing becomes more frequent and irregular, the pulse irregular and rapid, blood pressure rises, and hormonal activity intensifies. Significantly, the activity of spinal motor neurons is sharply suppressed at this time. During slow-wave sleep, breathing, heart rate, blood pressure and general body movements all decrease. Depriving animals of paradoxical sleep makes them excitable and irritable.

Fig. 9.2. Classification of sleep stages (A–E) in humans according to EEG characteristics (after Loomis et al.; Kleitman et al.). The three bottom curves are simultaneous recordings of the EEG, EOG and EMG of the index finger during REM sleep (dreaming); such episodes usually occur at the end of each sleep cycle.

An electroencephalogram (EEG) is commonly used to assess sleep depth. Based on EEG characteristics and generally accepted standard criteria, four or five stages of slow-wave sleep are distinguished. In a state of relaxed wakefulness the α-rhythm with variable amplitude predominates (Fig. 9.2). In stage A of sleep the α-rhythm gradually disappears, with longer and longer intervals of very small θ-waves appearing between its episodes. This corresponds to the transition from wakefulness to sleep (drowsiness) and lasts several minutes; some authors assign stage A to wakefulness. Stage B (falling asleep and the most superficial sleep) is characterized by θ-waves. At the end of this stage, high-amplitude "vertex sharp waves" lasting 3–5 s appear, heralding the onset of stage C (superficial sleep); after their appearance the sleeper no longer distinguishes weak external stimuli. A characteristic feature of the brain's bioelectrical activity in this stage is spindle-shaped bursts of σ-rhythm ("sleep spindles") and K-complexes. In stage D (moderately deep sleep) fast δ-waves with a frequency of 3.0–3.5 Hz are recorded, and in stage E (deep sleep) slow (synchronized) oscillations consisting almost exclusively of extremely slow δ-waves (0.7–1.2 Hz), on which small α-waves are occasionally superimposed.

Fig. 9.3. The relationship between sleep and wakefulness, and between REM and slow-wave sleep, at different periods of a person's life (after H.P. Roffwarg et al., 1966).

The most significant change at an early age is a decrease in the total duration of sleep and a significant decrease in the proportion of REM sleep in it.

Then the REM sleep phase develops, characterized by EEG desynchronization (as in stage B) and episodes of rapid eye movements (REM), which can be observed from the side through the closed eyelids of the sleeper or recorded by electrooculography (see the EOG curve in Fig. 9.2). The ratio of the fast and slow sleep stages, and its change during ontogenesis, are presented in Fig. 9.3. In the REM phase, as in slow-wave sleep, the muscles are atonic, except for occasional convulsive contractions of the muscles of the face or fingers (see the EMG in Fig. 9.2), accompanied by an increased respiratory rate and constriction of the blood vessels of the fingers.

Dreams are figurative representations that arise during sleep and are perceived as actual reality. Children and adults alike find it much easier to recall the content of a dream if they are awakened during the REM phase or immediately after it ends; a person awakened during slow-wave sleep often does not remember dreams. The frequency of recall is high in the first case (60–90%) and much lower and widely fluctuating (from 1 to 74%) in the second. At the same time, it is in slow-wave sleep that sleep-talking, sleepwalking and night terrors in children occur. According to some data, after 64% of awakenings from slow-wave sleep a person reports mental experiences, but these resemble thoughts and reasoning rather than dreams. There are significant differences between the experiences of slow-wave and paradoxical sleep: in slow-wave sleep dream imagery is less vivid, less affective, less prolonged and more realistic. It has also been found that even when people or animals were deprived of REM sleep, and hence of dreams, for a long time, they suffered, contrary to earlier assumptions, no long-term physical or mental disorders.

Factors that induce dreams. (1) Pre-sleep activity (children continue to "play" in their dreams, a researcher runs experiments, etc.). The famous physiologist O. Loewi, for example, dreamed of the design of the experiment with which he discovered the mediator mechanism by which the sympathetic and parasympathetic nerves influence the heart, and a dream helped Mendeleev complete his famous table of chemical elements. (2) Stimuli acting on the body during sleep: if a hot heating pad is applied to the feet, the sleeper may dream of walking on hot sand. (3) Excessive impulses from overfilled or diseased internal organs can cause nightmares. (4) Biological needs can evoke corresponding dreams, for example when homeostatic indicators deviate from the norm.

N.I. Kasatkin (1973) believes that dreams during REM sleep play the role of a "watchman" signalling internal dangers, since diseases can be foreshadowed in dreams 1–3 months before they manifest themselves. Dreams are predominantly visual in nature; in people blind from birth, visual images are absent from dreams and tactile ones predominate. It has by now been established that there are no people who do not dream; dreams occur on average 4–6 times a night. If awakening occurs during REM sleep, 70–90% of people give a detailed and fairly emotional account of their dreams, whereas after slow-wave sleep only 7–10% do. Some dreams are connected with sexual life; dreams of this kind (in young and single people or during prolonged sexual abstinence) are accompanied by nocturnal emissions. On average 70% of women also have sexual dreams, during which orgasm may occur; in girls, sexual motifs in dreams occur during menstruation.

57. State of wakefulness.

Wakefulness is a mental state characterized by a fairly high level of electrical activity of the brain, characteristic of the individual’s active interaction with the outside world. Wakefulness is the functional state against which any mental activity unfolds. The significance of this state for ensuring the effectiveness of activity at its optimal physiological cost is extremely high. The waking state is not uniform. It distinguishes between active wakefulness and quiet wakefulness.

One of the most important roles in maintaining a state of wakefulness is played by the reticular formation of the midbrain, from whose neurons ascending influences go to the nonspecific nuclei of the thalamus, and from them to all zones of the cerebral cortex. Wakefulness forms a field of all possible combinations of functions of consciousness - from a state of calm wakefulness through active, intense wakefulness to pronounced affects.

In general terms, the diagram of our psyche in the waking state, based on the data of objective psychology, looks like this.

Stimuli reaching the brain, and with them perceptions, are of a dual nature. Some stimuli enter the brain from the internal regions of the body and are caused by various organic processes; they excite organic impressions of various kinds in the brain, leaving traces capable of being revived.

Stimuli of another order penetrate the brain from influences coming from outside the body, acting on it through the so-called external receptive organs. These are the material basis of external impressions, whose subjective indicator is sensations. Some external impressions, together with the traces they form, enter into relation with the sphere of the personality and become its property.

Other external impressions and their traces remain, for the time being, outside the sphere of the personality; nevertheless they excite certain external motor or other reactions, which in most cases do not enter into relation with the personality, in other words, they remain unnoticed by us. This includes a whole range of psycho-reflex motor reactions, such as walking, facial movements and many other movements considered automatic. But from the moment such movements excite the reaction of concentration, they enter into relation with the sphere of the personality and become directly dependent on it. Thus unconscious associative activity, entering into relation with the sphere of the personality through internal concentration, becomes as it were its property and comes to depend on it in the sense that it can be revived under the influence of personal needs.

58. Mechanisms of regulation of sleep and wakefulness.

The transition from wakefulness to sleep involves two possible paths. First, it is possible that the mechanisms maintaining the waking state gradually "tire"; from this point of view sleep is a passive phenomenon, a consequence of a lowered level of wakefulness. But it is also possible that the mechanisms ensuring wakefulness are actively inhibited. I.P. Pavlov identified two mechanisms of sleep onset that in essence support both the passive and the active theories of sleep. On the one hand, sleep arises as protective inhibition resulting from strong and prolonged stimulation of some particular area of the cerebral cortex; on the other, sleep arises through internal inhibition, i.e. the active process of forming a negative conditioned reflex.

An important role in regulating the sleep-wake cycle is played by the reticular formation of the brain stem, which contains many diffusely located neurons whose axons run to almost all areas of the brain except the neocortex. Its role in the sleep-wake cycle was investigated in the late 1940s by G. Moruzzi and H. Magoun, who discovered that high-frequency electrical stimulation of this structure in sleeping cats leads to their instant awakening. Conversely, damage to the reticular formation causes permanent sleep resembling coma; cutting only the sensory tracts passing through the brain stem does not produce this effect.

The earliest theories of sleep were humoral. A sleep factor devoid of species specificity was isolated from the cerebrospinal fluid of goats subjected to sleep deprivation. According to the vascular (circulatory, or hemodynamic) theory, the onset of sleep is connected with a decrease or increase of blood flow in the brain; modern research has shown that the blood supply of the brain does indeed fluctuate during sleep. R. Legendre and H. Piéron (1910) believed that sleep results from the accumulation of toxic metabolic products due to fatigue (hypnotoxins): dogs were kept awake for a long time, then killed, and substances extracted from their brains were injected into other dogs, which developed signs of extreme fatigue and fell into deep sleep. The same was observed when blood serum or cerebrospinal fluid was "transferred".

In the upper parts of the brain stem there are two areas, the raphe nuclei and the locus coeruleus, whose neurons have projections as extensive as those of the reticular formation, i.e. reaching many areas of the central nervous system. The raphe nuclei occupy the median part of the medulla oblongata, pons and midbrain. Destroying them eliminates EEG synchronization and slow-wave sleep. Using a special fluorescence technique, histochemists have shown that neurons of the raphe nuclei synthesize serotonin and send it along their axons to the reticular formation, the hypothalamus and the limbic system. Serotonin is an inhibitory transmitter of the brain's monoaminergic system; blocking serotonin synthesis abolishes slow-wave sleep in cats, which retain only paradoxical sleep.

In the tegmentum of the brain stem a cluster of neurons synthesizing norepinephrine was found: the locus coeruleus ("blue spot"). Stimulation of the locus coeruleus inhibits neuronal activity in many brain structures, with increased motor excitation of the animal and EEG desynchronization. The activating influence of the locus coeruleus is thought to operate through inhibition of inhibitory interneurons. The raphe nuclei and the locus coeruleus act as antagonists: the mediator in the raphe cells is serotonin (5-hydroxytryptamine, 5-HT), and in the locus coeruleus norepinephrine. Destruction of the raphe nuclei in the cat produces complete insomnia for several days, but over the next few weeks sleep returns to normal. Partial insomnia can also be produced by suppressing 5-HT synthesis with p-chlorophenylalanine; it can be eliminated by administering 5-hydroxytryptophan, a precursor of serotonin (serotonin itself does not cross the blood-brain barrier). Bilateral destruction of the locus coeruleus leads to the complete disappearance of REM sleep without affecting slow-wave sleep. Depletion of serotonin and norepinephrine reserves by reserpine causes, as one would expect, insomnia.

However, it turned out that the neurons of the raphe nuclei are most active and release maximum serotonin not during sleep but during wakefulness; moreover, the occurrence of REM appears to be caused by the activity of neurons not so much of the locus coeruleus as of the more diffuse subcoeruleus nucleus. Judging by recent experiments, serotonin serves both as a mediator in the process of awakening and as a "sleep hormone" in the waking state, stimulating the synthesis or release of "sleep substances" (sleep factors), which in turn induce sleep.

The structures of the thalamus act as a "pacemaker" generating the rhythmic spindle potentials of sleep and the α-rhythm of wakefulness. The thalamocortical mechanism can be regarded as a mechanism of internal inhibition capable of altering brain activity partially or globally in such a way that sensory, motor and higher brain functions are suppressed.

The structures responsible for slow-wave sleep are located in the caudal part of the brain stem, mainly in the medulla oblongata; similar hypnogenic structures have also been found in the posterior part of the pons. The motor and EEG manifestations of the paradoxical sleep phase are associated with activation of structures in the region of the pons. This sleep phase is shortened under emotional stress, while the period of falling asleep is prolonged.

Near the locus coeruleus there is a group of giant reticular neurons that send their axons upward and downward to various brain structures. In wakefulness and slow-wave sleep these neurons show little activity, but during paradoxical sleep their activity is very high.

Attempts have been made to detect specific substances either after prolonged sleep deprivation or in a sleeping organism. The first approach rests on the assumption that sleep factors accumulate during waking to a sleep-inducing level; the second, on the hypothesis that they are formed or released during sleep. Both approaches have yielded results. In testing the first hypothesis, a small glucopeptide, factor S, was isolated from the urine and cerebrospinal fluid of humans and animals; administered to other animals, it induces slow-wave sleep. There apparently also exists a REM-sleep factor. The second approach led to the discovery of a nonapeptide that induces deep sleep (it has now been synthesized), the so-called δ-sleep peptide (DSIP, delta-sleep-inducing peptide). It is not yet known, however, whether these and the many other "sleep substances" found in testing the two hypotheses play any role in the physiological regulation of sleep. Moreover, the isolated peptides often induce sleep only in certain animal species, and sleep also occurs under the action of other substances.

Nevertheless, conjoined twin girls have been observed to sleep separately, which indicates the secondary role of humoral factors and the decisive role of the nervous system in the development of sleep.

The idea is developing that the wakefulness-sleep cycle is ensured by a system of two centers. On the basis of clinical observations of patients with lesions of various areas of the hypothalamus, C. von Economo suggested that the center of wakefulness is localized in its posterior sections and the center of sleep in its anterior sections. S. Ranson, producing local damage to various parts of the hypothalamus, confirmed this view. It is currently believed that the hypothalamus is a critical area for the regulation of the wakefulness-sleep cycle. This is supported by the fact that both high-frequency and low-frequency electrical stimulation of the preoptic area of the hypothalamus causes synchronization of the electroencephalogram and behavioral sleep, whereas T.N. Oniani observed the opposite effect, behavioral and electroencephalographic awakening, on stimulation of the posterior hypothalamus. This suggests a reciprocal relationship between the anterior and posterior areas of the hypothalamus and their importance for regulating the alternation of the phases of the wakefulness-sleep cycle. According to T.N. Oniani, the multineuronal activity of the reticular formation also changes over the wakefulness-sleep cycle.

Goals:

  • To introduce the concept of sound vibrations and establish the characteristics and properties of sound.
  • To show the unity of nature and the interconnection of physics, biology and music.
  • To cultivate a caring attitude towards one's health.

Equipment: a computer with a multimedia projector, a tuning fork, a ruler clamped in a vice, a sound generator.

Lesson plan.

  1. Organizational moment.
  2. Learning new material.
  3. Homework.

Man lives in a world of sounds. What is sound? How does it arise? How does one sound differ from another? Today in the lesson we will try to answer these and many other questions related to sound phenomena.

The branch of physics that studies sound phenomena is called acoustics.

Elastic waves that can cause auditory sensations in humans are called sound waves.

The human ear is capable of perceiving mechanical vibrations occurring with a frequency of 20 to 20,000 Hz. (Demonstration on a sound wave generator with a frequency from 20 to 20000 Hz)

Anything that vibrates at an audio frequency is a source of sound. But not only oscillating bodies can be sources of sound: the flight of a bullet in the air is accompanied by a whistle, the rapid flow of water is accompanied by noise.

The very fact of isolating from a fairly large set of frequencies, called sound, is associated with the ability of human hearing to perceive precisely these waves.

Different living beings have different boundaries for the perception of sound.

All sound sources can be divided into natural and artificial.

(Demonstrations: the sound of a tuning fork and of a ruler clamped in a vice.)

Let's consider the properties of sound.

  1. Sound is a longitudinal wave.
  2. Sound propagates in elastic media (air, water, various metals)
  3. Sound has a finite speed.
Substance          Temperature, °C    Speed of sound, m/s
Nitrogen           300                487
Nitrogen           0                  334
Liquid nitrogen    −199               962
Aluminum           20                 6260
Diamond            20                 18,350
Petrol             17                 1170
Water              20                 1483
Water              74                 1555
Ice                −1 to −4           3980
Water vapor        100                405
Helium             0                  965
Graphite           20                 1470
Gold               20                 3200
Mercury            20                 1450
Alcohol            20                 1180
Alcohol vapor      0                  230
Steel              20                 5000–6100
Ether              25                 985

Let's listen to a message about how the speed of sound in water and other substances was determined.

(Student message)

Check yourself.

  1. The clock is set by the sound of a signal from a remote radio receiver. In which case will the clock be set more accurately: in summer or in winter?
    (In summer, since the speed of sound in air increases with temperature; see the estimate after this list.)
  2. Can astronauts communicate with each other using audio speech during spacewalks?
    (At a distance, no, because in the vacuum of space there are no conditions for the propagation of sound waves. However, if the astronauts touch their spacesuit helmets, they can hear each other.)
  3. Why do power poles hum when there is wind?
    (When there is wind, the wires perform chaotic oscillatory movements, affecting insulators mounted on poles. Standing sound waves are excited in the poles.)
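The temperature dependence used in the first answer is often written as the linear approximation v ≈ 331 + 0.6·t (v in m/s, t in °C); a quick estimate:

    def sound_speed_air_m_s(t_celsius):
        """Common linear approximation for the speed of sound in air."""
        return 331.0 + 0.6 * t_celsius

    print(sound_speed_air_m_s(-20))  # ~319 m/s on a winter day
    print(sound_speed_air_m_s(30))   # ~349 m/s on a summer day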

Sound characteristics.

  1. Sound volume.
  2. Pitch
  3. Timbre of sound.

Sound volume is a characteristic of the amplitude of a sound wave.
(show experiment with tuning fork and generator)

The volume of sound depends on the amplitude of the vibrations: the greater the amplitude, the louder the sound.

But if we compared sounds of different frequencies, then in addition to amplitude we would also have to compare their frequencies. With the same amplitudes, we perceive as louder frequencies that lie in the range from 1000 to 5000 Hz.

The unit of loudness is called the sone.

In practical problems, loudness is usually characterized by the loudness level, measured in phons, or by the sound pressure level, measured in bels (B) or decibels (dB), a decibel being one tenth of a bel.

Quiet whisper, rustling leaves - 20 dB

Normal speech - 60 dB

Rock concert - 120 dB

When the volume increases by 10 dB, the sound intensity increases 10 times.

Task: calculate how many times the sound intensity at a rock concert exceeds that of normal speech.

(1000000 times)
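The answer follows directly from the definition of the decibel scale: a level difference of ΔL dB corresponds to an intensity ratio of 10^(ΔL/10).

    delta_db = 120 - 60            # rock concert minus normal speech
    ratio = 10 ** (delta_db / 10)  # 10^(60/10)
    print(ratio)                   # 1000000.0, i.e. a million times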

A volume of 120 dB is called the pain threshold. With prolonged exposure to such sound, irreversible hearing loss occurs: a person accustomed to rock concerts will never hear a quiet whisper or rustle of leaves.

Pitch is a characteristic of the frequency of a sound wave: the higher the vibration frequency of the sound source, the higher the sound it produces.

Who flaps its wings faster in flight - a fly, a bumblebee or a mosquito?

Frequency of vibrations of the wings of insects and birds in flight, Hz

Storks 2
Cabbage butterflies up to 9
Sparrows up to 13
Crows 3-4
May beetles 45
Hummingbird 35-50
Mosquitoes 500-600
House flies 190-330
Bees 200-250
Bumblebee 220
Horseflies 100
Dragonflies 38-100

Which birds and insects do we hear and which ones do we not?

Which insect produces the highest-pitched sound? (The mosquito)

The frequency of sound vibrations corresponding to the human voice ranges from 80 to 1400 Hz.

When the frequency doubles, the sound rises by an octave; it is on this basis that the octave was chosen. Each octave is divided into 12 intervals of a semitone each.
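With the octave (a frequency ratio of 2) divided into 12 equal semitones, each semitone corresponds to a frequency ratio of 2^(1/12) ≈ 1.0595. A small sketch (the function name and the A4 = 440 Hz starting point are just for illustration):

    SEMITONE = 2 ** (1 / 12)  # ~1.0595; twelve semitones make one octave

    def note_frequency(base_hz, semitones_up):
        return base_hz * SEMITONE ** semitones_up

    print(note_frequency(440, 1))   # ~466.16 Hz, one semitone above A4
    print(note_frequency(440, 12))  # ~880 Hz, exactly one octave above A4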

Timbre sound is determined by the shape of sound vibrations.

We know that the prongs of a tuning fork perform harmonic (sinusoidal) oscillations. Such oscillations have only one strictly defined frequency; harmonic vibration is the simplest type of vibration. A tuning fork therefore sounds a pure tone.

A pure tone is the sound of a source performing harmonic oscillations of a single frequency.

Sounds from other sources (for example, the sounds of various musical instruments, people's voices, the sound of a siren and many others) represent a set of harmonic vibrations of different frequencies, i.e. a set of pure tones.

The lowest (smallest) frequency of such a complex sound is called the fundamental frequency, and the sound of the corresponding pitch is the fundamental tone (sometimes simply called the tone). The pitch of a complex sound is determined precisely by the pitch of its fundamental tone.

All other tones of a complex sound are called overtones. The frequencies of all the overtones of a given sound are whole-number multiples of the frequency of its fundamental tone (which is why they are also called higher harmonic tones).

Overtones determine the timbre of a sound, that is, its quality that allows us to distinguish the sounds of some sources from the sounds of others. For example, we easily distinguish the sound of a piano from the sound of a violin, even if these sounds have the same pitch, i.e. the same fundamental frequency. The difference between these sounds is due to a different set of overtones (the set of overtones from different sources may differ in the number of overtones, their amplitudes, the phase shift between them, and the frequency spectrum).
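The role of overtones in timbre can be made concrete by synthesizing two tones with the same 440 Hz fundamental but different sets of harmonic amplitudes: they have the same pitch yet different timbre. A minimal sketch (the amplitude values are arbitrary):

    import numpy as np

    fs = 44100
    t = np.arange(fs) / fs  # one second of samples

    def complex_tone(f0, harmonic_amps):
        """Sum of harmonics k*f0 (k = 1, 2, 3, ...) with given amplitudes."""
        tone = sum(a * np.sin(2 * np.pi * k * f0 * t)
                   for k, a in enumerate(harmonic_amps, start=1))
        return tone / np.max(np.abs(tone))  # normalize to [-1, 1]

    rich = complex_tone(440, [1.0, 0.7, 0.5, 0.4])  # many strong overtones
    dull = complex_tone(440, [1.0, 0.1])            # almost a pure tone
    # Same fundamental -> same pitch; different overtones -> different timbre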

Check yourself.

  1. How can you tell by the sound whether a drill is running idle or under load?
  2. How are musical sounds different from noise?
    (Noise differs from a musical tone in that it does not correspond to any specific pitch. Noise contains vibrations of all possible frequencies and amplitudes.)
  3. The projection of the velocity of one of the points on the sounding cello string changes over time as shown in the graph. Determine the oscillation frequency of the velocity projection.

A person has such a unique organ as the ear - a sound receiver. Let's look at how a person hears.

Sound waves traveling through the air travel a complex path before we perceive them. First, they penetrate the auricle and cause the eardrum, which closes the external auditory canal, to vibrate. The auditory ossicles carry these vibrations to the oval window of the inner ear. The film that covers the window transmits vibrations to the liquid that fills the cochlea. Finally the vibrations reach the auditory cells of the inner ear. The brain perceives these signals and recognizes noises, sounds, music, and speech.

One of the most important characteristics of a voice is its timbre, i.e. a set of spectral lines, among which one can distinguish peaks consisting of several overtones - the so-called formants. It is the formants that determine the secret of the individual sound of the voice and make it possible to recognize speech sounds, since in different people the formants of even the same sound differ in frequency, width and intensity. The timbre of the voice is strictly individual, since in the process of sound formation an important role is played by the resonator cavities of the pharynx, nose, paranasal sinuses, etc., specific to each individual. The uniqueness of the human voice can only be compared to the uniqueness of the fingerprint pattern. In many countries around the world, a tape recording of a human voice is considered an indisputable legal document that cannot be forged.

The spectrum of a singer's voice differs from that of an ordinary person's: it has a strongly expressed high singing formant, i.e. overtones with frequencies of 2500–3000 Hz that give the voice a ringing quality. In outstanding singers these make up 35 percent or more of the spectrum (figure, left), in experienced singers 15–30%, and in beginners 3–5% (figure, right).

It is customary to distinguish three types of voices for both sexes: for men - bass, baritone, tenor; for women - alto, mezzo-soprano and soprano. This division is largely artificial: it does not take into account a large number of “intermediate” voices, since there is no objective method for assessing the quality of a voice due to the unlimited combination of its properties.

When considering sound vibrations, one cannot help but pay attention to the effect of noise on the human body.

Long-term exposure to noise leads to damage to the central nervous system, increased blood and intracranial pressure, disruption of normal heart function, and dizziness. The harmful effect of loud noise on humans was noticed long ago: some 2,000 years ago in China, prisoners were punished by continuous exposure to the sounds of flutes, drums and screamers until they dropped dead. At a noise power of 3 kW and a frequency of 800 Hz, the eye's ability to focus is impaired; a noise power of 5–8 kW disrupts the functioning of the skeletal muscles and causes paralysis and memory loss; a noise power of about 200 kW leads to death. For this reason, the use of sharp and loud signals is prohibited in large cities. Trees and shrubs absorb noise and reduce it significantly, which is why green belts are needed along roads with heavy traffic. Silence noticeably improves hearing acuity.

Homework: §34–38, ex. 31(1) and ex. 32(2, 3); practical task: using a piece of rubber thread, determine how the pitch of a sound depends on the vibration frequency.

I would like to end the lesson with these words. N. Roerich has a painting he called “Human Forefathers”. A young shepherd boy plays the flute, and large brown bears converge on him from all sides. What attracts them? Music? Legend says that the ancestors of some Slavic tribes were bears. It seems that they are going to hear the most wonderful music in the world - the voice of a kind human heart.

Literature:

  1. A.V. Peryshkin, E.M. Gutnik. Physics, Grade 9. Moscow: Drofa, 2003.
  2. S.V. Gromov, N.A. Rodina. Physics, Grade 8. Moscow: Prosveshchenie, 2001.
  3. V.N. Moshchansky. Physics, Grade 9. Moscow: Prosveshchenie, 1994.
  4. A.V. Aganov, R.K. Safiullin, A.I. Skvortsov, D.A. Tayursky. Physics Around Us: Qualitative Problems in Physics. Moscow: House of Pedagogy, 1998.
  5. S.A. Chandaeva. Physics and Man. Moscow: Aspect Press, 1994.