Sound quality

Microphone covers are occasionally used to improve sound quality by reducing noise from wind.

Sound quality is typically an assessment of the accuracy, fidelity, or intelligibility of audio output from an electronic device. Quality can be measured objectively, such as when tools are used to gauge the accuracy with which the device reproduces an original sound; or it can be measured subjectively, such as when human listeners respond to the sound or gauge its perceived similarity to another sound.[1]

The sound quality of a reproduction or recording depends on a number of factors, including the equipment used to make it,[2] the processing and mastering applied to the recording, the equipment used to reproduce it, and the listening environment in which it is reproduced.[3] In some cases, processing such as equalization, dynamic range compression or stereo processing may be applied to a recording to create audio that is significantly different from the original but may be perceived as more agreeable to a listener. In other cases, the goal may be to reproduce audio as closely as possible to the original.

When applied to specific electronic devices, such as loudspeakers, microphones, amplifiers or headphones, sound quality usually refers to accuracy, with higher-quality devices providing more accurate reproduction. When applied to processing steps such as mastering recordings, absolute accuracy may be secondary to artistic or aesthetic concerns. In still other situations, such as recording a live musical performance, audio quality may refer to the proper placement of microphones around a room to make optimal use of its acoustics.

Digital audio

Digital audio is stored in many formats. The simplest form is uncompressed PCM, where audio is stored as a series of quantized audio samples spaced at regular intervals in time.[4] As samples are placed closer together in time, higher frequencies can be reproduced. According to the sampling theorem, any band-limited signal of bandwidth B (containing no pure sinusoidal component at exactly B) can be perfectly reconstructed from more than 2B samples per second.[5] For example, to cover the human hearing range of roughly 0 to 20 kHz, audio must be sampled at above 40 kHz. Because practical filters need a transition band to remove ultrasonic images created when the signal is converted back to analog, slightly higher sample rates are used in practice: 44.1 kHz (CD audio) or 48 kHz (DVD).
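The Nyquist criterion above reduces to simple arithmetic. A minimal sketch in Python (the function name and the 20 kHz hearing limit are illustrative assumptions, not part of any library):

```python
def min_sample_rate_hz(bandwidth_hz: float) -> float:
    """Sampling-theorem floor: the rate must strictly exceed 2 * bandwidth."""
    return 2.0 * bandwidth_hz

# Human hearing spans roughly 0-20 kHz, so the theoretical floor is 40 kHz.
# Practical rates add headroom for the anti-aliasing/reconstruction filters.
for rate_hz in (44_100, 48_000):
    headroom_hz = rate_hz - min_sample_rate_hz(20_000)
    print(f"{rate_hz} Hz: {headroom_hz:.0f} Hz of filter headroom")
```

The 4,100 Hz (CD) and 8,000 Hz (DVD) of headroom is what allows real-world filters, which cannot cut off infinitely sharply, to suppress ultrasonic content without touching the audible band.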

In PCM, each audio sample describes the sound pressure at an instant in time with limited precision. This limited precision results in quantization error, a form of noise added to the recording. Quantization error can be reduced by using more precision in each measurement, at the expense of larger samples (see audio bit depth). Each additional bit in a sample reduces quantization error by approximately 6 dB. For example, CD audio uses 16 bits per sample, so its quantization noise lies approximately 96 dB below the maximum possible sound pressure level (when summed over the full bandwidth).

The amount of space required to store PCM depends on the number of bits per sample, the number of samples per second, and the number of channels. For CD audio, this is 44,100 samples per second, 16 bits per sample, and 2 channels for stereo, leading to 1,411,200 bits per second. This space can be greatly reduced using audio compression, in which audio samples are processed by an audio codec. A lossless codec processes samples without discarding information, packing repetitive or redundant data into a more efficiently stored form; a lossless decoder then reproduces the original PCM with no change in quality. Lossless audio compression typically achieves a 30–50% reduction in file size. Common lossless audio codecs include FLAC, ALAC and Monkey's Audio.
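The CD figure above follows from multiplying the three parameters. A minimal sketch (the function name is an illustrative assumption):

```python
def pcm_bit_rate(sample_rate_hz: int, bits_per_sample: int, channels: int) -> int:
    """Uncompressed PCM bit rate in bits per second."""
    return sample_rate_hz * bits_per_sample * channels

cd_bps = pcm_bit_rate(44_100, 16, 2)
print(cd_bps)            # 1411200 bits per second
print(cd_bps // 8 * 60)  # 10584000 bytes per minute of stereo audio
```

At roughly 10 MB per minute uncompressed, even a 30–50% lossless reduction leaves large files, which is the motivation for the lossy techniques below.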

If additional compression is required, lossy audio compression such as MP3, Ogg Vorbis or AAC can be used. These techniques augment lossless coding with a psychoacoustic model that reduces the precision of details that human hearing is unlikely or unable to perceive. After these details are removed, the remainder can be compressed far more aggressively. Lossy audio compression therefore allows a 75–95% reduction in file size, but risks reducing audio quality if important information is mistakenly discarded.
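The file-size arithmetic can be made concrete. The bitrates below are common encoder settings chosen purely for illustration (the text does not specify them):

```python
# CD-quality PCM bit rate, from the figures above.
pcm_bps = 44_100 * 16 * 2  # 1,411,200 bits per second

# Illustrative constant-bitrate lossy settings (kilobits per second).
for kbps in (320, 128, 96):
    reduction = 1 - (kbps * 1000) / pcm_bps
    print(f"{kbps} kbps -> {reduction:.0%} smaller than CD PCM")
```

A 128 kbps encode, for instance, is about 91% smaller than the uncompressed stream, squarely within the 75–95% range cited above.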

References

  1. ^ "Sound Quality or Timbre". hyperphysics.phy-astr.gsu.edu. Retrieved 2017-04-13.
  2. ^ "Quality of sound and the tech behind it: What to look for when choosing a speaker - Pocket-lint". www.pocket-lint.com. Retrieved 2017-04-13.
  3. ^ "Pitch, Loudness and Quality of Musical Notes - Pass My Exams: Easy exam revision notes for GSCE Physics". www.passmyexams.co.uk. Retrieved 2017-04-13.
  4. ^ "What is pulse code modulation (PCM)? - Definition from WhatIs.com". SearchNetworking. Retrieved 2017-04-13.
  5. ^ "The Sampling Theorem". www.dspguide.com. Retrieved 2017-04-13.