Digital Basics

Overview
On the final level of this area, you must defeat the minions of the Digital Basics tribe. Travel carefully: this is the final level in this area, and it can be filled with traps!

How analog and digital recordings work:
Thomas Edison is credited with creating the first device for recording and playing back sounds in 1877. His approach used a very simple mechanism to store an analog wave mechanically. In Edison's original phonograph, a diaphragm directly controlled a needle, and the needle scratched an analog signal onto a tinfoil cylinder.

The storage and playback of an analog wave can be very simple -- scratching onto tin is certainly a direct and straightforward approach. The problem with the simple approach is that the fidelity is not very good. For example, when you use Edison's phonograph, there is a lot of scratchy noise stored with the intended signal, and the signal is distorted in several different ways. Also, if you play a phonograph repeatedly, eventually it will wear out -- when the needle passes over the groove it changes it slightly (and eventually erases it).

Digital recording converts the analog wave into a stream of numbers and records the numbers instead of the wave. The conversion is done by a device called an analog-to-digital converter (ADC). To play back the music, the stream of numbers is converted back to an analog wave by a digital-to-analog converter (DAC). The analog wave produced by the DAC is amplified and fed to the speakers to produce the sound.
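The ADC/DAC round trip described above can be sketched in a few lines of Python. This is a toy model, not a real converter: the `adc`/`dac` names and the 8-bit (256-level) resolution are illustrative assumptions.

```python
import math

def adc(signal, sample_rate, duration, levels=256):
    """Sample an analog signal and quantize each sample to `levels` steps (toy ADC)."""
    n = int(sample_rate * duration)
    samples = []
    for i in range(n):
        t = i / sample_rate
        x = signal(t)                          # analog value in [-1, 1]
        code = round((x + 1) / 2 * (levels - 1))  # map to an integer code 0..levels-1
        samples.append(code)
    return samples

def dac(samples, levels=256):
    """Convert integer codes back to analog amplitudes (toy DAC)."""
    return [code / (levels - 1) * 2 - 1 for code in samples]

# A 440 Hz sine wave sampled at 8 kHz for 1 ms -> 8 numbers, then back to amplitudes
wave = lambda t: math.sin(2 * math.pi * 440 * t)
codes = adc(wave, 8000, 0.001)
rebuilt = dac(codes)
```

The stream of integers in `codes` is what a digital recording stores; `rebuilt` is the (slightly rounded) analog waveform the DAC would hand to the amplifier.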

What are the elements of how the brain works that make conversion work?
1. External and physical: the propagation of electromagnetic waves from the object to the eye.

2. The physical visual apparatus, from eye to brain, consisting of nervous tissue; some important preliminary processing already takes place here.

3. The third, and most complex, is the interpretation of the visual stimulus and the creation of the internal model of the world that is used by the consciousness. Rods and cones do initial processing: rods see in low light, and cones see color and detail. Memory is as important as the eye for final “vision”.

How sampling works:
Sampling: take measurements of a sine wave at fixed points in time.

The instantaneous amplitude of the signal is measured at fixed points in time; these measurements are the samples of the signal waveform.

Undersampling: taking too few measurements (the sampling points are too far apart to capture the waveform)

Oversampling: taking measurements at more points than are needed

Sampling Rate:
Nyquist sampling theorem - if a signal is sampled at at least twice its maximum frequency, then the waveform can be reconstructed without distortion

Ex. max frequency = 10 kHz → sampling rate = 2 × 10 kHz = 20,000 samples/second
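The theorem's converse can be checked numerically: if a 10 kHz tone is sampled at only 8,000 samples/second (below its Nyquist rate of 20,000), the samples are identical to those of a 2 kHz "alias" tone, so the original cannot be reconstructed. A minimal sketch:

```python
import math

fs = 8_000             # sampling rate in samples/second (too low for this tone)
f_real = 10_000        # actual tone frequency, above fs/2, so it is undersampled
f_alias = f_real - fs  # 2 000 Hz alias predicted by sampling theory

for n in range(8):
    s_real = math.sin(2 * math.pi * f_real * n / fs)
    s_alias = math.sin(2 * math.pi * f_alias * n / fs)
    # The undersampled 10 kHz tone is indistinguishable from a 2 kHz tone
    assert abs(s_real - s_alias) < 1e-9
```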

How quantizing works:
Quantizing = dividing the y-axis (amplitude) into a fixed number of pieces (levels).

“Rounding” → assign each sampled value to the nearest of a fixed number of amplitude levels

Two approaches: linear = the step sizes of the levels are equal, non-linear = the step sizes of the levels differ

Quantizing Error:
Quantizing Error = Noise

The difference between the actual sample values and the quantized values

How do we reduce it? Use more quantization levels

What is the cost of reducing it? A higher bit rate is needed
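The trade-off above can be illustrated numerically: quantize the same sine wave with 16 levels (4 bits) and with 256 levels (8 bits), and compare the average error. A toy sketch, in which the `quantize` helper is an illustrative linear quantizer:

```python
import math

def quantize(x, levels):
    """Round an amplitude in [-1, 1] to the nearest of `levels` equal (linear) steps."""
    code = round((x + 1) / 2 * (levels - 1))
    return code / (levels - 1) * 2 - 1

# One cycle of a sine wave, 100 samples
samples = [math.sin(2 * math.pi * t / 100) for t in range(100)]

# Mean quantizing error (noise) at two resolutions: 4-bit vs 8-bit
for levels in (16, 256):
    err = sum(abs(s - quantize(s, levels)) for s in samples) / len(samples)
    print(f"{levels} levels: mean error {err:.5f}")
```

More levels shrink the error, but each sample then needs more bits to store its level number, which is exactly the higher-bit-rate cost noted above.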

Binary:
Computers represent all data using two-level voltage electrical signals.

We can think of them as representing this data with on-off switches.

Bit: A binary digit - it has two possible states → on or off, true or false, 1 or 0

Bit-string: A consecutive string of bits

Byte: An 8-bit bit-string

Word: A bit-string that can be transferred and stored as a unit.

Ex. 01000011 → 0×2^7 + 1×2^6 + 0×2^5 + 0×2^4 + 0×2^3 + 0×2^2 + 1×2^1 + 1×2^0 = 67
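The place-value expansion in the example can be written as a small function (the name `bits_to_int` is illustrative):

```python
def bits_to_int(bit_string):
    """Expand a bit-string by place value: each bit times its power of two."""
    total = 0
    for i, bit in enumerate(reversed(bit_string)):  # rightmost bit is 2^0
        total += int(bit) * 2 ** i
    return total

print(bits_to_int("01000011"))  # 67, matching the worked example
```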

Bit Rates:
Bit rate = number of bits per sample (n) × number of samples per second (2Fmax) → bit rate = 2nFmax bits/second
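The 2nFmax formula can be sanity-checked against CD-quality audio, assuming a maximum frequency of about 22.05 kHz (half the 44.1 kHz CD sampling rate), 16 bits per sample, and two stereo channels. A minimal sketch:

```python
def bit_rate(bits_per_sample, f_max_hz, channels=1):
    """Uncompressed bit rate: 2 * n * Fmax per channel (Nyquist-rate sampling)."""
    return 2 * bits_per_sample * f_max_hz * channels

# CD-quality audio: 16 bits/sample, ~22.05 kHz max frequency, stereo
print(bit_rate(16, 22_050, channels=2))  # 1411200 bit/s, i.e. ~1.4 Mbit/s
```

That ~1.4 Mbit/s figure is why the compressed rates listed below (32-320 kbit/s) represent a large saving.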

Bitrate depends on:

Sampling frequency of original material

Quantization of sampling

How the data is encoded

Any compression algorithms

Audio

 * 32 kbit/s — MW (AM) quality
 * 96 kbit/s — FM quality
 * 128 - 192 kbit/s — typical "acceptable" quality; may be considered low-to-mid quality by those with discerning ears
 * 224 - 320 kbit/s — medium-high quality to near audio CD quality

Other audio

 * 4 kbit/s — minimum necessary for recognizable speech (using special-purpose speech codecs)
 * 8 kbit/s — telephone quality (using speech codecs)
 * 56 kbit/s — MP3 audio
 * 500 kbit/s - 1 Mbit/s — lossless audio as used in formats such as FLAC, WavPack, or Monkey's Audio

Video (MPEG-2)

 * 16 kbit/s — videophone quality (minimum necessary for a consumer-acceptable "talking head" picture)
 * 128 - 384 kbit/s — business-oriented videoconferencing quality
 * 1 Mbit/s — VHS quality
 * 5 Mbit/s — DVD quality
 * 15 Mbit/s — HDTV quality

Bandwidth:
Bandwidth is the difference between the upper and lower frequencies in a contiguous set of frequencies. It is typically measured in hertz, and may sometimes refer to passband bandwidth, sometimes to baseband bandwidth, depending on context.

How to estimate the bandwidth of a digital signal:

Must be able to pass the fastest-varying signal (alternating 1s and 0s) → one cycle can represent a single 1 and a single 0, hence estimate 2 bits per cycle

Thus, if n bits/second are needed, then n/2 Hz of bandwidth is needed,

so bit rate = 2nFmax,

but bandwidth = nFmax

Compression:
Remove redundancy in coding

e.g. replacing long string of identical bits with shorter code

111111111111111 becomes 15x1
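The "15x1" idea is a form of run-length encoding. A minimal sketch whose output format mirrors the example above (the `rle_encode` name is illustrative):

```python
def rle_encode(bits):
    """Run-length encode a bit-string: e.g. '111111111111111' -> '15x1'."""
    out = []
    i = 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1                     # extend the run of identical bits
        out.append(f"{j - i}x{bits[i]}")  # run length, then the repeated bit
        i = j
    return ",".join(out)

print(rle_encode("111111111111111"))  # 15x1
print(rle_encode("000011110000"))     # 4x0,4x1,4x0
```

Note that this only saves space when the data contains long runs; it is lossless, since the original bit-string can be rebuilt exactly from the codes.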

Don’t send bits that do not change in successive sampling intervals (e.g., video frames)

Compression saves transmission time or capacity and saves storage space on hard disks.

Need to distinguish between compression that degrades signal (lossy) from coding schemes that don’t lose information (lossless)

Common compression schemes:

MP3

 * MPEG-1 Audio Layer 3 (Moving Picture Experts Group)
 * If the volume of one frequency is much higher than simultaneously occurring sounds, the MP3 algorithm will discard the lower-volume sound.

AAC

 * Advanced Audio Coding
 * Commonly used by iTunes

WMA

 * Windows Media Audio
 * Optimized for Windows Media Player

WAV

 * Uncompressed
 * High fidelity

AIFF

 * Audio Interchange File Format
 * Uncompressed
 * For macOS
