English for Information Science and Electronic Engineering (2nd Edition)

Text

Part I: Telecommunication

Telecommunication is the transmission of signals over a distance for the purpose of communication. In modern times, this process typically involves the sending of electromagnetic waves by electronic transmitters, but in earlier times telecommunication may have involved the use of smoke signals, drums or semaphore. Today, telecommunication is widespread and devices that assist the process such as television, radio and telephone are common in many parts of the world. There are also many networks that connect these devices, including computer networks, public telephone networks, radio networks and television networks. Computer communication across the Internet is one of many examples of telecommunication.

Telecommunication systems are generally designed by telecommunication engineers. Early inventors in the field include Alexander Graham Bell, Guglielmo Marconi and John Logie Baird. Telecommunication is an important part of the world economy, with the telecommunication industry's revenue estimated at just under 3 percent of the gross world product.

Basic elements

Each telecommunication system consists of three basic elements: a transmitter that takes information and converts it to a signal, a transmission medium over which the signal is transmitted, and a receiver that receives the signal and converts it back into usable information.

Consider a radio broadcast for example. The broadcast tower is the transmitter, the radio is the receiver and the transmission medium is free space. Often telecommunication systems are two-way, and a single device acts as both a transmitter and receiver, or transceiver. For example, a mobile phone is a transceiver.

Telecommunication over a phone line is called point-to-point communication because it is between one transmitter and one receiver. Telecommunication through radio broadcasts is called broadcast (or point-to-multipoint) communication because it is between one powerful transmitter and numerous receivers.

Analog or digital

Signals can either be analog or digital. In an analog signal, the signal is varied continuously with respect to the information. In a digital signal, the information is encoded as a set of discrete values (for example, ones and zeros). During transmission, the information contained in analog signals will be degraded by noise. Conversely, unless the noise exceeds a certain threshold, the information contained in digital signals will remain intact. This represents a key advantage of digital signals over analog signals.
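The threshold property described above can be illustrated with a short Python sketch (the function name, voltage levels and noise model are illustrative assumptions, not from the text): as long as the noise stays below the decision threshold, every bit is recovered exactly.

```python
import random

def transmit_digital(bits, noise_amp=0.4, threshold=0.5):
    """Send bits as 0 V / 1 V levels over a noisy channel and recover
    them by thresholding at 0.5 V. Because the noise amplitude (0.4 V)
    never exceeds the threshold, the received bits are always intact."""
    received = [b + random.uniform(-noise_amp, noise_amp) for b in bits]
    return [1 if v > threshold else 0 for v in received]

bits = [1, 0, 1, 1, 0, 0, 1]
assert transmit_digital(bits) == bits   # intact: noise below threshold
```

An analog signal carrying the same information would have no such threshold: any added noise would change the recovered value.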

Networks

A collection of transmitters, receivers or transceivers that communicate with each other is known as a network.1 Digital networks may consist of one or more routers that route data to the correct user. An analog network may consist of one or more switches that establish a connection between two or more users. For both types of network, repeaters may be necessary to amplify or recreate the signal when it is being transmitted over long distances. This is to combat attenuation that can render the signal indistinguishable from noise.2

Channels

A channel is a division in a transmission medium so that it can be used to send multiple streams of information.3 For example, a radio station may broadcast at 96 MHz while another radio station may broadcast at 94.5 MHz. In this case, the medium has been divided by frequency and each channel receives a separate frequency to broadcast on. Alternatively, one could allocate each channel a recurring segment of time over which to broadcast — this is known as time-division multiplexing and is sometimes used in digital communication.4
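Time-division multiplexing can be sketched as round-robin interleaving: each channel gets a recurring time slot in the combined frame. The function names below are hypothetical, chosen only for this illustration.

```python
def tdm_multiplex(streams):
    """Interleave samples from several channels into recurring time slots."""
    frame = []
    for samples in zip(*streams):   # one slot per channel, repeated
        frame.extend(samples)
    return frame

def tdm_demultiplex(frame, n_channels):
    """Recover each channel by taking every n-th slot."""
    return [frame[i::n_channels] for i in range(n_channels)]

voice = ["v0", "v1", "v2"]
data  = ["d0", "d1", "d2"]
frame = tdm_multiplex([voice, data])
# frame interleaves the two streams: v0, d0, v1, d1, v2, d2
assert tdm_demultiplex(frame, 2) == [voice, data]
```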

Modulation

The shaping of a signal to convey information is known as modulation. Modulation can be used to represent a digital message as an analog waveform. This is known as keying and several keying techniques exist (these include phase-shift keying, frequency-shift keying and amplitude-shift keying). Bluetooth, for example, uses phase-shift keying to exchange information between devices.
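Phase-shift keying, the simplest form mentioned above, maps each bit to one of two carrier phases 180 degrees apart. A minimal binary PSK sketch (function names and sampling parameters are assumptions for illustration):

```python
import math

def bpsk_waveform(bits, carrier_hz=2.0, samples_per_bit=8):
    """Represent each bit as a carrier burst whose phase is 0 or pi:
    bit 1 -> +cos, bit 0 -> -cos (a 180-degree phase shift)."""
    out = []
    for b in bits:
        phase = 0.0 if b else math.pi
        for n in range(samples_per_bit):
            t = n / samples_per_bit
            out.append(math.cos(2 * math.pi * carrier_hz * t + phase))
    return out

def bpsk_detect(wave, carrier_hz=2.0, samples_per_bit=8):
    """Correlate each burst with the reference carrier; the sign of the
    correlation recovers the transmitted bit."""
    ref = [math.cos(2 * math.pi * carrier_hz * n / samples_per_bit)
           for n in range(samples_per_bit)]
    bits = []
    for i in range(0, len(wave), samples_per_bit):
        corr = sum(w * r for w, r in zip(wave[i:i + samples_per_bit], ref))
        bits.append(1 if corr > 0 else 0)
    return bits

assert bpsk_detect(bpsk_waveform([1, 0, 1, 1])) == [1, 0, 1, 1]
```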

Modulation can also be used to transmit the information of analog signals at higher frequencies. This is helpful because low-frequency analog signals cannot be effectively transmitted over free space. Hence the information from a low-frequency analog signal must be superimposed on a higher-frequency signal (known as a carrier wave) before transmission. There are several different modulation schemes available to achieve this (two of the most basic being amplitude modulation and frequency modulation). An example of this process in action is a DJ's voice being superimposed on a 96 MHz carrier wave using frequency modulation (the voice would then be received on a radio as the channel “96 FM”).
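The superposition of a low-frequency message on a carrier can be sketched as standard amplitude modulation, s(t) = (1 + depth · m(t)) · cos(2πf_c t). The sampling rate and modulation depth below are illustrative choices, not values from the text.

```python
import math

def am_modulate(message, carrier_hz=96.0, sample_rate=1000.0, depth=0.5):
    """Superimpose a low-frequency message (samples in [-1, 1]) on a
    high-frequency carrier by varying the carrier's amplitude."""
    out = []
    for n, m in enumerate(message):
        t = n / sample_rate
        out.append((1.0 + depth * m) * math.cos(2 * math.pi * carrier_hz * t))
    return out

# A slow 2 Hz tone standing in for the DJ's voice:
tone = [math.sin(2 * math.pi * 2.0 * n / 1000.0) for n in range(1000)]
signal = am_modulate(tone)
assert len(signal) == len(tone)
assert all(abs(s) <= 1.5 + 1e-9 for s in signal)  # envelope bounded by 1 + depth
```

Frequency modulation would instead vary the carrier's instantaneous frequency while keeping its amplitude constant.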

Part II: Data Transmission

Data transmission is the conveyance of any kind of information from one place to another. Historically this could be done by a courier, a chain of bonfires or semaphores, and later by Morse code over copper wires.

In recent computer terms, it means sending a stream of bits or bytes from one location to another using any number of technologies, such as copper wire, optical fiber, laser, radio, or infra-red light. Practical examples include moving data from one storage device to another and accessing a website, which involves data transfer from web servers to a user's browser.

A related concept to data transmission is the data transmission protocol, which is used to make the transferred data intelligible. Current protocols favor packet-based communication.

Types of data transmission

Serial transmission: Bits are sent over a single wire individually. Although only one bit is sent at a time, high transfer rates are possible. Serial transmission can be used over longer distances, since a check digit or parity bit can easily be sent along with the data.
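The parity bit mentioned above is the simplest serial check: one extra bit makes the total count of 1s even, so any single flipped bit is detected. A brief sketch (function names are illustrative):

```python
def add_parity(bits):
    """Append an even-parity check bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_parity(bits_with_parity):
    """A single flipped bit makes the count of 1s odd and is detected."""
    return sum(bits_with_parity) % 2 == 0

word = [0, 1, 0, 0, 0, 0, 0, 1]   # ASCII 'A'
sent = add_parity(word)
assert check_parity(sent)
corrupted = sent[:]
corrupted[2] ^= 1                 # one bit flipped in transit
assert not check_parity(corrupted)
```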

Parallel transmission: Multiple wires are used to transmit bits simultaneously. It is much faster than serial transmission, as one byte can be sent rather than one bit. This method is used internally within the computer, for example on the internal buses, and sometimes externally for such things as printers. However, this method of transmission is only usable over short distances, as the signal degrades and becomes unreadable: there is more interference between many wires than with a single wire.

Asynchronous and synchronous data transmission

Asynchronous transmission uses start and stop bits to signify the beginning and end of a transmission. This means that an 8-bit ASCII character would actually be transmitted using 10 bits; e.g., the letter A ("0100 0001") would become "1 0100 0001 0". The extra bits at the start and end of the transmission tell the receiver first that a character is coming and secondly that the character has ended. This method of transmission is used when data is sent intermittently as opposed to in a solid stream.1 In the example above, the start and stop bits are the first and last bits shown. The start and stop bits must be of opposite polarity, which allows the receiver to recognize when the second packet of information is being sent.
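The framing above can be sketched in a few lines, following the text's convention of a start bit 1 and a stop bit 0 (practical UARTs often use the opposite polarity; the function names are illustrative):

```python
def frame_async(data_bits, start=1, stop=0):
    """Wrap 8 data bits in a start bit and a stop bit of opposite
    polarity: 10 bits on the wire per 8-bit character."""
    assert start != stop
    return [start] + data_bits + [stop]

def deframe_async(frame, start=1, stop=0):
    """Strip the framing bits; a mismatched start/stop bit is a framing error."""
    if frame[0] != start or frame[-1] != stop:
        raise ValueError("framing error")
    return frame[1:-1]

letter_a = [0, 1, 0, 0, 0, 0, 0, 1]
on_wire = frame_async(letter_a)
assert len(on_wire) == 10               # 10 bits carry one 8-bit character
assert deframe_async(on_wire) == letter_a
```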

Synchronous transmission uses no start and stop bits; instead, it synchronizes transmission speeds at both the receiving and sending ends using clock signals built into each component.2 A continual stream of data is then sent between the two nodes. Because there are no start and stop bits, the data transfer rate is higher, but more errors will occur: the clocks will eventually drift out of sync, so the receiving device no longer samples at the times agreed in the protocol, and some bytes can become corrupted by losing bits.3 Ways around this problem include re-synchronization of the clocks and the use of check digits to ensure that each byte is correctly interpreted and received.

Protocols and handshaking

Protocol: A protocol is an agreed-upon format for transmitting data between two devices, e.g., computer and printer. All communications between devices require that the devices agree on the format of the data. The set of rules defining a format is called a protocol.

The protocol determines the following:

·The type of error checking to be used if any, e.g., check digit (and what type/formula to be used).

·Data compression method, if any, e.g., zipped files for large transfers across the Internet, LANs and WANs.

·How the sending device indicates that it has finished sending a message, e.g., a spare wire in a communications port; for serial (USB) transfer, start and stop digits may be used.4

·How the receiving device indicates that it has received a message.

·Rate of transmission (in baud or bit rate).

·Whether transmission is to be synchronous or asynchronous.

In addition, protocols can include sophisticated techniques for detecting and recovering from transmission errors and for encoding and decoding data.
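The choices the list above enumerates can be pictured as a record that both devices must share before data transfer begins. This hypothetical Python sketch names its fields after the bullet points; none of the names come from a real protocol specification.

```python
from dataclasses import dataclass

@dataclass
class LinkProtocol:
    """Illustrative record of the agreed-upon format: both ends must
    hold identical values in every field before data flows."""
    error_check: str        # e.g. "even-parity", "crc16", or "none"
    compression: str        # e.g. "zip" or "none"
    end_of_message: str     # how the sender marks completion
    acknowledgement: str    # how the receiver confirms reception
    baud_rate: int          # rate of transmission
    synchronous: bool       # synchronous vs asynchronous transmission

serial_link = LinkProtocol(
    error_check="even-parity",
    compression="none",
    end_of_message="stop-bit",
    acknowledgement="ack-character",
    baud_rate=9600,
    synchronous=False,
)
assert not serial_link.synchronous
```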

Handshaking is the process by which two devices initiate communications, e.g., a certain ASCII character or an interrupt signal/request bus signal to the processor along the control bus.5 Handshaking begins when one device sends a message to another device indicating that it wants to establish a communications channel. The two devices then send several messages back and forth that enable them to agree on a communications protocol. Handshaking must occur before data transmission as it allows the protocol to be agreed.
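The back-and-forth agreement on a protocol can be reduced to a toy negotiation (the protocol names below are invented for the example): the initiator proposes options in order of preference and the responder accepts the first one it also supports.

```python
def handshake(initiator_preferences, responder_supported):
    """Toy negotiation: return the first proposed protocol that both
    devices support, or None if no common protocol exists."""
    for proposal in initiator_preferences:
        if proposal in responder_supported:
            return proposal        # agreed: data transmission may begin
    return None                    # no agreement: no communications channel

assert handshake(["9600-8N1", "4800-7E1"], {"4800-7E1"}) == "4800-7E1"
assert handshake(["9600-8N1"], set()) is None
```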

Part III: Information Theory

Information theory is a branch of applied mathematics and engineering involving the quantification of information to find fundamental limits on compressing and reliably communicating data.1 A key measure of information that comes up in the theory is known as information entropy, which is usually expressed by the average number of bits needed for storage or communication. Intuitively, entropy quantifies the uncertainty involved in a random variable. For example, a fair coin flip will have less entropy than a roll of a die.2
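The coin-versus-die comparison can be checked directly with Shannon's entropy formula, H = -Σ p·log₂p, in bits (the function name is illustrative):

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum p*log2(p), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

coin = entropy_bits([0.5, 0.5])      # fair coin: exactly 1 bit
die = entropy_bits([1 / 6] * 6)      # fair die: log2(6), about 2.585 bits
assert abs(coin - 1.0) < 1e-12
assert coin < die                    # the die outcome is more uncertain
```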

Applications of fundamental topics of information theory include lossless data compression (e.g. ZIP files), lossy data compression (e.g. MP3s), and channel coding (e.g. for DSL lines). The field is at the crossroads of mathematics, statistics, computer science, physics, neurobiology, and electrical engineering. Its impact has been crucial to the success of the Voyager missions to deep space, the invention of the CD, the feasibility of mobile phones, the development of the Internet, the study of linguistics and of human perception, the understanding of black holes, and numerous other fields. (Voyager 1 is a space probe launched September 5, 1977. It visited Jupiter and Saturn and was the first to provide detailed images of the moons of these planets. It is the farthest human-made object, traveling away from both the Earth and the Sun. Voyager 2 was launched earlier, on August 20, 1977. The Voyager mission was supposed to last just five years, and is now celebrating its 30th anniversary. Scientists continue to receive data from the spacecraft as they approach interstellar space.) Important sub-fields of information theory are source coding, channel coding, algorithmic complexity theory, algorithmic information theory, and measures of information.

Overview

The main concepts of information theory can be grasped by considering the most widespread means of human communication: language. Two important aspects of a good language are as follows: First, the most common words (e.g., "a," "the," "I") should be shorter than less common words (e.g., "benefit," "generation," "mediocre"), so that sentences will not be too long. Such a tradeoff in word length is analogous to data compression and is the essential aspect of source coding. Second, if part of a sentence is unheard or misheard due to noise (e.g., a passing car), the listener should still be able to gather the meaning of the underlying message. Such robustness is as essential for an electronic communication system as it is for a language. Properly building such robustness into communications is done by channel coding. Source coding and channel coding are the fundamental concerns of information theory.

Note that these concerns have nothing to do with the importance of messages. For example, a platitude such as "Thank you; come again" takes about as long to say or write as the urgent plea, "Call an ambulance!" while clearly the latter is more important and more meaningful. Information theory, however, does not involve message importance or meaning, as these are matters of the quality of data rather than the quantity of data, the latter of which is determined solely by probabilities.3

Information theory is generally considered to have been founded in 1948 by Claude Shannon (Figure 4.1) in his seminal work, "A Mathematical Theory of Communication." The central paradigm of classical information theory is the engineering problem of the transmission of information over a noisy channel. The most fundamental results of this theory are Shannon's source coding theorem, which establishes that, on average, the number of bits needed to represent the result of an uncertain event is given by its entropy; and Shannon's noisy-channel coding theorem, which states that reliable communication is possible over noisy channels provided that the rate of communication is below a certain threshold called the channel capacity.4 The channel capacity can be approached by using appropriate encoding and decoding systems.

Figure 4.1 Claude Shannon

Information theory is closely associated with a collection of pure and applied disciplines that have been investigated and reduced to engineering practice throughout the world under a variety of titles over the past half century or more: adaptive systems, anticipatory systems, artificial intelligence, complex systems, complexity science, cybernetics, informatics, machine learning, along with systems sciences of many descriptions. Information theory is a broad and deep mathematical theory, with equally broad and deep applications, amongst which is the vital field of coding theory.

Coding theory is concerned with finding explicit methods, called codes, of increasing the efficiency and reducing the net error rate of data communication over a noisy channel to near the limit that Shannon proved is the maximum possible for that channel.5 These codes can be roughly subdivided into data compression (source coding) and error-correction (channel coding) techniques. In the latter case, it took many years to find the methods Shannon's work proved were possible. A third class of information theory codes is cryptographic algorithms (both codes and ciphers). Concepts, methods and results from coding theory and information theory are widely used in cryptography and cryptanalysis.

Information theory is also used in information retrieval, intelligence gathering, gambling, statistics, and even in musical composition.

Quantities of information

Information theory is based on probability theory and statistics. The most important quantities of information are entropy, the information in a random variable, and mutual information, the amount of information in common between two random variables. The former quantity indicates how easily message data can be compressed while the latter can be used to find the communication rate across a channel.6

The choice of logarithmic base determines the unit of information entropy that is used. The most common unit of information is the bit, based on the binary logarithm.
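The mutual information introduced above can be computed directly from a joint probability table, using the binary logarithm so the result is in bits (the function name is illustrative):

```python
import math

def mutual_information(joint):
    """I(X;Y) = sum over x,y of p(x,y) * log2( p(x,y) / (p(x)*p(y)) ),
    for a joint distribution given as a nested list of probabilities."""
    px = [sum(row) for row in joint]                 # marginal of X
    py = [sum(col) for col in zip(*joint)]           # marginal of Y
    mi = 0.0
    for i, row in enumerate(joint):
        for j, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (px[i] * py[j]))
    return mi

independent = [[0.25, 0.25], [0.25, 0.25]]   # X tells nothing about Y
identical   = [[0.5, 0.0], [0.0, 0.5]]       # X determines Y completely
assert abs(mutual_information(independent)) < 1e-12
assert abs(mutual_information(identical) - 1.0) < 1e-12
```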

Coding theory

Coding theory is the most important and direct application of information theory. It can be subdivided into source coding theory and channel coding theory. Using a statistical description for data, information theory quantifies the number of bits needed to describe the data, which is the information entropy of the source.

Data compression (source coding). There are two formulations for the compression problem:

·Lossless data compression — the data must be reconstructed exactly;

·Lossy data compression — allocates the bits needed to reconstruct the data within a specified fidelity level, measured by a distortion function. This subset of information theory is called rate-distortion theory.7

Error-correcting codes (channel coding). While data compression removes as much redundancy as possible, an error correcting code adds just the right kind of redundancy (i.e. error correction) needed to transmit the data efficiently and faithfully across a noisy channel.
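The simplest example of adding redundancy of the right kind is the triple-repetition code: each bit is sent three times, and a majority vote at the receiver corrects any single flipped bit per group (a sketch, far from the capacity-approaching codes Shannon's theorem promises; function names are illustrative):

```python
def repeat_encode(bits, n=3):
    """Triple-repetition code: add redundancy by sending each bit n times."""
    return [b for b in bits for _ in range(n)]

def repeat_decode(coded, n=3):
    """Majority vote per group corrects any single flipped bit in a group."""
    return [1 if sum(coded[i:i + n]) > n // 2 else 0
            for i in range(0, len(coded), n)]

msg = [1, 0, 1]
sent = repeat_encode(msg)        # 1,1,1, 0,0,0, 1,1,1 on the channel
sent[4] ^= 1                     # channel noise flips one bit
assert repeat_decode(sent) == msg
```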

This division of coding theory into compression and transmission is justified by the information transmission theorems, or source-channel separation theorems that justify the use of bits as the universal currency for information in many contexts. However, these theorems only hold in the situation where one transmitting user wishes to communicate to one receiving user. In scenarios with more than one transmitter (the multiple-access channel), more than one receiver (the broadcast channel) or intermediary “helpers” (the relay channel), or more general networks, compression followed by transmission may no longer be optimal. Network information theory refers to these multi-agent communication models.