Switching theory

  • Authors:
  • Edward J. McCluskey

  • Affiliations:
  • -

  • Venue:
  • Encyclopedia of Computer Science
  • Year:
  • 2003

Abstract

Switching theory is the abstract mathematical formalization used in the logic design of digital networks. It is so called because, when it was first developed by Claude Shannon (q.v.) in 1938, most logic networks were implemented using switches and electromechanical devices such as relays. Modern logic networks are usually constructed using electronic integrated circuits comprising networks of logical elements such as inverters, AND gates, and OR gates. These elements operate on binary signals, which are constrained to take on only two different voltage values (such as 0 or 5 volts).

Switching theory uses a two-valued Boolean algebra (sometimes called switching algebra) as a notation to represent the operation of such logic networks. The two algebraic values are most often represented as "0" and "1," although "T" and "F" are sometimes used to emphasize the relation to propositional logic. The correspondence between the algebraic symbol used to represent a signal and the voltage present is arbitrary, although the positive-logic convention, in which the algebraic 1 represents the more positive voltage signal, is now most common. Each input or output signal of a logic network is represented by a Boolean variable.

Boolean algebra has three basic operations: inversion, logical addition, and logical multiplication; these operations are implemented directly by logic gates called inverters, OR gates, and AND gates. The symbols most often used to represent these gates are shown in Fig. 1. The output of an inverter always takes on the value opposite to the value of its input. The output of an OR gate is always equal to 1 unless all of its inputs are equal to 0, in which case the output is 0. The output of an AND gate is always equal to 0 unless all of its inputs are equal to 1, in which case the output is 1.
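
The gate definitions above can be stated directly as rules over the two values 0 and 1. The following is a minimal illustrative sketch in Python (the language and the function names inverter, or_gate, and and_gate are assumptions for illustration, not part of the original article) that models the three basic operations and prints their truth tables.

    # Minimal sketch of two-valued (switching) algebra: signals take only the values 0 and 1.
    # Function names are illustrative; the behavior follows the definitions in the abstract.
    from itertools import product

    def inverter(a: int) -> int:
        """Output is the opposite of the input value."""
        return 1 - a

    def or_gate(*inputs: int) -> int:
        """Logical addition: output is 1 unless all inputs are 0."""
        return 1 if any(inputs) else 0

    def and_gate(*inputs: int) -> int:
        """Logical multiplication: output is 0 unless all inputs are 1."""
        return 1 if all(inputs) else 0

    if __name__ == "__main__":
        print(" a  b | NOT a | a OR b | a AND b")
        for a, b in product((0, 1), repeat=2):
            print(f" {a}  {b} |   {inverter(a)}   |   {or_gate(a, b)}    |    {and_gate(a, b)}")

Running the sketch enumerates all four input combinations for two variables and reproduces the familiar truth tables for inversion, OR, and AND.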