
HISTORICAL DEVELOPMENT OF THE ATM SYSTEM

The present-day ATM system in the United States has evolved in response to the needs of several different groups of users and providers of ATM services (2). These groups include air carriers, air taxis, the military, general aviation, business aviation, pilots' associations, and air traffic controllers' associations. The ATM system has changed with technological advancements in the areas of communication, navigation, surveillance, computer hardware, and computer software. Detailed historical accounts of ATM development are available in Refs. 1 and 3. In the history of ATM development, five periods are easily identifiable. Early aviation developments took place during the period from 1903 to 1925. This period saw the development of aircraft construction methods, the use of radio as a navigation aid, nighttime navigation using ground lighting, and the development of airmail service. The important legislative action that marks this period is the Airmail Act of 1925, which enabled the Postmaster General to contract with private individuals and corporations for transporting mail. An important consequence of this Act was that companies like Boeing, Douglas, and Pratt and Whitney entered the business of supplying aircraft and engines to the budding airmail industry. With the increase in air traffic activity, a need for regulation was felt to unify the industry through common sets of rules and procedures. An advisory board made its recommendations in the Morrow Report, which led to the signing of the Air Commerce Act into law in 1926. This Act marks the beginning of the second period of ATM development.

The period between 1926 and 1934 saw Charles Lindbergh's flight across the Atlantic, the installation of ground-to-air radio in aircraft, the development of ground-based radio navigation aids, airline aircraft equipped with two-way radio telephone, radio-equipped air traffic control towers, and the development of a new generation of faster, higher-flying transport aircraft capable of being flown solely by reference to cockpit instrumentation. The third phase of ATM development is marked by the creation of the Bureau of Air Commerce in 1934.

During the third phase, which lasted until 1955, numerous changes took place that shaped the ATM system into its present form. The principal airlines established interline agreements in 1935 to coordinate traffic into the Newark, Chicago, and Cleveland airports. The center established at Newark became the first airway traffic control unit (ATCU) in the world. In 1938, the US Congress created the Civil Aeronautics Authority, which in 1940 was reorganized as the Civil Aeronautics Administration (CAA). This period saw the development of visual flight rules (VFR) and instrument flight rules (IFR). The civil airways system, controlled airports, airway traffic control areas, even and odd altitude levels, and radio fixes for mandatory position reporting by IFR aircraft were established during this phase. By 1942, 23 ARTCCs (formerly ATCUs) provided coverage of the complete continental airways system. During the World War II years between 1941 and 1945, the CAA set up approach control facilities at the busiest airports to separate arriving and departing aircraft out to 20 miles. In 1947, the International Civil Aviation Organization (ICAO) was formed. It adopted the US navigation and communication standards as the worldwide standard and English as the common language for air traffic control. The most important development of this period was the radio detection and ranging (radar) device. The postwar era saw the development of direct controller/pilot interaction, implementation of the VHF omnidirectional range (VOR) and distance measuring equipment (DME), installation of the instrument landing system (ILS) to aid pilots during landing, and application of radar for surveillance in airport areas.

The fourth phase of ATM development occurred during 1955 to 1965. A short-range air navigation system known as the VORTAC system was developed by colocating the civilian VOR and the US Navy-developed tactical air navigation (TACAN) system in common facilities. Experience with radar use during the postwar era eventually led to the development of air route surveillance radar (ARSR). The first such system was installed at the Indianapolis Center in 1956. In the same year, the first air traffic control computer was also installed at the Indianapolis Center. Research and development efforts were begun by the CAA on a secondary radar system that would use a ground interrogator to trigger transponders onboard the aircraft and obtain replies to display the aircraft identification and altitude on the controller's radar screen. An experimental version of this system, known as the air traffic control radar beacon system (ATCRBS), was implemented in 1957. In 1958, the US Congress passed the Federal Aviation Act, which created the Federal Aviation Agency as the new independent agency to succeed the CAA. With the acceptance of radar surveillance as the principal tool for control of air traffic, new separation standards were needed. Other significant changes during this period were the introduction of high-speed commercial jet aircraft and an increase in traffic volume. To accommodate these developments and to keep the task of ATM manageable, smaller segments of airspace known as sectors were developed based on air traffic flow patterns and controller workload considerations. To reduce the bookkeeping workload caused by sectorization, a computerized flight information system for updating flight information and automatically printing flight progress strips was developed. By 1963, several of the flight data processing (FDP) computers were placed into operational ATM service. The first prototype of a computerized radar system for arrival and departure control, called the automated radar terminal system (ARTS), was installed in the Atlanta, Georgia, air traffic control tower in 1964. In addition to the steady

overwhelmed the system at some airports. Flow control measures such as ground holding and airborne holding were put into practice to match the traffic rate with the airport acceptance rate.


Figure 2. Air traffic activity historical data, plotted by fiscal year.

The traffic growth from the middle of the fourth phase of ATM development to the present is shown in Fig. 2. The graphs in the figure are based on the data provided in the FAA Air Traffic Activity report (4), the FAA Aviation Forecasts publication (5), and the FAA Administrator's Fact Book (6). It should be noted that the number of airport operations is representative of usage by all aircraft operators, including general aviation, while the number of aircraft handled is representative of higher-altitude traffic reported by the ARTCCs. Several interesting trends can be observed from the graphs: traffic growth subsequent to the Airline Deregulation Act of 1978, traffic decline after the PATCO strike in 1981, and the eventual recovery after approximately 3 years. All the graphs except the one for flight service usage show an increasing trend. The decreasing trend in flight service usage since 1979 is due to (a) improved cockpit equipage, with part of the service being provided by the airline operations centers (AOCs), and (b) consolidation of the FAA flight service facilities.

Properties of High-Temperature Superconductors

Magnetic Properties. The critical temperature (Tc) is defined as the temperature below which a superconductor possesses no dc electrical resistivity or, equivalently, the temperature below which it exhibits diamagnetic behavior. The useful range of operation of a superconductor is typically below 0.6 Tc, as other important superconducting properties are enhanced below this temperature.

Fig. 2. Temperature-current-magnetic field (T-J-H) three-dimensional surface defining the operating limits for a superconductor.

Critical current density (Jc) is one of the most important superconducting properties for engineering applications. It is an estimate of the maximum current density (current per cross-sectional area of the conductor) a superconductor can support before becoming a normal conductor. The critical field (Hc) for a superconductor is the maximum magnetic field below which a superconductor exhibits diamagnetic behavior and above which superconductivity is quenched. The Tc, Hc, and Jc parameters define a point in three-dimensional space, and for superconductors these points span a volume, as shown in Fig. 2. Tc, Hc, and Jc values are relatively low in type I superconductors. Type II superconductors are generally suitable for most electrical and electronic applications because of higher Tc, Hc, and Jc values (11). In type I superconductors, when the magnetic field is below Hc, the flow of shielding current is restricted to a thin surface layer whose thickness is the penetration depth (λ). The penetration depth is very small near 0 K and increases dramatically as the temperature approaches Tc. The penetration depth at 0 K ranges from 100 Å to 1500 Å for type I materials (11). Above the critical field, the magnetic field completely penetrates a type I superconductor, quenching superconductivity, as shown in Fig. 3. Figure 3 also illustrates the temperature dependence of resistivity in a superconductor at T > Tc.
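As a rough illustration of the T-J-H operating volume of Fig. 2, the following sketch tests whether an operating point lies inside an assumed critical surface. The linear trade-off model and the numerical values of Tc, Hc, and Jc0 are illustrative assumptions, not properties of any particular material.

```python
# Hypothetical sketch: test whether an operating point (T, J, H) lies inside
# a T-J-H critical surface like that of Fig. 2. The normalized linear model
# and the numbers (Tc, Hc, Jc0) are assumptions for illustration only.

def inside_critical_surface(T, J, H, Tc=90.0, Hc=100.0, Jc0=1e6):
    """Return True if (T, J, H) lies below the assumed critical surface.

    T in kelvin, H in tesla, J in A/cm^2. Crude model: each coordinate is
    scaled by its critical value, and the scaled sum must stay below 1.
    """
    if T >= Tc or H >= Hc or J >= Jc0:
        return False
    t, h, j = T / Tc, H / Hc, J / Jc0
    return t + h + j < 1.0

# Example: a conductor at 0.6 Tc carrying a modest current in a low field
print(inside_critical_surface(T=54.0, J=2e5, H=5.0))    # True
print(inside_critical_surface(T=85.0, J=8e5, H=50.0))   # False
```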

In type II superconductors, Hc1 represents the lower critical field above which magnetic flux penetrates a superconductor to form a mixed state in which superconducting and normal electrons coexist. When H > Hc2, the upper critical field, superconductivity is largely confined to the surface of the material. In the mixed state, magnetic flux penetrates through small tubular regions called vortices (or flux tubes), each on the order of the coherence length (ξ, a length scale that characterizes superconducting electron pair coupling) and each containing one quantum of flux, φ0 (12). Abrikosov, in his study of type II superconductors, determined that φ0 = h/2e, where h is Planck's constant and e is the electronic charge (12). The vortices form a periodic lattice called the Abrikosov vortex lattice. The resistivity of a superconductor may be vanishing in the mixed state, provided the vortices are pinned or trapped. As the applied magnetic field (Ha) approaches Hc2, the number of vortices increases until no more vortices can be added, at which point the material becomes a normal conductor. Figure 4 shows the magnetic properties of type II superconductors. In the mixed state, each vortex resides in a normal region, separated from the others by superconducting regions. The vortices experience three different types of forces: first, the Lorentz force due to the flow of external current, directed along and proportional to the vector product of the current and the vortex field; second, the force of repulsion from other vortices; and third, the pinning force from metallurgical defects. The Lorentz force causes motion of vortices (also called flux flow). The vortex motion produces an electric field opposing the flow of current, essentially contributing to ohmic losses.
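Since φ0 = h/2e involves only fundamental constants, its value is easy to verify numerically; a minimal check using CODATA constant values might look like this.

```python
# Quick check of Abrikosov's flux quantum, phi_0 = h / 2e. The factor of 2
# reflects the paired charge of the superconducting electrons.
h = 6.62607015e-34   # Planck's constant, J*s
e = 1.602176634e-19  # elementary charge, C

phi_0 = h / (2 * e)
print(f"phi_0 = {phi_0:.3e} Wb")  # ~2.068e-15 Wb per vortex
```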

Fig. 3. A type I superconductor's magnetic flux density versus applied magnetic field. A type I superconductor will not have any flux enclosed in the bulk below the critical temperature. Above the critical field, the applied field completely permeates the bulk of the material.

However, vortices can become trapped or pinned (called flux pinning) at metallurgical defects, in secondary phases, or at impurity sites. Owing to the repulsive forces between vortices, pinning a few vortices may lead to a frozen vortex lattice, or a lossless state; this holds only for direct currents and low-frequency alternating currents. The frozen vortex lattice occurs only below a critical field called the irreversibility field (Hirr). Some of the trapped vortices can remain in the superconductor, contributing to a hysteretic behavior when an alternating current is applied, which produces further dissipation of energy. The trapped flux is analogous to remanent flux in ferromagnetic materials. While pinning centers tend to prevent the movement of vortices, there is a tendency for vortices to jump over the pinning defects. This phenomenon is called flux creep. At Ha > Hirr, vortices will move, causing additional energy dissipation. Resistive transitions of a superconducting sample in applied magnetic fields are shown in Fig. 5. Transitions at higher fields clearly show additional ohmic losses (13). Although the upper critical field is generally higher in type II materials, the limiting field is the irreversibility field, which is an order of magnitude lower than Hc2. Owing to the complex nature of the cuprate ceramic superconductors, and their operation at higher temperatures, the ac losses in HTS materials are generally higher than in LTS materials (14,15).

Structural Properties. The presence of one or more copper oxide (CuO2) planes in the unit cell is a common feature of all HTS materials, which are also referred to as cuprate superconductors. The most popular cuprate materials are YBa2Cu3O7−δ (henceforth referred to as YBCO), Bi2Sr2Can−1CunO2n+4 (where n = 2, 3), known as BSCCO (henceforth referred to as Bi2212 and Bi2223 for n = 2 and n = 3, respectively), Tl2Ba2Cam−1CumO2m+4 (henceforth referred to as Tl2201, Tl2212, and Tl2223), and HgBa2Cam−1CumO2m+2 (where m = 1, 2, 3). Table 1 lists the well-developed cuprate superconductors, their superconducting properties, and important applications demonstrated to date. In YBCO, there are two square planar CuO2 planes stacked in the c-direction, separated by an intercalating layer of barium and copper atoms and a variable number of oxygen atoms. The conventional wisdom is that the CuO2 planes are the conduction channels of superconductivity, whereas the intercalating layers provide carriers or act as charge reservoirs necessary for superconductivity, although this view is not shared universally (16). The charge density, the number of superconducting charge carriers per unit volume, is determined by the overall chemistry of the system and by the charge transfer between the CuO2 planes and the CuO chains. The charge density in an HTS material (10^19/cm^3) is two orders of magnitude lower than in conventional LTS materials (10^21/cm^3). Remarkably, the oxygen content in the system changes the oxidation states of the copper chain atoms, which, in turn, affects their ability for charge transfer, the charge density, and the superconducting properties.


Fig. 4. The magnetic behavior (magnetization versus applied magnetic field H) of a type II superconductor, showing the Meissner state below Hc1, mixed-state behavior between Hc1 and Hc2, and surface superconductivity between Hc2 and Hc3.


Fig. 5. Temperature dependence of resistivity for an epitaxial c-axis-oriented YBCO thin film at different magnetic fields, with the field parallel to the c-axis. Courtesy of Ref. 13.

Depending on the oxygen content, the YBCO 123 material can have a nonsuperconducting tetragonal (a = b ≠ c) phase or a 90 K superconducting orthorhombic (a ≠ b ≠ c) phase. When fully oxygenated, YBCO possesses an orthorhombic unit cell with typical dimensions of a = 3.85 Å, b = 3.88 Å, and c = 12.0 Å, and a Tc of 90 K. Figure 6 shows the crystal structures of the YBCO 123 and BSCCO 2223 superconductors, showing the conduction layers and the binding layers in each case.

The BSCCO superconductor contains a weakly bonded double BiO layer that separates the CuO2 planes. The Bi2223 structure is part of a family of several other HTS compounds in which Bi is replaced by Tl or Hg (with different oxygen coordination) and partially by lead. In some cases, the double metal-oxide layer can be reduced to a single layer, yielding another family of superconductors such as 1212 or 1223 [e.g., (Tl,Pb)Ba2Ca2Cu3O9, henceforth referred to as Tl1223].

Table 1. Promising HTS Materials, Properties, and Applications

Material | Tc | Jc (A/cm^2) | Applications

Bulk YBa2Cu3O7−δ (123) YBCO (melt grown) | 90 K | 10^5 at 77 K, zero field; 10^4 at 77 K, 10 T | HTS cavities; bearings, EM shields
Thin-film YBCO on crystalline substrates | 90 K | 10^7 at 77 K, zero field; 10^6 at 77 K, 5 T | SQUIDs, microwave electronics
Tl2Ba2CaCu2O8 (2212) thin film | 97-100 K | 3 × 10^6 at 77 K, zero field; 1 × 10^6 at 77 K, in field | Microwave electronics
Tl2Ba2Ca2Cu3O10 (2223) thin film | 117-123 K | 2 × 10^6 at 77 K, zero field; 5 × 10^5 at 77 K, 1 T | Microwave electronics
Bi2Sr2CaCu2O8 (2212) wires/tapes | 90 K | 10^4 at 77 K, zero field; 10^3 at 77 K, 1 T; 10^6 at 4 K, zero field; 10^5 at 4 K, 1 T | Low-field HTS wires for 20-30 K applications
Bi2Sr2Ca2Cu3O10 (2223) wires/tapes | 110 K | 5-7 × 10^4 at 77 K, zero field; 5 × 10^3 at 77 K, 1 T | Magnets, current leads, SMES


Fig. 6. The crystal structures of the YBCO 123 and BSCCO 2223 compounds, showing the conducting layers and the binding layers.

The single-layer compounds offer strong flux pinning and a low intrinsic defect structure compared to the double-layer compounds. The presence of a weakly bonded BiO layer is crucial for the superconducting properties of the Bi2212 and 2223 systems. Mechanically, Bi2212 and 2223 are micaceous (mica- or clay-like), and they have highly anisotropic growth rates along the ab-plane and the c-direction. The latter property is important for processing long-length wires and for enhancing
the electromagnetic connectivity of the grains, thus making high transport current densities over long lengths possible (17). This is mainly due to the weak interlayer bonding of the BiO layer with the CuO2 planes. On the other hand, the weak interlayer bonding of the BiO layer also leads to intermixing of the Bi2212 and Bi2223 phases. In spite of this weakness, the better grain connectivity and micaceous crystalline morphology are attractive for developing long-length HTS wires. The Tl2212 and 2223 compounds have structures similar to the Bi2212 and 2223 compounds, with TlO double layers replacing the BiO layers.

Other Important Properties. All the popular HTS materials possess a fundamental limitation, crystalline anisotropy; that is, they possess different structural and electrical properties in different directions. Superconducting properties such as the critical current density (Jc) and critical magnetic field (Hc) along the ab-planes (xy) are superior to those along the c-axis (z). A major challenge for researchers has been to develop textured samples to take advantage of the high Jc in the ab-planes. The HTS materials also exhibit higher penetration depths compared with LTS materials. The penetration depth is an important parameter for high-frequency applications of superconductors. It is defined as the depth at which a magnetic field penetrating into a superconducting sample decays to 1/e of the field at the surface. The magnetic field decays in the form H = H0 e^(−x/λ), where H0 is the field at the surface, x is the depth into the sample, and λ is the penetration depth of the superconductor, analogous to the skin depth in conventional electrical conductors. The penetration depth λ(T) increases with temperature. The penetration depth is a frequency-independent parameter, in contrast to the frequency-dependent skin depth of normal conductors. This means that little or no dispersion will be introduced in superconducting components, up to frequencies as high as tens of gigahertz, in contrast to the dispersion present in normal metals. Furthermore, the lower losses in superconductors lead to a reduction in physical size, and this feature represents another advantage for HTS thin-film based circuits. Compact delay lines, filters, and resonators with a high quality factor (Q) are possible due to the low conductor losses (see Superconducting filters and passive components). The challenge in processing involves developing HTS materials with smooth surface morphology to minimize high-frequency ac conductor losses. The extremely short coherence length in HTS materials (<30 Å along the ab-plane) increases the difficulty of making Josephson junctions (see Tunneling and Josephson junctions).
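To make the exponential decay concrete, here is a small sketch that evaluates H = H0 e^(−x/λ) at a few depths; the penetration depth of 150 nm is an assumed, order-of-magnitude value for an HTS film, not a measured figure.

```python
import math

# Illustrative sketch of the field decay H(x) = H0 * exp(-x / lambda) inside
# a superconductor. The penetration depth (150 nm) is an assumed,
# order-of-magnitude value for illustration.
H0 = 1.0       # field at the surface (arbitrary units)
lam = 150e-9   # assumed penetration depth, m

for x_nm in (0, 50, 150, 300, 600):
    x = x_nm * 1e-9
    H = H0 * math.exp(-x / lam)
    print(f"x = {x_nm:4d} nm  ->  H/H0 = {H:.3f}")
# At x = lambda the field has decayed to 1/e (~0.368) of its surface value,
# and it is essentially screened out a few penetration depths in.
```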

Logic Design Representations

A logic function can be represented in different ways. Both behavioral (also called functional) representations and structural representations are used in logic design. These representations can be used at all levels of abstraction: the architecture level, the register-transfer level, and the logic level.

Waveforms. Waveforms are normally used for viewing simulation results and for specifying stimulus (input) to the simulator. Recently they have also been used increasingly as one possible form of design specification, especially for designing asynchronous circuits and circuits that interface with buses. Figure 2 shows the waveforms of a full adder.

Logic Gate Networks. Standard design uses the basic logic gates: AND, OR, NOT, NAND, and NOR. More recently, EXOR and XNOR gates have been incorporated into tools and designs, and several algorithms for logic design that take EXOR and XNOR gates into account have been created. For certain designs, such as arithmetic datapath operations, EXOR-based logic can decrease area, improve speed and power consumption, and significantly improve testability. Such circuits are thus used in design for test. Other gate models include designing with EPLDs, which realize AND-OR and OR-AND architectures, corresponding to sum-of-products and product-of-sums expressions, respectively. In standard cell technologies, more powerful libraries of cells are used, such as AND-OR-INVERT or OR-AND-INVERT gates. In FPGAs, different combinations of multiplexers, cells that use positive Davio (AND-EXOR) and negative Davio (NOT-AND-EXOR) expansion gates, or similar cells with a small number of inputs and outputs are used. The lookup-table model assumes that an arbitrary function of some small number of variables (3, 4, or 5) and a small number of outputs, usually 1 or 2, can be realized in a programmable cell, as in the sketch below. Several design optimization methodologies have been developed for each of these models.
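As a concrete sketch of the lookup-table model, this hypothetical example realizes a full adder as two 3-input LUTs, one per output; the table encodings and helper names are our own, not those of any FPGA vendor tool.

```python
from itertools import product

# Sketch of the lookup-table (LUT) model: a full adder mapped onto two
# 3-input LUTs, one per output. Each LUT stores the truth table of an
# arbitrary 3-variable function, indexed directly by the input bits.
SUM_LUT   = [0, 1, 1, 0, 1, 0, 0, 1]  # a XOR b XOR cin
CARRY_LUT = [0, 0, 0, 1, 0, 1, 1, 1]  # majority(a, b, cin)

def lut3(table, a, b, c):
    """Evaluate a 3-input LUT: the input bits form the index into the table."""
    return table[(a << 2) | (b << 1) | c]

for a, b, cin in product((0, 1), repeat=3):
    s = lut3(SUM_LUT, a, b, cin)
    cout = lut3(CARRY_LUT, a, b, cin)
    print(a, b, cin, "->", s, cout)
```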

Boolean Expressions. Boolean expressions use logic functors (operators) such as AND, OR, NOR, NOT, NAND, EXOR, and MAJORITY, as well as literals, to specify a (multioutput) function. In order to specify netlists that correspond to DAGs, intermediate variables need to be introduced into the expressions. Every netlist or decision diagram can be specified by a set of Boolean expressions with intermediate variables. Boolean expressions can use infix (or standard), prefix (or Polish), or postfix (or reverse Polish) notation. Most modern specification languages use infix notation for operators such as AND or OR. The AND operator can sometimes be omitted, as in standard algebraic notation. In conjunction with operators such as NAND, both infix and prefix notations are used: for instance, (NAND a b c) in prefix and (a NAND b NAND c) in infix. Care is recommended when reading and writing such expressions in hardware description languages and in input formats to tools. It is always good to use parentheses in case of doubt about operator precedence. In some languages, arbitrary operators can be defined by users and then used in expressions on equal terms with well-known operators. Expressions can be created for SOP (sum-of-products), POS (product-of-sums), factorized SOPs and POSs, and other representations as a result of logic synthesis and optimization algorithms. Some of these algorithms are described in the section on combinational logic design.
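The prefix form above can be made concrete with a tiny, hypothetical expression evaluator; the s-expression encoding and the n-ary NAND semantics are illustrative choices, not a standard tool input format.

```python
# Hedged sketch: the 3-input NAND written in prefix form, (NAND a b c),
# evaluated by a small recursive interpreter over tuple-encoded expressions.

def evaluate(expr, env):
    if isinstance(expr, str):          # a variable (literal)
        return env[expr]
    op, *args = expr
    vals = [evaluate(a, env) for a in args]
    if op == "AND":
        return int(all(vals))
    if op == "OR":
        return int(any(vals))
    if op == "NOT":
        return 1 - vals[0]
    if op == "NAND":                   # n-ary NAND: NOT of the AND
        return 1 - int(all(vals))
    raise ValueError(f"unknown operator {op}")

# (NAND a b c) in prefix notation, i.e. (a NAND b NAND c) in infix
print(evaluate(("NAND", "a", "b", "c"), {"a": 1, "b": 1, "c": 0}))  # 1
print(evaluate(("NAND", "a", "b", "c"), {"a": 1, "b": 1, "c": 1}))  # 0
```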

Behavioral Descriptions. A logic system can be described by hardware description languages (HDLs). The most popular ones are Verilog and VHDL. Both Verilog and VHDL can describe a logic design at different levels of abstraction, from gate-level to architectural-level representations. Both are now industrial standards, but VHDL seems to be gaining popularity faster, especially outside the United States.


Figure 2. Waveforms for the full adder.

In recent years, several languages at a higher level than VHDL have been proposed, as well as preprocessors from these new representations to VHDL, but so far none of them has enjoyed wide acceptance (e.g., State Charts, SpecCharts, SDL, and VAL). State Charts and SpecCharts are graphical formalisms that introduce hierarchy to state machines. SDL stands for the Specification and Description Language; it is used mainly in telecommunications. The VHDL Annotation Language (VAL) is a set of extensions to VHDL that increases its capabilities for abstract specification, timing specification, hierarchical design, and design validation. Other known notations and corresponding data languages include regular expressions, Petri nets, and path expressions.

Design Implementation

A design can be targeted to different technologies: full custom circuit design, semicustom circuit design (standard cell and gate array), FPGAs, EPLDs, CPLDs, and standard components.

In full custom circuit design, the design effort and cost are high. This design style is normally used when high-quality circuits are required. Semicustom designs use a limited number of circuit primitives and therefore have lower design complexity, but they may be less efficient than full custom designs.

Design Verification

A design can be tested by logic simulation, functional testing, timing simulation, logic emulation, and formal verification. All these methods are called validation methods.

Logic Simulation. Logic simulation is a fast method of analyzing a logic design. Logic simulation typically models a logic design as interconnected logic gates, but it can also use any of the mathematical characterizations specified previously (for instance, binary decision diagrams). The simulator applies test vectors to the logic model and calculates logic values at the outputs of the logic gates. The result of logic simulation can be either logic waveforms or truth tables.
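A minimal sketch of gate-level simulation, assuming a hand-written netlist for the full adder of Figure 2 (the gate and net names are invented), applies all eight test vectors and prints the resulting truth table.

```python
from itertools import product

# Minimal logic-simulation sketch: a full adder modeled as interconnected
# gates (a netlist), with test vectors applied to produce a truth table.
GATES = [            # (output net, gate type, input nets)
    ("t1",   "XOR", ("a", "b")),
    ("sum",  "XOR", ("t1", "cin")),
    ("t2",   "AND", ("a", "b")),
    ("t3",   "AND", ("t1", "cin")),
    ("cout", "OR",  ("t2", "t3")),
]
OPS = {"AND": lambda x, y: x & y,
       "OR":  lambda x, y: x | y,
       "XOR": lambda x, y: x ^ y}

def simulate(vector):
    nets = dict(vector)              # primary input values
    for out, op, ins in GATES:       # netlist is in topological order
        nets[out] = OPS[op](*(nets[i] for i in ins))
    return nets["sum"], nets["cout"]

print(" a b cin | sum cout")
for a, b, cin in product((0, 1), repeat=3):
    s, c = simulate({"a": a, "b": b, "cin": cin})
    print(f" {a} {b}  {cin}  |  {s}   {c}")
```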

Timing Simulation. Timing simulation is similar to logic simulation, but it also considers the delays of electronic components. Its goal is to analyze the timing behavior of the circuit. The results from timing simulation can be used to achieve target circuit timing characteristics (e.g., operating frequency).

Formal Verification. While simulation can demonstrate that a circuit is defective, it can never formally prove that a large circuit is totally correct, because of the excessive number of input and state combinations. Formal verification uses mathematical methods to verify the functionality of a digital system exhaustively. Formal verification can reduce the search space by using symbolic representation methods and by considering many input combinations at once. Currently, two methods are widely used: model checking and equivalence checking. Model checking is used at the architectural level or register-transfer level to check whether the design holds certain properties. Equivalence checking compares two designs at the gate level or register-transfer level. It is useful when the design is transformed from one level to another, or when the design functionality has changed at the same level. Equivalence checking can verify whether the original design and the modified design are functionally equivalent. For instance, two Boolean functions F1 and F2 are equivalent when they constitute a tautology, F1 ≡ F2, which means the function G = F1 XNOR F2 is equal to 1 (or the function F1 ⊕ F2 is equal to 0) for every combination of its input variable values. A more restricted version of tautology may involve equality only on combinations of input values that can actually occur in operation of the circuit (thus "don't care" combinations are not verified). Verification of state machines in the most narrow sense assumes that the two machines generate exactly the same output signals in every pulse and for every possible internal state. This is equivalent to creating, for machines M1 and M2 with outputs z1 and z2, respectively, a new combined machine with output zcom = z1 ⊕ z2 and shared inputs, and proving that zcom = 0 for all combinations of states and input symbols (9). A more restricted equivalence may require the identity of output signals for only some transitions. Finally, for more advanced state machine models, only input-output relations may be required to be equivalent in some sense. Methods based on automatic theorem proving in predicate calculus and higher-order logic have also been developed for verification and for formally correct design from specification, but they are not yet much used in commercial EDA tools. Computer tools for formal verification are available from EDA companies and from universities (e.g., VIS from UC Berkeley (5) and HOL (10), available from the University of Utah).
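The tautology-based equivalence check described above can be sketched by exhaustive enumeration; a real tool would use symbolic methods (e.g., BDDs) instead, so this brute-force version is only illustrative.

```python
from itertools import product

# Equivalence checking by tautology: F1 and F2 are equivalent iff
# F1 XOR F2 is 0 (equivalently, F1 XNOR F2 is 1) for every input vector.
# Exhaustive enumeration stands in for the symbolic methods of a real tool.

def equivalent(f1, f2, n_vars):
    return all((f1(*v) ^ f2(*v)) == 0 for v in product((0, 1), repeat=n_vars))

# Two descriptions of the same function: 2-input XOR as SOP vs. a gate
f1 = lambda a, b: (a & (1 - b)) | ((1 - a) & b)   # sum-of-products form
f2 = lambda a, b: a ^ b                           # single EXOR gate
print(equivalent(f1, f2, 2))  # True
```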

Design Transformation

High-level design descriptions make it convenient for designers to specify what they want to achieve. Low-level design descriptions are necessary for design implementation. Design transformations are therefore required to convert a design from a higher level of abstraction to lower levels of abstraction. Examples of design transformations include removal of dead code from microcode, removal of dead register variables, minimization of the number of generalized registers, cost minimization of combined operation units (SUM/SUBTRACT, MULTIPLY, etc.), Mealy-to-Moore and Moore-to-Mealy transformations of state machines (which can change the system's timing by one pulse), transformation of a nondeterministic state machine to an equivalent deterministic machine, transformation of a parallel state machine to an equivalent sequential machine, and mapping of a BDD to a netlist of multiplexers.
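One of the listed transformations, mapping a BDD to a netlist of multiplexers, can be sketched compactly: each BDD node acts as a 2:1 multiplexer controlled by its variable. The tuple encoding of BDD nodes below is an illustrative assumption, not a standard data format.

```python
# Sketch: mapping a BDD to a netlist of 2:1 multiplexers. Each BDD node
# (var, low, high) becomes a mux selecting between its two cofactors.
# The tiny BDD (for a AND b) and all names are illustrative.

def mux(sel, lo, hi):
    return (sel & hi) | ((1 - sel) & lo)

# BDD nodes: terminals are the ints 0/1; internal nodes are (var, low, high)
BDD_AND = ("a", 0, ("b", 0, 1))   # if a = 0 -> 0, else evaluate b

def eval_bdd(node, env):
    if isinstance(node, int):
        return node
    var, lo, hi = node
    # each node is exactly a multiplexer controlled by its variable
    return mux(env[var], eval_bdd(lo, env), eval_bdd(hi, env))

print(eval_bdd(BDD_AND, {"a": 1, "b": 1}))  # 1
print(eval_bdd(BDD_AND, {"a": 1, "b": 0}))  # 0
```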

Logic Design Process

Logic design is a complex process. It starts from the design specification, where the functionality of the system is specified. Design is an iterative process involving design description, design transformation, and design verification. Through each iteration, the design is transformed from a higher level of abstraction to a lower level. To ensure the correctness of the design, verification is needed when the design is transformed from one level to another. Each level may involve some kind of optimization (for instance, reduction of the description size). The logic design process is shown in Fig. 3.


Figure 3. The logic design process.

COMBINATIONAL LOGIC DESIGN

Combinational logic design involves the design of a combinational circuit. For instance, the design may assume two levels of logic. A two-level combinational logic circuit consists of two levels of logic gates. In the sum-of-products two-level form, the first level of gates (from the inputs) consists of AND gates and the second level consists of OR gates. In the product-of-sums two-level form, the first level of gates consists of OR gates and the second level consists of AND gates.
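As a small illustration of the two forms, the following sketch (the 2-of-3 majority function is our choice of example, not the article's) implements the same function as an AND-OR (sum-of-products) network and as an OR-AND (product-of-sums) network and confirms that they agree.

```python
from itertools import product

# The two canonical two-level forms, realized for a 2-of-3 majority function.

def majority_sop(a, b, c):
    # first level: AND gates (product terms); second level: one OR gate
    return (a & b) | (b & c) | (a & c)

def majority_pos(a, b, c):
    # first level: OR gates (sum terms); second level: one AND gate
    return (a | b) & (b | c) & (a | c)

assert all(majority_sop(*v) == majority_pos(*v)
           for v in product((0, 1), repeat=3))
print("SOP and POS forms agree on all 8 input combinations")
```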

The purpose of logic minimization is to improve performance and decrease cost by decreasing the area of the silicon, decreasing the number of components, increasing the speed of the circuit, making the circuit more testable, making it use less power, or achieving any combination of the above design criteria. The optimization problem can also be specified as minimizing certain weighted cost functions under certain constraints (for instance, decreasing the delay under the constraint of not exceeding a certain prespecified silicon area).

There are usually two logic minimization processes: the first is generic, technology-independent minimization; the second is technology-dependent minimization, also called technology mapping. This second stage may also take into account some topological or geometrical constraints of the device.
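As a toy illustration of technology mapping, the sketch below rewrites a technology-independent AND/OR/NOT expression into NAND-only form using De Morgan's identities; mapping to a realistic cell library with cost functions and geometric constraints is considerably more involved.

```python
# Toy technology mapping: rewrite a technology-independent AND/OR/NOT
# expression into a NAND-only netlist (a common target library). The
# recursive rules are the textbook De Morgan identities; the expression
# encoding follows the earlier prefix-notation sketch.

def nand(*xs):
    return ("NAND",) + xs

def to_nand(expr):
    if isinstance(expr, str):
        return expr
    op, *args = expr
    a = [to_nand(x) for x in args]
    if op == "NOT":
        return nand(a[0], a[0])                  # NOT x  = NAND(x, x)
    if op == "AND":
        n = nand(*a)
        return nand(n, n)                        # AND    = NOT(NAND)
    if op == "OR":
        return nand(*[nand(x, x) for x in a])    # OR x y = NAND(x', y')
    raise ValueError(op)

print(to_nand(("OR", ("AND", "a", "b"), "c")))
```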