
Nb-Ti-Ta

The addition of Ta to Nb-Ti alloys suppresses the paramagnetic limitation of Hc2 by the large orbital moment of the alloys (53). Although Ta is only of benefit below 4.2 K (54), it has a relatively long history of study because it should extend the useful field range of ductile superconductors by 1 T or more (55). So far, however, improved Hc2 has not translated effectively into improvements in Jc, except very near to Hc2 (above 11 T).

Figure 12. Partial cross-section of a strand designed for the Large Hadron Collider at CERN by IGC Advanced Superconductors (now Luvata Waterbury, Inc.). 250,000 km of Nb-Ti strand were required in order to produce magnets for the 27 kilometer LHC ring, including 1232 dipoles and 858 quadrupoles. Each dipole was 15 m in length and weighed 35 tonnes. The LHC uses 1.9 K operation to push the Nb-Ti based magnets beyond 8 T. Inset is the full strand cross-section showing the individual filament stacking units. Each LHC strand has 6425 or 8800 filaments of 6 or 7 μm diameter, respectively.

Lazarev et al. (56) were able to attain a critical current density of 1000 A/mm² at a field of 11.5 T (2.05 K) using an Nb-37 wt.% Ti-22 wt.% Ta alloy. Ta has an even higher melting point than Nb, making the fabrication of chemically homogeneous ternary alloys particularly difficult. The behavior of Nb-Ti-Ta alloys under the conventional process is similar to that of binary alloys, but the precipitates do not appear to pin as efficiently (57).

Truth Tables and Karnaugh Maps

A truth table for a logic function is a list of input combinations and their corresponding output values. Truth tables are suitable for presenting functions with small numbers of inputs (for instance, single cells of iterative circuits). Truth tables can be easily specified in hardware description languages such as the VHSIC hardware description language (VHDL).

Table 1 shows the truth table of a full adder. A full adder is a logic circuit with two data inputs A and B, a carry-in input Cin, and two outputs Sum and carry-out Cout.

Karnaugh maps are two-dimensional visual representations of truth tables. In a Karnaugh map, the input variables are partitioned into vertical and horizontal variables, and all value combinations of the input variables are expressed in Gray codes. The Gray code ordering makes geometrically adjacent cells combinable using the law AB + AB' = A. For instance, cells ab'cd and abcd are combined into the product acd. For functions with large numbers of inputs, the corresponding truth tables or Karnaugh maps become too large.

Table 1. Truth Table of Full Adder

A   B   Cin   Sum   Cout
0   0    0     0     0
0   0    1     1     0
0   1    0     1     0
0   1    1     0     1
1   0    0     1     0
1   0    1     0     1
1   1    0     0     1
1   1    1     1     1
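As a quick cross-check of Table 1, the following minimal Python sketch (the names and output formatting are ours, not from the source) enumerates the eight input combinations and prints the same rows:

```python
from itertools import product

# Enumerate the full-adder truth table: Sum = A xor B xor Cin,
# Cout = majority(A, B, Cin).
print(" A B Cin | Sum Cout")
for a, b, cin in product((0, 1), repeat=3):
    total = a + b + cin               # 0..3
    s, cout = total & 1, total >> 1   # low bit is Sum, high bit is Cout
    print(f" {a} {b}  {cin}  |  {s}   {cout}")
```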

Cube Representation. An array of cubes is a list of cubes, which is usually interpreted as a sum of products of literals, where a cube corresponds to a product of literals. A (binary) literal is a variable or a negated variable. In binary logic, symbol 0 corresponds to a negated variable, symbol 1 to a positive (affirmative, nonnegated) variable, symbol X to the absence of a variable in the product, and symbol ε to a contradiction. A cube is a sequence of symbols 0, 1, X, and ε, corresponding to their respective ordered variables. For instance, assuming the order of variables x1, x2, x3, x4, the cube 01X1 corresponds to the product of literals x1'x2x4, and a cube such as 0εX0 is intermediate data generated to show a contradiction, or a nonexisting result cube, of some cube operation. A minterm (a cell of a Karnaugh map and a row of a truth table) is thus a sequence of symbols 1 and 0 only. Arrays of cubes can also correspond to exclusive sums of products, products of sums, or other forms. For instance, the array of cubes {01X1, 11XX} describes the sum-of-products expression x1'x2x4 + x1x2, also called a cover of the function with product implicants (usually, with prime implicants). Depending on the context, the same array of cubes can also describe the exclusive-sum-of-products expression x1'x2x4 ⊕ x1x2, or the product-of-sums expression (x1' + x2 + x4) · (x1 + x2). The correct meaning of the array is maintained by applying the respective cube calculus operators to it.
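As an illustration of this encoding, here is a minimal Python sketch (function and variable names are ours, not from the source) that expands a cube string into the product of literals it denotes, using a prime for negation:

```python
# A cube is a string over {'0', '1', 'X'}; position i refers to variable x_(i+1).
# '0' contributes a negated literal, '1' a positive literal, 'X' no literal.
def cube_to_product(cube, names=None):
    names = names or [f"x{i+1}" for i in range(len(cube))]
    literals = []
    for symbol, name in zip(cube, names):
        if symbol == '1':
            literals.append(name)
        elif symbol == '0':
            literals.append(name + "'")
        # 'X' means the variable is absent from the product
    return "*".join(literals) or "1"

print(cube_to_product("01X1"))   # x1'*x2*x4
print(cube_to_product("11XX"))   # x1*x2
```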

An algebra of cube calculus has been created with cubes, arrays of cubes, and operations on them. The most important operators (operations) are the negation of a single cube, and the nondisjoint sharp, disjoint sharp, consensus, crosslink, intersection, and supercube of two cubes. The cube operators most often used in EDA programs are presented briefly below. The nondisjoint sharp, A # B, creates a set of the largest cubes in the function A·B'. The disjoint sharp, A #d B, creates a set of disjoint cubes covering the function A·B'. Sharp operations perform graphical subtraction and can be used in algorithms to remove the part of the function that has already been taken care of. The consensus of cubes A and B is the largest cube that includes part of cube A and part of cube B. The supercube of cubes A and B is the smallest cube that entirely includes both cubes A and B. Consensus and supercube are used to create new product groups. The intersection of cubes A and B is the largest common subcube of cubes A and B. It is perhaps the most commonly used cube calculus operation, used in all practically known algorithms. These operations are used mostly in the inclusive (AND-OR) logic. Crosslink is the chain of cubes between two cubes. The chain of cubes covers the same minterms as the two cubes, and does not cover the minterms not covered by the two cubes. Since A ⊕ A = 0, an even number of coverings is treated as no covering, and an odd number of coverings is treated as a single covering. Crosslink is used mostly in the exclusive (AND-EXOR) logic, for instance, in exclusive-sum-of-products minimization (21). The positive cofactor fa is the function f with variable a substituted by 1. The negative cofactor fa' is the function f with variable a substituted by 0.
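Because these operators work position by position on the cube strings, they are easy to prototype. The following sketch (names are ours; 'e' marks the contradiction symbol) implements the intersection and supercube of two cubes given as strings:

```python
# Positional cube operations over the alphabet {'0', '1', 'X'} ('e' marks a
# contradiction, i.e. an empty intersection in some position).
def intersection(a, b):
    out = []
    for p, q in zip(a, b):
        if p == q:
            out.append(p)
        elif p == 'X':
            out.append(q)
        elif q == 'X':
            out.append(p)
        else:                      # '0' meets '1': the cubes are disjoint
            out.append('e')
    result = "".join(out)
    return None if 'e' in result else result

def supercube(a, b):
    # Smallest cube containing both: keep a symbol only where the cubes agree.
    return "".join(p if p == q else 'X' for p, q in zip(a, b))

print(intersection("01X1", "0XX1"))  # 01X1
print(intersection("01X1", "11XX"))  # None (disjoint cubes)
print(supercube("01X1", "11XX"))     # X1XX
```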

Cube calculus is used mostly for the optimization of designs with two or three levels of logic gates. It is also used in test generation and functional verification. Multivalued cube calculus extends these representations and operations to multivalued variables. In multivalued logic, each variable can take several values from a set of values. For an n-valued variable, all its literals are represented by n-element binary vectors, where value 0 in a position corresponds to the absence of that value in the literal, and value 1 to its presence. For instance, in 4-valued logic, the literal X^{0,1,2} is represented as the binary string 1110, which means the following assignment of values: X0 = 1, X1 = 1, X2 = 1, X3 = 0. In other words, the literal X^{0,1,2} is a 4-valued-input, binary-output function defined as follows: X^{0,1,2} = 1 when X = 0, X = 1, or X = 2, and X^{0,1,2} = 0 when X = 3. Such literals are realized in binary circuits by input decoders, literal-generator circuits, or small PLAs. Thus, multivalued logic is used in logic design as an intermediate notation for designing multilevel binary networks. For instance, in the 4-valued model used in programmable logic array (PLA) minimization, a 4-valued set variable corresponds to a pair of binary variables. PLAs with decoders allow the total circuit area to be decreased in comparison with standard PLAs. This is also the reason for using multivalued logic in other types of circuits. Well-known tools such as MIS and SIS from the University of California at Berkeley (UC Berkeley) (23) use the cube calculus format for input/output data.
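A hedged sketch of the bit-vector encoding of multivalued literals described above (the function names and the 4-valued example are our illustrative choices):

```python
# A literal of an n-valued variable is an n-bit vector: bit i is 1 exactly when
# value i belongs to the literal's value set (here n = 4, literal X^{0,1,2} -> 1110).
def mv_literal(value_set, n=4):
    return [1 if i in value_set else 0 for i in range(n)]

def eval_literal(literal, x):
    # The literal is a binary-output function of the multivalued variable x.
    return literal[x]

lit = mv_literal({0, 1, 2})                        # [1, 1, 1, 0]
print([eval_literal(lit, x) for x in range(4)])    # [1, 1, 1, 0]
```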

A variant of the cube calculus representation is the factored form (used, for instance, in MIS), which is a multilevel composition of cube arrays (each array specifies a two-level logic block). A factored form is thus represented as a multi-DAG (directed acyclic graph with multiedges). It has blocks as its nodes and the logic signals between them as multiedges. Each component block specifies its cube array and, additionally, its input and output signals. Input signals of a block are either primary inputs of the multilevel circuit or outputs from other blocks of the circuit. Output signals of a block are either primary outputs of the multilevel circuit or inputs to other blocks of the circuit. An initial two-level cube calculus description is factorized into such a multilevel circuit described as a factored form. Conversely, a multilevel circuit can be flattened back to a two-level cube representation.

Binary Decision Diagrams

Decision diagrams represent a function by a directed acyclic graph (DAG). In the case of the most often used binary decision diagrams (BDDs), the nodes of the graph correspond to Shannon expansions (realized by multiplexer gates), controlled by the variable a associated with the node: F = a'·Fa' + a·Fa. Shared BDDs are those in which equivalent nodes of several output functions are shared. Equivalent nodes g and h are those whose cofactor functions are mutually equal: ga = ha and ga' = ha'. Ordered BDDs are those in which the order of variables along every branch from the root is the same. A diagram can be obtained from arbitrary function specifications, such as arrays of cubes, factored forms, expressions, or netlists. The diagram is obtained by recursive application of the Shannon expansion to the function, then to its two cofactors, then to the four cofactors of its two cofactors, and so on, and by combining any isomorphic (logically equivalent) nodes. The function corresponds to the root of the diagram. There are two terminal nodes of a binary decision diagram, 0 and 1, corresponding to Boolean false and true. If the two successor nodes of a node Sj point to the same node, then node Sj can be removed from the DAG. There are other similar reduction transformations in diagrams that are more general than BDDs. Decision diagrams with such reductions are called reduced ordered decision diagrams.
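The recursive construction just described can be sketched in a few lines of Python. The code below is a didactic sketch under our own naming, not a production BDD package: it builds a reduced ordered BDD by applying the Shannon expansion along a fixed variable order, merging isomorphic nodes through a unique table and dropping nodes whose two successors coincide.

```python
# Build a reduced ordered BDD for a Boolean function given as a Python callable
# over a fixed variable order.  Internal nodes are (var_index, low, high);
# the terminals are the constants 0 and 1.
def build_robdd(f, n):
    unique = {}                              # unique table: shares isomorphic nodes

    def mk(var, low, high):
        if low == high:                      # redundant test: drop the node
            return low
        key = (var, low, high)
        if key not in unique:
            unique[key] = key
        return unique[key]

    def expand(var, assignment):
        if var == n:
            return int(bool(f(*assignment)))
        low  = expand(var + 1, assignment + (0,))   # negative cofactor
        high = expand(var + 1, assignment + (1,))   # positive cofactor
        return mk(var, low, high)

    return expand(0, ()), unique

# Example: the full-adder carry-out Cout(a, b, cin) = ab + a*cin + b*cin.
root, table = build_robdd(lambda a, b, c: (a & b) | (a & c) | (b & c), 3)
print(len(table), "internal nodes")          # prints: 4 internal nodes
```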

In addition, negated (inverted) edges are introduced in BDDs. Such an edge describes the negation of its argument function. In Kronecker decision diagrams (KDDs) three types of expansion nodes exist: Shannon nodes (realizing the function f = a'·fa' + a·fa), positive Davio nodes [realizing the function f = a·(fa ⊕ fa') ⊕ fa'], and negative Davio nodes [realizing the function f = a'·(fa ⊕ fa') ⊕ fa]. All three possible canonical expansions of Boolean functions are thus included in a KDD. Other known decision diagrams include zero-suppressed binary decision diagrams (ZSBDDs) and moment diagrams. They are used primarily in verification or technology mapping. Multivalued decision diagrams have more than two terminal nodes and multivalued branchings with more than two successors of a node. These diagrams allow one to describe and verify some circuits (such as large multipliers) that are too large to be described by standard BDDs. Some diagrams may also be better suited for logic synthesis in certain technologies.
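The three expansions are easy to verify numerically. The following sketch (the example function is an arbitrary choice of ours) checks, for every input combination, that the Shannon, positive Davio, and negative Davio forms reproduce the original function:

```python
from itertools import product

# Check the three canonical expansions used in Kronecker decision diagrams
# for an example function f and the first variable a:
#   Shannon:        f = a'*f_a' xor a*f_a
#   positive Davio: f = f_a' xor a*(f_a xor f_a')
#   negative Davio: f = f_a  xor a'*(f_a xor f_a')
f = lambda a, b, c: (a & b) ^ c              # arbitrary example function

for a, b, c in product((0, 1), repeat=3):
    f0, f1 = f(0, b, c), f(1, b, c)          # negative and positive cofactors in a
    shannon   = ((1 - a) & f0) ^ (a & f1)
    pos_davio = f0 ^ (a & (f1 ^ f0))
    neg_davio = f1 ^ ((1 - a) & (f1 ^ f0))
    assert f(a, b, c) == shannon == pos_davio == neg_davio
print("all three expansions agree")
```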

There are two types of decision diagrams: canonical diagrams and noncanonical diagrams. Canonical diagrams are used for function representation and tautology checking. ZSBDDs and KDDs are examples of canonical representations. An example of a noncanonical decision diagram is the free pseudo-Kronecker decision diagram. In this type of diagram, Shannon and Davio expansions of any type can be mixed within levels, and all orders of variables are allowed in branches. Free pseudo-Kronecker decision diagrams are used in synthesis and technology mapping (21,22). Decision diagrams can also be adapted to represent state machines. By describing a state machine as a relation, the (logic) characteristic function of the machine can be described by a decision diagram.

APERTURE ANTENNAS

Aperture antennas are most commonly used at microwave and millimeter-wave frequencies. There are a large number of antenna types for which the radiated electromagnetic fields can be considered to emanate from a physical aperture. Antennas that fall into this category include several types of reflectors, planar (flat-plate) arrays, lenses, and horns. The geometry of the aperture may be square, rectangular, circular, elliptical, or virtually any other shape. Aperture antennas are very popular for aerospace applications because they can be flush mounted onto the spacecraft or aircraft surface. Their opening can be covered with an electromagnetic (dielectric) window material or radome to protect the antenna from environmental conditions (1). This installation will not disturb the aerodynamic profile of the vehicle, which is of critical importance in high-speed applications.

In order to evaluate the distant (far-field) radiation patterns, it is necessary to know the currents that flow on the radiating surfaces. However, these current distributions may not be exactly known, and only approximate or experimental measurements can provide estimates of them. To expedite the process, it is necessary to have alternative methods to compute the radiation patterns of aperture antennas. A technique based on the equivalence principle allows one to make a reasonable approximation to the fields on, or in the vicinity of, the physical antenna structure and subsequently to compute the radiation patterns.

Field equivalence, first introduced by Schelkunoff (2), is a principle by which the actual sources on an antenna are replaced by equivalent sources on an external closed surface that is physically outside of the antenna. The fictitious sources are said to be equivalent within a region because they produce the same fields within that region. Another key concept is Huygens's principle (3), which states that the equivalent source at each point on the external surface is a source of a spherical wave. The secondary wave front can be constructed as the envelope of these secondary spherical waves (4).

Using these principles, the electric and/or magnetic fields in the equivalent aperture region can be determined with straightforward approximate methods. The fields elsewhere are assumed to be zero. In most applications, the closed surface is selected so that most of it coincides with the conducting parts of the physical structure. This is preferred because the vanishing of the tangential electric field components over the conducting parts of the surface reduces the physical limits of integration. The formula to compute the fields radiated by the equivalent sources is exact, but it requires integration over the closed surface. The degree of accuracy depends on the knowledge of the tangential components of the fields over the closed surface.

Aperture techniques are especially useful for parabolic reflector antennas, where the aperture plane can be defined immediately in front of the reflector. Parabolic reflectors are usually electrically large. More surprisingly, aperture techniques can also be successfully applied to small-aperture waveguide horns. However, for very small horns with an aperture dimension of less than approximately one wavelength, the assumption of zero fields outside the aperture fails unless the horn is surrounded by a planar conducting flange (5). In this section, the mathematical formulas will be developed to analyze the radiation characteristics of aperture antennas. Emphasis will be given to the rectangular and circular configurations because they are the most commonly used geometries. Due to mathematical complexities, the results will be restricted to the far-field region.

One of the most useful concepts to be discussed is that the far-field radiation pattern can be obtained as a Fourier transform of the field distribution over the equivalent aperture, and vice versa. This Fourier transform relationship is extremely important because it makes all of the operational properties of Fourier transform theory available for the analysis and synthesis of aperture antennas. Analytical solutions can be obtained for many simple aperture distributions, and these are useful in designing aperture antennas. More complex aperture distributions, which do not lend themselves to analytical solutions, can be solved numerically. The increased capabilities of the personal computer (PC) have resulted in its acceptance as a conventional tool of the antenna designer. The Fourier-transform integral is generally well behaved and does not present any fundamental computational problems.

To illustrate the use of the Fourier transform, first consider rectangular apertures in which one aperture dimension is large in terms of wavelength and the other is small. This type of aperture is approximated as a line source and is easily treated with a one-dimensional Fourier transform (6). For many kinds of rectangular aperture antennas, such as horns, the aperture distributions in the two principal plane dimensions are independent. These types of distributions are said to be separable. For separable distributions, the total radiation pattern is obtained as the product of the pattern functions obtained from the one-dimensional Fourier transforms corresponding to the two principal plane distributions.
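As a sketch of the one-dimensional case (all parameter values are illustrative), the following Python fragment computes the far-field pattern of a uniform line source as a numerical Fourier transform of the aperture field and compares it with the closed-form sinc pattern:

```python
import numpy as np

# Far-field pattern of a uniform line source of length L, computed as a
# numerical Fourier transform of the aperture field and compared with the
# closed-form sinc pattern.  The wavelength and length below are illustrative.
lam = 1.0                          # wavelength
L = 4.0 * lam                      # aperture length (4 wavelengths)
k = 2.0 * np.pi / lam

x = np.linspace(-L / 2, L / 2, 4001)
dx = x[1] - x[0]
E = np.ones_like(x)                # uniform amplitude, zero phase

theta = np.radians(np.linspace(-90.0, 90.0, 721))
# F(theta) = integral of E(x) * exp(j*k*x*sin(theta)) dx over the aperture
F = np.array([np.sum(E * np.exp(1j * k * np.sin(t) * x)) * dx for t in theta])

# Closed form: L * sinc(L*sin(theta)/lam), with numpy's sinc(t) = sin(pi*t)/(pi*t)
closed_form = L * np.sinc(L * np.sin(theta) / lam)
print(np.max(np.abs(np.abs(F) - np.abs(closed_form))))   # small discretization error
```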

If the rectangular aperture distribution is not separable, the directivity pattern is found in a manner similar to that for the line-source distribution, except that the aperture field is integrated over two dimensions rather than one (7). This double Fourier transform can also be applied to circular apertures and can be easily evaluated on a PC.
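A corresponding two-dimensional sketch (sizes and sampling are our illustrative choices) evaluates the double Fourier transform of a uniformly illuminated circular aperture with an FFT; the principal-plane cut of the result should follow the classical Airy pattern:

```python
import numpy as np

# Double Fourier transform of a uniformly illuminated circular aperture,
# evaluated with a 2-D FFT.  The principal-plane cut of the resulting pattern
# should follow the classical Airy form 2*J1(u)/u.  Sizes are illustrative.
lam = 1.0
a = 2.0 * lam                                  # aperture radius
n = 512
xy = np.linspace(-4.0 * a, 4.0 * a, n)         # sample plane padded around the aperture
dx = xy[1] - xy[0]
X, Y = np.meshgrid(xy, xy)
E = (X**2 + Y**2 <= a**2).astype(float)        # uniform field inside, zero outside

F = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(E))) * dx * dx
kx = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, d=dx))   # k*sin(theta) axis for plotting

cut = np.abs(F[n // 2, :]) / np.abs(F).max()   # normalized principal-plane (ky = 0) cut
print(cut[n // 2])                             # 1.0 at broadside; plot cut vs kx for the pattern
```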


For all aperture distributions, the following observations are made (8):

1. A uniform amplitude distribution yields the maximum directivity (nonuniform edge-enhanced distributions for supergain being considered impractical), but at high side-lobe levels.

2. Tapering the amplitude from a maximum at the center to a smaller value at the edges will reduce the side-lobe level compared with uniform illumination, but it results in a larger (main-lobe) beam width and less directivity.

3. An inverse-taper distribution (amplitude depression at the center) results in a smaller (main-lobe) beam width but increases the side-lobe level and reduces the direc­tivity when compared with the uniform illumination case.

4. Depending on the aperture size in wavelengths and phase error, there is a frequency (or wavelength) for which the gain peaks, falling to smaller values as the frequency is either raised or lowered.

Lastly, we consider aperture efficiencies. The aperture effi­ciency is defined as the ratio of the effective aperture area to the physical aperture area. The beam efficiency is defined as the ratio of the power in the main lobe to the total radiated power. The maximum aperture efficiency occurs for a uniform aperture distribution, but maximum beam efficiency occurs for a highly tapered distribution. The aperture phase errors are the primary limitation of the efficiency of the antenna.
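As a small numerical illustration of the first definition (a one-dimensional sketch with illustrative values), the taper efficiency of a line-source distribution E(x) over a length L can be computed as |∫E dx|² / (L ∫|E|² dx); uniform illumination gives 1, while a cosine taper gives 8/π² ≈ 0.81:

```python
import numpy as np

# Aperture (taper) efficiency of a one-dimensional distribution E(x) over a
# length L: eta = |integral of E|^2 / (L * integral of |E|^2).
L = 1.0
x = np.linspace(-L / 2, L / 2, 100001)
dx = x[1] - x[0]

def aperture_efficiency(E):
    num = abs(np.sum(E) * dx) ** 2
    den = L * np.sum(np.abs(E) ** 2) * dx
    return num / den

uniform = np.ones_like(x)
cosine = np.cos(np.pi * x / L)            # tapers to zero at the aperture edges

print(aperture_efficiency(uniform))       # ~1.0
print(aperture_efficiency(cosine))        # ~0.81  (= 8 / pi^2)
```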

A bridge toward non-equilibrium: fluctuation-dissipation relation

In order to unveil such a link we need to introduce a more formal description of the dynamics of the movable set. This problem was addressed and solved by Albert Einstein (1879-1955) in his 1905 discussion of Brownian motion, and subsequently by Paul Langevin (1872-1946), who proposed the following equation:

m ẍ = −mγ ẋ − dU/dx + ξ(t)        (8)

As before, x represents the movable set position. Here γ represents the viscous damping constant, U is the elastic potential energy due to the spring, and ξ(t) is the random force that accounts for the incessant impact of the gas particles on the set, assumed to have zero mean, to be Gaussian distributed, and to have a flat spectrum, i.e. to be delta-correlated in time (white noise assumption):

⟨ξ(t1)ξ(t2)⟩ = 2π GR δ(t1 − t2)        (9)

where ⟨ ⟩ indicates an average over the statistical ensemble.

Now, as we noticed before, since the gas is responsible at the same time for the fluctuating part of the dynamics (i.e. the random force ξ(t)) and for the dissipative part (i.e. the damping constant γ), there must be a relation between the two. This relation was established within linear response theory (which satisfies the equipartition of energy among all the degrees of freedom), initially by Harry Theodor Nyquist (1889-1976) in 1928 [7], and demonstrated by Callen and Welton in 1951. This relation is:

GR = mγ kBT / π        (10)

and represents a formulation of the so-called Fluctuation-Dissipation Theorem (FDT) [1,2]. There exist different formulations of the FDT. As an example, we mention that it can be generalized to account for a different kind of dissipative force, i.e. an internal-friction type where γ is not a simple constant but shows time dependence (work done in the sixties by Mori and Kubo). In that case the random force has a spectrum that is no longer flat (non-white noise assumption).

Why is the FDT important? It is important because it represents an ideal bridge that connects the equilibrium properties of our thermodynamic system (represented by the amplitude and character of the fluctuations) with the non-equilibrium properties (represented here by the dissipative phenomena due to the presence of friction). Thus there are basically two ways of using the FDT: it can be used to predict the characteristics of the fluctuations, or the noise intrinsic to the system, from the known characteristics of the dissipative properties, or it can be used to predict what kind of dissipation we should expect if we know the equilibrium fluctuation properties. Its importance, however, goes beyond its practical utility. Indeed, it shows that dissipative properties, meaning the capacity to produce entropy, are intrinsically connected to the equilibrium fluctuations.
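To make this bridge concrete, the following sketch (all parameter values are our illustrative choices) integrates Eq. (8) for a harmonic spring with the Euler-Maruyama method, drawing the random force with the strength implied by Eqs. (9) and (10); the simulated averages should satisfy equipartition, ⟨v²⟩ ≈ kBT/m and ⟨x²⟩ ≈ kBT/k:

```python
import numpy as np

# Euler-Maruyama integration of the Langevin equation (8) with a harmonic
# potential U(x) = 0.5*k*x**2.  The white-noise strength 2*m*gamma*kB*T implied
# by Eqs. (9)-(10) should reproduce equipartition.  Illustrative parameters.
rng = np.random.default_rng(0)
m, gamma, k, kBT = 1.0, 1.0, 1.0, 1.0
dt, n_steps = 1e-3, 500_000

x, v = 0.0, 0.0
sum_v2 = sum_x2 = 0.0
noise_scale = np.sqrt(2.0 * m * gamma * kBT / dt)   # discrete-time white noise

for _ in range(n_steps):
    xi = noise_scale * rng.standard_normal()
    a = (-m * gamma * v - k * x + xi) / m            # acceleration from Eq. (8)
    v += a * dt
    x += v * dt
    sum_v2 += v * v
    sum_x2 += x * x

print("kBT/m =", kBT / m, " <v^2> ~", sum_v2 / n_steps)
print("kBT/k =", kBT / k, " <x^2> ~", sum_x2 / n_steps)
```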

Finite State Machines

Finite state machines (FSMs) are usually of the Mealy or Moore type. Both Moore and Mealy machines have the following: a set of input symbols, a set of internal states (symbols), and a set of output symbols. They also have two functions: the transition function δ and the output function λ. The transition function δ specifies the next internal state as a function of the present internal state and the present input state. The output function λ describes the present output state. Moore machines have output states that are functions of only the present internal states. Mealy machines have output states that are functions of both the present internal states and the present input states. Thus state machines can be described and realized as a composition of purely combinational blocks δ and λ with registers that hold their states.
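The two machine types can be sketched directly as transition and output dictionaries. The Python fragment below (state names, input alphabet, and the recognized behavior are our illustrative choices) shows a Moore machine and a Mealy machine that both output 1 whenever the last two inputs were 1:

```python
# Minimal dictionary-based sketches of a Moore and a Mealy machine; delta is
# the transition function and lambda the output function described above.

# Moore machine: output depends on the present state only.
moore_delta  = {('S0', 0): 'S0', ('S0', 1): 'S1',
                ('S1', 0): 'S0', ('S1', 1): 'S2',
                ('S2', 0): 'S0', ('S2', 1): 'S2'}
moore_lambda = {'S0': 0, 'S1': 0, 'S2': 1}

# Mealy machine: output depends on the present state and the present input.
mealy_delta  = {('A', 0): 'A', ('A', 1): 'B',
                ('B', 0): 'A', ('B', 1): 'B'}
mealy_lambda = {('A', 0): 0, ('A', 1): 0,
                ('B', 0): 0, ('B', 1): 1}

def run_moore(inputs, state='S0'):
    outputs = []
    for i in inputs:
        state = moore_delta[(state, i)]
        outputs.append(moore_lambda[state])   # output read from the new state
    return outputs

def run_mealy(inputs, state='A'):
    outputs = []
    for i in inputs:
        outputs.append(mealy_lambda[(state, i)])
        state = mealy_delta[(state, i)]
    return outputs

seq = [1, 1, 0, 1, 1, 1]
print(run_moore(seq))   # [0, 1, 0, 0, 1, 1]
print(run_mealy(seq))   # [0, 1, 0, 0, 1, 1]
```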

Parallel state machines are less commonly used than Moore and Mealy machines. In a parallel state machine, several states can be successors of the same internal state and input state. In other words, a parallel state machine is concurrently in several of its internal states. This is similar in principle to concurrently having many tokens in the places of a Petri net graph description.

Nondeterministic state machines are another model. In a nondeterministic state machine, there are several transitions to next internal states from the same present input state and the same present internal state. In this respect, nondeterministic state machines are syntactically similar to parallel state machines. However, the interpretation of the two machines is different. In a nondeterministic state machine, the several transitions to a next internal state are interpreted to mean that any of these transitions is possible, but only one is actually selected at a later stage of design. The selection may occur at state minimization, state assignment, state machine decomposition, or the circuit realization of the excitation and output logic. The transition is selected in order to simplify the circuit at the next design stage, or to improve a certain property of the circuit. This selection is done either automatically by the EDA tools or manually by a human. Nondeterminism expands the design space, and thus gives the designer more freedom to improve the design. However, it can also lead to a more complex or longer design process.

There are several other generalizations of FSMs, such as Buechi or Glushkov machines, which in general assume more relaxed definitions of machine compatibility. For instance, machines can be defined as compatible even if their output sequences are different for the same starting internal states and the same input sequences, as long as the global input-output relations of their behaviors are equivalent in some sense. All these machines can be described in tabular, graphical, functional, HDL-language, or netlist forms, and realized in many of the technologies listed below.

Boolean Function Characterizations

Boolean functions are usually characterized as truth tables, arrays of cubes, or decision diagrams. Representations can be canonical or noncanonical. Canonical means that the representation of a function is unique. If the order of the input variables is specified, then both truth tables and reduced ordered binary decision diagrams are canonical representations. Cube representations are not canonical, but can be made canonical under certain assumptions (for instance, the set of all prime implicants of a completely specified function). In a canonical representation, the comparison of two functions is simple. This is one of the advantages of canonical representations, and it has found applications in verification and synthesis algorithms.

A good understanding of cube calculus and decision diagrams is necessary to create and program efficient algorithms for logic design, test generation, and formal verification.