## Nb-Ti-Ta

The addition of Ta to Nb-Ti alloys suppresses the paramagnetic limitation of Hc2 through the large orbital moment of the alloys (53). Although Ta is only of benefit below 4.2 K (54), it has a relatively long history of study because it should extend the useful field range of ductile superconductors by 1 T or more (55). So far, however, improved Hc2 has not translated effectively into improved Jc, except very close to Hc2 (above 11 T). Lazarev et al. (56) were able to attain a critical current density of 1000 A/mm² at a field of 11.5 T (2.05 K) using an Nb-37 wt.% Ti-22 wt.% Ta alloy. Ta has an even higher melting point than Nb, making the fabrication of chemically homogeneous ternary alloys particularly difficult. The behavior of Nb-Ti-Ta alloys under the conventional process is similar to that of binary alloys, but the precipitates do not appear to pin as efficiently (57).

Figure 12. Partial cross-section of a strand designed for the Large Hadron Collider at CERN by IGC Advanced Superconductors (now Luvata Waterbury, Inc.). 250,000 km of Nb-Ti strand were required to produce the magnets for the 27 km LHC ring, including 1232 dipoles and 858 quadrupoles. Each dipole is 15 m in length and weighs 35 tonnes. The LHC uses 1.9 K operation to push the Nb-Ti based magnets beyond 8 T. The inset shows the full strand cross-section with the individual filament stacking units. Each LHC strand has 6425 or 8800 filaments of 6 or 7 μm diameter, respectively.

## APERTURE ANTENNAS

Aperture antennas are most commonly used at microwave and millimeter-wave frequencies. There are a large number of antenna types for which the radiated electromagnetic fields can be considered to emanate from a physical aperture. Antennas that fall into this category include several types of reflectors, planar (flat-plate) arrays, lenses, and horns. The aperture geometry may be square, rectangular, circular, elliptical, or virtually any other shape. Aperture antennas are very popular for aerospace applications because they can be flush-mounted onto the spacecraft or aircraft surface. Their opening can be covered with an electromagnetic (dielectric) window material or radome to protect the antenna from environmental conditions (1). Such an installation does not disturb the aerodynamic profile of the vehicle, which is of critical importance in high-speed applications.

In order to evaluate the distant (far-field) radiation patterns, it is necessary to know the currents that flow on the radiating surfaces. However, these current distributions are generally not known exactly, and only approximate analysis or experimental measurement can provide estimates of them. To expedite the process, it is therefore necessary to have alternative methods to compute the radiation patterns of aperture antennas. A technique based on the equivalence principle allows one to make a reasonable approximation to the fields on, or in the vicinity of, the physical antenna structure and subsequently to compute the radiation patterns.

Field equivalence, first introduced by Schelkunoff (2), is a principle by which the actual sources on an antenna are replaced by equivalent sources on an external closed surface that is physically outside of the antenna. The fictitious sources are said to be equivalent within a region because they produce the same fields within that region. Another key concept is Huygens's principle (3), which states that the equivalent source at each point on the external surface is a source of a spherical wave. The secondary wave front can be constructed as the envelope of these secondary spherical waves (4).

Using these principles, the electric and/or magnetic fields in the equivalent aperture region can be determined with straightforward approximate methods. The fields elsewhere are assumed to be zero. In most applications, the closed surface is selected so that most of it coincides with the conducting parts of the physical structure. This is preferred because the vanishing of the tangential electric field components over the conducting parts of the surface reduces the limits of integration. The formula used to compute the fields radiated by the equivalent sources is exact, but it requires integration over the closed surface. The degree of accuracy depends on the knowledge of the tangential components of the fields over the closed surface.

Aperture techniques are especially useful for parabolic reflector antennas, where the aperture plane can be defined immediately in front of the reflector. Parabolic reflectors are usually electrically large. More surprisingly, aperture techniques can be successfully applied to small aperture waveguide horns. However, for very small horns with an aperture dimension of less than approximately one wavelength, the assumption of zero fields outside the aperture fails unless the horn is surrounded by a planar conducting flange (5). In this section, the mathematical formulas will be developed to analyze the radiation characteristics of aperture antennas. Emphasis will be given to the rectangular and circular configurations because they are the most commonly used geometries. Due to mathematical complexities, the results will be restricted to the far-field region.

One of the most useful concepts to be discussed is that the far-field radiation pattern can be obtained as a Fourier transform of the field distribution over the equivalent aperture, and vice versa. This Fourier-transform relationship is extremely important, since it makes all of the operational properties of Fourier-transform theory available for the analysis and synthesis of aperture antennas. Analytical solutions can be obtained for many simple aperture distributions and are useful in the design of aperture antennas. More complex aperture distributions, which do not lend themselves to analytical solutions, can be solved numerically. The increased capabilities of the personal computer (PC) have resulted in its acceptance as a conventional tool of the antenna designer. The Fourier-transform integral is generally well behaved and does not present any fundamental computational problems.

Considering the use of the Fourier transform, first consider rectangular apertures in which one aperture dimension is large in terms of wavelength and the other is small. This type of aperture can be approximated as a line source and is easily treated with a one-dimensional Fourier transform (6). For many kinds of rectangular aperture antennas, such as horns, the aperture distributions in the two principal-plane dimensions are independent. These types of distributions are said to be separable. For separable distributions, the total radiation pattern is obtained as the product of the pattern functions given by the one-dimensional Fourier transforms corresponding to the two principal-plane distributions.
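As a sketch of this one-dimensional treatment (the function names and the 4-wavelength example are our own illustrative choices, not from the source), the uniform line-source pattern and a separable product pattern can be computed as:

```python
import numpy as np

# Illustrative sketch: the normalized far-field pattern of a uniform line
# source of length a (in wavelengths) is the 1-D Fourier transform of its
# aperture distribution, which reduces to a sinc function of sin(theta).
def line_source_pattern(a_over_lambda, sin_theta):
    u = np.pi * a_over_lambda * np.asarray(sin_theta)
    return np.abs(np.sinc(u / np.pi))       # np.sinc(x) = sin(pi*x)/(pi*x)

# For a separable rectangular aperture, the total pattern is the product of
# the two principal-plane line-source patterns.
def separable_pattern(a, b, sin_u, sin_v):
    return line_source_pattern(a, sin_u) * line_source_pattern(b, sin_v)

# A 4-wavelength uniform line source: peak at broadside, first null at
# sin(theta) = lambda/a = 0.25.
pattern = line_source_pattern(4.0, np.linspace(-1.0, 1.0, 801))
```

The separable product is simply the pattern of one principal-plane cut multiplied by the other, which is what makes separable distributions so convenient to analyze.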

If the rectangular aperture distribution is not separable, the directivity pattern is found in a manner similar to that of the line-source distribution, except that the aperture field is integrated over two dimensions rather than one (7). This double Fourier transform can also be applied to circular apertures and is easily evaluated on a PC.
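A minimal numerical sketch of this two-dimensional evaluation, assuming a uniformly illuminated circular aperture sampled on a Cartesian grid (the grid size and zero-padding values are arbitrary choices of ours), might look like:

```python
import numpy as np

# Sketch: evaluate the double Fourier transform with a zero-padded 2-D FFT
# for a uniformly illuminated circular aperture sampled on a grid.
n_ap, pad = 64, 512
x = np.linspace(-1.0, 1.0, n_ap)
X, Y = np.meshgrid(x, x)
E = np.where(X**2 + Y**2 <= 1.0, 1.0, 0.0)    # uniform circular aperture

F = np.fft.fftshift(np.fft.fft2(E, s=(pad, pad)))
P = np.abs(F) / np.abs(F).max()               # normalized pattern magnitude
cut = P[pad // 2]                             # principal-plane pattern cut
# The peak sits at the grid center (broadside), as expected for uniform phase.
```

Zero-padding the FFT simply interpolates the pattern on a finer angular grid; the underlying transform is the same.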

For all aperture distributions, the following observations are made (8):

1. A uniform amplitude distribution yields the maximum directivity (nonuniform edge-enhanced distributions for supergain are considered impractical), but at the cost of high side-lobe levels.

2. Tapering the amplitude from a maximum at the center to a smaller value at the edges reduces the side-lobe level compared with uniform illumination, but results in a larger (main-lobe) beam width and less directivity.

3. An inverse-taper distribution (amplitude depression at the center) results in a smaller (main-lobe) beam width but increases the side-lobe level and reduces the directivity when compared with the uniform illumination case.

4. Depending on the aperture size in wavelengths and phase error, there is a frequency (or wavelength) for which the gain peaks, falling to smaller values as the frequency is either raised or lowered.
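Observations 1 and 2 can be checked numerically. The following sketch (the aperture sampling and padding values are our own choices) compares the first side-lobe level of a uniform and a cosine-tapered line source via the one-dimensional Fourier transform:

```python
import numpy as np

# Numerical illustration of observations 1 and 2: a uniform line-source
# distribution versus a cosine-tapered one of the same length.
def pattern_db(aperture_field, n_pad=1 << 16):
    F = np.abs(np.fft.fft(aperture_field, n_pad))
    F /= F.max()
    return 20.0 * np.log10(F + 1e-12)

def first_sidelobe_db(F_db):
    i = 1
    while F_db[i] <= F_db[i - 1]:
        i += 1                    # descend from the main-lobe peak to the first null
    while F_db[i + 1] >= F_db[i]:
        i += 1                    # climb to the first side-lobe peak
    return F_db[i]

x = np.linspace(-0.5, 0.5, 2048)  # normalized aperture coordinate
uniform_db = pattern_db(np.ones_like(x))
tapered_db = pattern_db(np.cos(np.pi * x))
# Uniform: maximum directivity but roughly -13 dB first side lobe; cosine
# taper: roughly -23 dB side lobes at the cost of a wider main lobe.
```

The roughly 10 dB side-lobe improvement bought by the cosine taper, paid for in beam width and directivity, is exactly the trade-off described in observations 1 and 2.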

Lastly, we consider aperture efficiencies. The aperture efficiency is defined as the ratio of the effective aperture area to the physical aperture area. The beam efficiency is defined as the ratio of the power in the main lobe to the total radiated power. The maximum aperture efficiency occurs for a uniform aperture distribution, but maximum beam efficiency occurs for a highly tapered distribution. The aperture phase errors are the primary limitation of the efficiency of the antenna.
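As a worked example of the aperture-efficiency definition (the cosine taper and the sampling are illustrative choices of ours), a 1-D cosine amplitude distribution gives an efficiency of 8/π² ≈ 0.81, compared with 1.0 for uniform illumination:

```python
import numpy as np

# Aperture (taper) efficiency: eta = |integral of E|^2 / (A * integral of |E|^2),
# evaluated here by a simple Riemann sum for a 1-D cosine taper over a
# normalized aperture of length A = 1.
x = np.linspace(-0.5, 0.5, 100001)
dx = x[1] - x[0]
E = np.cos(np.pi * x)                    # cosine amplitude taper
eta = (np.sum(E) * dx) ** 2 / (1.0 * np.sum(E**2) * dx)
# Closed form: (2/pi)^2 / (1/2) = 8/pi^2, about 0.81.
```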

## A bridge toward non-equilibrium: fluctuation-dissipation relation

In order to unveil such a link we need to introduce a more formal description of the dynamics of the movable set. This problem was addressed and solved by Albert Einstein (1879 – 1955) in his 1905 discussion of Brownian motion and subsequently by Paul Langevin (1872 – 1946), who proposed the following equation:

m ẍ = −m γ ẋ − dU/dx + ξ(t) (8)

As before, x represents the position of the movable set. Here γ represents the viscous damping constant, U is the elastic potential energy due to the spring, and ξ(t) is the random force that accounts for the incessant impacts of the gas particles on the set, assumed to have zero mean, to be Gaussian distributed, and to have a flat spectrum, i.e., to be delta-correlated in time (white-noise assumption):

⟨ξ(t₁) ξ(t₂)⟩ = 2 m γ G_R δ(t₁ − t₂) (9)

where ⟨·⟩ indicates an average over the statistical ensemble.

Now, as we noticed before, since the gas is responsible at the same time for the fluctuating part of the dynamics (i.e., the random force ξ(t)) and for the dissipative part (i.e., the damping constant γ), there must be a relation between the two. This relation was established within linear response theory (which satisfies the equipartition of energy among all the degrees of freedom), initially by Harry Theodor Nyquist (1889 – 1976) in 1928 [7], and demonstrated by Callen and Welton in 1951. This relation is:

G_R = k_B T (10)

and represents a formulation of the so-called Fluctuation-Dissipation Theorem (FDT) [1,2]. There exist different formulations of the FDT. As an example, it can be generalized to account for a different kind of dissipative force, i.e., internal friction, where γ is not a simple constant but shows time dependence (work done in the sixties by Mori and Kubo). In that case the spectrum of the random force is no longer flat (non-white-noise assumption).

Why is the FDT important? It is important because it represents an ideal bridge connecting the equilibrium properties of our thermodynamic system (represented by the amplitude and character of the fluctuations) with its non-equilibrium properties (represented here by the dissipative phenomena due to the presence of friction). Thus there are basically two ways of using the FDT: it can be used to predict the characteristics of the fluctuations, or noise, intrinsic to the system from the known characteristics of its dissipative properties, or it can be used to predict what kind of dissipation to expect if the equilibrium fluctuation properties are known. Its importance, however, goes beyond this practical utility: it shows how dissipative properties, meaning the capacity to produce entropy, are intrinsically connected to the equilibrium fluctuations.
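To make the bridge concrete, the following sketch (all parameter values are arbitrary illustrative choices) integrates the Langevin equation (8) for a harmonic potential, with the noise strength fixed by relations (9) and (10), and checks that equipartition is recovered:

```python
import numpy as np

# Minimal sketch of the Langevin dynamics of Eq. (8) with U = k*x^2/2, using
# an Euler-Maruyama integrator. The noise variance per step, 2*m*gamma*kB*T/dt,
# follows from Eqs. (9)-(10); equipartition, <m v^2> = kB*T and <k x^2> = kB*T,
# then serves as a numerical check of the fluctuation-dissipation relation.
rng = np.random.default_rng(0)
m, gamma, k, kBT = 1.0, 1.0, 1.0, 1.0     # illustrative values only
dt, n_steps = 1e-3, 500_000

x = v = 0.0
v2_sum = x2_sum = 0.0
for _ in range(n_steps):
    xi = rng.normal(0.0, np.sqrt(2.0 * m * gamma * kBT / dt))  # white noise
    v += (-gamma * v - (k / m) * x + xi / m) * dt
    x += v * dt
    v2_sum += v * v
    x2_sum += x * x

mean_v2 = v2_sum / n_steps   # expect kBT/m from equipartition
mean_x2 = x2_sum / n_steps   # expect kBT/k
```

Choosing any other noise amplitude would drive the simulated ensemble to an effective temperature different from T, which is precisely the constraint the FDT expresses.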