
THE APC PROCESS

The quantity, composition, and distribution of pinning centers, as well as the composition of the matrix, are limited in the conventional process by the thermodynamics of the Nb-Ti phase diagram. Additional precipitate can be produced by increasing the Ti content of the alloy (as shown in Figure 10), but that is more than offset by the decrease in Hc2 (Fig. 2). The result is a critical current limit in conventionally processed Nb-Ti superconductors of approximately 3800 A/mm2 at 4.2 K and 5 T. An alternative approach is to fabricate the microstructure by mechanically assembling the desired components of the microstructure at large size and reducing the microstructure to the appropriate size by extrusion and cold drawing (58, 59). The engineered microstructural rods can be restacked into a composite just as for a conventional Nb-Ti superconductor, but no precipitation heat treatments are required. An intermediate approach developed by Supercon, Inc. (60) uses a low-temperature diffusion heat treatment to modify a densely packed microstructure fabricated from layers of pure Nb and Ti. The diffusion-modified APC has been successfully used in solenoid, model dipole (61), and MRI magnets (62). Round-wire APC superconductors and multilayers have developed zero-field Jc up to 10% of the theoretical upper limit provided by the depairing current density Jd (Jd ~ Hc/λ, where λ is the penetration depth; e.g., Refs. 63 and 64). APC superconductors fabricated with Nb pins perform particularly well at low fields (up to about 5 T to 7 T), and Jc values approaching 7500 A/mm2 at 3 T (65, 66) have been achieved (25% of Nb pinning center in an Nb-47 wt. % Ti matrix). Nb has been a preferred pinning material because of its mechanical compatibility with the Nb-Ti matrix. Even using Nb, however, poor workability and increased costs associated with assembly and yield have so far limited the commercial application of APC composites. The components of an engineered microstructure must initially be large enough to be stacked by hand (or possibly machine); consequently, the engineered pins must undergo a far greater deformation to reach optimum size than α-Ti precipitates, which start at 100 nm to 200 nm in diameter. The larger deformation and the multiple extrusions and restacks required by the APC process result in a microstructure that can be much less uniform than for the conventional process (67). For this reason, processes that can use smaller cross-sectional starting dimensions, such as stacked or wrapped sheet, can result in superior properties, such as the Jc of 4250 A/mm2 at 5 T and 4.2 K achieved by Matsumoto et al. (68) with stacked sheets of Nb-50 wt. % Ti and 28 vol. % of Nb sheets. Because of the large amount of cold work in the engineered microstructure, it is extremely sensitive to heating during extrusion; the highest round-wire Jc (5 T, 4.2 K) of 4600 A/mm2 was achieved by Heussner et al. (69). For Nb pins, similar volumes of pinning material are required as for conventionally processed materials; but by using ferromagnetic pins (Fe or Ni), the required pin volume to achieve high critical current density has been reduced to only 2 vol. % (70). Such developments suggest that there are still exciting advances that can be made in the development of ductile Nb-Ti-based superconductors.
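As a rough sense of scale for the depairing limit quoted above, Jd can be estimated from the relation Jd ~ Hc/λ. The numbers below are illustrative assumptions for Nb-Ti (thermodynamic critical field Bc = μ0Hc ≈ 0.24 T, penetration depth λ ≈ 240 nm), not values given in this article:

J_d \sim \frac{H_c}{\lambda} = \frac{0.24\,\mathrm{T}/\mu_0}{240\,\mathrm{nm}} \approx \frac{1.9\times 10^{5}\,\mathrm{A/m}}{2.4\times 10^{-7}\,\mathrm{m}} \approx 8\times 10^{11}\,\mathrm{A/m^2} = 8\times 10^{5}\,\mathrm{A/mm^2}

On these assumed numbers, 10% of Jd corresponds to a zero-field Jc of order 10^5 A/mm2, far above the in-field critical current densities quoted in this section.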

It is notable that the critical current density of production LHC strands from five different sources, when measured at 1.9 K and 9 T (2275-2376 A/mm2), is approximately that of the same strands at 4.2 K and only 6 T.

PROSTHESES AND ARTIFICIAL ORGANS

A device that is an artificial substitute for a body part, whether a limb or a heart valve, is called a prosthesis. When the prosthesis replaces all or part of an organ, it is called an artificial organ. Though replacement of organs by donor transplants is a more straightforward and reliable method, the supply of donor organs, and thus their use, is limited. Artificial organs have been designed because they can be produced in sufficient quantities to meet demand and they eliminate the possibility of transferring infections, for example, HIV and hepatitis, from the donor to the recipient. When designing an artificial organ, function is of primary concern, and the result can be a device that bears little resemblance to its natural counterpart. Typically, artificial organs are made from synthetic materials not found in nature and use mechanisms different from those of the natural organ to achieve the same function. Disadvantages of artificial organs include a relative inability to adapt to growth, which limits their use in children, and mechanical and chemical wear due to use and the body's environment, which can limit the life of the device. Recently the design of artificial organs has included combining biological material, such as organelles, cells, or tissues, with synthetic, engineered devices. These hybrids are called bioartificial organs (60).

Artificial hearts are primarily used as a "bridge to transplant," that is, a temporary replacement used until a donor organ is transplanted. Research continues in developing long-term, completely implanted heart replacements. The heart-lung machine is a short-term artificial organ used for patients undergoing transplant operations. It allows the patient to survive the removal of the heart until the replacement organ is surgically implanted. Common prostheses for the circulatory system are cardiac valve prostheses and vascular grafts. Concerns with these prosthetics include the formation of fibrous blood clots inside the circulatory system (thrombi), tissue overgrowth, hemorrhage from anticoagulants, and infection (61).

The artificial lung must provide a mechanism for the uptake of O2 by the blood and the removal of CO2. It can be used to completely replace the function of the lung temporarily during surgery or to assist with gas exchange temporarily until the lung can heal. Artificial lungs can also replace or assist lung function permanently, if necessary. Typically, artificial lungs are not placed where the natural lung is located, so the blood in the pulmonary system must be diverted to the artificial lung and pumped back to the heart and systemic circulation. Gas is commonly exchanged by using membrane oxygenators. Difficulties in design include developing membranes as thin as the walls of the alveoli and finding a blood distribution method that mimics the branching achieved in a short distance by the lung (62).

One kidney can sustain function for a lifetime, which makes live kidney donation possible; however, donors are typically cadavers. The artificial kidney provides a common intermittent treatment for renal failure, used while kidney function is diminishing or while patients wait for a donor kidney. Dialysis, the mechanism of the artificial kidney, performs the necessary functions of the kidneys. These involve regulating (1) the volume of the blood plasma (contributing significantly to the regulation of blood pressure), (2) the concentration of waste products in the blood, (3) the concentration of electrolytes (Na+, K+, HCO3-, and other ions) in the plasma, and (4) the pH of the plasma (63). More aggressive dialysis of the peritoneum, the membrane surrounding the body cavity and covering some of the digestive organs, is a recently developed treatment for irreversible end-stage kidney failure (64).

The main concern with the loss of liver function is loss of the ability to detoxify the blood. Therefore, devices which augment liver function focus on methods of detoxification. Some procedures currently in practice or under investigation involve dialysis, filtration, absorbent materials, and immobilized enzymes that convert specific toxins to less harmful substances. Currently, temporary replacement of the liver involves systems with mammalian hepatocytes (liver parenchymal cells, which remove most of the carbohydrates, amino acids, and fat from the digestive products absorbed from the intestines by the blood) attached to a synthetic support, where input from the host is separated from the device by a semipermeable membrane. Bioartificial livers using functional hepatocytes in a device immersed in body fluids are being investigated as an alternative to organ replacement (65).

Partial or complete removal of the pancreas can occur due to polycystic disease, trauma, or tumors. The replacement artificial pancreas focuses on the hormonal or endocrine activity of the pancreas (i.e., insulin and glucagon secretion), which regulates the uptake and release of glucose. Devices have not yet been developed that can replace the exocrine function of the pancreas, namely, the secretion of proteolytic and lipolytic enzymes into the gastrointestinal tract. Other artificial organs for the digestive system include tracheal replacements; electrical and pneumatic larynxes, which replace only the phonation function of the larynx because a complete artificial organ that restores respiration and protection of the lower airway during swallowing has yet to be designed; and extracorporeal and intraesophageal stents (66).

Skin replacement following loss from events such as a fire or mechanical accident, or through conditions such as skin ulcers, is achieved by using autografts of the patient's skin, allografts from cadavers, xenografts from animals, or artificial skin. The risks of viral infection and rejection are concerns when using allografts and xenografts. Artificial skin is a bilayer membrane whose top layer is a silicone film that controls moisture and prevents infection and whose bottom layer consists of a porous, degradable copolymer. The top layer is removed and replaced by an autograft after about two weeks, and the bottom layer is removed by complete degradation after it induces the synthesis of new dermis. Clinical studies have shown that autografts take better than artificial skin, but donor sites in which the top layer has artificial skin instead of silicone film heal faster and appear more like the patient's skin than donor sites that used autografts (67).

HUYGENS’S PRINCIPLE

The principle proposed by Christiaan Huygens (1629-1695) is of fundamental importance in the development of the wave theory. Huygens's principle states that "Each point on a primary wavefront serves as the source of spherical secondary wavelets that advance with a speed and frequency equal to those of the primary wave. The primary wavefront at some later time is the envelope of these wavelets" (9,10). This is illustrated in Fig. 1 for spherical and plane waves modeled as a construction of Huygens secondary waves. Actually, the intensities of the secondary spherical wavelets are not uniform in all directions, but vary continuously from a maximum in the direction of wave propagation to a minimum of zero in the backward direction. As a result, there is no backward-propagating wavefront. The Huygens source approximation is based on the assumption that the magnetic and electric fields are related as a plane wave in the aperture.

Figure 2. Diffraction of waves through a slit based on Huygens's principle.
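The falloff of the secondary-wavelet intensity away from the propagation direction, mentioned in the paragraph above, is commonly modeled by an obliquity factor. A standard form (an addition here, not given explicitly in this article) is the Kirchhoff factor

K(\theta) = \frac{1}{2}\left(1 + \cos\theta\right)

which is maximal in the direction of propagation (θ = 0) and zero in the backward direction (θ = π), consistent with the absence of a backward-propagating wavefront.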

Let us consider the situation shown in Fig. 2, in which an infinite electromagnetic plane wave is incident on an infinite flat sheet that is opaque to the waves. This sheet has an opening that is very small in terms of wavelengths. Accordingly, the outgoing wave corresponds to a spherical wavefront propagating from a point source. That is, when an incoming wave encounters a barrier with a small opening, all but one of the effective Huygens point sources are blocked, and the energy coming through the opening behaves as a single point source. In addition, the outgoing wave emerges in all directions, instead of just passing straight through the slit.

Figure 1. Spherical (a) and plane (b) wave fronts constructed with Huygens secondary waves.

Figure 3. Plane wave incident on an opaque sheet with a slot of width a.

On the other hand, consider an infinite plane electromagnetic wave incident on an infinite opaque sheet shown in Fig. 3 that has an opening a. The field everywhere to the right of the sheet is the result of the section of the wave that passes through the slot. If a is large in terms of wavelengths, the field distribution across the slot is assumed, to a first approximation, to be uniform. The total electromagnetic field at a point to the right of the opening is obtained by integrating the contributions from an array of Huygens sources distributed over the length a. We calculate the electric field at point P on a reference plane located at a distance R0 behind the plane by Huygens's principle (11):

E = E_0 \int_{-a/2}^{a/2} \frac{e^{-jkr}}{r}\,dy \qquad (1)

For points near the array, the integral does not simplify but can be reduced to the form of Fresnel integrals.

The actual evaluation of this integral is best carried out on a computer, which reduces the integral to a summation over N Huygens sources:

E = \sum_{i=1}^{N} E_0 \frac{e^{-jkr_i}}{r_i} \qquad (2)


where r_i is the distance from the ith source to point P. The field variation near the slot that is obtained in this way is commonly called a Fresnel diffraction pattern (4).

For example, let us consider the case in which the slot length a is 5 cm and the wavelength is 1.5 cm (20 GHz). We can use Eq. (2) to compute the field along a straight line parallel to the slot at a distance R0 from it. The field variation for R0 = 2.5 cm shown in Fig. 4(a) is well within the near field (Fresnel region). As we continue to increase R0, the shape of the field variation along this line continues to vary with R0 until we reach the far field, or Fraunhofer region. [See the trends in Figs. 4(b), 4(c), and 4(d).] Once we have entered the Fraunhofer region, the pattern is invariant to range. For the point to be in the far field, the following relationship must hold:

R_0 > \frac{2a^2}{\lambda} \qquad (3)

where a is the width of the slot and λ is the wavelength. Thus, the larger the aperture or the shorter the wavelength, the greater the distance at which the pattern must be measured if we wish to avoid the effects of Fresnel diffraction.
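As a concrete illustration of Eqs. (2) and (3), the short Python sketch below sums N Huygens sources across the slot of the example above (a = 5 cm, wavelength 1.5 cm) and evaluates the field magnitude on observation lines at the four distances of Fig. 4. The discretization N = 100 and all variable names are choices of this sketch, not of the original text:

import numpy as np

wavelength = 1.5                      # cm (20 GHz)
a = 5.0                               # slot width, cm
k = 2 * np.pi / wavelength            # wavenumber, rad/cm
N = 100                               # number of Huygens sources (assumed)
E0 = 1.0                              # per-source amplitude (assumed)

y_src = np.linspace(-a / 2, a / 2, N)     # source positions across the slot

def field(y_obs, R0):
    # Eq. (2): sum of spherical-wavelet contributions e^{-jkr_i}/r_i
    E = np.zeros_like(y_obs, dtype=complex)
    for yi in y_src:
        r = np.hypot(R0, y_obs - yi)      # distance r_i to each observation point
        E += E0 * np.exp(-1j * k * r) / r
    return E

y = np.linspace(-10.0, 10.0, 401)         # observation line parallel to the slot, cm
for R0 in (2.5, 5.0, 15.0, 20.0):         # the four distances of Fig. 4
    mag = np.abs(field(y, R0))
    print(f"R0 = {R0:4.1f} cm: |E(0)| = {mag[len(y) // 2]:.3f}, max |E| = {mag.max():.3f}")

# Far-field criterion of Eq. (3): R0 > 2*a**2/wavelength = 2*25/1.5 ~ 33 cm,
# so even R0 = 20 cm is still in the transition toward the Fraunhofer region.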

Huygens's principle is not without limitations, as it neglects the vector nature of the electromagnetic field. It also neglects the effect of the currents that flow at the slot edges. However, if the aperture is sufficiently large and we confine our attention to directions roughly normal to the aperture, the scalar theory of Huygens's principle gives very satisfactory results.

Geometric optics techniques are commonly applied in reflector antennas to establish the fields in the reflector aperture plane. This procedure is referred to as the aperture field method, and it is employed as an alternative to the so-called induced current method, which is based upon an approximation for the electric current distribution on the reflector surface. The fields in the aperture plane can be thought of as an ensemble of Huygens sources. The radiation pattern can be computed via a numerical summation of the sources.

Figure 4. Electromagnetic field versus distance along the y axis for (a) R0 = 2.5 cm, (b) R0 = 5 cm, (c) R0 = 15 cm, and (d) R0 = 20 cm.

AIR TRAFFIC CONTROL

The United States air traffic management (ATM) system provides services to enable safe, orderly, and efficient aircraft operations within the airspace over the continental United States and over large portions of the Pacific and Atlantic oceans and the Gulf of Mexico. It consists of two components, namely, air traffic control (ATC) and traffic flow management (TFM). The ATC function ensures that the aircraft within the airspace are separated at all times, while the TFM function organizes the aircraft into a flow pattern to ensure their safe and efficient movement. The TFM function also includes flow control, such as scheduling arrivals to and departures from the airports, imposing airborne holding due to airport capacity restrictions, and rerouting aircraft due to unavailable airspace.

In order to accomplish the ATC and TFM functions, the ATM system uses the airway route structure, facilities, equipment, procedures, and personnel. The federal airway structure consists of lower-altitude victor airways and higher-altitude jet routes (1). The low-altitude airways extend from 1200 ft (365.8 m) above ground level (AGL) up to, but not including, 18,000 ft (5486.4 m) above mean sea level (MSL). The jet routes begin at 18,000 ft (5486.4 m) and extend up to 45,000 ft (13,716 m) above MSL. A network of navigational aids marks the centerline of these airways, making it possible to fly on an airway by navigating from one navigational aid to the next. The airways are eight nautical miles wide. Figure 1 shows the location of the jet routes and navigation aids that are within the airspace controlled by the Oakland and Los Angeles Air Route Traffic Control Centers. The jet routes are designated by the letter J, such as J501. Navigation facilities are indicated by a three-letter designation such as PYE.
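The altitude bands above lend themselves to a simple lookup. The following Python sketch (purely illustrative; the function name and defaults are hypothetical, and real airspace classification involves many more rules than altitude alone) maps an altitude to the federal airway structure just described:

def airway_structure(alt_msl_ft: float, ground_elev_ft: float = 0.0) -> str:
    """Classify an altitude into the federal airway structure.

    alt_msl_ft: altitude above mean sea level (MSL), in feet.
    ground_elev_ft: terrain elevation, used for the 1200 ft AGL floor.
    """
    agl = alt_msl_ft - ground_elev_ft
    if agl >= 1200 and alt_msl_ft < 18_000:
        return "victor airway (low altitude)"
    if 18_000 <= alt_msl_ft <= 45_000:
        return "jet route (high altitude, e.g., J501)"
    return "outside the federal airway structure"

print(airway_structure(35_000))   # jet route
print(airway_structure(8_000))    # victor airway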

Four types of facilities are used for managing traffic: the flight service stations (FSSs), air traffic control towers (ATCTs), terminal radar approach controls (TRACONs), and air route traffic control centers (ARTCCs) (1). These facilities provide service during different phases of flight. The flight service stations provide preflight and in-flight weather briefings to the pilots. They also request the flight plan information, which consists of the departure and arrival airports, airspeed, cruise altitude, and the route of flight, and pass it on to the ARTCCs. Flight plan filing is mandatory for flight operations under instrument flight rules. It is not required for flight operations under visual flight rules, but it is highly recommended. The ATCTs interact with the pilots while the aircraft are on the ground or shortly into the flight. During part of the climb, the TRACONs are responsible. TRACON airspace, known as the terminal control area (TCA), is in the shape of an upside-down wedding cake. At higher altitudes, the ARTCCs take on the responsibility for providing ATM services to the aircraft. The process is reversed as the aircraft nears the destination airport.

The main types of equipment used in ATM are radars, displays, computers, and communications equipment. Radars provide information regarding the positions of the aircraft within the airspace. This information is processed in conjunction with the flight plans to predict future locations of the aircraft. The display of this information is used by the air traffic controllers in the facilities to determine whether the established rules and procedures would be violated in the near future. To prevent violations, the air traffic controllers issue clearances to the pilot to modify the flight path of the aircraft, such as to speed up, slow down, climb, descend, or change heading. The procedures used by the air traffic controllers and pilots include rules and methods for operations within the particular airspace. For example, the rules define the minimum separation distance between any two aircraft, the authority of an individual facility over an airspace segment, the transfer of responsibility from one facility to another, and the phraseology for verbal communications. For pilots, these rules specify their responsibility and authority, flight and navigation procedures, reporting requirements, and compliance with ATM instructions. The communications equipment enables both voice and computer-to-computer communications. Voice communication is used between pilots and the ATM facilities and also between ATM facilities. Information transfer from one facility computer to the next is done using the communications equipment.

Energy transformations for small systems

How does the description of the energy transformation processes presented so far change when we deal with small systems? To answer this question, we start by considering an important aspect of physical systems: the condition of being isolated. If we say that a system is not isolated, we mean that it has interactions of some kind with something that we consider external to the system itself. If this is not the case (an isolated system), all the dynamics is self-determined by the system itself, and we can deal with it by addressing the equations of motion for each particle coupled to every other particle in the system. To this aim we may use the standard Newton laws (or, in the quantum case, the Schrödinger equation). If the system is not isolated, the situation is generally more complex, and we need to take into account the interaction of our system with the "external world". In principle, however, any system can be considered isolated provided that we include in the system all the sources of interactions. In the extreme case we can consider the universe itself as an isolated system. For this reason we will limit our considerations to systems that are isolated.

Before answering the question about energy transformations in small systems, we should be more precise in defining what a small system is. When we deal with real physical systems we cannot ignore that all matter, as we know it, is composed of atoms. These are more or less individual particles whose interactions determine most of the properties that characterize matter. The ordinary devices that we are used to dealing with are composed of a very large assembly of atoms, with numbers of the order of the Avogadro number, i.e., N_A ≈ 6.022 × 10^23. Thus when we speak of small systems, we generally mean systems composed of a number of atoms N that is small compared to N_A. Clearly, due to the extremely large value of N_A, a system composed of a few thousand atoms (or molecules or "particles") can still be considered small. This is the case, for example, for nanodevices like the latest generation of transistors. Unfortunately, in this case the small systems are not isolated, because they exchange energy and information with the outside. On the other hand, small isolated systems are quite rare. An example of a small isolated system can be found in the realm of what is generally called "high energy physics": here the particles are most of the time just a few (a small system) and isolated from the external world. Back in the realm of the physics of matter, we frequently have to deal with systems that are usually not small but can be considered, to a good approximation, isolated. What do we do in these cases?

One possibility is to do what we did just before, when we dealt with the movable set in contact with the gas of N particles. Here N is of the order of N_A. Overall our system is composed of 3N+1 degrees of freedom (dof): 3 for each of the N particles and 1 for the movable set position coordinate x. This is clearly not a small system, although it is isolated, because all the interactions are inside the 3N+1 dof. In this case we played a trick: we focused our attention on the single degree of freedom x and summarized the role of the remaining 3N dof by introducing the dissipative and fluctuating forces as external forces. Since both of these forces are necessary to account for the observed dynamics, and since both are born out of the neglected 3N dof, it turns out that they are connected to each other, and the connection is nothing else than the FDT that we discussed. Our equation of motion is no longer the deterministic Newton (Schrödinger) equation; instead it is the stochastic Langevin equation, where there are both friction and fluctuation caused by the forces added to represent the neglected dof. Thus the trick we played was to exchange the dynamics of a not-small isolated system for those of a small non-isolated system. Such an approach has different names (adiabatic elimination, coarse graining, etc.) and is considered a very useful tool in describing the properties of dynamical systems composed of many dof.
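A minimal numerical sketch of this picture is easy to write down. The Python fragment below integrates a Langevin equation for the single retained coordinate x, with the friction coefficient and the noise amplitude tied together by the FDT; all parameter values are illustrative assumptions, not taken from the text:

import numpy as np

rng = np.random.default_rng(0)

m, k, gamma, kBT = 1.0, 1.0, 0.5, 0.01   # mass, spring, friction, temperature (assumed units)
dt, nsteps = 1e-3, 200_000

x, v = 1.0, 0.0                          # potential energy initially stored in the spring
xs = np.empty(nsteps)
for i in range(nsteps):
    # FDT fixes the fluctuating-force strength from gamma and kBT:
    # <xi(t) xi(t')> = 2 gamma kBT delta(t - t')
    xi = np.sqrt(2.0 * gamma * kBT / dt) * rng.standard_normal()
    v += (-k * x - gamma * v + xi) * dt / m
    x += v * dt
    xs[i] = x

# The oscillation amplitude decays (energy flows into the neglected dof) and
# x then fluctuates around equilibrium with variance ~ kBT/k (equipartition).
print("late-time variance of x:", xs[nsteps // 2:].var(), " kBT/k =", kBT / k)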

To summarize our approach: we have transformed a non-small isolated system into a small non-isolated system. What is the advantage? Easy to say: the dynamics of a non-small isolated system must be described in terms of 3N+1 dof by 3N+1 coupled equations of motion, and when N is of the order of N_A this is a practically impossible approach. Thus the advantage was to drastically reduce the number of equations of motion (in this case to just 1), but the price we had to pay is the introduction of dissipation and fluctuation. What we have found is that dissipative and fluctuating effects appear only if we neglect some (usually many) dof through some coarse-graining approximation to the system dynamics. In this perspective, the dissipation of energy appears to be only an illusion due to our choice of dynamical description.

On the other hand, we know that if we perform a real experiment with our movable set, we do observe a decrease in the oscillation amplitude of the set until it reaches the stop, and it then starts to fluctuate around the equilibrium position. This is not an illusion. The potential energy initially stored in the spring is now dissipated due to the presence of the gas particles. How does this fit with what we just said about the dissipation being an illusion? The answer is that the total energy (the kinetic energy of the gas particles plus the potential energy initially stored in the spring) is conserved because the (not-small) system is isolated. What has happened is that the potential energy of the movable set has been progressively transformed into additional kinetic energy of the N particles, which now have a slightly larger average velocity (the temperature of our gas has slightly increased). Thus during the dynamics the energy is transferred from some (few) dof to other (many) dof. This is what we called energy dissipation before, and it now appears to be nothing more than energy redistribution. We have seen before that dissipative effects during a transformation are associated with an increase of entropy. Indeed, this energy redistribution process is an aspect of the tendency of the system to reach maximum entropy (while conserving the energy). This is what we have called a spontaneous transformation: the increase of the entropy up to the point where no more energy redistribution takes place, i.e., thermal equilibrium.

Is this the end of the story? Actually it is not. There is a quite subtle aspect associated with the conservation of energy. It is known as the Poincaré recurrence theorem. It states that in a system that conserves energy, the dynamics evolve in such a way that, after a sufficiently long time, the system returns to a state arbitrarily close to the initial state. The time that we have to wait in order to have this recurrence is called the Poincaré recurrence time. In simple words, this means not only that the dissipation of energy is an illusion, because the energy is simply redistributed among all the dof, but also that this redistribution is not final (i.e., in the long term the equilibrium does not exist). If we wait long enough, we will see that after some time the energy will flow back to its initial distribution and our movable set will get its potential energy back (with the gas particles becoming slightly colder). This is quite surprising indeed, because it implies that in this way we can reverse the entropy increase typical of processes with friction and thus violate the second principle. Although this may appear a paradox, the answer was already included in the description of entropy proposed by Boltzmann, specifically in its intrinsic probabilistic character. The decrease of entropy for a system composed of many dof is not impossible: it is simply extremely improbable. It goes like this: for any finite observation time the dynamical system most probably evolves in a direction where the entropy increases because, according to Boltzmann, this is the most probable evolution. However, if we wait long enough, the less probable outcome will also be realized, and thus the second principle violated. How much time should we wait? The answer depends on the dof of our isolated (energy-conserving) system. The larger the number of dof, the longer the time to wait: exponentially longer.
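A toy model (our own illustration, not part of the original discussion) makes this exponential growth tangible: take N independent oscillators with incommensurate frequencies as the N dof, start all phases at zero, and measure the first later time at which every phase has simultaneously returned to within a tolerance of its initial value:

import numpy as np

rng = np.random.default_rng(1)
tol, dt = 0.3, 0.05        # "arbitrarily close" tolerance (rad) and scan step

def first_recurrence(freqs, t_start=1.0, chunk=100_000, t_max=1e7):
    # Scan time in vectorized chunks, starting after the state has
    # left the neighborhood of the initial condition.
    t0 = t_start
    while t0 < t_max:
        t = t0 + dt * np.arange(chunk)
        phases = np.outer(t, freqs) % (2 * np.pi)
        dist = np.minimum(phases, 2 * np.pi - phases)   # circular distance to 0
        hits = np.flatnonzero(np.all(dist < tol, axis=1))
        if hits.size:
            return t[hits[0]]
        t0 += dt * chunk
    return float("inf")

for n_dof in (1, 2, 3, 4, 5, 6):
    freqs = 1.0 + rng.random(n_dof)    # incommensurate frequencies in [1, 2)
    print(f"N = {n_dof}: first recurrence at t ~ {first_recurrence(freqs):.1f}")

# The waiting time grows roughly like (pi/tol)**N: exponentially in the
# number of dof, in line with the argument above.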