

Elaborate antennas or antenna systems require careful design and a thorough understanding of the radiation mechanism involved. The selection of the type of antenna to be used is determined by electrical and mechanical constraints and operating costs. The electrical parameters of the antenna are the frequency of operation, gain, polarization, radiation pattern, impedance, and so on. The mechanical parameters of importance are the size, weight, reliability, manufacturing process, and so on. In addition, the environment under which the antenna is to be used also needs to be taken into consideration; for example, the effects of temperature, rain, and wind vibrations. Antennas are shielded from the environment through the use of radomes, whose presence is taken into account while designing the antenna.

Antennas can be classified broadly into the following categories: wire antennas, reflector antennas, lens antennas, traveling wave antennas, frequency independent antennas, horn antennas, and conformal antennas. In addition, antennas are very often used in array configurations to improve upon the characteristics of an individual antenna element.

Wire Antennas

Wire antennas were among the first types of antennas used and are the most familiar type to the layman. These antennas can be linear or in the form of closed loops. The thin linear dipole is used extensively, and the half-wavelength dipole has a radiation resistance of 73 Ω, very close to the 75 Ω characteristic impedance of feed lines such as the coaxial cable. It has an omnidirectional pattern as shown in Fig. 2 with a half-power beamwidth of 78°. Detailed discussions on dipole antennas of different lengths can be found in Ref. 25.
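The 78° figure can be checked numerically from the standard thin half-wave dipole far-field pattern, F(θ) = cos((π/2) cos θ)/sin θ; the sketch below (plain NumPy, no antenna library assumed) locates the half-power points:

```python
import numpy as np

# Far-field pattern of a thin half-wave dipole; theta measured from the dipole axis
theta = np.linspace(1e-6, np.pi - 1e-6, 100_000)
F = np.cos(np.pi / 2 * np.cos(theta)) / np.sin(theta)
power = F**2 / np.max(F**2)            # normalized power pattern

# Half-power beamwidth: angular width over which the pattern stays above 0.5
above = theta[power >= 0.5]
hpbw_deg = np.degrees(above[-1] - above[0])
print(f"half-power beamwidth ~ {hpbw_deg:.1f} deg")
```

The computed width comes out at approximately 78°, matching the value quoted above.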

Loop antennas can have several different shapes such as circular, square, and rectangular. Electrically small loops are those whose overall wire extent is less than one-tenth of a wavelength. Electrically large loops have circumferences that are of the order of a wavelength. An electrically small circular or square loop antenna can be treated as an infinitesimal magnetic dipole with its axis perpendicular to the plane of the loop. Various configurations of polygonal loop antennas have been investigated (27). In the ferrite loop, a ferrite core is placed in the loop antenna to increase its efficiency. Loop antennas are inefficient, with high ohmic losses, and often are used as receivers and as probes for field measurements. The radiation pattern of small loop antennas has a null perpendicular to the plane of the loop and a maximum along the plane of the loop. An electrically large loop antenna has its maximum radiation perpendicular to the plane of the loop and is regarded as the equivalent of the half-wavelength dipole.

Dipole and loop antennas find applications in the low to medium frequency ranges. They have wide beamwidths, and their behavior is greatly affected by nearby obstacles or structures. These antennas are often placed over a ground plane. The spacing above the ground plane determines the effect the ground plane has on the radiation pattern and the increase in the directivity (21).

Thick dipoles are used to improve the narrow bandwidth of thin dipole antennas. Examples of these are the cylindrical dipole, the folded dipole, and the biconical antenna. The use of a sleeve around the input region and the arms of the dipole also results in broader bandwidths.

Reflector Antennas

Since World War II, when reflector antennas gained prominence due to their use with radar systems, these antennas have played an important role in the field of communications. Love (28) has published a collection of papers on reflector antennas. Reflector antennas have a variety of geometrical shapes and require careful design and a full characterization of the feed system. Silver (5) presents the technique for analysis based on aperture theory and physical optics. Other methods, such as the geometrical theory of diffraction (GTD) and the fast Fourier transform (FFT) along with various optimization techniques (29), are now used for a more accurate design of these antennas.

The plane reflector is the simplest type of reflector and can be used to control the overall system radiation characteristics (21). The corner reflector has been investigated by Kraus (30), and the 90° corner reflector is found to be the most effective. The feeds for corner reflectors are generally dipoles placed parallel to the vertex. These antennas can be analyzed in a rather straightforward manner using the method of images. Among curved reflectors, the paraboloid is the most commonly used. The paraboloid reflector shown in Fig. 3 is formed by rotating a parabolic reflector about its axis. The reflector transforms a spherical wave radiated from a feed at its focus into a plane wave.
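The focusing property follows from the parabola's equal-path-length geometry: every ray from the focus to the reflector surface and onward to a plane in front of the aperture travels the same total distance, so the reflected field is in phase across the aperture. A small numerical check (the focal length and aperture plane below are arbitrary illustration values):

```python
import math

f = 1.0                      # focal length of the parabola y = x^2 / (4 f)
aperture_plane = 3.0         # any plane y = const in front of the reflector

def path_length(x):
    """Focus -> reflector point -> aperture plane (reflected rays run parallel to the axis)."""
    y = x**2 / (4 * f)
    focus_to_point = math.hypot(x, y - f)   # focus sits at (0, f)
    point_to_plane = aperture_plane - y
    return focus_to_point + point_to_plane

lengths = [path_length(x) for x in (-1.5, -0.5, 0.0, 0.8, 1.5)]
print(lengths)               # every ray: f + aperture_plane = 4.0
```

All rays give the same length, f plus the distance to the chosen plane, which is exactly the condition for a spherical wave from the focus to emerge as a plane wave.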

To avoid blockage caused by the feed placed at the focal point in a front-fed system, the feed is often offset from the axis (31). The Cassegrain reflector is a dual-reflector system using a paraboloid as the primary and a hyperboloid as the secondary reflector, with a feed along the axis of the paraboloid.

The Gregorian dual-reflector antenna uses an ellipse as the subreflector. The aperture efficiency in a Cassegrain antenna can be improved by modifying the reflector surfaces (28). Most paraboloidal reflectors use horn antennas (conical or pyramidal) for their feeds. With a paraboloidal reflector, beam scanning by feed displacement is limited. A spherical reflector provides greater scanning but requires a more elaborate feed design, since it fails to focus an incident plane wave to a point. Spherical reflectors can suffer from a loss in aperture and increased minor lobes due to blockage by the feed.


Figure 3. A parabolic reflector antenna with its feed. (Courtesy, NASA Lewis Center)


Lens Antennas

At larger wavelengths, reflectors become impractical due to the necessity of having large feed structures and tolerance requirements. At low frequencies, the lens antenna is prohibitively heavy. Both lens antennas and parabolic reflectors use free space as a feed network to excite a large aperture. The feed of a lens remains out of the aperture and thus eliminates aperture blockage and high side lobe levels. Dielectric lens antennas are similar to optical lenses, and the aperture of the antenna is equal to the projection of the rim shape. Lenses are divided into two categories: single-surface and dual-surface. In the single-surface lens, refraction occurs only at one surface. The other surface is an equiphase surface of the incident or emergent wave, and the waves pass through normal to the surface without refraction. Single-surface lenses convert either cylindrical or spherical waves to plane waves. In a dual-surface lens, refraction occurs at both lens surfaces. The far field is determined by diffraction from the aperture. Dual-surface lenses allow more control of the pattern characteristics. Both surfaces are used for focusing, and the second surface can be used to control the amplitude distribution in the aperture plane. These simple lenses are many wavelengths thick if their focal length and aperture are large compared to a wavelength. The surface of the lens can be zoned by removing multiples of wavelengths from the thickness. The zoning can be done either in the refracting or nonrefracting surface as shown in Fig. 4. The zoned lens is frequency sensitive and can give rise to shadowing losses at the transition regions (5).
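The zoning step follows from a one-wavelength path-length argument: removing a thickness t of dielectric changes the optical path by t(n − 1), so choosing t = λ/(n − 1) leaves the aperture phase unchanged at the design frequency. A quick sketch (the 10 GHz wavelength and the index value are illustrative, not from the text):

```python
wavelength = 0.03            # meters, i.e. 10 GHz (illustrative)
n = 1.6                      # refractive index of the lens dielectric (illustrative)

# Removing thickness t changes the optical path by t * (n - 1);
# setting that equal to one wavelength keeps the emergent wave in phase.
zone_step = wavelength / (n - 1)
print(f"zoning step = {zone_step * 100:.1f} cm")
```

Because the step is tied to one particular wavelength, the zoned lens is inherently frequency sensitive, as the text notes.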

Artificial dielectric lenses, in which particles such as metal spheres, strips, disks, or rods are introduced in the dielectric, have been investigated by Kock (32). The size of the particles has to be small compared to the wavelength. Metal-plate lenses using spaced conducting plates are used at microwave frequencies. Since the index of refraction of a metal-plate medium depends on the ratio of the wavelength to the spacing between the plates, these lenses are frequency sensitive. The Luneberg lens is a spherically symmetric lens with an index of refraction that varies as a function of the radius. A plane wave incident on this lens will be brought to a focus on the opposite side. These lens antennas can be made using a series of concentric spherical shells, each with a constant dielectric constant.
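For the classical Luneberg design the index profile is n(r) = sqrt(2 − (r/a)²), falling from √2 at the center to 1 at the rim; the stepped-shell construction mentioned above amounts to sampling this profile. A sketch (the shell count is an arbitrary choice):

```python
import numpy as np

a = 1.0                                   # lens radius (normalized)
shells = 8
edges = np.linspace(0.0, a, shells + 1)
centers = 0.5 * (edges[:-1] + edges[1:])  # evaluate the profile at mid-shell radii

# Luneberg profile: n = sqrt(2) at the center, n = 1 at the rim
n_shell = np.sqrt(2.0 - (centers / a) ** 2)
eps_r = n_shell**2                        # dielectric constant of each shell
for r, e in zip(centers, eps_r):
    print(f"r = {r:.3f}  eps_r = {e:.3f}")
```

Each concentric shell is then fabricated from a material whose dielectric constant matches the sampled value, approximating the continuous gradient.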

Traveling Wave Antennas

Traveling wave antennas (33) are distinguished from other antennas by the presence of a traveling wave along the structure and by the propagation of power in a single direction. Linear wire antennas are the dominant type of traveling wave antennas. Linear wire antennas with standing wave patterns of current distribution are referred to as standing wave or resonant antennas. In a traveling wave antenna, the amplitude of the current distribution is uniform along the source, but the phase changes linearly with distance. There are in general two types of traveling wave antennas. The surface wave antenna is a slow wave structure, where the phase velocity of the wave is smaller than the velocity of light in free space. The radiation occurs from discontinuities in the structure. A leaky wave antenna is a fast wave structure, the phase velocity of the wave being greater than the velocity of light in free space. The structure radiates all its power, with the fields decaying in the direction of wave travel.

Figure 5. A Yagi-Uda antenna.

Figure 6. A two-arm balanced conical spiral antenna.

Figure 7. Horn antennas: sectoral E-plane, sectoral H-plane, and conical.

A long wire antenna, many wavelengths in length, is an example of a traveling wave antenna. The Beverage antenna is a thin wire placed horizontally above a ground plane. The antenna has poor efficiency but can have good directivity and is used as a receiving antenna in the low to mid-frequency range. The V antenna is formed by using two Beverage antennas separated by an angle and fed from a balanced line. By adjusting the angle, the directivity can be increased and the side lobes can be made smaller. Terminating the legs of the V antenna in their characteristic impedances makes the wires nonresonant and greatly reduces back radiation. The rhombic antenna consists of two V antennas. The second V antenna brings the two sides together, and a single terminating resistor can be used to connect the balanced lines. An inverted V over a ground plane is another configuration for a rhombic antenna.

The pattern characteristics can be controlled by varying the angle between the elements, the lengths of the elements, and the height above the ground. The helical antenna (21) is a high-gain broadband end-fire antenna. It consists of a conducting wire wound in a helix. It has found applications as a feed for parabolic reflectors and in various space communications systems. A popular and practical antenna is the Yagi-Uda antenna (34,35) shown in Fig. 5. It uses an arrangement of parasitic elements around the feed element to act as reflectors and directors to produce an end-fire beam. The elements are linear dipoles, with a folded dipole used as the feed. The mutual coupling between the standing-wave current elements in the antenna is used to produce a traveling wave unidirectional pattern.

Frequency Independent Antennas

Frequency independent antennas, or self-scaling antennas, were introduced in the early 1950s, extending antenna bandwidths by greater than 40% (36). Ideally, an antenna will be frequency independent if its shape is specified only in terms of angles. These antennas have to be truncated for practical use, and the current should attenuate along the structure to a negligible value at the termination. Examples of these antennas are the bidirectional planar spiral and the unidirectional conical spiral antenna shown in Fig. 6.

Horn Antennas

The electromagnetic horn antenna is characterized by attractive qualities such as a unidirectional pattern, high gain, and purity of polarization. Horn antennas are used as feeds for reflector and lens antennas and as a laboratory standard for other antennas. A good collection of papers on horn antennas can be found in Ref. 37. Horns can be of a rectangular or circular shape as shown in Fig. 7.

Rectangular horns, derived from a rectangular waveguide, can be pyramidal or sectoral E-plane and H-plane horns. The E-plane sectoral horn has a flare in the direction of the E field of the dominant TE10 mode in the rectangular waveguide, and the H-plane sectoral horn has a flare in the direction of the H field. The pyramidal horn has a flare in both directions. The radiation pattern of the horn antenna can be determined from a knowledge of the aperture dimensions and the aperture field distribution. The flare angle of the horn and its dimensions affect the radiation pattern and its directivity. Circular horns, derived from circular waveguides, can be conical, biconical, or exponentially tapered.

Figure 8. A coaxial-fed (a) microstrip antenna and (b) stacked microstrip antenna.

The need for feed systems that provide low cross-polarization, reduced edge diffraction, and more symmetrical patterns led to the design of the corrugated horn (38). These horns have corrugations or grooves along the walls, which present equal boundary conditions to the electric and magnetic fields when the grooves are λ/4 to λ/2 deep. The conical corrugated horn, referred to as the scalar horn, has a larger bandwidth than the small-flare-angle corrugated horns.

Conformal Antennas

Microstrip antennas have become a very important class of antennas since they received attention in the early 1970s. These antennas are lightweight, easy to manufacture using printed circuit techniques, and compatible with monolithic microwave integrated circuits (MMICs). In addition, an attractive property of these antennas is that they are low profile and can be mounted on surfaces; that is, they can be made to "conform" to a surface, hence they are referred to as conformal antennas. The microstrip antenna consists of a conducting patch or radiating element, which can be square, rectangular, circular, or triangular, etched on a grounded dielectric substrate as shown in Fig. 8.

These antennas are an excellent choice for use on aircraft and spacecraft. Microstrip antennas have been investigated extensively over the past twenty years, and the two volumes published by Hall and Wood (39) provide an excellent description of various microstrip antennas, their design, and usage. Microstrip antennas are fed using a coaxial probe, a microstrip line, proximity coupling, or aperture coupling. A major disadvantage of these antennas is that they are poor radiators and have a very narrow frequency bandwidth. They are often used in an array environment to achieve the desired radiation characteristics. Larger frequency bandwidths are obtained by using stacked microstrip antennas.

Antenna Arrays

Antenna arrays are formed by suitably spacing radiating elements in a one- or two-dimensional lattice. By suitably feeding these elements with relative amplitudes and phases, these arrays produce desired directive radiation characteristics.

The arrays allow a means of increasing the electrical size of the antenna without increasing the size of the individual elements. Most arrays consist of identical elements, which can be dipoles, helices, large reflectors, or microstrip elements. The array has to be designed such that the radiated fields from the individual elements add constructively in the desired directions and destructively in the other directions. Arrays are generally classified as end-fire arrays, which produce a beam directed along the axis of the array, or broadside arrays, with the beam directed in a direction normal to the array. The beam direction can be controlled or steered using a phased array antenna, in which the phase of the individual elements is varied. Frequency scanning arrays are an example where beam scanning is done by changing the frequency. Adaptive array antennas produce beams in predetermined directions. By suitably processing the received signals, the antenna can steer its beam toward the direction of the desired signal and simultaneously produce a null in the direction of an undesired signal.
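The phased-array steering described above can be sketched with the array factor of a uniform linear array: applying a progressive phase shift β = −2πd sin(θ0)/λ to the elements moves the main beam to θ0. A minimal NumPy example (the element count, spacing, and steering angle are arbitrary illustration values):

```python
import numpy as np

N = 8                       # elements in a uniform linear array
d = 0.5                     # element spacing in wavelengths
steer_deg = 30.0            # desired beam direction measured from broadside

# Progressive phase shift that steers the main beam to steer_deg
beta = -2 * np.pi * d * np.sin(np.radians(steer_deg))

theta = np.radians(np.linspace(-90, 90, 3601))
psi = 2 * np.pi * d * np.sin(theta) + beta
# Array factor: coherent sum over the N element phases, normalized to 1 at the peak
af = np.abs(np.sum(np.exp(1j * np.outer(np.arange(N), psi)), axis=0)) / N

peak_deg = np.degrees(theta[np.argmax(af)])
print(f"main beam at ~ {peak_deg:.1f} deg")
```

Changing only the phase slope β (no mechanical motion) moves the beam, which is the essence of a phased array; half-wavelength spacing keeps grating lobes out of visible space.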

What does irreversible mean?

When we introduced the entropy change, we specified that it is defined in terms of heat transfer once we perform a reversible transformation. What exactly does reversible mean? Well, reversible literally means that "it can be done the other way around," but in my opinion that is not a very clear definition. What is usually meant is that if we want to go from a state A toward a state B, we need to perform a transformation so slow that it goes through an infinite number of equilibrium states, so that at any instant all the macroscopic quantities like temperature, pressure, volume, … are well defined. Since these quantities are defined only in equilibrium conditions, we need to be as close to equilibrium as possible. For a number of comprehensible reasons that we will address in more detail in another chapter, this requires that we go quite slow[6] when we change anything in the system.

What happens if we do not go "slow"? Well, as we have seen before, in this case we are performing an irreversible transformation. During an irreversible transformation the entropy always increases. Moreover, due to the Clausius inequality, it always increases somewhat more than what would be required by the second law. Why is that? The answer is that, in addition to the unavoidable increase, there is an extra contribution due to the dissipative effects of the non-equilibrium processes. By dissipative effect we mean a way in which some low-entropy energy is changed into high-entropy energy. A typical example of a dissipative process is friction. If during any transformation there is friction, then the transformation is irreversible and the increase in entropy receives the additional contribution of this process.

In this regard, it is interesting to inspect in more detail the example of the movable set in contact with the gas that we introduced before. When the system represented by the particle gas + the movable set is at equilibrium, the movable set is not only acted on by the collisions of the particles but is also damped by the very same source. To see this effect we can consider two simple cases.

1. We compress the spring to some extent and then release the compression, leaving it free to oscillate. After a few oscillations we observe that the oscillation amplitude decreases as a consequence of what we call friction (a viscous damping force) due to the presence of the gas. The decrease ceases when the oscillation amplitude reaches a certain equilibrium value, and after that it remains constant on average (see following figure). Some energy has been dissipated into heat.


2. We now start with the movable set at rest and leave it free. After a few seconds we will see that the set starts to move with increasing oscillation amplitude, which soon reaches an equilibrium condition at the very same value (on average) as in the first case (see following figure).


In both cases the two different roles of damping force and pushing force have been played by the gas. This fact leads one to think that there must be a connection between the process of dissipating energy (a typical irreversible, i.e. non-equilibrium, process) and the process of fluctuating at equilibrium with the gas.
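This connection can be illustrated with a minimal Langevin-equation sketch in dimensionless units: the same gas coupling γ appears both as the viscous damping and as the strength of the random pushes, and the oscillation settles at the equipartition value ⟨x²⟩ = kBT/k regardless of whether it starts compressed or at rest. All parameter values below are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)
k, m, gamma, kT = 1.0, 1.0, 0.5, 1.0    # spring, mass, gas damping, temperature
dt, steps = 0.01, 400_000
# Random kicks from the gas; their variance is tied to gamma (fluctuation-dissipation)
kicks = np.sqrt(2 * gamma * kT / dt) * rng.standard_normal(steps)

x, v = 0.0, 0.0                          # movable set initially at rest (case 2)
xs = np.empty(steps)
for i in range(steps):
    v += (-k * x - gamma * v + kicks[i]) / m * dt   # gas both damps and pushes
    x += v * dt
    xs[i] = x

var = xs[steps // 4:].var()              # discard the initial transient
print(f"<x^2> ~ {var:.2f}; equipartition predicts kT/k = {kT/k:.2f}")
```

Starting instead from a compressed spring (case 1) decays to the same statistical amplitude, which is exactly the point made above: the damping and the fluctuations have a common origin.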

Ground-Based NAS Equipment

The ground-based equipment needed for the NAS architecture involves improvements and development at all NAS facilities. Traffic flow management and air traffic control tools are improved at the ARTCCs (Centers), the TRACONs (Approach Control), and the ATCTs (Towers).

En Route ARTCC Equipment. The new NAS architecture upgrades the existing ARTCC Center equipment. New display systems, surveillance data processors (SDP), and flight data processors (FDP) are improvements to existing systems. The SDP system will collect information from the surveillance systems, such as the ADS-B reports. The SDP will provide aircraft tracking and conflict detection/resolution. The FDP will correlate aircraft tracks with flight plan information. The FDP will also communicate with other ARTCC Centers and terminals to ensure that all air traffic management units have the same flight plan information for an aircraft (9).

Air Traffic Management Decision Support Services. Air traffic management (ATM) combines the ATC and traffic flow management (TFM) functions. ATM support tools are called decision support services. The TFM decision support services function includes the collaborative decision-making tool that aids the pilot/controller interaction in flight planning (9).

The decision support services for the ATC function involve conflict detection/resolution and the Center/TRACON Automation System (CTAS). The Center/TRACON Automation System is a tool developed by NASA to support air traffic management. CTAS computes an aircraft's route and intentions 40 min into the future. The aircraft destination, as filed in the flight plan, and the aircraft type are considered in the calculations. CTAS examines the aircraft mix that is arriving at an airport and provides the arrival sequencing and separation for efficient operation (8).

Ground Controller Equipment. Sensors and conflict detection/resolution equipment dominate enhancements to the ground controller equipment. At a large, busy airport, the number of aircraft taxiing can be significant. During arrival and departure pushes, in good and bad weather, it is difficult to manage the position of each aircraft and its intentions. Three systems that will help the ground controller manage the traffic safely and efficiently are the Airport Surface Detection Equipment (ASDE), the Airport Target Identification System (ATIDS), and the Airport Movement Area Safety System (AMASS) (9).


Airport Surface Detection Equipment (ASDE) is a radar system that detects aircraft and other vehicles moving on the airport surface. The ASDE antenna is a large rotodome that typically mounts on top of the control tower. The rotodome physically spins at 60 revolutions per minute. The ASDE system "paints" surface traffic using the radar reflection from the target. The ASDE system is already installed at numerous airports. A large ASDE monitor is mounted in the control tower to display traffic.

One drawback with the ASDE system is that traffic appears as "blips" on the monitor with no flight identification tags. The ATIDS solves that problem by applying tags to the ASDE targets. ATIDS is a multilateration system that listens to the Mode-S transmissions from aircraft. By timing the arrival of the transmission at multiple sites, it is possible to determine the aircraft location from the time differences of arrival. The ATIDS system uses flight plan information to correlate the aircraft's transponder code with the flight number (14).
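The timing idea can be sketched as follows: each pair of receivers sees a time difference of arrival (TDOA) that constrains the transmitter to a hyperbola, and the intersection of these curves fixes the position. The receiver layout, target position, and the brute-force grid solver below are all illustrative, not a description of ATIDS internals:

```python
import numpy as np

c = 1.0                                      # propagation speed (normalized units)
receivers = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0], [1000.0, 1000.0]])
true_pos = np.array([420.0, 610.0])          # hypothetical transponder location

# Noiseless arrival times (the absolute emit time is unknown, so use differences)
toa = np.linalg.norm(receivers - true_pos, axis=1) / c
tdoa = toa - toa[0]                          # differences relative to receiver 0

# Brute force: pick the grid point whose predicted TDOAs best match the measurement
xs = ys = np.arange(0.0, 1000.0, 2.0)
X, Y = np.meshgrid(xs, ys)
pts = np.stack([X.ravel(), Y.ravel()], axis=1)
d = np.linalg.norm(pts[:, None, :] - receivers[None, :, :], axis=2) / c
err = np.sum((d - d[:, :1] - tdoa) ** 2, axis=1)
best = pts[np.argmin(err)]
print(best)                                  # close to the true position
```

A real system would solve the hyperbolic equations in closed form or by least squares and must contend with timing noise, but the grid search shows why three or more sites suffice to localize the target.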

AMASS tracks ASDE targets and performs collision detection analysis on airport traffic. AMASS alerts the ground controller to possible conflicts. AMASS also alerts the controller to possible runway incursion incidents, where a taxiing aircraft is entering an active runway incorrectly. AMASS correlates position information from the ASDE and ATIDS systems and applies the ATIDS identification tag to the targets on the ASDE display.

Airport Facilities and Procedures. To increase capacity, the nation's airports have been building new runways and extending existing runways. Extending the length of the runways can help increase capacity by turning general aviation runways into air-carrier-length runways. New procedures are also being defined for parallel approaches and reduced separation standards.

Adding new runways and extending existing runways adds capacity without the cost of adding new airports. By 1997, 64 of the top 100 airports had recently completed, or were in the process of developing, new runways or runway extensions to increase airport capacity. Many of these are at the busiest airports, which are forecast to have more than 20,000 h of annual air carrier delay in 2005 (3).

Figure 6 lists the number of new runways and runway extensions that are currently planned. There are 17 new runways and 10 runway extensions not shown in the figure because they are planned but have not been assigned a firm completion date (3).

The largest capacity gains result from the construction of new airports. Considering that the new Denver International Airport, which opened in 1995, cost more than $4 billion, building new airports is not always feasible. Only one new airport was under construction in 1997. The new airport is being created from the conversion of Bergstrom Air Force Base in Austin, Texas, to a civilian facility. The closed military base was to be open for passenger service by 1999. The new facility will add capacity to the system at a reduced cost compared to building a new airport (3).

Terminal area capacity can be increased by redesigning terminal and en route airspace. Relocating arrival fixes, creating new arrival and departure routes, modifying ARTCC traffic flows, and redefining TRACON boundaries can all increase capacity. Improvements to en route airspace must be coordinated with terminal area improvements to avoid a decrease in terminal capacity. If the en route structure were improved to deliver more aircraft than the terminal area can handle, the additional delays would decrease the terminal capacity (3).

Instrument Approach Procedures. Instrument approach procedures can improve capacity by reducing the separation standards for independent (simultaneous) instrument approaches to dual and triple parallel runways. Land and hold short operations for intersecting runways and simultaneous approaches to converging runways can also increase capacity.

Simultaneous instrument approaches to dual parallel runways are authorized when the minimum spacing between runways is 4300 ft. The spacing minimum has been reduced to 3000 ft when the airport has a parallel runway monitor, one localizer is offset by 2.5°, and the radar has a 1.0 s update. Airport capacity is expected to increase by 15 to 17 arrivals per hour (3).

Simultaneous arrivals to three parallel runways are also authorized. Spacing requirements state that two of the runways must be a minimum of 4000 ft apart. The third runway must be separated by a minimum of 5300 ft. Radar with a 1.0 s update rate must also be used (3).

Land and hold short operations (LAHSO) allow simultaneous arrivals to intersecting runways. Land and hold short operations require that arriving aircraft land and then hold short of the intersecting runway. Current regulations define land and hold short operations only for dry runways. Special criteria for wet operations are being developed and should be implemented by early 1997. During tests at Chicago O'Hare, a 25% increase in capacity was achieved during wet operations using land and hold short operations on intersecting runways (3).

Simultaneous approaches can be performed on runways that are not parallel provided that VFR conditions exist. VFR conditions require a minimum ceiling of 1000 ft and minimum visibility of 3 miles. The VFR requirement decreases runway capacity in IFR (Instrument Flight Rules) conditions and causes weather-related delays. Simultaneous instrument approaches to converging runways are being studied. A minimum ceiling of 650 ft is required. The largest safety issue is the occurrence of a missed approach (go-around) by both aircraft. An increase in system capacity of 30 arrivals per hour is expected (3).

Reduced Separation Standards. A large factor in airport capacity is the separation distance between two aircraft. The main factor in aircraft separation is the generation of wake vortexes. Wake vortexes are like horizontal tornadoes created by an aircraft wing as it generates lift. Wake vortex separation standards are based on the class of the leading and trailing aircraft. Small aircraft must keep a 4 nautical mile (nm) separation when trailing behind large aircraft. If the lead aircraft is a Boeing 757, then a small aircraft must trail by 5 nm. Large aircraft only need to trail other large aircraft by 3 nm. The FAA and NASA are studying methods of reducing the wake vortex separation standards to increase capacity. Any reduction in the spacing standards must ensure that safety is preserved (3).
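The pairwise rules quoted above are, in effect, a lookup table keyed on the (leading, trailing) aircraft classes. A minimal sketch covering only the three cases the text mentions (the function and class names are illustrative, and real separation tables contain many more categories):

```python
# Minimum in-trail separation (nautical miles) for the cases quoted in the text
SEPARATION_NM = {
    ("large", "small"): 4.0,   # small aircraft trailing a large aircraft
    ("b757",  "small"): 5.0,   # small aircraft trailing a Boeing 757
    ("large", "large"): 3.0,   # large aircraft trailing another large aircraft
}

def wake_separation(leading: str, trailing: str) -> float:
    """Look up the wake-vortex separation for a (leading, trailing) pair."""
    return SEPARATION_NM[(leading, trailing)]

print(wake_separation("b757", "small"))   # 5.0
```

Reducing any entry in such a table directly raises the arrival rate a runway can sustain, which is why the FAA/NASA studies focus on these minima.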


Modeling attempts to find the most simplified method for accurately defining a system. Physiological modeling, or biomodeling, assists in (1) research, by verifying hypotheses or indicating areas needing further study; (2) teaching and training in medical schools; and (3) clinical applications, by aiding in such areas as diagnosis, determination of drug regimens, or design of biomedical devices, including prostheses or drug delivery systems (56). Typically, these models are continuous models, and some use artificial intelligence and neural modeling.

Biomedical engineers who model physiological systems must have (1) an in-depth understanding of the physiology, anatomy, biochemistry and biophysics of the physiological system being modeled; (2) knowledge of instrumentation, methods of measurement, and sources of data for important parametric and system variables; (3) a background in applied mathematics, such as ordinary differential equations (ODEs), partial differential equations (PDEs), and statistics; and (4) experience with computer hardware and software, including differential equation solving and compiler languages (56).

Models of physiological systems need to consider the transport phenomena associated with the system under consideration.

Transport mechanisms in the body include momentum, mass, energy, and information transport. Momentum transport is considered when modeling blood flow. Mass transport deals with the flow of various substances, such as oxygen, carbon dioxide, and pharmaceuticals, that are carried in the blood, air, food and digestive juices, and urine, and with the diffusion of these substances into and out of air, blood, and tissues. Energy transport refers to the mechanisms the body uses to deal with heat energy. Energy transformation and transport need to be considered when models involve muscle tissue. The transmission of information through nerves or hormones is what is meant by information transport.

A typical modeling method for quantifying the kinetics of materials in the body via production, distribution, transport, utilization, or substrate-hormone control interactions involves compartmental analysis (57). One example of compartmental analysis is a model of the kinetics of a pharmaceutical in the blood stream. These models treat any part of the physiological system that can be considered homogeneous as a compartment, and the system being modeled is segmented into a finite number of these compartments. The direction of flow of material between these compartments is determined and then modeled with differential equations. Unlike modeling, simulation attempts to reproduce the experimental data without trying to identify the mechanisms responsible for the experimental observations (58).
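As an illustration of the compartmental approach, a hypothetical two-compartment drug model (a central blood compartment exchanging with a peripheral tissue compartment, with first-order elimination from the blood) can be integrated with a simple Euler scheme; all rate constants below are invented for the example, not taken from the text:

```python
# Rate constants (1/h) for a hypothetical two-compartment drug model
k12, k21, ke = 0.5, 0.3, 0.2    # central->tissue, tissue->central, elimination
dt, hours = 0.01, 24.0

a1, a2 = 100.0, 0.0             # drug amounts: central (blood), peripheral (tissue)
history = []
for _ in range(int(hours / dt)):
    # Each compartment is assumed homogeneous; inter-compartment flows are first order
    da1 = (-k12 * a1 + k21 * a2 - ke * a1) * dt
    da2 = ( k12 * a1 - k21 * a2) * dt
    a1, a2 = a1 + da1, a2 + da2
    history.append((a1, a2))

print(f"after {hours:.0f} h: central = {a1:.2f}, tissue = {a2:.2f}")
```

The tissue amount rises and then falls as the drug redistributes and is eliminated, the characteristic biexponential behavior that compartmental fits are designed to capture.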

Closed-loop drug delivery (CLDD) systems represent a practical application of control (59). CLDD systems are used for therapeutic and diagnostic purposes. For example, an infusion pump administers a drug to the patient, the patient's response is sent to a monitor, and the monitor feeds the information to a controller, which determines the next infusion rate for the patient and adjusts the pump accordingly. The control laws typically applied to CLDDs are proportional-integral-derivative (PID), adaptive, and fuzzy control. Adaptive control is the most prevalent. In the clinical use of these systems, a supervisor is present to override control in case of unphysiological disturbances, such as a change in drug concentration.
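The pump-monitor-controller loop described above can be sketched with a discrete PID law driving a hypothetical first-order patient response; the gains, clearance rate, and setpoint are arbitrary illustration values, not clinical parameters:

```python
# Minimal discrete PID loop driving a hypothetical first-order patient response
kp, ki, kd = 2.0, 0.5, 0.1     # illustrative controller gains
setpoint = 5.0                 # target drug concentration (arbitrary units)
dt = 0.1

conc = 0.0                     # concentration fed back from the monitor
integral = 0.0
prev_err = setpoint - conc

for _ in range(2000):
    err = setpoint - conc
    integral += err * dt
    deriv = (err - prev_err) / dt
    # Pump cannot infuse a negative amount, so clamp the commanded rate at zero
    rate = max(0.0, kp * err + ki * integral + kd * deriv)
    conc += (rate - 0.5 * conc) * dt   # first-order plant: infusion in, clearance out
    prev_err = err

print(f"steady-state concentration ~ {conc:.2f}")
```

The integral term drives the steady-state error to zero; a clinical controller would add the supervisory override mentioned above, plus anti-windup and safety limits on the commanded rate.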

Precipitation Heat Treatment

Lee et al. (41) first established a linear relationship between the optimized critical current density and the volume of precipitate in a laboratory-scale monofilamentary composite fabricated from Nb-47 wt. % Ti alloy, as shown in Fig. 9. The relationship extended from 0% of the strand volume being precipitated (non-heat-treated) to 25 vol. %. Chernyj et al. (42) extended this relationship to an Nb-50 wt. % Ti alloy and a maximum volume percent of α-Ti of 28%. A wider study of strands produced by different manufacturers for the superconducting supercollider confirmed the linear relationship for industrial-scale strands (43). The importance of maximizing the amount of precipitate in the strand is unambiguous. Precipitate is produced in the Nb-Ti by heat treatments at 375°C to 420°C for a duration of typically 40 h to 80 h. Increasing the temperature increases the precipitation rate but increases the precipitate diameter and the low-field Jc to high-field Jc ratio (43). The amount of precipitation is also dependent on the alloy composition; the quantity of α-Ti produced by the first precipitation heat treatment increases strongly with Ti content (44), as shown in Fig. 10. This relationship shows how too low a Ti content in the Nb-Ti alloy can result in insufficient precipitation for high critical current density and how a large local variation in Ti content can lead to an inhomogeneous distribution of precipitates and, subsequently, flux-pinning sites.

After approximately 10 vol. % precipitate has been produced in the first heat treatment, it becomes very difficult to produce significantly more without excessively long heat treatment times. By applying additional cold-work strain to the microstructure, more precipitate is produced (as shown in Fig. 10 for the second heat treatment). An optimum balance between increased precipitate volume and minimum strain space is at a strain of approximately 1.2 (12). Three or more heat-treatment and strain cycles are normally required to produce the 20 vol. % or more precipitate in the microstructure required for high critical current densities (Jc > 3000 A/mm2 at 5 T and 4.2 K). As the α-Ti is precipitated, the composition of the β-Nb-Ti is depleted in Ti until it reaches between 36 wt. % Ti and 37 wt. % Ti, at which point there is insufficient Ti to drive further precipitation. More aggressive heat treatment is more likely to compromise the Nb diffusion barrier and coarsen the precipitate size.

After the final heat treatment, the microstructure viewed transverse to the drawing axis consists of a uniform distribution of roughly equiaxed α-Ti precipitates, 80 nm to 200 nm in diameter, in a matrix of equiaxed Nb-Ti grains of similar dimensions. Viewed in longitudinal cross section, the α-Ti and β-Nb-Ti grains are somewhat elongated along the drawing axis, with an aspect ratio of 4 to 15 depending on the processing history. Further cold-work strain is required to reduce the dimensions of the precipitates so that they can pin flux efficiently. During the α-Ti precipitation heat treatments, the β-Nb-Ti matrix has been depleted in Ti to a level of 37 wt. % Ti to 38 wt. % Ti, and the Hc2 and Tc of the composite at this point in processing are the same as the values of single-phase material of these lower Ti levels (5).