
MAGNET COOLING

Superconductor Critical Temperature

The superconducting state exists only at temperatures below the so-called critical temperature TC. For NbTi, TC can be estimated as a function of applied magnetic flux density B using

TC(B) = TC0 (1 − B/BC20)^(1/1.7)    (22)

where TC0 is the critical temperature at zero field (about 9.2 K) and BC20 is the upper critical magnetic flux density at zero temperature (about 14.5 T).
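Equation (22) can be evaluated numerically. The sketch below is a minimal illustration, assuming the standard NbTi fit with exponent 1/1.7 and the parameter values quoted in the text; the function name is ours.

```python
TC0 = 9.2    # K, critical temperature at zero field
BC20 = 14.5  # T, upper critical flux density at zero temperature

def critical_temperature(B, tc0=TC0, bc20=BC20):
    """NbTi fit: T_C(B) = T_C0 * (1 - B/B_C20)**(1/1.7), valid for 0 <= B < B_C20."""
    if not 0.0 <= B < bc20:
        raise ValueError("field must satisfy 0 <= B < BC20")
    return tc0 * (1.0 - B / bc20) ** (1.0 / 1.7)

# At zero field the formula returns TC0 itself:
print(critical_temperature(0.0))               # 9.2
# At a typical accelerator dipole field of 8 T, TC drops to about 5.7 K:
print(round(critical_temperature(8.0), 2))     # 5.74
```

This makes the design margin explicit: a magnet operating at 4.2 K and 8 T has only about 1.5 K of temperature margin before the conductor quenches.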

Boiling and Supercritical Helium Cooling. To achieve low temperatures and ensure stable operations against thermal disturbances, the accelerator magnet coils are immersed in liquid helium (70). Helium is a cryogenic fluid whose pressure-temperature phase diagram is presented in Fig. 12. Its boiling temperature is 4.22 K at 1 atm (1 atm ≈ 0.1 MPa).

Small superconducting magnet systems usually rely on boiling helium at 1 atm (71). Boiling helium offers the advantage that, as long as the two phases are present, the temperature is well determined. However, in large-scale applications, such as superconducting particle accelerators, the fluid is forced to flow through numerous magnet cryostats and long cryogenic lines, where heat leaks are unavoidable. The heat leaks result in increases in vapor content and create a risk of gas pocket formation that may block circulation.

The aforementioned difficulty can be circumvented by taking advantage of the fact that helium exhibits a critical point at a temperature of 5.2 K and a pressure of 0.226 MPa (see Fig. 12). For temperatures and pressures beyond the critical point, the liquid and vapor phases become indistinguishable. The single-phase fluid, which is called supercritical, can be handled in a large system without risk of forming gas pockets. However, its temperature, unlike that of boiling helium, is not constant and may fluctuate as the fluid circulates and is subjected to heat losses.

The cryogenic systems of the Tevatron, HERA, and RHIC, and that designed for the SSC, combine single-phase and two-phase helium (71). In the case of the Tevatron and HERA, the insides of the magnet cold masses are cooled by a forced flow of supercritical helium, while two-phase helium is circulated in a pipe running at the cold mass periphery (around the collared-coil assembly for Tevatron magnets, in a bypass hole in the iron yoke for HERA magnets). In the case of the SSC, it was planned to only circulate supercritical helium through the magnet cold masses, while recoolers, consisting of heat exchangers using two-phase helium as primary fluid, would have been implemented at regular intervals along the cryogenic lines. The cryogenic system used for the RHIC is inspired by that of the SSC. In all these schemes, the boiling liquid is used to limit temperature rises in the single-phase fluid.

Superfluid Helium Cooling

A peculiarity of helium is the occurrence of superfluidity (70). When boiling helium is cooled down at 1 atm, it stays liquid until a temperature of the order of 2.17 K, where a phase transition appears. For temperatures below 2.17 K (at 1 atm), helium loses its viscosity and becomes a superconductor of heat. This property, unique to helium, is called superfluidity. Superfluidity is very similar to superconductivity, except that, instead of the electrical conductivity, it is the thermal conductivity that becomes infinite. The transition temperature between the liquid and superfluid phases depends on pressure. It is called the lambda temperature Tλ.

Figure 12. Pressure-temperature phase diagram of helium (71).


The LHC magnets are cooled by superfluid helium, and their operating temperature is set at 1.9 K (72). Decreasing the temperature improves the current-carrying capability of NbTi dramatically and allows higher fields to be reached. (For NbTi, the curve of critical current density as a function of field is shifted by about +3 T when lowering the temperature from 4.2 K to 1.9 K.) The feasibility of a large-scale cryogenic installation relying on superfluid helium has been demonstrated by Tore Supra, a superconducting tokamak built at Commissariat a l'Energie Atomique/Cadarache near Aix en Provence in the South of France and operating reliably since 1988 (73).

Magnet Cryostat

To maintain the magnet cold masses at low temperature, it is necessary to limit heat losses. There are three mechanisms of heat transfer (74): (1) convection, (2) radiation, and (3) conduction. The convection losses are eliminated by mounting the cold masses into cryostats, which are evacuated (71,75). The radiation losses, which scale in proportion with the effective emissivities of the surfaces facing each other and with the fourth power of their temperatures, are reduced by surrounding the cold masses with blankets of multilayer insulation and thermal shields at intermediate temperatures. The main sources of conduction losses are the support posts, the power leads, and the cryogenic feedthroughs, which are designed to offer large thermal resistances.
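The fourth-power temperature scaling of the radiation losses can be illustrated with a short sketch. The area, effective emissivity, and temperatures below are illustrative assumptions, not values from the text:

```python
# Net radiative heat load onto a cold surface, Stefan-Boltzmann law.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiative_load(area_m2, eff_emissivity, t_warm, t_cold):
    """Net radiative heat flow (W) between surfaces at t_warm and t_cold."""
    return SIGMA * area_m2 * eff_emissivity * (t_warm**4 - t_cold**4)

# Hypothetical 10 m^2 cold mass at 4.2 K facing a 300 K vacuum vessel:
bare = radiative_load(10.0, 0.2, 300.0, 4.2)
# The same cold mass behind an 80 K thermal shield:
shielded = radiative_load(10.0, 0.2, 80.0, 4.2)
print(round(bare / shielded))  # ~200x reduction from the T^4 dependence
```

This is why intermediate-temperature shields are so effective: lowering the warm-side temperature from 300 K to 80 K cuts the radiative load by roughly two orders of magnitude before multilayer insulation is even considered.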


A missile control system consists of those components that control the missile airframe in such a way as to automatically provide an accurate, fast, and stable response to guidance commands throughout the flight envelope while rejecting uncertainties due to changing parameters, unmodeled dynamics, and outside disturbances. In other words, a missile control system performs the same functions as a human pilot in a piloted aircraft; hence, the name autopilot is used to represent the pilotlike functions of a missile control system. Missile control and missile guidance are closely tied, and for the purposes of explanation, a somewhat artificial distinction between the two roles is now made. It must be remembered, however, that for a guided missile the boundary between guidance and control is far from sharp. This is due to the common equipment and the basic functional and operational interactions that the two systems share. The purpose of a missile guidance system is to determine the trajectory, relative to a reference frame, that the missile should follow. The control system regulates the dynamic motion of the missile; that is, the orientation of its velocity vector. In general terms, the purpose of a guidance system is to detect a target, estimate missile-target relative motion, and pass appropriate instructions to the control system in an attempt to drive the missile toward interception. The control system regulates the motion of the missile so that the maneuvers produced by the guidance system are followed, thereby making the missile hit or come as close as required to the target. The autopilot is the point at which the aerodynamics and dynamics of the airframe (or body of the missile) interact with the guidance system. Instructions received from the guidance system are translated into appropriate instructions for action by the control devices (e.g., aerodynamic control surfaces, thrust vectoring, or lateral thrusters) that regulate the missile's flight path. A block diagram describing these missile control system operations is depicted in Fig. 1, where the function of each component is further explained as follows.

J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.

Figure 1. A block diagram describing the functional relations among the components of the missile control system.


Many anatomical and physiological parameters of a muscle are not accessible and cannot be measured directly. However, they are accessible in a model simulating EMG signals and EMG variables and may be changed until the simulated observable EMG variables and parameters match the experimental ones. When the matching is obtained, it is likely that the parameters of the model have values similar to those that cannot be measured directly from the real system. This conclusion must always be taken with caution since (a) a model always implies approximations and simplifications that may affect the results and (b) there may be more than one set of model parameters that provide a good fit of the experimental data. Motor unit action potential models have been developed by many researchers, among which are P. Rosenfalck (41) and N. Dimitrova (42). The model described in Fig. 14(a) is based on the work of Gootzen et al. (43) and has been used to investigate and explain some experimental findings. An application example is provided by Fig. 14(b), which shows 10 firings of the same MU of a healthy biceps brachii during a low-level voluntary contraction. The signals are detected bipolarly from a 16-contact linear array.
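The match-the-model idea described above can be sketched as a one-parameter search: vary a model parameter until the simulated variable best fits the observed one. The forward model here is a deliberately trivial stand-in (amplitude decaying with depth), not the Gootzen model, and all numbers are hypothetical:

```python
def simulate_amplitude(depth_mm):
    # Toy forward model: surface EMG amplitude decays with motor-unit depth.
    return 1.0 / (1.0 + depth_mm) ** 2

# Hypothetical experimentally observed (normalized) amplitude:
experimental_amplitude = 0.04

# Grid search over candidate depths, 0.1 mm to 19.9 mm in 0.1 mm steps,
# keeping the depth whose simulated amplitude is closest to the observation.
best = min(
    (d / 10.0 for d in range(1, 200)),
    key=lambda d: abs(simulate_amplitude(d) - experimental_amplitude),
)
print(best)  # 4.0 (mm): the depth estimate that best fits the observation
```

Caveats (a) and (b) from the text apply directly here: a different forward model, or a flat error surface, could make another depth fit the data equally well.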



Figure 13. Initial 300 ms of EMG obtained from bipolar differential measurement with elec­trodes over biceps and triceps during elbow flexion. (a) Four 300 ms records and (b) the ensemble average of sixty 300 ms records demonstrating the deterministic component of the initial phase [from Hudgins et al. (35)].

The 10 firings are selected during a time interval of 1.5 s, are aligned and superimposed, and are similar enough to justify the assumption that they belong to the same MU. The results of the simulation (open circles) are superimposed, and the indicated model parameters provide an estimate for anatomical features of the MU, conduction velocity, and anisotropy of the tissue. Future research might lead to the development of systems for the automatic identification of the most likely set of parameters for individual MUs and make them available to the neurologist for diagnostic evaluation.

[Figure 14(b) plot annotations: anisotropic conductive medium with a_y = a_x but a_y ≠ a_z; dots, experimental data; circles, simulation. Model parameters: e = 10 mm, anisotropy ratio = 6, b = 7 mm, a/b = 1/3, h = 4.5 mm, R = 2 mm, WI = 5 mm, WTR = 20 mm, WTL = 10 mm, LR = 56 mm, LL = 73 mm; conduction velocity CV in m/s; horizontal axis: time, 0 to 25 ms.]


Figure 14. (a) Model for the simulation of surface EMG signals and of their variables. Schematic structure of the model of a single motor unit. The motor unit has N fibers uniformly distributed in a cylinder of radius R at depth h. The axis of this cylinder may present an angle with respect to the skin plane and with respect to the z axis. The neuromuscular junctions are uniformly distributed in a region WI, and the fiber-tendon terminations are uniformly distributed in two regions WTR and WTL. A right and a left current tripole originate from each neuromuscular junction and propagate to the fiber-tendon termination, where they become extinguished. The conduction velocity is the same in both directions and for all fibers of a motor unit but may be different in different motor units. Each of the voltages VA, VB, VC, and VD is the summation of the contributions of each tripole. (b) Example of simulation of 10 superimposed firings of a motor unit detected during a low-level contraction of a healthy biceps brachii muscle with a linear 16-contact array. Pair 14 is proximal, pair 0 is distal.

Signal Losses in a Radio Receiver

Due to the inherent complexity of a radio receiver, a number of signal loss sources are associated with the receiver design. Bandlimiting loss in the transmitter and receiver is the result of finite bandwidth usage, usually restricted by radio regulations. Intersymbol interference, in which received digital symbols overlap with each other, may result from several effects: bandlimited operation, physical multipath radio propagation, and the use of partial-response modulation formats. Local oscillator phase noise results in phase jitter in the receiver and consequently degrades the performance of the detection algorithm. Other impairments of the radio front-end also result in loss of receiver performance, for example DC offset and gain and phase imbalance of the in-phase and quadrature branches. Nonlinear distortions in radio equipment include AM/AM and AM/PM conversion, limiter loss, and intermodulation distortion. AM/PM conversion produces phase variations in the signal, mostly after nonlinear amplification. Signal sidelobe growth is another effect that can produce excessive interference despite tight filtering at the transmitter. Hard limiting stages in the receiver may cause suppression of weak signal components in the presence of stronger ones. Intermodulation products result from the interaction of multiple signals in a nonlinear device and create an additional noise source that contributes to the total noise level. Receiver operation can also be largely affected by several imperfections associated with the demodulation part, such as imperfections of the synchronization circuits, which produce noisy estimates of the received signal parameters necessary for the detection process, and finite numeric precision effects associated with programmable or fixed logic implementation of the demodulation functions.
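A common way to budget the impairments listed above is to assign each one an implementation loss in decibels and sum them. The sketch below does this bookkeeping; every individual loss value is a hypothetical placeholder, not a figure from the text:

```python
# Hypothetical per-impairment implementation losses, in dB.
losses_db = {
    "bandlimiting": 0.5,
    "intersymbol_interference": 0.8,
    "phase_noise": 0.3,
    "iq_imbalance": 0.2,
    "nonlinear_distortion": 0.6,
    "synchronization": 0.4,
    "finite_precision": 0.1,
}

# Losses expressed in dB add directly.
total_db = sum(losses_db.values())
# Equivalent multiplicative SNR degradation factor.
total_linear = 10 ** (total_db / 10)
print(round(total_db, 1), round(total_linear, 2))  # 2.9 dB -> factor ~1.95
```

The same table is useful the other way around: a link budget that allocates, say, 3 dB of implementation loss tells the designer how much each impairment source is allowed to contribute.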

Photovoltaics versus rectenna technologies

When electromagnetic waves were first experimentally observed, they were generated using antennas and radiating elements. Along with the development of radio emission, antenna design became a separate area of expertise, where the geometry of those elements configured the characteristics and capabilities of emission and reception of EM waves. The shape and orientation of those antennas determine the polarization and direction of the emission and reception. The electromagnetic spectrum was mastered and used in science and technology. Fortunately, the wavelengths associated with the radioelectric and microwave spectra allowed the manufacturing of radiating elements with the available fabrication tools. When increasing the frequency of the electromagnetic radiation, the geometries were shrunk accordingly and new fabrication strategies were used. Indeed, an important leap in antenna design and fabrication appeared with planar antennas written on flat substrates by microlithography techniques. Millimeter waves and terahertz still benefit from those fabrication techniques. However, when the optical domain was set as a feasible goal for antenna design, the use of electron-beam lithography, focused ion beam, and related nanometric-precision manufacturing tools became necessary. Moreover, those metals traditionally used as materials for antenna fabrication appeared to behave as non-perfect conductors, showing spectral dispersion and a non-negligible penetration depth.

At the same time that antennas were clearly devoted to the emission and detection of EM waves in the radioelectric and microwave regimes, the light and optical spectrum was covered by other reliable technologies for emission (incandescence lamps, spectral lamps, lasers, etc.) and detection (Golay cells, thermoconductors, photovoltaics, etc.), the detectors mainly based on the quantized energy levels of semiconductors. Photodetectors then improved their performance in responsivity, signal-to-noise ratio, cut-off frequency, size, and biasing requirements.

It is then easy to understand why antennas did not find a suitable place to develop as optical detectors: semiconductor detectors were here to stay, fabrication of optical antennas is difficult and requires high-tech machinery, and metals are no longer perfect conductors in the optical regime.

However, some advances were made in using antenna-coupled detectors for the detection of light at higher and higher frequencies, and in their use as frequency mixers or coupled to bolometric devices. Besides, nanoscience has found optical antennas to be promising elements to explore materials and media with high spatial resolution. Plasmonic optics has become an emerging field, where the collective oscillation of charges produces exotic phenomenologies that are used for sensing and probing sub-wavelength structures.

Several reports and papers [4,5,6,7] have been published in the past years presenting optical antennas and rectennas as harvesters of electromagnetic radiation in the infrared and visible spectrum. They are based on the principle of rectification of the currents generated in an antenna structure that resonates at visible frequencies. The idea, although appealing, has been somewhat over-estimated when promising efficiencies above 80%. However, as we will see in this chapter, some important problems need to be addressed before an operative device can be fabricated. Unfortunately, the task of rectifying electric fields oscillating at 10^14 to 10^15 Hz is formidable, and the efficiency figures obtained so far are well below the announced limit. The bottleneck of the technology remains the rectification process. At the same time, some important advances have been made to tailor the impedance of optical antennas to properly couple the electromagnetic field and to transfer the power to the load, i.e., to the rectifier. Optical rectennas can therefore be considered a promising technology with high potential. Based on the current results, more effort needs to be allocated to leap over the rectifying mechanism with novel technologies.

Although it is limited to the solar region of the electromagnetic spectrum, the most mature and standard technology (developed since the mid-1970s) to harvest energy from EM radiation is photovoltaics (PV). According to the National Renewable Energy Laboratory (NREL), the conversion efficiency of PV technologies has steadily improved during the last 40 years (figure 2). From the simplest variant of the 1st generation, represented by silicon-based cells, to the 2nd and 3rd generations, corresponding to thin-film and the most sophisticated multijunction cells respectively, a trade-off between efficiency and production cost defines the market of each variant (table 5).

PV technology      Efficiency (%)      Market share (%)
1st generation
2nd generation
3rd generation

Table 5. Efficiency versus market of the 3 different PV technology generations. (The numerical entries are not reproduced in the source.)

The basic element in PV technology is the photovoltaic (solar) panel or module, which is composed of photovoltaic cells connected in parallel, when the photogenerated current must be enhanced, or in series, when the output voltage is the parameter that needs to be maximized (see chapter 10: "Electronics for Power and Energy Management").

The working principle of a photovoltaic cell is based on the photovoltaic effect, first described by Alexandre-Edmond Becquerel in 1839. As reviewed in chapter 3 about Solar Energy Harvesting, the photovoltaic effect has the same quantum nature as the photoelectric effect, so both can only be described by considering that the energy of the electromagnetic radiation is quantized in quanta called photons, with an energy hν, as explained before (equation 2). As shown in figure 3.a, the photovoltaic effect takes place at the core of the cell, at the junction of the two semiconductors that form a typical PV cell. When an individual photon interacts with an individual electron in the valence band of the semiconductor, the energy of the photon (and the photon itself) can be absorbed by the electron, which is promoted to the conduction band, leaving a hole in the valence band. This process, called photo-generation of an electron-hole pair, is only possible if the photon energy is at least equal to the energy of the band gap (the energy distance between the conduction and valence bands). The populations of photogenerated electrons and holes are then driven by the electric field in the depletion zone of the PN junction and can eventually contribute to a photovoltage and the corresponding photocurrent when an electric load is connected to the PV cell. In this case, both the photovoltage and the photocurrent are dc magnitudes, and their product directly gives the electrical power converted by the PV cell.
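The photo-generation condition described above (photon energy hν at least equal to the band gap) can be checked numerically. The band-gap value for silicon is a standard figure; the function names are ours:

```python
H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # J per electron-volt

def photon_energy_ev(wavelength_m):
    """Photon energy h*nu = h*c/lambda, expressed in eV."""
    return H * C / (wavelength_m * EV)

def can_photogenerate(wavelength_m, band_gap_ev):
    """True if the photon can create an electron-hole pair."""
    return photon_energy_ev(wavelength_m) >= band_gap_ev

# Silicon band gap ~1.12 eV: a 600 nm visible photon is absorbed,
# but a 1300 nm infrared photon is not.
print(can_photogenerate(600e-9, 1.12))   # True
print(can_photogenerate(1300e-9, 1.12))  # False
```

This threshold is the fundamental spectral limitation of PV harvesting: photons below the band gap pass through the cell unused, while the excess energy of photons well above the gap is lost as heat.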


Figure 2. Efficiency evolution of the main photovoltaic technologies (from National Renewable Energy Laboratory).

Instead, radiofrequency rectenna (RFR) technology is based on the combined operation of two basic elements: an electrical rectifier that follows an electromagnetic antenna (hence, rectenna). The operation principle (figure 3.b) does not require quantum mechanics to be explained since, in this case, electrons in the metallic antenna are already in the conduction band and do not need to be promoted in energy by absorbing photons from the electromagnetic radiation. The phenomenology is better explained by the interaction between the electrons in the antenna and the electric field of an incident electromagnetic wave. Similarly to PV technology, matching conditions must also be satisfied in rectenna technology. Here, the characteristic length of the antenna has to match the wavelength of the incident EM wave in order to induce a resonant electrical current in the antenna. As opposed to a PV cell, an antenna generates at its output both an ac voltage and an ac current. For this reason, a rectifier is needed as the first basic electrical component to transform ac values into dc values.
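The wavelength-matching condition above can be made concrete with a sketch, assuming the common half-wave dipole resonance L = λ/2 (an assumption of this example; the text does not fix the resonance order):

```python
C = 2.998e8  # speed of light in vacuum, m/s

def halfwave_dipole_length(frequency_hz):
    """Physical length of an ideal half-wave dipole resonant at frequency f."""
    return C / frequency_hz / 2.0

# 2.45 GHz, a typical microwave rectenna band: a ~6 cm dipole.
print(round(halfwave_dipole_length(2.45e9) * 100, 1), "cm")   # 6.1 cm
# 500 THz, visible light: the same condition gives a ~300 nm nanoantenna.
print(round(halfwave_dipole_length(500e12) * 1e9), "nm")      # 300 nm
```

The five-orders-of-magnitude jump between the two answers is exactly why optical rectennas require nanofabrication, as the following paragraphs discuss.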

Optical rectenna (OR) technology can be considered a particular case of rectenna technology where the frequency of the electromagnetic radiation involved is in the optical range. From this point of view, RFR technology covers the radiofrequency part of the electromagnetic spectrum and OR the optical part (figure 4). However, as will be described in a later section of this chapter, OR technology cannot be considered just an extrapolation of the RF rectenna concept to the optical range, since neither the antenna element, in this case a nanoantenna, nor the rectifier, typically a metal-insulator-metal (MIM) diode, has exactly the same properties as its RF counterpart. New physics such as plasmon resonances has to be taken into account in the optical antenna (OA), an antenna with characteristic lengths in the nanometer range (nanoantenna) to match the wavelengths of light radiation. Also, special structures and materials are needed to achieve response times short enough to rectify signals in the THz range, which are induced in the nanoantenna element by the incident optical radiation.

When used as light detectors, optical antennas involving rectifiers perform quite well in several specifications, especially those related to their intrinsic electromagnetic nature. Table 6 shows these figures for a few technologies working in the visible and the infrared. We may already see in this table that the responsivity of optical antennas is lower than that of the other technologies. This figure is in accordance with the low efficiency of rectennas observed in actual experiments involving MIM, or metal-insulator-insulator-metal (MIIM), junctions as transducers. Summarizing this table, we may say that optical antennas are point detectors, very fast, work at room temperature, can be integrated with other elements and devices (for example with focusing optics), and present broad tuneability and a remarkable selectivity in direction and polarization.

[Table 6 entries, as recoverable from the source: detector technologies include visible CCD/CMOS and MIM junctions; detector areas of order 10^1-10^2 λ², 10^1-10^2 λ², and 10^-2-10^0 λ²; responsivities of 10^3-10^4 V/W, 0.7-0.9 A/W, 0.7 A/W, 10^3-10^4 V/W, 10^3-10^4 V/W, and 0.1 V/W; time responses of 100 ns, 10 ps, 9 ps, 400 μs, 400 μs, and 1 ps. The full column headers are not reproduced in the source.]
Table 6. Four different photodetection mechanisms are compared with optical antennas technology.

Nowadays, space in urban areas, including work and home environments, is strongly packed with EM radioelectric waves at various bands and spectral regions: besides the ubiquitous presence of radio and TV bands, cell phones and personal communication devices, a myriad of Wi-Fi stations, Bluetooth gadgets, and remote emitters and detectors produce a non-negligible amount of EM energy flowing around us. From a harvesting point of view, this energy could be recycled and properly used by electronic systems with ultra-low power requirements. This strategy may work in those environments with strong RF signals, where the signal-to-noise ratio of other operative elements is not compromised. This idea of RF and microwave recycling was developed some time ago in the form of antenna arrays and half- or full-wave rectifiers. Optical rectennas can be seen as an evolution and transposition of those designs and devices already working in the microwave region. In this band, some designs have demonstrated more than 75% efficiency when used for power transmission [8]. These figures are reduced when considering broad-band antennas designed to recycle microwave energy from the ambient background.
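A rough sense of how much ambient RF power is actually available to a harvester can be obtained from the Friis transmission equation. All the scenario numbers below (transmit power, gains, distance) are hypothetical illustrations, not figures from the text:

```python
import math

def friis_received_power(p_tx_w, g_tx, g_rx, freq_hz, dist_m):
    """Free-space received power: P_rx = P_tx * G_tx * G_rx * (lambda / (4*pi*d))^2."""
    wavelength = 2.998e8 / freq_hz
    return p_tx_w * g_tx * g_rx * (wavelength / (4 * math.pi * dist_m)) ** 2

# Hypothetical scenario: a 1 W Wi-Fi access point at 2.45 GHz,
# 10 m away, with unity-gain antennas on both ends.
p_rx = friis_received_power(1.0, 1.0, 1.0, 2.45e9, 10.0)
print(f"{p_rx * 1e6:.2f} microwatts")
```

The microwatt-scale answer explains the text's qualification: ambient RF recycling is realistic only for ultra-low-power loads, and only near strong emitters.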

Unfortunately, so far the efficiency figures obtained at those frequencies have not been replicated at infrared or visible frequencies. The reasons derive mostly from the inherent behaviour of materials as frequency increases. Besides the difficulty of designing THz electronics and oscillators, metals begin to behave as dispersive materials, and the currents built on their surfaces penetrate into the structure.

In order to place the reader in a position to make an educated guess on the different technologies, we present here a brief comparison among photovoltaics, radiofrequency rectifiers, and optical rectennas (optical antennas coupled to rectifiers).

Photovoltaics: Direct conversion of light into electric power using the photovoltaic effect exhibited by semiconductor materials.

• Efficiency: The theoretical limit is around 41% for single junction solar cells, and reaches 87% for multiple junctions.

• Pros: Well established and mature technology. Fabrication issues have been solved due to the intrinsic relation with semiconductor technology.

• Cons: The performance is strongly dependent on temperature, especially for multiple junction cells.

Radiofrequency Rectifiers: Direct conversion of electromagnetic radiation into electricity using a rectifier working at radio or microwave frequencies.

• Efficiency: The limit is set around 85%. Practical devices have been demonstrated with an efficiency larger than 75%.

• Pros: Well known basic mechanism of rectification. Fabrication can be made using standard photolithography on dielectric substrates.

• Cons: Polarization and spectral selectivity.

Optical Rectennas: Direct conversion of light into electricity using rectifiers working at optical frequencies.

• Efficiency: The theoretical limit is around 85%.

• Pros: Antenna theory and its scaling to optical frequencies are known, and antenna-coupled detectors have been demonstrated in the infrared and the visible. Minimum size of about λ², allowing very high packing density. No dependence on temperature. Metals are used for fabrication, with some advances in the use of conducting graphene.

• Cons: The efficiency of working devices is well below 85%. Barrier rectifiers are not able to follow optical frequencies and behave as square-law rectifiers. Further advances are needed to obtain feasible rectifying mechanisms. Nano-fabrication technologies are necessary (nanoimprint could solve large-scale fabrication numbers).