
Computer Graphics

Intimately related to the issues of image processing are the techniques by which medical and biological images are displayed with enough realism to achieve the intended results but with enough efficiency to be used in actual clinical situations. Algorithms and programs for accurately portraying anatomy and, to some extent, function have improved steadily, sometimes exceeding the ability of the hardware to meet the demands. Fortunately, the well-known advances in performance and cost of advanced graphics hardware, including general-purpose computers as well as special-purpose graphics processors, have provided the platforms necessary for implementation of state-of-the-art graphics techniques.

The display of two-dimensional images is, in principle, straightforward on a computer output screen with multiple colors or gray levels per pixel. The display programs provide an interface between the user, the image, and the graphics hardware and software of the computer so that one pixel of the image is translated to one pixel of the video screen. Complications arise when there is a mismatch between the image and the screen, so that image pixels must be removed or display pixels must be interpolated. A further complication for the developer of either two- or three-dimensional graphics software is the plethora of data file formats that exist (43). Fortunately, many public domain or proprietary software packages provide excellent format conversion tools, but some experimentation is frequently required to use them properly.
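As a concrete illustration of the interpolation problem, the following Python sketch (not from the source; the function name and parameters are invented for illustration) maps a gray-level image onto a mismatched display grid with bilinear interpolation, assuming the image is held in a NumPy array.

```python
import numpy as np

def resample_bilinear(img, out_h, out_w):
    """Fit an (H, W) gray-level image onto an out_h x out_w display grid.

    Each display pixel is interpolated from the four nearest image
    pixels; shrinking and enlarging are handled by the same mapping.
    """
    h, w = img.shape
    ys = np.linspace(0.0, h - 1.0, out_h)
    xs = np.linspace(0.0, w - 1.0, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    fy = (ys - y0)[:, None]
    fx = (xs - x0)[None, :]
    top = (1 - fx) * img[np.ix_(y0, x0)] + fx * img[np.ix_(y0, x1)]
    bottom = (1 - fx) * img[np.ix_(y1, x0)] + fx * img[np.ix_(y1, x1)]
    return (1 - fy) * top + fy * bottom

# A 256 x 256 image mapped onto a 480 x 512 display region:
display = resample_bilinear(np.random.rand(256, 256), 480, 512)
```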

The development of methods for efficient and realistic rendering of three-dimensional images continues to be an area of ongoing research. Early work reduced anatomic structures to wire frame models (44), and that technique is still sometimes used for previewing and rapid manipulation on hardware that is not sufficiently powerful to handle full images in real time or near real time. Several methods require the identification of surfaces through image segmentation, as described above. The surfaces can be triangulated and displayed as essentially two-dimensional structures in three dimensions (45). After initial processing, this is a rather efficient display method, but much of the three-dimensional information is lost. Alternatively, the image can be reduced to a series of volumetric structures that can be rendered by hardware specialized for their reproduction (46). One of the most realistic, but computationally expensive, three-dimensional rendering methods is ray tracing, in which an imaginary ray of light is sent through the structures and is attenuated by the opacity of the anatomic structures that it encounters along the way (47). Different effects can be emphasized by modifying the dynamic range of the pixels in the image, that is, by changing the relationship between the opacity of the image and the pixel value to be displayed on the screen.
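The opacity-based accumulation at the heart of this kind of ray tracing can be sketched compactly. The fragment below is a minimal illustration, not a production renderer; the transfer function, which stands in for the opacity/pixel-value relationship described above, is an assumed ingredient.

```python
def composite_ray(samples, transfer):
    """Accumulate brightness front-to-back along one ray through a volume.

    samples  : voxel values the ray encounters, nearest first
    transfer : function mapping a voxel value to (opacity, brightness)
    """
    brightness, transmittance = 0.0, 1.0
    for v in samples:
        alpha, emit = transfer(v)
        brightness += transmittance * alpha * emit
        transmittance *= 1.0 - alpha
        if transmittance < 1e-3:          # ray is effectively opaque
            break
    return brightness

# Example transfer function: soft tissue dim, bone bright and opaque.
demo = composite_ray(
    [80, 90, 400, 1200],
    lambda v: (min(v / 1500, 1.0), min(v / 1500, 1.0)),
)
```

Changing the transfer function is exactly the dynamic-range adjustment mentioned above: a steeper opacity ramp emphasizes dense structures, while a shallow one lets interior detail show through.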

Figure 6. A composite of eight magnetic resonance and isochronal surface images from the second activation wavefront after an unsuccessful defibrillation shock. The electrical data were acquired from about 60 plunge needles with endocardial and epicardial electrodes inserted through the left and right ventricles of the heart of an experimental animal. Successive isochrones (left to right, top to bottom) are shown at 6 ms intervals. Visualization techniques that allow the superposition of function and anatomy are very helpful in understanding the relationships between variables and how they affect physiological mechanisms, and they can potentially lead to improved diagnosis and therapy. Reprinted from Ref. 49, with permission. Copyright CRC Press, Boca Raton, FL.

Medical computer graphics are at their most useful when it is possible to superimpose images from more than one modality into a single display or to superimpose functional information acquired from biochemical, electrical, thermal, or other devices onto anatomical renderings. As an example of the former, images from positron emission tomography (PET) scans, which reflect metabolic activity, can be displayed on anatomy acquired by magnetic resonance imaging. The combination provides a powerful correlation between structure and function, but the technical challenges of registering images from two different devices or taken at different times are significant (48). An example of the combination of functional and anatomic data is the superposition of electrical activity, either intrinsic or externally applied, of the heart onto realistic cardiac anatomy. This kind of technique can provide new insights into the mechanisms and therapy of cardiac arrhythmias (49). Figure 6 is a sequence of still frames from a video showing the progression of a wavefront of electrical activation across a three-dimensional cardiac left ventricle after an unsuccessful defibrillation shock.
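A minimal sketch of such a superposition follows, assuming the two images have already been registered onto a common grid (the technically difficult step noted above); the function name and blending weight are illustrative choices, not a prescribed method.

```python
import numpy as np

def superimpose(anatomy, function, alpha=0.4):
    """Alpha-blend a registered functional image onto an anatomic one.

    Both arrays must already share the same grid; registration across
    devices or acquisition times is the difficult step in practice.
    """
    def normalize(a):
        a = a.astype(float)
        span = a.max() - a.min()
        return (a - a.min()) / span if span else np.zeros_like(a)

    return (1 - alpha) * normalize(anatomy) + alpha * normalize(function)
```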

Computer graphics and image processing, along with advanced imaging technologies, are making a significant impact in medical knowledge and practice and have the potential for many more applications. A combination of traditional CAD/CAM visualization and advanced imaging can be used for effective assessment of quality of fit of orthopedic prostheses (50). Capabilities and functionality have increased dramatically with the advent of advanced graphics hardware and commercial software packages aimed at scientists and clinicians who are not graphics experts. Full realization of the benefits of these systems will require further advances in these areas, along with adaptation to the needs of clinicians and the constraints of the changing health care climate (51).

SUPERCONDUCTING MAGNETS FOR FUSION REACTORS

The magnetic confinement of plasma is the most promising option for using controlled nuclear fusion as a power source for future generations. A number of different magnetic field configurations have been proposed to achieve plasma ignition, all requiring high field strength over a large volume. Most of the experimental machines use conventional copper windings operated in pulsed mode to investigate the plasma physics. Advanced plasma experiments, as well as future fusion reactors, call for long confinement times and high magnetic fields, which can reasonably be maintained only by superconducting coils.

Unlike other applications of superconductivity, fusion magnets have no "normal conducting" alternative: whenever a magnetic confinement fusion power plant operates, it will have superconducting windings. For this reason, fusion magnets are an important, long-term factor in the market for superconducting technology. Today, for NbTi-based conductors, fusion is a nonnegligible share of the market, with over 50 t of strand recently used for the LHD and about 40 t committed for W7-X. For Nb3Sn technology, two large devices, the T-15 tokamak and the ITER model coils, have used most of the conductor ever produced (about 25 t of strand each), providing the driving input for the development of high-performance Nb3Sn strands.


Table 1. Superconducting Magnet Systems of Fusion Devices

Device                     Strand       Conductor/              Stored        Peak        Operating
                           Weight (t)   Cooling (a)             Energy (MJ)   Field (T)   Current (kA)
Tokamak T-7                1            NbTi/FF                 20            5           6
Tokamak T-15               25           Nb3Sn/FF                795           9.3 (c)     5.6 (c)
MFTF (all coils)           74           Nb3Sn+NbTi/pool         1 000         2-12.75     1.5-5.9
TRIAM                      2            Nb3Sn/pool              76            11          6.2
Tore Supra                 43           NbTi/pool 1.8 K         600           9           1.4
LHD-Helical (2 coils (b))  10           NbTi/pool 4.5 (1.8) K   930 (1 650)   6.9 (9.2)   13 (17.3)
LHD-Poloidal (6 coils)     43           NbTi/FF                 1 980         5-6.5       20.8-31.25
Wendelstein 7-X            37           NbTi/FF                 600           6           16

(a) FF = forced flow.
(b) Operation in superfluid helium is planned at a later stage.
(c) Design values, achieved in single-coil test.

The first use of superconducting coils in experimental fusion devices dates back to the mid-1970s. In the last twenty-five years, six sizable devices for magnetic plasma confinement have been built with superconducting coils (see Table 1): T-7 and T-15 in the former Soviet Union, MFTF in the United States, TRIAM and LHD in Japan, and Tore Supra in France. In Germany, Wendelstein 7-X is under construction. Moreover, a number of developmental and prototype coils have been tested within the scope of large international collaborations (the Large Coil Task, the demonstration poloidal coils, and the ITER model coils).

The operating requirements for fusion magnets may vary over a broad range, depending on the kind of confinement and the size of the device (1), for example, from medium-field, pure dc operation in the helical coils of stellarators to high-field, fast-ramp operation in the central solenoid of tokamaks. There is no general recipe for magnet design, but a few common issues can be identified. Long-term reliability calls for conservative component design and generous operating margins. Maintenance by remote handling in a nuclear environment imposes strong restrictions on both repair and replacement of individual parts. Safety regulations are also a major issue for superconducting magnets in a fusion reactor: the design must account for any likely or less likely failure mode of the coil system and ensure that it will not turn into a nuclear-grade accident. Last but not least, the cost of the magnets, which is a large fraction of the reactor cost, must be contained for fusion to be commercially competitive with other power sources.

Only low-temperature superconductors have been considered to date for use in fusion magnets, at field amplitudes up to 13 T. A substantially higher field, which would make the use of high-temperature superconductors attractive, is not likely to be proposed, because the electromagnetic loads, roughly proportional to the product of field, current, and radius, already set a practical limit for structural materials. It may sound surprising that the actual superconducting material cross-section is mostly smaller than 5% of the overall coil cross-section. The choice between NbTi and Nb3Sn conductors is dictated by the operating field. The upper critical field of NbTi conductors is ≈10 T at 4.5 K and ≈13 T at 1.8 K. According to the design current density and the temperature margin, the operating field is set at least 3 T to 4 T below the upper critical field. In conservatively designed fusion magnets, the peak field for NbTi conductors is up to ≈9 T for coils cooled by a superfluid helium bath (e.g., Tore Supra and the LHD helical coils) and up to ≈6 T for supercritical helium forced flow (e.g., W7-X and the LHD poloidal coils). At higher operating fields, the choice of Nb3Sn conductors is mandatory to obtain adequate temperature margins and high current density. The increasing confidence in Nb3Sn technology, as well as its slowly decreasing cost, tends to move down the field threshold for the choice of NbTi versus Nb3Sn. Conductors based on Nb3Al are in a developmental stage and may become an alternative to Nb3Sn for selected high-field magnets (e.g., the D-shaped toroidal field coils) because of their better tolerance to bending strain.
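The field-driven selection described above can be restated as a toy decision procedure. The sketch below simply encodes the numbers quoted in the text (NbTi upper critical field of about 10 T at 4.5 K and 13 T at 1.8 K, operating field 3 T to 4 T below it); it is an illustration, not a design code, and the choice of the conservative 4 T margin is an assumption.

```python
def pick_conductor(peak_field_T, bath_K=4.5):
    """Illustrative NbTi vs. Nb3Sn selection from the margins quoted above.

    NbTi upper critical field: ~10 T at 4.5 K, ~13 T at 1.8 K; the
    operating field is kept 3 T to 4 T below it (4 T taken here).
    """
    b_c2_nbti = 13.0 if bath_K <= 1.8 else 10.0
    margin = 4.0                      # conservative end of the 3-4 T rule
    return "NbTi" if peak_field_T <= b_c2_nbti - margin else "Nb3Sn"

print(pick_conductor(9.0, bath_K=1.8))   # superfluid-bath coil at 9 T -> NbTi
print(pick_conductor(6.0))               # forced-flow coil at 6 T -> NbTi
print(pick_conductor(11.0))              # high-field coil -> Nb3Sn
```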

The winding packs may be either potted in epoxy resin or laid out as a spaced matrix of noninsulated conductors in a liquid helium bath. The latter option offers the advantages of constant operating temperature and potentially high stability due to the bath-cooled conductor surface. The drawbacks are the poor stiffness of the winding and the limited operating voltages (the insulation relies on the helium as dielectric). Potted coils with forced-flow conductors have superior mechanical performance and may operate at higher voltage: as a rule of thumb, they become mandatory for stored energies in excess of 1 GJ to 2 GJ. Cable-in-conduit conductors have become, in the last decade, the most popular option for forced-flow conductors because of their potentially low ac loss and the good heat exchange due to the large wetted surface. To withstand the mechanical and electromagnetic loads, the coils are fitted in thick-walled steel cases, either welded or bolted. More structural material may be added, if necessary, both in the conductor cross-section and in winding substructures, for example, plates and cowound strips.

The magnetic stored energy is very large, up to 130 GJ for the proposed magnet system of ITER. In case of a quench (a local transition from the superconducting to the normal state), the stored energy must be dumped into an outer resistor to avoid overheating and damage of the winding. A large operating current is needed to reduce the number of turns, that is, the winding inductance, and to extract the stored energy quickly at a moderately high voltage (up to 10 kV to 20 kV). The operating current density in the superconducting cross-section (NbTi or Nb3Sn filaments), Jop, is selected according to the specific design criteria to be a fraction of the critical current density, Jc, at the highest operating field. Typically, Jop is in the range of 200 A/mm2 to 700 A/mm2, and Jop/Jc = 0.3 to 0.6. The current density averaged over the coil cross-section is more than one order of magnitude smaller.
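These quantities are tied together by the elementary relations E = (1/2)LI^2, V = IR, and tau = L/R for the dump circuit. The short sketch below, with illustrative inputs loosely inspired by the W7-X entries in Table 1, shows why a large operating current keeps the dump voltage moderate; the function name and numbers are assumptions for illustration.

```python
def dump_parameters(stored_energy_J, current_A, max_voltage_V):
    """Size the external dump resistor for quench protection.

    E = 0.5 * L * I**2 fixes the coil inductance; the resistor is
    chosen so the initial voltage I * R stays at the allowed maximum.
    """
    L = 2.0 * stored_energy_J / current_A**2
    R = max_voltage_V / current_A
    tau = L / R                       # exponential current-decay constant
    return L, R, tau

# Illustrative coil system: 600 MJ at 16 kA with a 10 kV voltage limit.
L, R, tau = dump_parameters(600e6, 16e3, 10e3)
print(f"L = {L:.2f} H, R_dump = {R:.3f} ohm, tau = {tau:.1f} s")
```

Since tau = 2E/(IV), doubling the operating current halves the extraction time at the same voltage limit, which is the point made above about choosing a large current.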

Figure 1. The Yin-Yang coils being assembled at one end of the Mirror Fusion Test Facility (courtesy of C. H. Henning, Lawrence Livermore National Laboratory).


In the nonsteady-state tokamak machines, the normal operating cycles and the occasional plasma disruptions set additional, challenging requirements in terms of mechanical fatigue of the structural materials and pulsed field loads on the superconductors.

Radio Signal Propagation

The salient features of RF propagation are briefly described in this section. For a detailed treatment of this subject, the reader is referred to Refs. 7 and 10. The three basic propagation mechanisms, illustrated in Fig. 15, are reflection, diffraction, and scattering. Together, these three modes enable us to estimate the signal level a receiver picks up from a transmitter over a given RF propagation channel.

• Reflection: occurs when a radio wave propagating in one medium is incident upon another medium that has different electrical properties, and a part of the energy is reflected back into the first medium, depending on the specific electrical properties of the second medium. If the second medium is a perfect conductor, all of the incident energy is reflected. If the second medium is a dielectric, then the energy is only partially reflected. The reflection coefficient is a function of the medium's properties, the signal frequency, and the angle of incidence. Reflections of RF signals typically occur from objects in the propagation path whose size is larger than the wavelength (λ) of the RF carrier, such as buildings and walls. In the case of cellular/PCS signals at 1.9 GHz, the wavelength λ = 15 cm ≈ 6 in. Hence, a variety of objects act as reflectors. Signals are also reflected from the ground. A model commonly used to characterize RF channels is the two-ray ground reflection model (10).

Figure 15. The different modes of RF signal propagation: reflection, diffraction, and scattering.

• Diffraction: can be viewed as the "bending" of RF signals around an obstruction, as shown in Fig. 15. Diffraction occurs when the obstruction between the transmitter and receiver has sharp edges. As explained by Huygens's principle, when a wavefront impinges on an obstruction, secondary wavelets are produced, which give rise to the bending of waves around the obstruction. The field strength of the diffracted wave in the shadowed region is the vector sum of the electric field components of the secondary wavelets. The knife-edge diffraction model (10) can be used to characterize the diffraction caused by a single object, such as a building, in the path of an RF signal.

• Scattering: occurs when the RF signal is incident on a surface that has a certain degree of "roughness" (7,10). Scattering in an RF channel is commonly caused by objects such as buildings. The critical height is hc = λ/(8 sin θi), where θi is the angle of incidence; a surface scatters, rather than simply reflects, when its maximum-to-minimum height variation exceeds hc.
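The roughness test in the last item is easy to evaluate numerically. The helper below is an illustrative restatement (names are invented), reusing the 1.9 GHz cellular/PCS example from the reflection item; the 30 degree incidence angle is an arbitrary choice.

```python
import math

C = 3e8  # speed of light, m/s

def critical_height(freq_hz, incidence_deg):
    """Critical height h_c = lambda / (8 sin(theta_i))."""
    wavelength = C / freq_hz
    return wavelength / (8.0 * math.sin(math.radians(incidence_deg)))

def is_rough(surface_variation_m, freq_hz, incidence_deg):
    """A surface scatters (rather than reflects) when its
    max-to-min height variation exceeds the critical height."""
    return surface_variation_m > critical_height(freq_hz, incidence_deg)

# 1.9 GHz (lambda ~ 15 cm), incidence angle of 30 degrees:
print(critical_height(1.9e9, 30))    # about 0.039 m
print(is_rough(0.10, 1.9e9, 30))     # 10 cm variation -> rough (True)
```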

LARGE-SCALE EFFECTS

Understanding and characterizing the effects of the RF propagation channel are essential to designing RF communication systems. A wide range of channel conditions is encountered in RF communications, all the way from line-of-sight (LOS) channels to severely obstructed channels. Further, the channel may also be time-varying. Hence, modeling is based on statistical and experimental information. This has been an area of extensive research and measurement over the past two decades and remains so at the present time (1,7,10,16-20). In this section, the two main components of signal variability due to the large-scale effects of RF propagation, namely, path loss and shadowing, are discussed.
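As a concrete sketch of these two components, the fragment below implements the widely used log-distance path-loss model with log-normal shadowing (see, e.g., Ref. 10). The exponent n and shadowing deviation sigma are typical textbook values, not measurements, and the function name is invented.

```python
import math
import random

def received_power_dbm(pt_dbm, d_m, d0_m=1.0, pl0_db=30.0,
                       n=3.5, sigma_db=8.0):
    """Log-distance path loss with log-normal shadowing.

    PL(d) = PL(d0) + 10 n log10(d/d0) + X_sigma, X_sigma ~ N(0, sigma^2)
    """
    path_loss_db = pl0_db + 10.0 * n * math.log10(d_m / d0_m)
    shadowing_db = random.gauss(0.0, sigma_db)   # large-scale variability
    return pt_dbm - path_loss_db - shadowing_db

# Median received power 100 m from a 30 dBm transmitter:
samples = sorted(received_power_dbm(30.0, 100.0) for _ in range(1000))
print(samples[len(samples) // 2])    # about 30 - 30 - 70 = -70 dBm
```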

SAMPLE-AND-HOLD CIRCUITS

Sample-and-hold circuits were first introduced as a front end for analog-to-digital data converters. Processing electrical information in a discrete or digital fashion appears to be more reliable, repeatable, and accurate than in the analog domain. However, converting analog signals to digital ones can often be disrupted if the input signal changes during the conversion cycle. The exact moment in time when the input is sensed and compared with a reference can differ across the data converter, resulting in aperture errors. Consequently, it is useful to memorize the input signal and hold it constant for the comparators that perform the conversion to the digital domain. Among the different data-converter architectures, only the so-called "flash" converters can operate without a sample-and-hold circuit, although even they usually benefit from one.

Another application for sample-and-hold circuits is as a back end for digital-to-analog data converters. The analog voltage generated by these converters is subject to glitches due to the transitions occurring between consecutive digital input codes. A sample-and-hold circuit can be used to sample the analog voltage between the glitches, effectively smoothing the output waveform between two held output-voltage levels. Then, a low-pass filter following the sample-and-hold circuit is able to restore the continuous-time analog waveform much more efficiently than in the presence of glitches.

Switched-capacitor and switched-current signal processing inherently require the input signal to be sampled and held for subsequent operations. Because this type of processing is in widespread use, a variety of applications employ some sort of sample-and-hold circuit as a front end. Indeed, any sampled-data system requires a sample-and-hold operation at some point.
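An ideal sample-and-hold can be stated in a few lines as a discrete-time model. The toy below (illustrative only, with invented names) samples a continuous-time signal at the clock rate and holds each value for a full period, producing the staircase waveform that a switched-capacitor front end presents to later stages.

```python
import math

def sample_and_hold(signal, t_end, fs, oversample=20):
    """Ideal zero-order hold: sample signal(t) at rate fs and hold the
    value until the next sampling instant.  Returns (t, value) pairs on
    a finer time grid so the staircase output is visible."""
    out = []
    held = signal(0.0)
    n = int(t_end * fs * oversample)
    for k in range(n):
        t = k / (fs * oversample)
        if k % oversample == 0:       # sampling instant: track the input
            held = signal(t)
        out.append((t, held))         # hold phase: output stays frozen
    return out

# A 50 Hz sine sampled at 1 kHz over one 20 ms period:
staircase = sample_and_hold(lambda t: math.sin(2 * math.pi * 50 * t),
                            0.02, 1000)
```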

When used inside a larger system, the performance of a sample-and-hold circuit can limit the overall performance. Speed, accuracy, and power consumption are a few of the criteria to be observed during the design process or simply from a user's perspective. This article is intended to describe and present different implementations of sample-and-hold circuits as well as their associated nonidealities. The following section includes a list of errors that make a real-life implementation differ from the ideal sample-and-hold circuit model. The third section describes different implementations of these circuits, organized into metal-oxide-semiconductor (MOS) transistor-based open-loop architectures, MOS-transistor-based closed-loop architectures, bipolar-device-based architectures, and current-mode architectures. The last section of the article outlines some conclusions and provides a brief overview of modern applications of sample-and-hold circuits.