

This section is devoted to a presentation of four systems in which electrical signals arising from specific organs are recorded; the goal in each case is to obtain information about the sources of each signal. The aforementioned systems are the electrocardiogram (ECG), electromyogram (EMG), electroencephalogram (EEG), and electrogastrogram (EGG). Each is treated in detail as a separate chapter in this volume; the consideration here is limited solely to a discussion of the source-field relationships introduced in this chapter. Our interest centers on the quantitative evaluation of sources and on pertinent aspects of the volume conductor for each of the aforementioned systems. This introduces more advanced material and also illustrates the application of the earlier material of this chapter.


Information on the electrical activity within the heart itself comes mainly from canine studies in which multipoint (plunge) electrodes are inserted into the heart. The instant in time that an activation wave passes a unipolar electrode is marked by a rapid change in potential (the so-called intrinsic deflection), and, based on recordings from many plunge electrodes, it is possible to construct isochronous activation surfaces. The cardiac conduction system initiates ventricular activation at many sites nearly simultaneously, and this results in an initial broad front. The syncytial nature of cardiac tissue appears to result in relatively smooth, broad activation surfaces, and because fibers lie parallel to the endocardium and epicardium, the anisotropy ensures that wavefronts also lie parallel to these surfaces.

The temporal cardiac action potential has a rising phase of approximately 1 msec, followed by a plateau of perhaps 100 msec and then by slow recovery, which also requires approximately 100 msec. Because activation is a propagated phenomenon, a spatial action potential can be obtained from the temporal version, since the space-time function must be of the form of a propagating wave vm(s − θt), where s is the local direction of propagation and θ the velocity. Thus behind the isochronal wavefront is a region undergoing depolarization (for θ = 50 cm/sec and a rise time of 1 msec its thickness is 0.5 mm); it is hence quite thin and often approximated as a surface. Behind that, the tissue is uniformly in the plateau state, while ahead of the activation wave it is uniformly at rest. Application of Eq. (86) shows that there are no sources except in the region undergoing activation (the gradient being zero wherever vm is unvarying). Thus the activation source is a volume double layer with a thickness of around 0.5 mm lying behind the activation isochrone. The total strength of the double layer is given by integrating Eq. (86) in the direction of propagation; using the bidomain model this comes out as
τ = [ri/(ri + re)] (Vpeak − Vrest)   (87)

where ri and re are the bidomain intracellular and extracellular resistances per unit length in the direction of propagation and Vpeak and Vrest describe the peak and resting values of the action potential. The value of τ in Eq. (87) has been found from the component potentials and resistances and also from direct measurement of the potential across an activation wave, and consistent values of τ = 40 mV were found (20). Because the activation wave is only 0.5 mm thick, the double layer may be considered to lie in the surface corresponding to the activation isochrone.
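As a quick numerical check, the sketch below evaluates the double-layer strength under the assumption that Eq. (87) takes the form τ = [ri/(ri + re)](Vpeak − Vrest); the resistance values are illustrative rather than measured, chosen only so that the result lands on the ~40 mV scale quoted above.

```python
# Double-layer strength of the cardiac activation wave (bidomain model).
# Assumed form: tau = [ri / (ri + re)] * (Vpeak - Vrest).
# The per-unit-length resistances below are illustrative values only.

def double_layer_strength(ri, re, v_peak, v_rest):
    """Return the double-layer strength tau, in the units of the potentials."""
    return (ri / (ri + re)) * (v_peak - v_rest)

# Illustrative bidomain resistances per unit length (consistent arbitrary units)
ri = 2.0   # intracellular
re = 3.0   # extracellular

# Typical action potential extremes (mV)
v_peak, v_rest = 15.0, -85.0

tau = double_layer_strength(ri, re, v_peak, v_rest)
print(f"tau = {tau:.1f} mV")  # on the order of the 40 mV cited in the text
```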

That activation sources are limited essentially to a surface is a consequence of an activation time of 1 msec. Recovery, on the other hand, occupies 100-200 msec, and consequently recovery sources are distributed throughout the heart. To make things even more complicated, recovery is not propagated (although cells undergoing recovery can and do influence neighboring cells). Of course Eq. (86) continues to apply, but the spatial distribution of potentials now depends on spatial variations in waveshape. Assuming that cells recover earlier at the epicardium than at the endocardium would result in equiphase surface propagation from epicardium to endocardium, and hence the dipole density found from Eq. (86), which is outward during activation, is also (on average) outward during recovery (and this would account for the observed upright QRS and T waves). Although action potential morphology can be readily examined at the epicardium and endocardium (with good resolution using optical or microelectrode techniques), in vivo intramural action potential waveforms are not accessible (although aspects, like refractory period, can be sampled).

In vitro and isolated cell electrophysiology, although less reliable quantitatively (since cellular interactions are abnormal), reveals that the variation in action potential duration from endocardium to epicardium is not monotonic, as assumed above. Recent work of the Antzelevitch group (21) describes a mid-wall region containing M cells, which have the longest action potential durations. Consequently the T-wave sources are more complex in distribution and orientation. While they are not uniformly outward, and indeed appear to be inward in the subendocardial region, the collective dipole source direction is outward.

In connection with recovery, interest over the years has developed in the time-integrated electrocardiogram. This can be interpreted as the algebraic area of the QRS and T waves and is consequently designated A_QRST. For the jth lead, with lead vector field l_j(v), it has been shown that, based on Eqs. (85) and (86) (22),

A_QRST = −C ∫ ∇a · l_j dv   (88)


where a is the area under the action potential (a function of position) and the volume integral in Eq. (88) is taken throughout the heart. If the cardiac action potentials all had similar shapes but the duration of the plateau was a variable (possibly this is the leading difference in morphology), then

A_QRST = −C ∫ ∇d · l_j dv   (89)


where d is the action potential duration (23). The dependence of the integrated electrocardiogram on the recovery gradient, described in Eq. (89), led to its designation as the ventricular gradient. Dispersion of recovery has been linked to a propensity for arrhythmias, so the ventricular gradient has been examined as a possible evaluative tool.
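For a single sampled lead, the algebraic QRST area can be computed directly from the waveform. The sketch below builds a synthetic lead out of Gaussian deflections (the amplitudes, centers, and widths are invented for illustration) and integrates it with the trapezoidal rule; each Gaussian contributes amp × width × √π to the area, which gives an analytic check.

```python
import math

def gaussian(t, amp, center, width):
    return amp * math.exp(-((t - center) / width) ** 2)

def lead_voltage(t):
    # Invented QRS + T morphology (mV, seconds): R wave, S wave, T wave
    return (gaussian(t, 1.0, 0.20, 0.01)     # R
            - gaussian(t, 0.3, 0.22, 0.01)   # S
            + gaussian(t, 0.3, 0.50, 0.05))  # T

# Trapezoidal integration of the lead over the QRST interval (0 .. 0.8 s)
dt = 0.001
ts = [i * dt for i in range(801)]
vs = [lead_voltage(t) for t in ts]
a_qrst = sum(0.5 * (vs[i] + vs[i + 1]) * dt for i in range(len(vs) - 1))

# Each Gaussian deflection has area amp * width * sqrt(pi)
analytic = math.sqrt(math.pi) * (1.0 * 0.01 - 0.3 * 0.01 + 0.3 * 0.05)
print(f"A_QRST = {a_qrst:.5f} mV*s (analytic {analytic:.5f})")
```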

We have concentrated most of our attention on cardiac sources, but to complete a forward simulation one must also consider the volume conductor. This is clearly inhomogeneous, the most important inhomogeneity being the finite torso itself. Other components are the blood cavities, the lungs, and the surface muscle layer. The latter is anisotropic but is usually taken into account by increasing its thickness by a factor of three. Assuming that each tissue is uniform limits the secondary sources to the various interfacial surfaces. This formulation lends itself to a forward solution by the boundary element method (BEM). A number of studies have appeared in the literature, mostly demonstrating inhomogeneities to be of importance, although the effect is more pronounced on the quantitative body surface potentials and less on their temporal and spatial potential patterns.


Perhaps the earliest application of the active antenna concept (following that of Hertz) was aimed at solving the small-antenna problem. As we recall, an antenna can be modeled (roughly) by a series RLC network, with the R representing the radiation resistance. The input impedance of such a combination is given by

Z = [1 − ω²/ω0² + jωRC] / (jωC)

and so we see that, when the operating frequency ω is well below the resonant frequency

ω0 = 1/√(LC)

and the reciprocal of the RC time constant

τ = RC

then the antenna appears as a capacitor and radiates quite inefficiently. The problem of reception is similar. Apparently, already in 1928 Westinghouse had a mobile antenna receiver that used a pentode as an inductive loading element in order to boost the amount of low-frequency radiation that could be converted to circuit current. In 1974, two works discussed transistor-based solutions to the short-aerial problem (36,37). In Ref. 37, the load circuit appeared as in Fig. 34. The idea was to generate an inductive load whose impedance varied with frequency, unlike a regular inductor, but so as to increase the antenna bandwidth. The circuit's operation is not intuitively obvious. I think it possible that most AM, short-wave, and FM receivers employ some short-antenna solution, whether or not the actual circuit designers were aware that they were employing active antenna techniques.
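The low-frequency behavior claimed above is easy to verify numerically. Below, a series RLC model of a small antenna (the component values are invented round numbers) shows that, well below both ω0 and 1/RC, the input impedance is essentially that of the capacitor alone:

```python
import cmath, math

def z_in(omega, R, L, C):
    """Input impedance of a series RLC antenna model."""
    return R + 1j * omega * L + 1.0 / (1j * omega * C)

# Invented small-antenna values
R, L, C = 1.0, 1e-6, 1e-12           # ohms, henries, farads
omega0 = 1.0 / math.sqrt(L * C)      # resonant frequency, here 1e9 rad/s
omega = omega0 / 100.0               # operate far below resonance

z = z_in(omega, R, L, C)
z_cap = 1.0 / (1j * omega * C)       # pure capacitor, for comparison

rel_err = abs(z - z_cap) / abs(z_cap)
print(f"Z = {z:.1f}, capacitor alone = {z_cap:.1f}, relative difference = {rel_err:.2e}")
```

The negative reactance dominates the 1 Ω radiation resistance by five orders of magnitude, which is exactly the inefficiency the loading circuit of Fig. 34 is meant to fight.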

Another set of applications where active devices are essen­tially used as loading elements is in the greater-than-100-


Figure 34. A circuit taken from Ref. 37 in which a transistor circuit is used to load a short antenna. Analysis shows that, in the frequency regime of interest, the loading circuit appears, when looking toward the antenna from the amplifier terminals, to cancel the strongly capacitive load of the short antenna.

GHz regime. Reviews of progress in this regime are given in Refs. 1 and 38. To date, most work at frequencies greater than 100 GHz has involved radio-astronomical receivers. A problem at such frequencies is a lack of components, including circuit elements so basic as waveguides. Microstrip guides already start having extra-mode problems at Ku band. Coplanar waveguides can go higher, although to date, rectangular metallic waveguides are the preferred guiding structures past about 60 GHz. In W band (normally narrowband, about 94 GHz; see Table 1), there are components, as around 94 GHz there is an atmospheric window of low propagation loss. However, waveguide tolerances, which must be a small percentage of the wavelength, are already severe in W band, where the wavelength is roughly 3 mm. Higher frequencies have to be handled in free space or, as one says, quasi-optically. Receivers must therefore by nature be downconverting in this >100 GHz regime. Indeed, these types of solutions are the ones being demonstrated by the group at Michigan (38), where receivers will contain multipliers and downconverting mixers right in the antenna elements in order that CPW can be used to carry the downconverted signals to the processing electronics. Millimeter-wave-terahertz radio astronomy seems to be a prime niche for quasioptical active antenna solutions.

The first applications of active antennas where solid-state components were used as gain elements were primarily for power boosting (39-44). Power combining (see reviews in Refs. 45 and 46) can be hard to achieve. There is a theorem that grew out of the early days of radiometry and radiative transfer (in the 1800s), known variously as the brightness theorem, the Lagrange invariant, or (later) the second law of thermodynamics. (See, for example, Ref. 8, Chap. 5.) The theorem essentially states that one cannot increase the brightness of a source by passive means. This theorem practically means that, if one tries to combine two nominally identical sources by taking their outputs, launching them into waveguides, and then bringing the two waveguides together in a Y junction into a single waveguide, the power in the output guide, if the output guide is no larger than either of the input guides, can be no greater than that of either of the nominally identical sources. This seems to preclude any form of power combining. There is a bit of a trick here, though. At the time the brightness theorem was first formulated, there were no coherent radiation sources. If one takes the output of a coherent radiation source, splits it in two, and adds it back together in phase, then the brightness, which was halved, can be restored. If two sources are locked, they are essentially one source. (As P. A. M. Dirac said, a photon only interferes with itself. Indeed, the quantum mechanical meaning of locking is that the locked sources are sharing a wave function.) Therefore, locked sources can be coherently added if they are properly phased. We will take this up again in a following paragraph.

An alternative to power combining that obviates the need for locking and precise phase control is amplification of the signal from a single source at each element. By 1960, solid-state technology had come far enough that antennas integrated with diodes and transistors could be demonstrated. The technology was to remain a laboratory curiosity until the 1980s, when further improvements in microwave devices were to render it more practical. Recent research, however, has been more concentrated on the coherent power combining of self-oscillator elements. This is not to say that the element-mounted amplifier may not still be of practical use. The main research issue at present, though, is the limited power available from a single active element at millimeter-wave frequencies.

Another application area is that of proximity detection (47). The idea is that an oscillator in an antenna element can be very sensitive to its nearby (several wavelengths) environment. As was discussed previously, variation in distances to ground planes changes impedances. The proximity of any metal object will, to some extent, cause the oscillator to be aware of another ground plane in parallel with the one in the circuit. This will change the impedance that the oscillator sees and thereby steer the oscillator frequency. The active antenna of Ref. 47 operated as a self-oscillating mixer. That is, the active element used the antenna as a load, whereas the antenna also used a diode mixer between itself and a low-frequency external circuit. The antenna acted as both a transmitting and a receiving antenna. If there were something moving near the antenna, the signal reflected off the object and rereceived might well be at a different frequency than the shifting oscillator frequency. These two frequencies would then beat in the mixer, be downconverted, and show up as a low-frequency beat note in the external circuit. If such a composite device were to be used in a controlled environment, one could calibrate the output to determine what is occurring. Navarro and Chang (1, p. 130) mention such applications as automatic door openers and burglar alarms. The original paper (47) seemed to have a different application in mind, as the term Doppler sensor was in the title. If one were to carefully control the immediate environment of the self-oscillating mixer, then reflections off more distant objects that were received by the antenna would beat with the stable frequency of the oscillator. The resulting beat note of the signals would then be the Doppler shift of the outgoing signal upon reflection off the surface of the moving object, and from it one could determine the normal component of the object's velocity.
It is my understanding that some low-cost radars operate on such a principle. As with other applications, though, the active an­tenna principle, if only due to size constraints, becomes even more appealing at millimeter-wave frequencies, and at such frequencies power constraints favor use of arrays.
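The beat note described above is just the two-way Doppler shift fd = 2 v f0 / c (the factor of 2 because the wave travels out and back). The carrier frequency below is a typical X-band motion-sensor value, chosen purely for illustration:

```python
def doppler_beat(v_normal, f0, c=299792458.0):
    """Two-way Doppler shift (Hz) for a target closing at v_normal (m/s)."""
    return 2.0 * v_normal * f0 / c

f0 = 10.525e9          # Hz, a common X-band sensor carrier (illustrative)
v = 1.0                # m/s, walking-pace normal velocity component

fd = doppler_beat(v, f0)
print(f"beat note = {fd:.1f} Hz")  # roughly 70 Hz, easily handled at audio rates
```

A beat note of tens of hertz is why the downconverted output of such a sensor can be processed by very cheap low-frequency circuitry.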

An older antenna field that seems to be going through an active renaissance is that of retroreflection. A retroreflector is a device that, when illuminated from any arbitrary direction, will return a signal directly back to the source. Clearly, retroreflectors are useful for return calibration as well as for various tracking purposes. An archetypical passive retroreflector is a corner cube. Another form of passive reflector is a Van Atta array (48). Such an array uses wires to interconnect the array elements so that the phase progression of the incident signal is conjugated and thereby returned in the direction of the source. As was pointed out by Friis already in the 1930s, though, phase conjugation is carried out in any mixer in which the local oscillator frequency exceeds the signal frequency (49). (A phase conjugate signal is one that takes on negative values at each phase point on the incoming wave.) This principle was already being exploited in 1963 for implementing retroreflection (50). This work did not catch on, perhaps for technical reasons. A review appeared in 1994 (51), and designs for such arrays were demonstrated and presented at the 1995 International Microwave Symposium (52,53). Although both demonstrations used transistors and patch-type elements,





Figure 35. Schematic depiction of (a) the active surface of a grid oscillator and (b) a breakout of an internal region of the grid showing the active device placement relative to the bias lines.

both also employed circulators for isolation and therefore were not actually active array demonstrations. It would seem that retroreflection should motivate an active self-oscillating mixer solution, which will perhaps appear in the future.

As was mentioned earlier in this article, a quite important application area for active antennas is free-space power combining. As was pointed out then, a number of groups are working on developing compact elements such as those of Fig. 14 (7) and Fig. 30 (21). As was also previously mentioned, in order to do coherent power combining, the elements must be locked. In designs where the elements are spatially packed tightly enough, proximity can lead to strong enough nearest-neighbor coupling so that the array will lock to a common frequency and phase. Closeness of elements is also desirable in that arrays with less than λ/2 spacing will have no sidelobes sapping power from the central array beam. In designs that do not self-lock, one can inject a locking signal either on bias lines or spatially from a horn to try to lock all elements simultaneously. Of course, the ultimate application would be for a high-bandwidth, steerable, low-cost transceiver.

Another method of carrying out power combining is to use the so-called grid oscillator (54,55). The actual structure of a grid appears in Fig. 35. The operating principle of the grid is quite a bit different from that of the arrays of weakly coupled individual elements. Note that there is no ground plane at all on the back, and there is no ground plane either, per se, on the front side. Direct optical measurements of the potentials on the various lines of the grid (56), however, show that the source bias lines act somewhat like ac grounds. In this sense, either a drain bias line together with the two closest source biases, or a gate bias line together with the two horizontally adjacent bias lines, appears somewhat like CPW. The CPW lines, however, are periodically loaded ones, with periodic active elements alternated with structures that appear like slot antennas. The radiating edges of the slots are, for the drain bias lines, the vertical ac connection lines between drain and drain or, for the gate bias CPW, the horizontal ac gate-to-gate connection lines. Indeed, the grid is known to lock strongly between the rows and more weakly between columns. As adjacent row elements are sharing a patch radiator, this behavior should be expected.

In a sense, this strong locking behavior of the grid is both an advantage and a disadvantage. It is advantageous that the grid is compact (element spacing can be < λ/6) and further that it is easy to get the rows to lock to each other. However, the compactness is also a disadvantage in that it is quite hard to get any more functionality on the grid. Much effort has been made in this area to generate functionality by stacking various grid-based active surfaces such as amplifying surfaces, varactor surfaces for frequency shifting and modulation, doubling surfaces, etc. A problem with stacking is, of course, diffraction as well as alignment. Alignment tolerance adds to complexity. Diffraction tends to ease alignment tolerance, but in an inelegant manner. A 100-transistor array with λ/6 spacing will have an extent of roughly 1.5λ per side. As the diffraction angle is something like the wavelength divided by the array diameter, the diffraction angle for such an array is a good fraction of a radian. One can say that grids are quasi-optical, but in optics one generally doesn't use apertures much smaller than a millimeter (center optical wavelength of micrometers), for which the diffraction angle would be roughly a thousandth of a radian. As far as pure combining efficiency goes, grids are probably the optimal solution. However, more functionality may well be hard to obtain with this solution.
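The aperture arithmetic in this paragraph can be made concrete. For a 10 × 10 grid at λ/6 spacing versus a millimeter-scale optical aperture at a micrometer wavelength:

```python
def diffraction_angle(wavelength, aperture):
    """Rough diffraction half-angle, theta ~ lambda / D (radians)."""
    return wavelength / aperture

# 100-transistor grid: 10 x 10 elements at lambda/6 spacing
n_side = 10
side = n_side * (1.0 / 6.0)            # aperture in units of lambda (~1.7 lambda)
theta_grid = diffraction_angle(1.0, side)

# Optics: a 1 mm aperture at a 1 um center wavelength
theta_optics = diffraction_angle(1e-6, 1e-3)

print(f"grid: D ~ {side:.2f} lambda, theta ~ {theta_grid:.2f} rad")
print(f"optics: theta ~ {theta_optics:.0e} rad")
```

The grid's beam spreads by more than half a radian while the optical beam spreads by a milliradian, which is the three-orders-of-magnitude gap the text alludes to.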

As we have mentioned, there are a number of techniques for steering being investigated. There seems to be less work on modulation, and I do not know of any simultaneous steer­ing of modulated beams to date. Although the field of active antennas began with the field of radio frequency, it still seems to be in its infancy. However, as I hope this article has brought across, there is a significant amount of work ongoing, and the field of active antennas will grow in the future.



Figure 1. Resistor configuration for the "T" attenuator. Zj and Zo are the resistive impedances presented to the attenuator by external circuits.

The MOSFET-C Integrator

The MOSFET-C integrator is shown in Fig. 1(b). It is popular in integrated circuit design, where the amplifier, capacitor, and resistance are fabricated on the same substrate. An MOS transistor operating in the triode region acts like a voltage-controlled resistor, where the nominal conductance G = 1/R has units of 1/Ω. Using the same analysis as for the Miller integrator, we find that the integrating time constant is C/G. The two main advantages of the MOSFET-C integrator over the inverting integrator are: (1) an MOS transistor generally occupies less silicon area than an equivalent resistor, and (2) the conductance is tunable via the gate voltage VG. The latter property is particularly important in integrated circuit design, where the tolerance on capacitors is approximately 10 to 50%.
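The tunability works through the triode-region conductance. Using the standard first-order model G = k′(W/L)(VG − VT) (the device parameters below are invented but typical), the integrating time constant C/G moves directly with the gate voltage:

```python
def triode_conductance(k_prime, w_over_l, vg, vt):
    """First-order triode-region conductance G = k'(W/L)(VG - VT), in siemens."""
    return k_prime * w_over_l * (vg - vt)

# Invented but typical values
k_prime = 100e-6    # A/V^2, process transconductance parameter
w_over_l = 10.0     # device aspect ratio
vt = 1.0            # V, threshold voltage
c = 10e-12          # F, integrating capacitor

for vg in (2.0, 3.0):
    g = triode_conductance(k_prime, w_over_l, vg, vt)
    tau = c / g     # integrating time constant C/G
    print(f"VG = {vg:.1f} V -> G = {g*1e3:.2f} mS, tau = {tau*1e9:.1f} ns")
```

Raising VG from 2 V to 3 V halves the time constant, which is how a tuning loop can trim out the capacitor tolerance quoted above.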

The Transconductance-C Integrator

The transconductance-C integrator is shown in Fig. 1(c). It consists of a transconductance amplifier which converts a dif­ferential input voltage to an output current via the relation

iO = G(v+ − v−)   (3)


In Fig. 1(c), we note that v+ = vI(t) and v− = 0 V. Thus, the output current is equal to the input voltage times the conductance. The current iO(t) is integrated on the capacitor, producing the output voltage.
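A discrete-time sketch makes the behavior concrete: the output current of Eq. (3) charges the capacitor, so a constant input must produce a ramp of slope G·vI/C (all component values below are illustrative):

```python
# Forward-Euler simulation of the transconductance-C integrator:
# iO = G (v_plus - v_minus), and vO accumulates iO / C.

G = 100e-6     # S, transconductance (illustrative)
C = 10e-12     # F, integrating capacitor
v_in = 0.1     # V, constant input on v_plus; v_minus grounded

dt = 1e-9
steps = 1000   # 1 us of simulated time
v_out = 0.0
for _ in range(steps):
    i_out = G * (v_in - 0.0)   # Eq. (3)
    v_out += i_out * dt / C    # charge accumulating on the capacitor

expected = G * v_in * (steps * dt) / C   # ideal ramp end point
print(f"v_out = {v_out:.3f} V (ideal {expected:.3f} V)")
```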




Integrating this equation throughout a volume V and using Gauss’s theorem,

In order to be able to make calculations on active antennas, it is important to know what level of approximation is necessary in order to obtain results. An interesting point is that, although the operating frequency of active antennas is high, the circuit tends to be small in total extent relative to the operating wavelength, and therefore the primary design tool is circuit theory mixed with transmission line theory. These techniques are approximate, and a most important point in working with high frequencies is to know where a given technique is applicable. Exact treatments of all effects, however, prove to be impossible to carry out analytically. Numerical approaches tend to be hard to interpret unless one has a framework to use. The combined circuit transmission-line framework is the one generally applied. When it begins to break down, one tends to use numerical techniques to bootstrap it back to reality. We will presently try to uncover the basic approximations of transmission line and circuit theory.

Maxwell's equations are the basic defining equations for all electromagnetic phenomena, and they are expressible in MKSA units as (8)

∇ × E = −∂B/∂t

∇ × H = J + ∂D/∂t

∇ · D = ρ

∇ · B = 0

∫V ∇ · S dV = ∮ S · dA

where dA is the differential area times the unit normal pointing out of the surface of the volume V, one finds that

∮ S · dA = −(d/dt) We − (d/dt) Wm

where We is the electric energy

We = (ε/2) ∫ E · E dV

and Wm is the magnetic energy

Wm = (μ/2) ∫ H · H dV

The interpretation of the above is that the amount of S flowing out of V is the amount of change of the energy within. One therefore associates energy flow with S = E × H. This is important in describing energy flow in wires as well as transmission lines and waveguides of all types. As was first described by Heaviside (9), the energy flow in a wire occurs not inside the wire but around it. That is, as the wire is highly conductive, there is essentially no field inside it except at the surface, where the outer layer of oscillating charges has no outer shell to cancel its effect. There is therefore a radial electric field emanating from the surface of the wire, which combines with an azimuthal magnetic field that rings the current flow to yield an E × H surrounding the wire and pointing down its axis.
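Heaviside's picture can be checked with a single cross product. At a point on the x axis beside a wire running along z, the radial electric field points along x̂ and the azimuthal magnetic field along ŷ; S = E × H then points down the axis (the field magnitudes are arbitrary illustrative numbers):

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

# Fields at a point on the +x axis next to a z-directed current
E = [5.0, 0.0, 0.0]    # radial electric field, V/m (illustrative)
H = [0.0, 2.0, 0.0]    # azimuthal magnetic field, A/m (illustrative)

S = cross(E, H)        # Poynting vector, W/m^2
print(f"S = {S}")      # energy flows along +z, parallel to the wire
```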

It was Pocklington in 1897 (10) who made the formal structure of the fields around a wire a bit more explicit and, in the effort, also formed the basis for the approximation upon which most of circuit and transmission line theory rests, the quasi-static approximation. A simplified version of his argument is as follows. Assume an x-y-z Cartesian coordinate system where the axis of the wire is the z axis. One then assumes that all of the field quantities f(x, y, z, t) vary as

f(x, y, z, t) = f(x, y) cos(βz − ωt + φ)

If one assumes that the velocity of propagation of the above-defined wave is c = (μ0ε0)^(−1/2), the speed of light, then one can write that

β = ω/c


The assumption in the above that f(x, y) is independent of z, by substitution of the above into Maxwell's equations, can be shown to be equivalent to the assumption that the transverse field components Ex, Ey, Bx, and By all satisfy relations of the form

|∂Ex/∂z| << |βEx|

which is the crux of the quasistatic approximation. With the above approximation, one finds that

∇t · Et = ρ/ε0

∇t × Ht = J

where ∇t = x̂ ∂/∂x + ŷ ∂/∂y, which is just the transverse, and therefore two-dimensional, gradient operator. These equations are just the electro- and magnetostatic equations for the transverse fields, whereas the propagation equation above shows that these static transverse field configurations are propagated forward as if they corresponded to a plane-wave field configuration. If the magnetic field is caused by the current in the wire, it rings the wire, whereas if the electric field is static, it must appear to emanate from charges in the wire and point outward at right angles to the magnetic field. If this is true, then the Poynting vector S will point along the direction of propagation and the theory is self-consistent, if approximate.

If we wish to guide power, then the quasistatic picture must come close to holding, as the Poynting vector is in the right direction for guidance. The more general approximate theory that comes from Pocklington's quasistatic approximation is generally called transmission line theory. To derive this theory, first consider the two-wire transmission line of Fig.


Figure 15. A sketch of a two-conductor transmission line where some equipotentials and some current lines are drawn in, as well as a volume V with outward-pointing normal dA1. There is also an out­ward-pointing normal dA2 associated with the area bounded by con­tour C2.

15. If we are to have something that we can actually call a transmission line, then we would hope that we can find equiphase fronts of the electromagnetic disturbance propagating in the gap crossing the gap conductor and that we can find lines along which the current flows on the current-carrying conductor. Otherwise (if the equiphases closed on themselves and/or we had eddies in the current), it would be hard to think of the structure as any form of guiding structure. Let us say we form an area in the gap with two walls of the four-sided contour C1 surrounding this area following equiphases an infinitesimal distance dz from each other. We can then write

∫ ∇ × E · dA1 = −∫ (∂B/∂t) · dA1

where dA1 corresponds to an upward-pointing normal from the enclosed area. One generally defines the integral as

∫ B · dA1 = φ

where ф is the magnetic flux. We often further define the flux as the inductance of the structure times the current:

φ = Li

The integral with the curl in it can be rewritten by Stokes’ theorem as

∫ ∇ × E · dA1 = ∮ E · dl


where C1 is the contour enclosing the area. If we define

v = ∫ E · dl

on the two equiphase lines of the contour C1, where v is an ac voltage (this is the main approximation in the above, as it is only strictly true for truly static fields), then, noting that v does not change along two of the boundaries of the contour (because they are the infinitesimal walls on constant-voltage plates) and making the other two connecting lines infinitesimal, we note that the relation between the curl of E and the magnetic field reduces to

v(z + dz) − v(z) = −(d/dt)(Li)



where it has been tacitly assumed that geometric deviations from rectilinearity are small enough that one can approximately use Cartesian coordinates, which can be rewritten in the form




∂v/∂z = −l ∂i/∂t   (1)

∂²v/∂z² − lc ∂²v/∂t² = 0


where l is an inductance per unit length, which may vary with longitudinal coordinate z if the line has longitudinal variation of geometry. A similar manipulation can be done with the second and third of Maxwell’s equations. Taking


∇ · (∇ × H) = ∇ · J + (∂/∂t)(∇ · D)

and noting that the divergence of a curl is zero, substituting for ∇ · D, we find

∇ · J + ∂ρ/∂t = 0

which is the equation of charge conservation. Integrating this equation over a volume V2 that encloses the current-carrying conductor whose walls lie perpendicular to the current lines gives


∫ ∇ · J dV2 = −(d/dt) ∫ ρ dV2

where the total charge Q, given by

Q = ∫ ρ dV2

is also sometimes defined in terms of capacitance C and voltage v by

Q = Cv

Noting that

∫ ∇ · J dV2 = ∮ J · dA2

where dA2 is the outward-pointing normal to the boundary of the volume V2 and where one usually defines

i = ∮ J · dA2

and letting the volume V2 have infinitesimal thickness, one finds that

∮ J · dA2 = i(z + dz) − i(z)




Figure 16. A circuit equivalent for (a) a lossless and (b) a lossy trans­mission line. The actual stages should be infinitesimally long, and the l’s and c’s can vary with distance down the line. In reality, one can find closed-form solutions for the waves in nominally constant l and c segments and put them together with boundary conditions.

This system of equations has a circuit representation, as is schematically depicted in Fig. 16(a). One can verify this by writing Kirchhoff's laws for the nodes with v(z + dz) and v(z) using the relations

v = l ∂i/∂t





Figure 16(b) illustrates the circuit equivalent for a lossy (and therefore dispersive) transmission line, where r represents the resistance encountered by the current in the metallization and where g represents any conductance of the substrate material that might allow leakage to ground. A major point of the diagram is that the structure need not be uniform in order to have a transmission line representation, although one may find that irregularities in the structure will lead to longitudinally varying inductances and capacitances.

The solution to the circuit equations will have a wave nature and will exhibit propagation characteristics, which we discussed previously. Putting the charge-conservation result together with the above, we find

∂i/∂z = −c ∂v/∂t   (2)

where c is the capacitance per length of the structure, and where longitudinal variations in line geometry will lead to a longitudinal variation of c. In a region with constant l and c, one can take a z derivative of Eq. (1) and a t derivative of Eq. (2) and substitute to obtain

∂²v/∂z² − lc ∂²v/∂t² = 0

which is a wave equation with solutions

v(z, t) = vf cos(ωt − βz + φf) + vb cos(ωt + βz + φb)   (3)

where vf is the amplitude of a forward-going voltage wave, vb is the amplitude of a backward-going voltage wave, and

β = ω √(lc)

Similarly, taking a t derivative of Eq. (1) and a z derivative of Eq. (2) and substituting gives


Figure 17. Schematic depiction of a top view of the metallized sur­face of an FET, where G denotes gate, D drain, and S source.

d2i, d2i n

te2 " C 9Ї2 “

which will have a solution analogous to the one in Eq. (3) above, but with

l .



allowing us to write that

v(z — l) Zl + jZ0 tanP(z — l)

— A


i(z — l)


which indicates that we can make the identification that the line phase velocity vp is given by

A CO rr-

VP = S =


and the line impedance Z0 is given by

z0 = 4Tfc

Oftentimes, we assume that we can write (the sinusoidal steady-state representation)

v(z, t) = Rev(z)ejat] i(z, t) = Rei(z)ejat]

This equation allows us to, in essence, move the load from the plane l to any other plane. This transformation can be used to eliminate line segments and thereby use circuits on them directly. However, note that line lengths at least comparable to a wavelength are necessary in order to significantly alter the impedance. At the plane z = l, then, we can further note that the ratio of the reflected voltage coefficient vb and the forward-going vf, which is the voltage reflection coefficient, is given by

Z0 + jZl tan P(z — l)

zi —Zq zi + zn

Z(z — I) =

ejP l ejP l

so that we can write

dz d i

— = — jcocv dz

with solutions

v(z) = vfe jPz + vbejPz i(z) = if e—jP z — ibejP z

Let us say now that we terminate the line with a lumped impedance Zt at location l. At the coordinate l, then, the rela­tions

Zli(l) = vfe jP 1 + vb Z0i(l) = vf e—P l — vt and has the meaning of a Fresnel coefficient (8). This is the reflection we discussed in the last section, which causes the difference between large and small circuit dimensions.

One could ask what the use was of going at some length into Poynting vectors and transmission lines when the discus­sion is about active antennas. The answer is that any antenna system, at whatever frequency or of whatever design, is a sys­tem for directing power from one place to another. To direct power from one place to another requires constantly keeping the Poynting vector pointed in the right direction. As we can surmise from the transmission line derivation, line irregulari­ties may cause the Poynting vector to wobble (with attendant reflections down the line due to attendant variations in the l and c), but the picture must stay close to correct for power to get from one end of the system to another. For this reason, active antennas, even at very high frequencies (hundreds of gigahertz), can still be discussed in terms of transmission lines, impedances, and circuit equivalents, although ever greater care must be used in applying these concepts at in­creasingly higher frequencies.

—Jp l

hold, and from them we can find

vf = l(Zi + zo)i(l)eJf>l

vb = ¥Zi – ZQ)i(l)e

which gives

Ohmic contact

Ohmic contact

v(z) = ^[(Z, +Z0)eM-*> + (Zt — ZQ)e~jp(-l~z’>]

i(z) = +Z0)eM-* – (Zl – Z0)e~J^l

Figure 18. Schematic depiction of the cross section of the active re­gion of a GaAs FET. Specific designs can vary significantly in the field-effect family.
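The impedance transformation of Eq. (4) and the load reflection coefficient are easy to check numerically. A minimal sketch follows; the 50 Ω line, 100 Ω load, and wavelength are assumed values for illustration only.

```python
import cmath
import math

def input_impedance(Zl, Z0, beta, d):
    """Impedance seen looking through a length d of lossless line of
    characteristic impedance Z0 toward a load Zl, as in Eq. (4)."""
    t = cmath.tan(beta * d)
    return Z0 * (Zl + 1j * Z0 * t) / (Z0 + 1j * Zl * t)

def reflection_coefficient(Zl, Z0):
    """Voltage reflection coefficient at the load plane."""
    return (Zl - Z0) / (Zl + Z0)

# Assumed example: a 50-ohm line terminated in 100 ohms
Z0, Zl = 50.0, 100.0
lam = 0.03                       # guided wavelength, m (assumed)
beta = 2 * math.pi / lam

# A quarter-wave section inverts the normalized load: Z -> Z0**2 / Zl
print(input_impedance(Zl, Z0, beta, lam / 4))   # close to 25 + 0j
# A half-wave section reproduces the load, illustrating that lengths
# comparable to a wavelength are needed to alter the impedance much
print(input_impedance(Zl, Z0, beta, lam / 2))   # close to 100 + 0j
print(reflection_coefficient(Zl, Z0))           # 1/3
```

The quarter-wave and half-wave cases bracket the behavior: short line segments barely move the impedance, while wavelength-scale segments transform it strongly.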


Figure 19. (a) Circuit element diagram with voltages and currents labeled for (b), where a typical I-V curve is depicted.



The next piece of an active antenna that needs to be discussed is the active element. Without too much loss of generality, we will take our device to be a field-effect transistor (FET). The FET as such was first described by Shockley in 1952 (5), but the MESFET (metal-semiconductor FET), which is today's workhorse active device for microwave circuitry, was not realized until 1965 (6), when gallium arsenide (GaAs) fabrication techniques became workable, albeit only as a laboratory demonstration. [Although we will discuss the MESFET in this section, it should be pointed out that the silicon MOSFET (metal-oxide-semiconductor FET) is the workhorse device of digital electronics and therefore the most common of all electronic devices presently in existence by a very large margin.] A top view of an FET might appear as in Fig. 17. As is shown clearly in the figure, an FET is a three-terminal device with gate, drain, and source regions. A cross section of the active region (that is, where the gate is very narrow) might appear as in Fig. 18. The basic idea is that the saturation-doped n region causes current to flow through the ohmic contacts from drain to source (that is, electrons flow from source to drain), but the current is controlled in magnitude by the electric field generated by the reverse bias voltage applied to the gate electrode. The situation is described in a bit more detail in Fig. 19, where bias voltages are defined and a typical I-V curve for dc operation is given. Typically the bias is supplied by a circuit such as that of Fig. 20. In what follows, we will simply assume that the biases are properly applied and isolated, and we will consider the ac operation. An ac circuit model is given in Fig. 21. If one uses the proper number of circuit values, these models can be quite accurate, but the values do vary from device to device, even when the devices were fabricated at the same time and on the same substrate. Usually, the data sheet with a device, instead of specifying the equivalent circuit parameters, will specify the device's S parameters, which are defined as in Fig. 22 and which can be measured in a straightforward manner by a network analyzer. The S parameters are defined by the equation

V1− = S11 V1+ + S12 V2+

V2− = S21 V1+ + S22 V2+    (5)

where V1+ and V2+ are the amplitudes of the waves incident on the gate and drain ports, and V1− and V2− are the amplitudes of the waves leaving them.

An important parameter of the circuit design is the transfer function of the transistor circuit, which can be defined as the ratio of vo to vi as defined in Fig. 21. To simplify further analysis, we will ignore the package parasitics Rg and Rd in comparison with other circuit parameters, and thereby we will carry out further analysis on the circuit depicted in Fig. 23. The circuit can be solved by writing a simultaneous system of equations for the two nodal voltages vi and vo. In sinusoidal steady state, the equation at the output node becomes

jωCgd (vo − vi) + gm vi + jωCds vo + vo/Rds + vo/ZL = 0

The system can be rewritten in the form

vo [jω(Cgd + Cds) + 1/Rds + 1/ZL] = vi (−gm + jωCgd)

which gives us our transfer function T in the form

T = vo/vi = (−gm + jωCgd) / [jω(Cgd + Cds) + 1/Rds + 1/ZL]
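As a numerical sanity check on this transfer function, the sketch below evaluates T at a low frequency, where the capacitive terms drop out and the gain should approach −gm(Rds ∥ ZL). All element values here are assumed for illustration, not taken from the article.

```python
import math

# Assumed small-signal element values (for illustration only)
gm, Cgd, Cds = 50e-3, 0.05e-12, 0.2e-12   # S, F, F
Rds, ZL = 500.0, 50.0                     # ohm, ohm

def transfer(omega):
    """T = vo/vi from the nodal equation above."""
    return (-gm + 1j * omega * Cgd) / (
        1j * omega * (Cgd + Cds) + 1.0 / Rds + 1.0 / ZL)

# At low frequency the gain tends to -gm * (Rds || ZL),
# the purely resistively loaded value
w = 2 * math.pi * 1e6                     # 1 MHz
print(transfer(w))
print(-gm * (Rds * ZL) / (Rds + ZL))      # about -2.27
```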









Figure 20. Typical FET circuit including the bias voltages vgs and vds as well as the ac voltages vi and vo, where the inductors represent ac blocks and the capacitors dc blocks.

Figure 22. Schematic depiction of an FET as a two-port device that defines the quantities used in the S matrix of Eq. (5).








It is useful to look at approximate forms. It is generally true that

Cgd ≪ Cds, Cgs

and for usual operating frequencies it is also generally true that

1/(ωCds) ≫ Rds

Oftentimes we are interested in open-circuit parameters, for example, the circuit transfer function when ZL is large compared to other parameters. We often call this parameter G the open-circuit gain. We can write this open-circuit gain in the form

G = (−gm Rds + jωCgd Rds) / [jω(Cgd + Cds) Rds + 1]

Using both of the above approximations in our equations for T and G, we find

T = −gm Rds ZL / (Rds + ZL)

G = −gm Rds

Clearly, from the above, one sees that the loaded gain will be lower than the unloaded gain, as we would expect. Making only the first of our two above approximations, we can write the above equations as

T = −gm Rds / (1 + jωτds + Rds/ZL)

G = −gm Rds / (1 + jωτds)

where τds is a time constant given by

τds = Rds Cds

We see that, in this limit, the high-frequency gain is damped. Also, an interesting observation is that, at some frequency ω, an inductive load could be used to cancel the damping and obtain a purely real transfer function at that frequency. This effect is the one that allows us to use the transistor in an oscillator.

Figure 23. Simplified transistor circuit used for analyzing rather general amplifier and oscillator circuits, where the circuit parameter definitions are as in Fig. 22.

Let us now consider an oscillator circuit. The basic idea is illustrated in the one-port diagram of Fig. 24. The transistor's gain, together with feedback to the input loop through the capacitor Cgd, can give the transistor an effective negative input impedance, which can lead to oscillation if the real and imaginary parts of the total impedance (that is, ZT in parallel with the Zi of the transistor plus load) cancel. The idea is much like that illustrated in Fig. 25 for a feedback network. One sees that the output of the feedback network can be expressed as

vo = G(jω)[vi − H(jω)vo]

or, on rearranging terms,

vo/vi = G(jω) / [1 + G(jω)H(jω)]

which clearly will exhibit oscillation (that is, have an output voltage without an applied input voltage) when

H(jω) = −1/G(jω)

Figure 24. Diagram depicting the transistor and its load as a one-port device that, when matched to its termination so that there is no real or imaginary part to the total circuit impedance, will allow for oscillations.

Figure 25. Depiction of a simple feedback network.

What we need to do to see if we can achieve oscillation is to investigate the input impedance of our transistor and load seen as a one-port network. Clearly we can write the input current of Fig. 23 as

i = jωCgs vi + jωCgd (vi − vo)

and then, using the full expression for T to express vo as a function of vi, one finds

Zi = 1 / { jωCgs + jωCgd [1 + (gm − jωCgd) / (jω(Cgd + Cds) + 1/Rds + 1/ZL)] }

which can be somewhat simplified to yield

Zi = 1 / { jωCgs + jωCgd (gm Rds + 1 + jωτds + Rds/ZL) / (1 + jωτds + Rds/ZL) }

We can again invoke a limit in which ωτds ≪ 1 and then write

Zi = 1 / { jωCgs + jωCgd [ZL(1 + gm Rds) + Rds] / (Rds + ZL) }

Perhaps the most interesting thing about this expression is that if

ZL = jωL

and

gm Rds ≫ 1

then clearly

Ri < 0

where we write Zi = Ri + jXi. Whether or not Xi can be made to match any termination is another question, which we will take up in the next paragraph.
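The negative-resistance conclusion can be checked numerically. The sketch below evaluates the ωτds ≪ 1 form of Zi with an inductive load; the small-signal values, frequency, and 1 nH inductance are all assumed for illustration.

```python
import math

# Assumed small-signal values (for illustration only)
gm, Cgs, Cgd = 50e-3, 0.5e-12, 0.05e-12   # S, F, F
Rds = 500.0                               # ohm

def Zi(omega, L):
    """Input impedance in the omega*tau_ds << 1 limit derived above,
    with an inductive load ZL = j*omega*L."""
    ZL = 1j * omega * L
    Y = 1j * omega * Cgs + 1j * omega * Cgd * (
        ZL * (1 + gm * Rds) + Rds) / (Rds + ZL)
    return 1.0 / Y

Z = Zi(2 * math.pi * 10e9, 1e-9)   # 10 GHz, 1 nH load (assumed)
print(Z.real)   # negative: the one-port presents a negative resistance
```

A negative real part means the device can replace the energy a matched termination dissipates, which is exactly the condition exploited in the oscillator discussion that follows.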

As was mentioned earlier, generally the data sheet one obtains with an FET has plots of the frequency dependence of the S parameters rather than values for the equivalent circuit parameters. Oscillator analysis is therefore usually carried out using a model of the circuit such as that depicted in Fig. 26, where the transistor is represented by its measured S matrix. The S matrix is defined as the matrix of reflection and transmission coefficients. That is to say, with reference to the figure, S11 would be the complex ratio of the field reflected from the device divided by the field incident on the device. S21 would be the field transmitted through the device divided by the field incident on the device. S12 would be the field transmitted through the device from the load side divided by the field incident on the device from the load side, and S22 would be the field reflected from the load side of the device divided by the field incident on the device from the load side. For example, if there is only an input from ZT, then

Γi = S11

If there is only an input from ZL, then

Γo = S22


Figure 26. Schematic depiction of an oscillator circuit in which the transistor is represented by its S matrix and calculation is done in terms of reflection coefficients: ΓT looking into the gate termination, Γi looking into the gate-source port of the transistor, Γo looking into its drain-source port, and ΓL looking into the load impedance.

The condition for oscillation in such a system can be expressed in either of the forms

Γi ΓT = 1

Γo ΓL = 1

where the Γ's are defined in the caption of Fig. 26. If both ZT and ZL were passive loads, that is, loads consisting of resistance, inductance, and capacitance, then we would have that

|ΓT| < 1    |ΓL| < 1

and the conditions for unconditional stability (nonoscillation at any frequency) would be that

|Γi| < 1

|Γo| < 1

Clearly, we can express Г: and ro as series of reflections such that

Γi = S11 + S12 ΓL S21 + S12 ΓL S22 ΓL S21 + S12 ΓL S22 ΓL S22 ΓL S21 + ···

Γo = S22 + S21 ΓT S12 + S21 ΓT S11 ΓT S12 + S21 ΓT S11 ΓT S11 ΓT S12 + ···

Using the fact that

1/(1 − x) = 1 + x + x² + x³ + ···

we can reexpress the Γ's as

Γi = S11 + S12 S21 ΓL / (1 − S22 ΓL)

Γo = S22 + S12 S21 ΓT / (1 − S11 ΓT)

If we denote the determinant of the S matrix by

Δ = S11 S22 − S12 S21

and define a transistor parameter κ by

κ = (1 − |S11|² − |S22|² + |Δ|²) / (2|S12 S21|)

then some tedious algebra leads to the result that stability requires

κ > 1

|Δ| < 1
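These stability conditions are simple to apply to measured S parameters. The sketch below computes κ, |Δ|, and the loaded input reflection coefficient; the single-frequency S matrix is an assumed, loosely FET-like example, not measured data.

```python
import cmath

def stability(S11, S12, S21, S22):
    """Stability factor kappa and |Delta| from the formulas above."""
    Delta = S11 * S22 - S12 * S21
    kappa = (1 - abs(S11) ** 2 - abs(S22) ** 2 + abs(Delta) ** 2) / (
        2 * abs(S12 * S21))
    return kappa, abs(Delta)

def gamma_i(S11, S12, S21, S22, gL):
    """Input reflection coefficient of the loaded two-port (closed form above)."""
    return S11 + S12 * S21 * gL / (1 - S22 * gL)

# Assumed, loosely FET-like S matrix at a single frequency
S11 = 0.7 * cmath.exp(-1j * 1.0)
S12 = 0.05 * cmath.exp(1j * 0.5)
S21 = 3.0 * cmath.exp(1j * 2.0)
S22 = 0.6 * cmath.exp(-1j * 0.7)

kappa, mag_delta = stability(S11, S12, S21, S22)
# Unconditional stability requires kappa > 1 and |Delta| < 1
print(kappa, mag_delta)
# With a passive load, |gamma_i| > 1 would signal a potential oscillation
print(abs(gamma_i(S11, S12, S21, S22, 0.4 * cmath.exp(1j * 1.2))))
```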

At frequencies where the above are not satisfied, oscillation can occur if the load and termination impedances, ZL and ZT respectively, are chosen properly. Oscillator design is discussed in various texts (11-14). Generally, though, oscillator design involves finding instability points and not predicting the dynamics once oscillation is achieved. Here we are discussing only oscillators that are self-damping. External circuits can be used to damp the behavior of an oscillator, but here we are discussing only those that damp themselves independent of an external circuit. The next paragraph will discuss these dynamics.


If a transistor circuit is designed to be unstable, then as soon as the dc bias is raised to a level where the circuit achieves the set of unstable values, the circuit's output within the range of unstable frequencies rises rapidly and dramatically. The values that we took in the equivalent ac circuit, though, were small-signal parameters. As the circuit output increases, the signal will eventually no longer be small. The major thing that changes in this limit is that the input resistance to the transistor saturates, so that (14)

Ri = −Ri0 + m v²

where the plus sign on the nonlinearity is necessary, for if it were negative the transistor would burn up or else burn up the power supply. Generally, m has to be determined empirically, as nonlinear circuit models have parameters that vary significantly from device to device. For definiteness, let us assume that ZT is resistive and ZL is purely inductive. At the oscillation frequency, the internal capacitance of the transistor then should cancel the load inductance, but to consider dynamics we need to put in both C and L, as dynamics take place in the time domain. The dynamic circuit to consider is then as depicted in Fig. 27. The loop equation for this circuit in the time domain is

L (di/dt) + (Ri + RT) i + (1/C) ∫ i dt = 0

Recalling the equivalent circuit of Fig. 23 and recalling that

Cgs ≫ Cgd

we see that, approximately at any rate, we should have a relation between v and i of the form

i = Cgs (dv/dt)

Using this i-v relation in the above, we find that

d²v/dt² − [(Ri0 − RT)/L] [1 − m v²/(Ri0 − RT)] dv/dt + v/(LC) = 0

which we can rewrite in terms of other parameters, with ω0² = 1/(LC), ε = (Ri0 − RT)/L, and the voltage amplitude normalized to √((Ri0 − RT)/m), as

d²v/dt² − ε(1 − v²) dv/dt + ω0² v = 0

which is the form of Van der Pol's equation (15,16), which describes the behavior of essentially any oscillator.
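The self-starting, self-limiting behavior described by Van der Pol's equation is easy to see numerically. The sketch below integrates the normalized equation with forward Euler; ε, ω0, and the step size are assumed values chosen only for illustration.

```python
import math

def van_der_pol(eps=0.5, w0=1.0, v0=0.01, dt=1e-3, steps=200_000):
    """Forward-Euler integration of the normalized oscillator equation
    d2v/dt2 - eps*(1 - v**2)*dv/dt + w0**2 * v = 0."""
    v, u = v0, 0.0          # u = dv/dt
    for _ in range(steps):
        # simultaneous update so both derivatives use the old state
        v, u = v + dt * u, u + dt * (eps * (1 - v * v) * u - w0 * w0 * v)
    return v, u

# Starting from a tiny perturbation, the oscillation self-starts and
# settles near the classic limit cycle of amplitude about 2
v, u = van_der_pol()
print(math.hypot(v, u))   # order 2, not 0.01: the oscillation has grown
```

The nonlinear damping term does exactly what the saturating input resistance does in the circuit: it pumps energy in at small amplitude and removes it at large amplitude, so the trajectory converges to a limit cycle regardless of the starting perturbation.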

Now that we have discussed planar circuits and dynamical elements that we can put into the theory, the time has arrived to discuss planar antenna structures. Perhaps the best way to gain understanding of the operation of a patch antenna is by considering a cavity resonator model of one. A good review of microstrip antennas is given in Carver and Mink (17) and is reprinted in Pozar and Schaubert (18). Let us consider a patch antenna and coordinate system as is illustrated in Fig. 28. The basic idea behind the cavity model is to consider the region between the patch and ground plane as a resonator. To do this, we need to apply some moderately crude approximate boundary conditions. We will assume that there is only a z-directed electric field underneath the patch and that this field achieves maxima on the edges (open-circuit boundary condition). The magnetic field H will be assumed to have both x and y components, and its tangential components on the edges will be zero. (This boundary condition is the one consistent with the open-circuit condition on the electric field and becomes exact as the thickness of the layer approaches zero, as there can be no component of current normal to the edge at the edge, and it is the normal component of the current that generates the transverse H field.) The electric field satisfying the open-circuit condition can be seen to be given by the modes



ψmn = (χmn/√(ab)) cos(kn x) cos(km y)

where

kn = nπ/a    km = mπ/b

and

χmn = 1,  m = 0 and n = 0
χmn = √2,  m = 0 or n = 0 (but not both)
χmn = 2,  m ≠ 0 and n ≠ 0

Figure 27. Circuit used to determine the dynamical behavior of a transistor oscillator.






The H field corresponding to the E field then will consist of modes

Hmn ∝ x̂ km cos(kn x) sin(km y) − ŷ kn sin(kn x) cos(km y)

As can be gathered from Fig. 13, the primary radiation mode is the mode with m = 1 and n = 0.

The basic operation is described by the fact that the boundary conditions are not quite exact. Recall from the earlier argument that accompanied Fig. 13 that the z-directed field gives rise to a fringe field at the edges y = 0 and y = b such that there are strips of y-directed electric field around y < 0 and y > b. Because the boundary conditions are not quite correct on H, there will also be strips of x-directed magnetic fields in these regions. As the Poynting vector is given by E × H, we note that these strips will give rise to a z-directed Poynting vector. Similar arguments can be applied to the edges at x = 0 and x = a. However, the x-directed field at x < 0 has a change of sign at the center of the edge and is pointwise oppositely directed to the x-directed electric field at x > a. These fields, therefore, only give rise to very weak radiation, as there is significant cancellation. Analysis of the slot antenna requires only that we interchange the E and H fields.

The picture of the patch antenna as two radiating strips allows us to represent it with a transmission line as well as a circuit model. The original idea is due to Munson (19). The transmission line model is depicted in Fig. 29. The idea is that one feeds onto an edge with an admittance (inverse impedance) G1 + jB1 and then propagates to a second edge with admittance G2 + jB2. When the circuit is resonant, the length of transmission line will simply complex-conjugate the given load [see Eq. (4)], leading to the circuit representation of Fig. 29(b), in which G1 + jB1 appears in parallel with G2 − jB2. The slot admittance used by Munson (19) was just that derived for radiation from a slit in a waveguide (20),

G1 + jB1 = (πa / λ0 Z0) [1 + j(1 − 0.636 ln k0 t)]

where Z0 is the impedance of free space (√(μ0/ε0) = 377 Ω), λ0 is the free-space wavelength, and k0 is the free-space propagation vector, and where a and t are defined as in Fig. 28. When the edges are identical (as for a rectangular patch), one can write

G2 + jB2 = G1 + jB1

to obtain the input impedance at resonance in the form

Z = 1/Yi = 1/(2G1)

Figure 29. (a) A transmission line model for a patch antenna, and (b) its circuit equivalent at resonance.

We have now considered all of the pieces, and therefore it is time to consider a couple of actual active antenna designs. Figure 30 depicts one of the early designs from Kai Chang's group at Texas A&M (21). Essentially, the patch here is being used precisely as the feedback element of an amplifier circuit (as was described in connection with Fig. 9). A more compact design is that of Fig. 14 (7). There, the transistor is actually mounted directly into the patch antenna. The slit between the gate and drain yields a capacitive feedback element such that the effective ac circuit equivalent of this antenna may appear as depicted in Fig. 31. The capacitor-inductor pair attached to the gate lead forms what is often referred to as a tank circuit, which (if the load were purely real) defines a natural frequency through the relation

ω0 = 1/√(LC)

Figure 31. Ac circuit equivalent of the active antenna of Fig. 14.

As was discussed at some length in the last section of this article, a major argument for the use of active antennas is that they are sufficiently compact that they can be arrayed together. Arraying is an important method for free-space power combining, which is necessary because as the frequency increases, the power-handling capability of active devices decreases. However, element size also decreases with increasing frequency, so the use of multiple coherently combined elements can allow one to fix the total array size and power more or less independently of frequency, even though the number of active elements to combine increases. In the next paragraph, we shall consider some of the basics of arrays.

Consider a linear array such as is depicted in Fig. 32. Now let us say that the elements are nominally identical apart from phases that are set by the array operator at each of the elements. The complex electric field far from the nth element due to only the nth element is then given by

En = Ee e^(jφn)

where Ee is the electric field of a single element. To find out what is radiated in the direction θ due to the whole array, we need to sum the fields from all of the radiators, giving each radiator the proper phase delay. Each element will get a progressive phase shift kd sin θ due to its position (see Fig. 32), where k is the free-space propagation factor, given by

k = 2π/λ

where λ is the free-space wavelength. With this, we can write for the total field radiated into the direction θ due to all N elements

Et(θ) = Ee Σ (n = 0 to N − 1) e^(−jnkd sin θ) e^(jφn)

The sum is generally referred to as the array factor. The intensity, then, in the θ direction is

It(θ) = Ie |Σ (n = 0 to N − 1) e^(−jnkd sin θ) e^(jφn)|²

One notes immediately that, if one sets the phases φn to

φn = nkd sin θ

then the intensity in the θ direction is N² times the intensity due to a single element. This is the effect of coherent addition. One gets a power increase of N plus a directivity increase of N. To illustrate, let us consider the broadside case where we take all the φn to be zero. In this case, we can write the array factor in the form

AF = Σ (n = 0 to N − 1) e^(−jnkd sin θ) = [1 − e^(−jNkd sin θ)] / [1 − e^(−jkd sin θ)]

which in turn can be written as

|AF|² = sin²(Nkd sin θ / 2) / sin²(kd sin θ / 2)    (6)

which is plotted in Fig. 33. Several interesting things can be noted from the expression and plots. For kd less than π, there is only one central lobe in the pattern. Also, the pattern becomes ever more directed with increasing N. This is called the directivity effect. If the array has a power-combining efficiency of 100% (which we have built into our equations by ignoring actual couplings, etc.), then the total power radiated can only be N times that of a single element. However, it is radiated into a lobe that is only 1/N times as wide as that of a single element.

Figure 32. Depiction of a linear array of N identical radiating elements.

Figure 33. Plots of the array factor of Eq. (6), where (a) N = 1, (b) N = 5 and kd = π/2, π, and 2π, and (c) N = 10 and kd = π.

If we are to realize array gain, however, we need to be certain that the array elements are identical in frequency and have fixed phase relations in time. This can only take place if the elements are locked together. The idea of locking is probably best understood in relation to the Van der Pol equation (16), with an injected term, such that

d²v/dt² − [(Ri0 − RT)/L] [1 − m v²/(Ri0 − RT)] dv/dt + ω0² v = A cos ω1 t

where Ri0 is the input resistance of the transistor circuit as seen looking into the gate-source port and RT is the external termination resistor placed between the gate and common source. In the absence of the locking term, one can see that oscillation will take place with a primary frequency (and some harmonics) at angular frequency ω0 with amplitude √((Ri0 − RT)/m) such that

v(t) ≈ √((Ri0 − RT)/m) cos ω0 t

Without being too quantitative, one can say that, if ω1 is close enough to ω0 and A is large enough, the oscillation will lock to ω1 in frequency and phase. If ω1 is not quite close enough and A not quite big enough (how big A needs to be is a function of how close ω1 is), then the oscillation frequency ω0 will be shifted so that

v(t) = A0 cos[(ω0 + Δω)t + φ]

where Δω and φ are functions of ω1 and A. These ideas are discussed in a number of places, including Refs. 1, 15, 16, 22, 23, and 24. In order for our array to operate in a coherent mode, the elements must be truly locked. This locking can occur through mutual coupling or through the injection of an external signal to each of the elements.
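The array factor of Eq. (6) and the N² coherent-addition result can be verified directly by summing the element phasors:

```python
import cmath
import math

def intensity(N, kd, theta, phases=None):
    """|array factor|^2, normalized to a single element's intensity."""
    phases = phases or [0.0] * N
    af = sum(cmath.exp(1j * (phases[n] - n * kd * math.sin(theta)))
             for n in range(N))
    return abs(af) ** 2

N, kd = 10, math.pi   # half-wavelength spacing, broadside phasing
print(intensity(N, kd, 0.0))            # -> 100.0, the N**2 coherent gain
print(intensity(N, kd, math.pi / 6))    # far weaker away from broadside
```

At broadside every term adds in phase, giving exactly N² = 100; away from broadside the phasors walk around the unit circle and largely cancel, which is the directivity effect described above.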

Ideally, we would like to be able to steer the locked beam. A number of techniques for doing this are presently under investigation. Much of the thinking stems from the work of Stephan (25-28) and Vaughan and Compton (28a). One of the ideas brought out in these works was that, if the array were mutually locked and one were to try to inject one of the elements with a given phase, all of the elements would lock to that phase. However, if one were to inject two elements at the locked frequency but with different phases, then the other elements would have to adjust themselves to these phases. In particular, if one had a locked linear array and one were to inject the two end elements with phases differing by φ, then the other elements would share the phase shift equally, so that there would be a linear phase taper of magnitude φ uniformly distributed along the array.

A different technique was developed by York (29,30), based on work he began when working with Compton (31,32). In this technique, instead of injecting the end elements with the locked frequency and different phases, one injects with wrong frequencies. If the amplitudes of these injected frequencies are set to values that are not strong enough to lock the elements to the wrong frequency, then the elements will retain their locked frequencies but will undergo phase shifts from the injected signal. If the elements of the array are locked due to mutual feedback, trying to inject either end of the array with wrong frequencies will then tend to give the elements a linear taper, that is, one in which the phase varies linearly with distance down the array, with much the same result as in the technique of Stephan. This will just linearly steer the main lobe of the array off broadside and to a new direction. Such linear scanning is what is needed for many commercial applications, such as tracking or transmitting with minimum power to a given location.

Another technique, which again uses locking-type ideas, is that of changing the biases on each of the array's active devices (33-35). Changing the bias of a transistor will alter the ω0 at which the active antenna wants to oscillate. For an element locked to another frequency, then, changing the bias will just change the phase. In this way one can individually set the phase on each element. There are still a couple of problems with this approach (as with all the others so far, which is why this area is still one of active research). One is that addressing each bias line represents a great increase in the complexity that we were trying to minimize by using an active antenna. The other is that the maximum phase shift obtainable with this technique is ±π from one end of the array to the other (a limitation that is shared by the phase-shifts-at-the-ends technique). In many phased-array applications, of which electronic warfare is a typical one, one wants to have true time delay, which means that one would like to have as much as a π phase shift between adjacent elements. I do not think that the frequency-shifting technique can achieve this either. Work, however, continues in this exciting area.
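The linear phase taper discussed above can be illustrated numerically: applying φn = nkd sin θ0 re-points the main lobe to θ0. The steering angle below is an assumed example value.

```python
import cmath
import math

def steered_intensity(N, kd, theta, dphi):
    """|AF|^2 when element n carries the linear phase taper n*dphi."""
    af = sum(cmath.exp(1j * n * (dphi - kd * math.sin(theta)))
             for n in range(N))
    return abs(af) ** 2

N, kd = 8, math.pi
theta0 = math.radians(20)       # desired beam direction (assumed)
dphi = kd * math.sin(theta0)    # taper that re-points the main lobe

print(steered_intensity(N, kd, theta0, dphi))   # -> 64.0, the full N**2 gain
print(steered_intensity(N, kd, 0.0, dphi))      # broadside is no longer the peak
```

This is exactly the end-element injection result: whatever mechanism imposes the uniform element-to-element phase step, the lobe moves to the angle where the taper cancels the positional phase shift.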

Flight Dynamics Simulation

The main feature of the flight dynamics simulation is that the aircraft model representing the handling characteristics of the airframe, engines, and control systems is encoded in the computer. The flight dynamics simulation is based on a rigorous quantitative mathematical model expressed in terms of continuous differential equations. Research on interfacing such a quantitative simulation of the aircraft in flight with a qualitative simulation, in an attempt to support decision making, has been presented in (23). The system extracts quantitative data from a mathematical model of aircraft flight dynamics and uses fuzzy inductive reasoning on the qualitative model to recognize flight accidents.

Fuzzy Reasoning (or Fuzzy Logic) is based on the theory of Fuzzy Sets pioneered by Zadeh (9). It extends conventional logic by introducing the concept of partial truth: truth values between "completely true" and "completely false." Fuzzy Reasoning attempts to mirror the imprecision of the real world by providing a model for human reasoning in which even truth is not absolute but rather a matter of degree. Fuzzy Logic has emerged as a key methodology in the conception, design, and deployment of intelligent systems.