
RECTANGULAR APERTURES

There are many kinds of antennas for which the radiated electromagnetic fields emanate from a physical aperture. This general class of antennas provides a very convenient basis for analysis and permits a number of well-established mathematical techniques to be applied that provide expressions for the distant radiation fields.

Horns or parabolic reflectors, in particular, can be analyzed as aperture antennas. Incident fields are replaced by equivalent electrical and magnetic currents. With use of vector potentials, the far fields are found as a superposition of each source. Generally one can assume that the incident field is a propagating free-space wave, the electrical and magnetic fields of which are proportional to each other. This will give the Huygens source approximation and allow us to use integrals of the electric field in the aperture. Each point in the aperture is considered a source of radiation.

The first step involved in the analysis of aperture antennas is to calculate the electromagnetic fields over the aperture due to the sources on the rearward side of the infinite plane and to use these field distributions as the basis for the prediction of the distant fields in the forward half-space. The electromagnetic fields in the aperture plane cannot be determined exactly, but approximate distributions can be found by many different methods, which depend upon the antenna. One can find the far-field radiation pattern for various distributions by a Fourier-transform relation.

For instance, consider a line source of length Lw using the coordinate system illustrated in Fig. 7. Assume that the source is positioned in a ground plane of infinite extent. This model is simple, and yet the analysis gives results that illustrate the main features of the most practical of the two-dimensional apertures. The line-source distribution does have a practical realization, namely in a long one-dimensional array that has sufficient elements to enable it to be approximated by a continuous distribution. The applicable transform pair is (7,17)

E(sin θ) = ∫_{−∞}^{+∞} E(x) e^{jkx sin θ} dx   (6)

E(x) = ∫_{−∞}^{+∞} E(sin θ) e^{−jkx sin θ} d(sin θ)   (7)

where k = 2π/λ. For real values of θ, −1 ≤ sin θ ≤ 1, the field distribution represents radiated power, while outside this region it represents reactive or stored power (18). The field distribution E(sin θ), or angular spectrum, refers to an angular distribution of plane waves. The angular spectrum for a finite aperture is the same as the far-field pattern, E(θ). Thus, for a finite aperture the Fourier integral representation of Eq. (6) may be written (8):

E(θ) = ∫_{−Lw/2}^{+Lw/2} E(x) e^{jkx sin θ} dx   (8)

Figure 7. Coordinate system used to analyze a linear aperture of length Lw.

Note that Eq. (8) is a relative relation.

For example, consider a uniform distribution for which

E(x) = 1/Lw   (9)

The field distribution pattern can be found by incorporating this into Eq. (8):

E(θ) = (1/Lw) ∫_{−Lw/2}^{+Lw/2} e^{j(2πx/λ) sin θ} dx   (10)

We complete the straightforward integration to get the final result:

E(θ) = sin[(πLw/λ) sin θ] / [(πLw/λ) sin θ]   (11)

This sin(x)/x distribution is very important in antenna theory and is the basis for many antenna designs. It has a first side-lobe level of −13.2 dB.

Another popular continuous aperture distribution is the cosine raised to power n,

E(x) = cos^n(πx/Lw)   (12)

where −Lw/2 ≤ x ≤ Lw/2. This is shown in Fig. 8 for n = 0, 1, 2, and 3. To make a relative comparison of the various distributions, we must first normalize to the transmitted power of the uniform case. To do this, we multiply the pattern function
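As a quick numerical check of this result, Eq. (8) can be integrated numerically for the uniform distribution and compared against the closed-form sin(x)/x pattern. The short Python sketch below is our own illustration (function names are ours, not from the article), using the article's later example of a 1 m line source at a 3 cm wavelength:

```python
import numpy as np

# Line-source parameters (the Fig. 9 example: Lw = 1 m, wavelength 3 cm)
Lw = 1.0
lam = 0.03
k = 2 * np.pi / lam

def trapezoid(y, x):
    """Trapezoidal rule, kept local so the sketch is self-contained."""
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2

def pattern_numeric(theta, n_pts=4001):
    """Eq. (8) for the uniform distribution E(x) = 1/Lw, integrated numerically."""
    x = np.linspace(-Lw / 2, Lw / 2, n_pts)
    integrand = (1.0 / Lw) * np.exp(1j * k * x * np.sin(theta))
    return trapezoid(integrand, x)

def pattern_closed_form(theta):
    """Eq. (11): sin(u)/u with u = (pi * Lw / lam) * sin(theta)."""
    u = np.pi * Lw / lam * np.sin(theta)
    return np.sinc(u / np.pi)  # np.sinc(t) = sin(pi*t)/(pi*t)

theta = np.deg2rad(1.2)
# the two evaluations agree to better than 1e-6
diff = abs(pattern_numeric(theta) - pattern_closed_form(theta))
```

The agreement confirms that the numerical quadrature of Eq. (8) reproduces the analytical sinc pattern, which is useful before trusting the same quadrature on distributions that have no closed form.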

by the normalization constant

Cp = 1 / [∫_{−Lw/2}^{+Lw/2} E²(x) dx]   (13)

Many distributions actually obtained in practice can be approximated by one of the simpler forms or by a combination of simple forms. For example, a common linear aperture distribution is the cosine on a pedestal p:

E(x) = p + (1 − p) cos(πx/Lw)   (14)

for −Lw/2 ≤ x ≤ Lw/2, where 0 ≤ p ≤ 1. This is a combination of a uniform plus a cosine type distribution. The triangular distribution is also popular:

E(x) = 1 + x/(Lw/2)   for −Lw/2 ≤ x ≤ 0   (15)

and

E(x) = 1 − x/(Lw/2)   for 0 ≤ x ≤ Lw/2   (16)

To demonstrate the principles, we computed the antenna radiation pattern of a 1 meter long line-source antenna for cosine⁰ (uniform), cosine¹, and cosine² distributions. The operating wavelength is 3 cm. The resulting patterns are shown in Fig. 9. These data indicate that the more heavily tapered illuminations result in decreased side-lobe levels, but at a penalty in main-beam peak gain.

Figure 9. Radiation patterns of line sources for three different aperture distributions (Lw = 1 m, λ = 3 cm).
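The taper versus side-lobe trade-off shown in Fig. 9 is easy to reproduce numerically. The sketch below is our own illustration (function names are assumptions, not from the article): it integrates Eq. (8) for the cos^n distributions and locates the first side lobe, which for the uniform case should land near −13.2 dB, with heavier tapers pushing it lower:

```python
import numpy as np

Lw, lam = 1.0, 0.03                      # the 1 m line source at 3 cm wavelength
k = 2 * np.pi / lam
x = np.linspace(-Lw / 2, Lw / 2, 2001)
theta = np.deg2rad(np.linspace(0.0, 10.0, 4000))

def power_pattern_db(E_x):
    """Evaluate Eq. (8) over all angles; return the normalized power pattern in dB."""
    integrand = E_x * np.exp(1j * k * np.outer(np.sin(theta), x))
    vals = np.sum((integrand[:, 1:] + integrand[:, :-1]) * np.diff(x), axis=1) / 2
    p = np.abs(vals) ** 2
    return 10 * np.log10(p / p.max())

def first_sidelobe_db(pat):
    """Walk down from the main beam to the first null, then up to the side-lobe peak."""
    j = 1
    while pat[j] < pat[j - 1]:
        j += 1                            # descend the main lobe to the first null
    while pat[j + 1] > pat[j]:
        j += 1                            # climb to the first side-lobe maximum
    return pat[j]

levels = [first_sidelobe_db(power_pattern_db(np.cos(np.pi * x / Lw) ** n))
          for n in (0, 1, 2)]
# expected roughly [-13.2, -23, -31.5] dB for n = 0, 1, 2
```

The computed levels track the tabulated values for these classic tapers, illustrating numerically the statement that heavier illumination tapers buy lower side lobes.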

In practice, the rectangular aperture is probably the most common microwave antenna. Because of its configuration, the rectangular coordinate system is the most convenient system to express the fields at the aperture. The most common and convenient coordinate system used to analyze a rectangular aperture is shown in Fig. 10. The aperture lies in the x-y plane and has a defined tangential aperture distribution E(x, y). In keeping with the equivalence principle we shall assume


Figure 10. Coordinate system used to analyze a rectangular aperture of dimensions Aw, Bw.

the x-y plane is a closed surface that extends from −∞ to +∞ in the x-y plane. Outside the rectangular aperture boundaries we shall assume that the field distribution is zero for all points on this infinite surface. The task is to find the fields radiated by it, the pattern beamwidths, the side-lobe levels of the pattern, and the directivity.

Note that a horn of aperture size Aw by Bw, with Aw/λ > 1 and Bw/λ < 1, can be analyzed as a continuous line source. If these conditions are not met, the pattern must be obtained by the integral (19):

E(θ, φ) = ∫_{−Bw/2}^{+Bw/2} ∫_{−Aw/2}^{+Aw/2} E(x, y) e^{j(k_x x + k_y y)} dx dy   (17)

where

k_x = k sin θ cos φ
k_y = k sin θ sin φ

are the x and y components of the propagation vector k (20).

For many types of antennas, such as the rectangular horn, the x and y functions are separable and may be expressed in the form

E(x, y) = E(x) E(y)

For nonseparable distributions, the integration of Eq. (17) is best carried out on a computer using numerical methods. Figure 11 is a listing of a simple program, written in Basic, that can be run on any PC.

In running the program, φ = 0 corresponds to the principal-plane pattern in the x-z plane, while φ = 90° gives the principal-plane pattern in the y-z plane. For example, consider an aperture with Aw = 75 cm, Bw = 125 cm, and λ = 3 cm. Assume a cosine distribution in each plane. The resulting principal-plane patterns in the x plane and y plane, and the pattern in the intercardinal plane (φ = 45°), are shown in Fig. 12.
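The Basic listing itself is not reproduced here, but the same computation is easy to sketch in Python (a stand-in of our own, with assumed function names). For a separable cosine distribution, the double integral of Eq. (17) factors into two one-dimensional integrals, and the boresight value has a simple closed form that can serve as a sanity check:

```python
import numpy as np

# Worked example from the text: Aw = 75 cm, Bw = 125 cm, wavelength 3 cm,
# cosine distribution in each plane.
Aw, Bw, lam = 0.75, 1.25, 0.03
k = 2 * np.pi / lam

def trapezoid(y, x):
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2

def pattern(theta, phi, n_pts=2001):
    """Eq. (17) for E(x, y) = cos(pi x/Aw) cos(pi y/Bw); since the distribution
    is separable, the double integral is the product of two 1-D integrals."""
    kx = k * np.sin(theta) * np.cos(phi)
    ky = k * np.sin(theta) * np.sin(phi)
    x = np.linspace(-Aw / 2, Aw / 2, n_pts)
    y = np.linspace(-Bw / 2, Bw / 2, n_pts)
    ix = trapezoid(np.cos(np.pi * x / Aw) * np.exp(1j * kx * x), x)
    iy = trapezoid(np.cos(np.pi * y / Bw) * np.exp(1j * ky * y), y)
    return ix * iy

# At boresight (theta = 0) each cosine integral reduces to 2*Aw/pi or 2*Bw/pi,
# so the pattern peak is (2*Aw/pi)*(2*Bw/pi).
peak = pattern(0.0, 0.0)
```

Sweeping theta at φ = 0 and φ = 90° reproduces the two principal-plane cuts described in the text; a nonseparable E(x, y) would instead require the full two-dimensional quadrature.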

We applied the computer code to compute the secondary pattern characteristics produced by uniform, cosine raised to power n, cosine on a pedestal p, and triangular aperture distributions. The results shown in Table 1 compare the gain, beamwidth, and first side-lobe levels. All gain levels are compared with the uniform illumination case.

A uniform line-source or rectangular aperture distribution produces the highest directivity. However, the first side lobe is only about −13.2 dB down. Thus, aperture distributions used in practice must be a trade-off, or compromise, between the desired directivity (or gain) and side-lobe level.

Introduction

The recovery of energy that would otherwise be wasted in the ambient, as a by-product of artificial or natural processes, to power wireless electronics is paving the way for a huge number of applications. One of the main target technologies that matches the levels of harvestable power, typically a few hundred microwatts, is wireless sensor networks (WSNs) [1]. This technology consists of a grid of spatially distributed wireless nodes that sense and communicate information, such as acceleration, temperature, pressure, air toxicity, biological parameters, magnetic field and light intensity, among each other and up to the end user through a fixed server. In the coming years, WSNs will be massively employed in a wide range of applications such as structural monitoring, industrial sensing, remote healthcare, military equipment, surveillance, logistic tracking and automotive monitoring. In fact, harvesting energy directly from the ambient not only represents a realistic means to supplement or replace batteries, but is the sole way to enable many current and future wireless applications that will all be integrated in the so-called "internet of things" [2].

WSNs already have the characteristics of ubiquity, self-organization and self-healing, but they will not be widely deployable unless they are also self-powering. As a matter of fact, it is very expensive and impractical to change batteries in most of the anticipated applications. For long-term operation in inaccessible or harsh locations, energy harvesting is a key solution. For example, long-term monitoring and control of the environment or of the structural health of buildings and bridges would require many thousands of integrated sensors that would be impossible to replace or maintain. The possibility for chronically ill patients to be continuously monitored without changing batteries would represent a significant improvement in their quality of life.

Among the various forms of renewable energy present in the environment, such as solar, radio frequency (RF), temperature differences and biochemical energy, kinetic energy in the form of mechanical vibrations is deemed the most attractive in the low-power electronics domain for its power density, versatility and abundance [3]. This type of energy source is found in buildings, vibrating machinery, transportation, ocean waves and human beings, and it can be converted to power mobile devices.

The power consumption of wireless sensors has been greatly reduced in recent years thanks to ultra-low-power electronics [4]. Typical power needs of mobile devices range from a few microwatts for wristwatches, RFID tags, and MEMS sensors and actuators up to hundreds of milliwatts for MP3 players, mobile phones and GPS applications. These devices are usually in a sleep state for 99.9% of their operation time, waking up for a few milliseconds only to communicate data. Consequently, their average power consumption has been reduced below 10 μW in order to match the power density capability of current generators (100-300 μW per cubic centimeter). For comparison, a lithium battery can provide 30 μW/cm³ for 1 year or 30 mW/cm³ for just 10 hours, whereas a vibration-driven generator could last for at least 50 years at the same power level [5]. Along with a virtually infinite operational life, many other benefits come from motion-driven energy harvesting: no chemical disposal, zero wiring cost, no maintenance, no charging points, capability for deployment in dangerous and inaccessible sites, low retrofitting cost, inherent safety and high reliability.
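The duty-cycle arithmetic behind that sub-10 μW figure is worth making explicit. In the sketch below, the active and sleep power numbers are representative assumptions of ours, not values taken from the cited works:

```python
# Average power of a heavily duty-cycled sensor node.
p_active = 5e-3   # W while awake and transmitting (assumed value)
p_sleep = 2e-6    # W in sleep mode (assumed value)
duty = 0.001      # active 0.1% of the time (from the text's 99.9% sleep figure)

p_avg = duty * p_active + (1 - duty) * p_sleep
print(f"average power: {p_avg * 1e6:.2f} uW")  # about 7 uW, under the 10 uW budget

# A vibration harvester delivering 100-300 uW per cubic centimeter would
# therefore cover this budget with well under 1 cm^3 of transducer volume.
```

Even a milliwatt-class radio thus averages to single-digit microwatts once the 0.1% duty cycle is accounted for, which is what makes vibration harvesting a plausible power source.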

A typical integrated vibration-powered wireless sensor includes an embedded vibration energy harvester (VEH), a multiple-sensor module, a microcontroller and a transceiver (Figure 1). Because vibrations vary in intensity and frequency, the device also contains an AC/DC voltage-regulation circuit, which in turn can recharge a temporary storage system, typically a supercapacitor or a thin-film lithium battery. Capacitors are usually preferred as temporary storage for their longer lifetime, higher power density and fast recharging. In some applications, however, a storage system is not even necessary. The vibration energy harvester module is often tailored to the specific application and to the vibration spectrum of the source: harmonic excitation, random noise or pulsed movement.

SUPERCONDUCTORS: PROCESSING OF HIGH-TC BULK, THIN FILM, AND WIRES

Superconductors are a class of materials possessing two unique properties: the complete loss of electrical resistivity below a transition temperature called the critical temperature (Tc) and the expulsion of magnetic flux from the bulk of a sample (diamagnetism) in the superconducting state. The latter property is also known as the Meissner-Ochsenfeld effect or, more commonly, the Meissner effect (1). At temperatures above Tc, these materials possess electrical resistivity like ordinary conductors, although their normal-state properties are unusual in many respects. The abrupt change from normal conductivity to superconductivity occurs at a thermodynamic phase transition determined not only by the temperature but also by the magnetic field at the surface of the material and by the current carried by the material. Several metals and metallic alloys exhibit superconductivity at temperatures below 22 K, and will be henceforth called low-temperature superconductors (LTS). In 1950, superconductivity was explained as a quantum mechanical phenomenon by the London phenomenological theory (2). Later, the two-fluid phenomenological model explained the electronic structure of a superconductor as a mixture of superconducting and normal electrons, with the proportion of superconducting electrons ranging from zero at the onset of superconductivity to 100% at 0 K (3). In 1957, the Bardeen, Cooper, and Schrieffer (BCS) theory explained that superconductivity was the result of the formation of electron pairs of opposite spins (known as Cooper pairs), primarily owing to electron-phonon coupling (4). The BCS theory proved to be the most complete theory for explaining the superconducting state and the normal state of LTS materials.
A major development in superconductors was the discovery by Josephson in 1963 that Cooper pairs show macroscopic phase coherence, and that such pairs can tunnel through a thin insulating layer sandwiched between two superconducting layers [the superconductor-insulator-superconductor (SIS) junction known as the Josephson junction] (5). This effect, called the Josephson effect (5), caused a flurry of activity in the fields of high-speed computer logic and memory circuits in the 1960s and 1970s, since it can be used to make high-speed, low-power switching devices. Problems were encountered in the mass fabrication of Josephson junctions for complex systems such as digital computers. Although the applications of LTS for electrical applications and electronics were demonstrated, the cost of cooling was too high for the commercial development of LTS.

The era of high-temperature superconductors (HTS) began in 1986 when two IBM Zurich researchers, K. A. Muller and J. G. Bednorz, reported the occurrence of superconductivity in a lanthanum barium copper oxide (LaBaCuO) at 30 K (6). Soon after, M. K. Wu, P. W. Chu, and their collaborators at the University of Alabama and the University of Houston (7), respectively, announced the discovery of 90 K superconductivity. Since these two historic discoveries, there has been substantial progress in HTS technology. Several new families of cuprates, including BiSrCaCuO (8), TlCaBaCuO (9), and HgCaBaCuO (10), have been found to be superconducting above 90 K. These discoveries make feasible electrical and electronics applications at temperatures above the boiling point of liquid nitrogen (77 K). Cuprate superconductors with a Tc value higher than 30 K have been classified as high-temperature superconductors. The obvious advantage of using liquid nitrogen rather than liquid helium for cooling is its higher heat of vaporization, which not only simplifies the design of cryostats but also reduces the cost of cooling. Furthermore, liquid nitrogen (at $0.25/liter) is more than an order of magnitude cheaper than liquid helium (at $5/liter). Progress made in cryocoolers has made feasible HTS applications in electrical wires,

[Fig. 1 flowchart: HTS materials are developed in three forms, namely bulk (structure, electrical and magnetic properties; relationship between processing and critical superconducting properties), thin films (epitaxial growth on single-crystal substrates; large-area depositions, epitaxial films on useful substrates with buffer layers), and wires/tapes (processing of silver-sheathed wires, coated conductors on metallic substrates; long-length wires with high current density at high magnetic fields). These lead to demonstrations of applications: high-current leads and power devices (transformers, motors, and large magnets) for wires; high-Q microwave components, SQUIDs, signal processing, digital circuits, and sensors for thin films; and microwave cavities, magnetic shielding, and frictionless bearings for bulk.]

Fig. 1. A generalized road map for HTS technology. The YBCO 123 compound has been the most studied material among the HTSs. Most applications have been demonstrated with the YBCO superconductor. Most applications in HTS wires have been demonstrated using BSCCO.

magnets, and electronics. The excitement and challenges posed by these HTS materials have touched multiple disciplines, such as physics, chemistry, materials science, and electrical engineering. Tremendous progress has been made in the application of HTS materials in such areas as Superconducting Quantum Interference Devices (SQUIDs), passive microwave devices, and long-length wires, as illustrated in the road map for HTS technology shown in Fig. 1. Better-quality materials emerging from refined processing methods have made it possible to separate the intrinsic properties of HTS from the extrinsic ones. The interrelationships of processing with structural, physical, electrical, and magnetic properties continue to be an area of intensive scientific research. In this article, we provide an overview of important high-temperature superconducting materials, their properties, and promising procedures for synthesizing bulk, thin-film, and wire forms of HTS conductors for engineering applications.

Level of Design Abstraction

A design can be described in different levels of abstraction, as shown in Fig. 1.

• Architecture level (also called behavioral level). At this level, the designer has the freedom to choose different algorithms to implement a design (for instance, different digital filtering or edge detection algorithms). The emphasis is on input-output relations. Different implementations of the same function can be considered. For instance, for a given function, one can choose between two logic implementations: sequential and parallel combinational (an arithmetic adder, comparator, or multiplier being good examples).

• Register transfer level (RTL). At this stage, the design is specified at the level of transfers among registers. Thus, the variables correspond to generalized registers, such as


Figure 1. The abstraction levels of a logic design.

shifters, counters, registers, memories, and flip-flops. The operations correspond to transfers between registers and to logical, arithmetical, and other combinational operations on one or several registers. Examples of operations on a single register are shift left, shift right, shift cyclically, add one, subtract one, clear, set, and negate. An example of a more general register-transfer operation is A ← B + C, which adds the contents of registers B and C and transfers the result to register A. A register-transfer description specifies the structure and timing of operations in more detail but still allows for transformations of the data path, the control unit, or both. The transformations allow improved timing, lower design cost, lower power consumption, or easier circuit test.
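To make these register-transfer operations concrete, here is a toy Python model of a generalized n-bit register (entirely our own illustration; real RTL would be written in an HDL such as VHDL or Verilog):

```python
class Register:
    """A generalized n-bit register supporting typical register-transfer ops."""

    def __init__(self, width, value=0):
        self.width = width
        self.mask = (1 << width) - 1      # keeps values inside n bits
        self.value = value & self.mask

    def load(self, value):                # parallel load (the transfer itself)
        self.value = value & self.mask

    def shift_left(self):
        self.value = (self.value << 1) & self.mask

    def shift_right(self):
        self.value >>= 1

    def rotate_left(self):                # "shift cyclically"
        msb = (self.value >> (self.width - 1)) & 1
        self.value = ((self.value << 1) | msb) & self.mask

    def increment(self):                  # "add one", wrapping at 2**width
        self.value = (self.value + 1) & self.mask

# The register transfer A <- B + C from the text:
A, B, C = Register(8), Register(8, 0x2A), Register(8, 0x11)
A.load(B.value + C.value)
print(hex(A.value))  # 0x3b
```

The masking in `load` mirrors the fixed word width of hardware registers: an addition that overflows 8 bits simply wraps, exactly as it would in the datapath the RTL description specifies.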

• Logic level (gate level). At this level every individual flip-flop and logic bit is specified. The timing is partially fixed, to the accuracy of clock pulses. The (multioutput) Boolean functions with a certain number of inputs and outputs and a certain fixed functionality are specified by the user or obtained by automatic transformations from a register-transfer-level description. These functions are specified as logic equations, decision diagrams, arrays of cubes, netlists, or hardware description language (HDL) descriptions. They can be optimized for area, speed, testability, number of components, cost of components, or power consumption, but the general macropulses of the algorithm's execution cannot be changed.

• Physical level. At this level a generic, technology-independent logic function is mapped to a specific technology, such as electronically programmable logic devices (EPLD), complex programmable logic devices (CPLD), field programmable gate arrays (FPGA), standard cells, custom designs, application-specific integrated circuits (ASIC), read-only memory (ROM), random access memory (RAM), microprocessors, microcontrollers, standard small-scale integration (SSI)/medium-scale integration (MSI)/large-scale integration (LSI) components, or any combination of these. Specific logic gates, logic blocks, or larger design entities are thus defined and are next placed in a two-dimensional area (on a chip or board) and routed (interconnected).

EQUIVALENCE PRINCIPLE

The ability to determine the electromagnetic fields radiated by an antenna via field-equivalence principles is a useful concept, and its development can be traced back to Schelkunoff (2). The equivalence principle often makes an exact solution easier to obtain or suggests approximate methods that are of value in simplifying antenna problems. Field-equivalence principles are treated at length in the literature, and we will not consider the many variants here. The book by Collin and Zucker (12) is a useful source of references in this respect. The basic concept is illustrated in Fig. 5. The electromagnetic source region is enclosed by a surface S that is sometimes referred to as Huygens's surface.


JS = n × (H1 − H)   (4)

MS = −n × (E1 − E)   (5)


Figure 5. Equivalence principle with a closed Huygens surface S enclosing sources: (a) original problem; (b) equivalent problem.

In essence, Huygens's principle and the equivalence theorem show how to replace actual sources by a set of equivalent sources spread over the surface S (13). The equivalence principle is developed by considering a radiating source, electrically represented by the current densities J1 and M1. Assume that the source radiates fields E1 and H1 everywhere. We would like to develop a method that will yield the fields outside the closed surface. To accomplish this, a closed surface S is shown by the dashed lines that enclose the current densities J1 and M1. The volume inside S is denoted by V. The primary task is to replace the original problem [Fig. 5(a)] by an equivalent one that will yield the same fields E1 and H1 [Fig. 5(b)]. The formulation of the problem can be greatly aided if the closed surface is judiciously chosen so that the fields over most of the surface, if not the entire surface, are known a priori.

The original sources J1 and M1 are removed, and we assume that there exists a field E and H inside V. For this field to exist within V, it must satisfy the boundary conditions on the tangential electrical and magnetic field components on the surface S. Thus, on the imaginary surface S, there must exist the equivalent sources given by Eqs. (4) and (5) (14).

These equivalent sources radiate into an unbounded space. The current densities are said to be equivalent only outside region V, because they produce the original field (E1, H1). A field E or H, different from the original, may result within V.
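As a concrete numeric check of the equivalent sources JS = n × (H1 − H) and MS = −n × (E1 − E), take a Huygens surface with outward normal along +z, an assumed zero interior field (E = H = 0), and an x-polarized unit-amplitude plane wave incident from inside. The field values below are our own illustrative choices:

```python
import numpy as np

eta0 = 376.73                            # free-space wave impedance, ohms
n_hat = np.array([0.0, 0.0, 1.0])        # outward normal of the Huygens surface
E1 = np.array([1.0, 0.0, 0.0])           # x-polarized unit E field (assumed)
H1 = np.array([0.0, 1.0, 0.0]) / eta0    # companion H field, so E1 x H1 points along +z
E = H = np.zeros(3)                      # zero field assumed inside V

Js = np.cross(n_hat, H1 - H)             # Eq. (4): Js = n x (H1 - H)
Ms = -np.cross(n_hat, E1 - E)            # Eq. (5): Ms = -n x (E1 - E)
# Js points along -x with magnitude 1/eta0; Ms points along -y with unit magnitude
```

Both surface currents are purely tangential to the surface, as they must be: the cross product with the normal removes any normal component, which is why only the tangential aperture fields enter the far-field calculation.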

The sources for electromagnetic fields are always, apparently, electrical currents. However, the electrical current distribution is often unknown. In certain structures, it may be a complicated function, particularly for slots, horns, reflectors, and lenses. With these types of radiators, the theoretical work is usually not based on the primary current distributions. Rather, the results are obtained with the aid of what is known as aperture theory (15). This simple, sound theory is based upon the fact that an electromagnetic field in a source-free, closed region is completely determined by the values of tangential E or tangential H on the surface of the closed region. For exterior regions, the boundary condition at infinity may be employed, in effect, to close the region. This is exemplified by the following case.

Without changing the E and H fields external to S, the electromagnetic source region can be replaced by a zero-field region with appropriate distributions of electrical and magnetic currents (Js and Ms) on the Huygens surface. This example is overly restrictive, and we could specify any field within S with a suitable adjustment. However, the zero-internal-field approach is particularly useful when the tangential electrical fields over a surface enclosing the antenna are known or can be approximated. In this case, the surface currents can be obtained directly from the tangential fields, and the external field can be determined.

Assuming zero internal field, we can consider the electromagnetic sources inside S to be removed, and the radiated fields outside S are then determined from the electrical and magnetic surface current distributions alone. This offers significant advantages when the closed surface is defined as a two-hemisphere region, with all sources contained on only one side of the plane. If either the electrical or the magnetic fields arising from these sources can be determined over the planar Huygens surface S, then the radiated fields on the far side of the plane can be calculated. The introduction of an infinite conducting sheet just inside the Huygens surface will not complicate the calculations of the radiated fields in the other half-space (16). This infinite-plane model is useful for antennas whose radiation is directed into the right hemisphere (Fig. 6), and it has found wide application in dealing with aperture antennas. For instance, if the antenna is a rectangular horn, it is assumed that the horn transitions into an infinite flange. All tangential fields outside the rectangular boundary along the infinite Huygens surface are taken to be zero.

When the limitations of the half-space model are acceptable, it offers the important advantage that only the electrical or the magnetic currents need be specified; knowledge of both is not required. It must be emphasized that any of the methods described before will produce exact results over the Huygens surface.

In the analysis of electromagnetic problems, it is often easier to form equivalent problems that yield the same solution only within a region of interest. This is the case for aperture antenna problems.

Figure 6. Some apertures yielding the same electromagnetic fields to the right side of the Huygens surface S: (a) horn; (b) parabola; (c) lens.

The steps that must be used to form an equivalent problem and solve an aperture antenna problem are as follows:

1. Select an imaginary surface that encloses the actual sources (the aperture). The surface must be judiciously chosen so that the tangential components of the electrical field and/or the magnetic field are known, exactly or approximately, over its entire span. Ideally, this surface is a flat plane extending to infinity.

2. Over the imaginary surface S, form the equivalent current densities JS and MS, assuming that the E and H fields within S are zero.

3. Lastly, solve the equivalent-aperture problem.