


The man-made noise statistics presented are largely based on measurements that were made more than 20 years ago in North America by Spaulding and Disney (22). More recently, Spaulding has warned that the CCIR data may now be inaccurate due to technological advances (23). This is largely based on the fact that emissions from newer automobile ignition systems, a major contributor to man-made noise in urban





















































Table 2. Location Variability in Terms of the Standard Deviation for Various Environments



areas, have decreased dramatically over the years. After reviewing more recent measurements and trend analyses, Spaulding concluded (23) that in the business environment "at 100 MHz in the 1970's time-frame, Fam was on the order of 20 dB but now is probably approximately 20 dB less." This conclusion, however, is not based on a comprehensive set of noise measurements, as would be necessary to update the previous survey described in Ref. 23.

While the improvements in automobile ignition systems have likely affected the noise levels in business and residential environments, emissions from gap discharge and corona in power transmission and distribution lines have probably not decreased with time. Figure 10 (22) shows Fam under, and one-quarter mile from, a 115 kV line in rural Wyoming. It is interesting to note that the noise measured one-quarter mile from the power line is about the same as that predicted for a rural environment. A possible conclusion is that if power and distribution lines are the primary noise source in rural environments, rural man-made noise is not likely to have decreased. However, one would then not expect noise in an urban environment to be less than in a rural one, as would be the case with the estimated 20 dB reduction in Fam.

Another factor that could significantly affect the level and character of man-made radio noise is the proliferation of electronic devices (e.g., computers, electronic switching devices, microwave ovens) that are unintentional RF emitters. Such devices have become ubiquitous in business, residential, and rural environments and could affect both the magnitude of the noise power and its frequency dependence.

The man-made noise data presented in the previous sections are applicable to North America; the validity of extension to other parts of the world cannot be determined precisely. CCIR Report 258 describes very high frequency (VHF) measurements made in business and residential areas of the United Kingdom, where the noise power was found to be some 10 dB below that shown in Fig. 3 (16). This is attributed to differences in patterns of utilization of electric and mechanical appliances and in the regulation of interference. The report also states that due to such differences, the noise statistics should be used with caution. It should be noted, however, that if an overall 10 dB reduction in urban noise can be justified, the

Figure 11. Median, mean, and peak noise power near an office park, versus hour (MST) (24).

man-made noise environments near 100 MHz would be bounded by what are now classified as rural (worst) and quiet rural (best) environments, as shown in Fig. 3.

Relatively recent noise measurements at 137 MHz (24) show that the statistics of man-made noise are significantly different from what is predicted by CCIR Report 258. For example, Fig. 11 shows the median, mean, and peak (exceeded 0.01% of the time) values of Fa measured over a 24-h period in a business environment. Diurnal variations corresponding to human activity are clearly evident. The relatively steady within-the-hour values of the mean power (Fa) are not consistent with the predicted within-the-hour distribution of Fa for a business environment (see Fig. 9). Figure 12 shows the distribution of Fa measured at six urban sites plotted on normal probability paper. The distribution at a particular site was obtained by collecting statistics measured within two-minute intervals spaced about an hour apart from hours of continuous measurements made at that particular location. Hence, the results should correspond to the hour-to-hour time variability, which, for the most part, is relatively low at most of









Figure 10. Power line noise measurements near a 115 kV line in rural Wyoming (22).





Figure 12. Power averages from measurements at six urban sites (office park, edge; office park, middle; office park, edge; downtown Denver; downtown Boulder; Denver at I-25), plotted versus percent exceeding ordinate.

the locations. Location variations, however, are quite large, exceeding 12 dB in some cases. More importantly, these measurements show that there are business environments (downtown urban areas) where Fam is still nearly 20 dB.
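The median, mean, and exceedance statistics discussed here can be computed directly from a set of Fa samples; the key detail is that the mean noise power must be averaged in linear units, not in dB. The sample values below are purely illustrative, not measured data.

```python
import math

def fa_statistics(fa_db, peak_exceed_pct=0.01):
    """Median, mean, and peak values of antenna noise figure samples (dB).

    The mean is a power average: dB values are converted to linear
    units, averaged, and converted back (a plain average of the dB
    values would underestimate the mean noise power).
    The peak is the value exceeded `peak_exceed_pct` percent of the time.
    """
    s = sorted(fa_db)
    n = len(s)
    median = 0.5 * (s[(n - 1) // 2] + s[n // 2])
    mean_db = 10 * math.log10(sum(10 ** (x / 10) for x in s) / n)
    idx = min(n - 1, int(round(n * (1 - peak_exceed_pct / 100))))
    peak = s[idx]
    return median, mean_db, peak

# Illustrative record: a 15 dB background with occasional 30 dB impulses.
samples = [15.0] * 990 + [30.0] * 10
med, mean, peak = fa_statistics(samples)
```

Note how the rare impulses pull the power-averaged mean more than 1 dB above the median, mirroring the separation of the median, mean, and peak curves in Fig. 11.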

In summary, the 137 MHz measurements demonstrate that important changes have occurred in both the level and character of man-made noise since the comprehensive noise survey described by Spaulding and Disney (22). While these measurements can only be considered a "spot check," they do show that the standard methods used to predict man-made radio noise are probably outdated. It is concluded that additional comprehensive man-made noise measurements at radio frequencies up through the ultrahigh frequency (UHF) band will be necessary to provide radio system designers and engineers with the tools required to design modern radio systems effectively.


Modern medicine allows for the monitoring of high-risk patients so that medical treatment can be applied adequately as their condition worsens. To detect changes in the physiological condition of each patient, appropriate monitoring is applied routinely according to the patient's condition, at least in well-equipped hospitals. Patient monitoring usually means the physiological monitoring of high-risk patients using appropriate instruments.

In hospitals, there are many sites where patient monitoring is especially important. For example, in the operating room, instruments such as a pulse oximeter are used for monitoring anesthesia; in the intensive care unit, vital signs are monitored; in the coronary care unit, the patient's electrocardiogram (ECG) is routinely monitored and analyzed automatically; and in the incubator, the vital signs of the infant as well as the internal environment of the incubator are monitored. In addition, during examinations such as cardiac catheterization, and therapeutic procedures such as hyper- or hypothermia therapy, patient monitoring is required for ensuring safety. Even in the general ward, monitoring is performed fairly often when some risks are suspected. By using a telemetry system, the patient is not constrained to a bed. Even out of the hospital, patient monitoring is still performed in some situations. In the ambulance, postresuscitation management requires the use of a cardiac monitor. In the home, where medical care such as oxygen therapy and intravenous infusion therapy is carried out, monitoring instruments are helpful. A so-called Holter recorder is used in which a 24-h ECG is recorded for detecting spontaneous events such as cardiac arrhythmia.

There are many parameters that are used for patient monitoring: Among them are heart rate, ECG, blood pressure, cardiac output, rate of respiration, tidal volume, expiratory gas content, blood gas concentrations, body temperature, metabolism, electroencephalogram (EEG), intracranial pressure, blood glucose levels, blood pH, electrolytes, and body motion. Many types of monitoring techniques and instruments have been developed to enable measurement of these parameters.

For high-risk patients, monitoring should be performed continuously. The real-time display of the trend or waveform of each parameter is especially helpful in a patient who is experiencing cardiopulmonary function problems, because if a sudden failure of respiration or circulation is not detected immediately, the physiological state of the patient may become critical. The reliability of monitoring is quite important. In some situations, invasive procedures for monitoring are allowed if they are considered essential. For example, an indwelling arterial catheter is used when instantaneous blood pressure has to be monitored continuously. However, invasive methods are undesirable if the patient's condition is less critical. In some situations, noninvasive methods are preferred. Because noninvasive methods are always more difficult to perform or less accurate than invasive methods, the development of reliable noninvasive monitoring techniques is highly desirable; many smart noninvasive techniques have already been developed and supplied commercially.

Safety is an important feature of any monitoring device because monitoring is performed for a long period of time for the critically ill patient. Electrical safety is strictly required, especially when the monitoring device has electric contacts to the patient's body. Sometimes, two or more monitors are applied to a patient. Leakage current should be avoided under any failure of any device. Electromagnetic compatibility is also important. Monitoring instruments should be immune to any possible electromagnetic interference from telemetering devices, mobile telephones, or other noise sources such as electrosurgery.

Many patient monitors have an automatic alarm function. When a monitoring item is expressed as a single value, such as heart rate, blood pressure, or body temperature, the alarm condition is determined by setting a level or range, and the monitor gives an alarm sign, such as warning or urgent, according to the patient's condition. When the monitoring item is expressed as a waveform, such as the ECG, the alarm system needs to be able to perform real-time waveform analyses. In any alarm system, two kinds of error, false positives and false negatives, may occur. In critically ill patients, a false negative may be fatal. While false positives may be tolerated to some extent, repeated false alarms may seriously disturb the clinical staff. In general, any alarm system requires some logic, and sometimes highly intelligent signal processing is required.
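The level-and-range alarm logic described above can be sketched as follows. The heart-rate limits and the persistence count used here are illustrative assumptions, not clinical values; requiring several consecutive abnormal readings is one simple way to trade a small detection delay for fewer false positives from transient artifacts.

```python
def classify(value, warn_range, urgent_range):
    """Return 'urgent', 'warning', or 'ok' for a single reading.

    warn_range and urgent_range are (low, high) limits; values outside
    the urgent range are urgent, values outside the warn range are
    warnings, and everything else is normal.
    """
    lo_u, hi_u = urgent_range
    lo_w, hi_w = warn_range
    if value < lo_u or value > hi_u:
        return "urgent"
    if value < lo_w or value > hi_w:
        return "warning"
    return "ok"

def alarm_stream(readings, warn_range, urgent_range, persistence=3):
    """Raise an alarm only after `persistence` consecutive abnormal
    readings, suppressing isolated artifacts (false positives)."""
    out, run = [], 0
    for v in readings:
        state = classify(v, warn_range, urgent_range)
        run = run + 1 if state != "ok" else 0
        out.append(state if run >= persistence else "ok")
    return out

# Heart-rate example (bpm): warning outside 50-120, urgent outside 40-150.
hr = [72, 70, 180, 71, 45, 44, 43, 42, 30, 28, 27]
states = alarm_stream(hr, (50, 120), (40, 150))
```

The isolated 180 bpm spike is suppressed as a probable artifact, a sustained bradycardia first produces a warning, and the later collapse below 40 bpm escalates to urgent.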


Electrocardiogram Monitoring

For sudden heart failure, urgent treatment is required. Monitoring of heart function is therefore quite important. The ECG is the most convenient method of monitoring the electrical function of the heart, whereas the mechanical pump function of the heart is best monitored by examining the patient's blood pressure and cardiac output. An ECG signal can be obtained by attaching electrodes to the body surface. For patient monitoring, electrodes are always attached to the torso as shown in Fig. 1(a), whereas the standard lead system, in which electrodes are attached to the limbs and chest, is used in ordinary ECG examinations for diagnosis. Disposable ECG electrodes, as shown in Fig. 1(b), are commonly used for long-term monitoring. A stable ECG can be obtained using these electrodes for a day or longer.

The ECG waveform thus obtained is always displayed on a CRT monitoring screen with ordinary sweep speeds, together with other parameters. Unusual waveforms such as premature ventricular contractions can be identified visually. However, it is unlikely that someone would be able to watch the monitor screen all of the time. Most ECG monitors have a built-in computer that automatically detects abnormal waveforms and triggers the alarm. To reduce as much as possible the number of false alarms, both false negatives and false positives, highly intelligent algorithms for detecting abnormal waveforms, such as arrhythmias, have been developed


Figure 1. Typical electrode locations for ECG monitoring (a), and a cross-section of a disposable foam electrode (b).


and installed in intensive care monitoring systems (1). Most bedside ECG monitoring systems have a real-time display and a large data storage facility that allows for retrospective observation. Some of them have a memory capacity that is able to record an ECG for up to 24 h. Radiotelemetering is convenient, even in bedside monitoring. Eliminating the cable connection to the patient is advantageous not only because it makes the patient less restricted but also because it improves electrical safety. However, electromagnetic compatibility should be ensured when it is used together with other instruments.

For ambulatory patients, Holter ECG monitoring is performed, in which the ECG is recorded typically for 24 h. The typical Holter recorder records the ECG on an audio cassette tape for 24 h; the tape is then brought to the hospital, and the recorded ECG is played back by a scanner at 60 or 120 times the recording speed and analyzed automatically so that different kinds of arrhythmias and other abnormalities may be classified and counted. To detect and record only pathological waveforms, a digital recorder with solid-state memory can be used; for example, a system can detect automatically the change in ECG during transient myocardial ischemia and record up to 18 episodes that are only 6 s each (2). Although longer digital recording needs a very large memory capacity, 24 h recording is realized using a small hard disk drive in a system in which the ECG data are first stored in a solid-state memory and then transferred to the disk over short periods of time (3).
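A very small piece of the arrhythmia logic a Holter scanner applies can be illustrated with RR intervals (the times between successive R waves): a beat arriving much earlier than the running average of normal intervals is a candidate premature beat. The 0.8 ratio and the example rhythm below are illustrative assumptions; real scanners use far more elaborate waveform classification.

```python
def flag_premature_beats(r_times, ratio=0.8):
    """Flag beats whose RR interval is shorter than `ratio` times the
    running average of the preceding normal intervals.

    r_times: R-wave occurrence times in seconds.
    Returns indices (into r_times) of beats flagged as premature.
    """
    rr = [b - a for a, b in zip(r_times, r_times[1:])]
    flagged, normal = [], []
    for i, interval in enumerate(rr):
        if normal and interval < ratio * (sum(normal) / len(normal)):
            flagged.append(i + 1)      # the beat that ends this interval
        else:
            normal.append(interval)    # update the normal-beat baseline
    return flagged

# A 1 s rhythm with one premature beat at t = 4.4 s (RR = 0.4 s),
# followed by a compensatory pause.
times = [0.0, 1.0, 2.0, 3.0, 4.0, 4.4, 6.0, 7.0]
flagged = flag_premature_beats(times)
```

A counting pass over `flagged` would then produce the per-class tallies that a Holter report summarizes.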

Blood Pressure Monitoring

Arterial blood pressure monitoring is essential in a patient whose circulation is unstable, and is commonly performed during cardiovascular surgery and postoperative care. There are two methods of blood pressure monitoring: direct and indirect. In the direct method, a catheter is introduced into the artery as shown in Fig. 2, and a pressure transducer is connected to the proximal end of the catheter. To avoid blood clotting in the catheter, a small amount of saline is supplied either continuously or intermittently. Intraarterial pressure can be measured accurately enough as long as the transducer is adequately calibrated. Either a strain-gage or a capacitive type of pressure transducer is commonly used for this purpose. Disposable pressure transducers are convenient because sterilization of the transducer before use is troublesome. In addition, the performance of disposable pressure transducers is comparable to or even better than that of expensive reusable pressure transducers (4).

The catheter-tip pressure transducer, which has a pressure-sensing element at the tip, is sometimes used for intraarterial pressure monitoring. It has many advantages: It has no time delay and has a flat frequency response over a wider range; saline injection is unnecessary; and it is less affected by the mechanical motion of the catheter. However, it is fragile and expensive. Many different principles can be used in detecting pressure at the tip, such as semiconductor strain gauges, and capacitive and optical principles. Some transducers have many pressure-sensing elements near the tip. For example, a model is available that has up to six pressure-sensing elements in an 8F size tip (outer diameter 2.67 mm) (Mikro-Tip, Millar Instruments, Inc., Houston, Texas).

While the direct blood pressure measurement method is accurate and reliable, it is an invasive procedure, and thus an indirect, noninvasive method is preferred for less critical patients. The most common method of indirect blood pressure measurement is the auscultatory method, in which a pressure cuff is attached to the upper arm. The cuff is deflated from a pressure somewhat above the systolic pressure, and both the systolic and diastolic pressures are determined by monitoring a sphygmomanometer while listening for the Korotkoff sound using a stethoscope. While the auscultatory method is the standard method of clinical blood pressure measurement, and is actually performed for patient monitoring such as during anesthesia, it is neither automatic nor continuous. Hence, a noninvasive continuous blood pressure monitor has long been in demand. Two methods have now become available: the vascular unloading method and tonometry.

Figure 2. The conventional method of direct arterial pressure moni­toring.


The vascular unloading method measures instantaneous intraarterial pressure by balancing an externally applied pressure against the intravascular pressure using a fast pneumatic servo-controlled system (5). As shown schematically in Fig. 3(a), a cuff is attached to a finger, and near-infrared light transmittance is measured at the site where the cuff pressure acts uniformly. Because absorption in the near-infrared is mainly due to the hemoglobin in blood, the change in light absorption corresponds to the change of blood volume in the optical path; thus a pulsatile change in transmitted light intensity is observed from the pulsation of the artery. The pulsatile change of arterial blood volume can be compensated by introducing a servo control in which cuff pressure is controlled by the intensity of the transmitted light: an increase of arterial blood increases light absorption, and the resulting signal increases cuff pressure so as to obstruct further inflow of arterial blood. If such a servo control works fast enough and with sufficient loop gain at an adequate level of light intensity, a condition is reached in which the intraarterial and cuff pressures are balanced. In this condition, the circumferential tension of the arterial wall is reduced to zero; such a situation is called vascular unloading. It has been shown that accurate blood pressure, together with instantaneous arterial pressure waveforms, can be obtained when an adequate servo control system is introduced and adjusted correctly (6). A commercial unit that uses this principle has been developed (Finapress, Ohmeda, Englewood, Colorado). In this system, the interface module, which contains a pneumatic servo valve, is attached to the back of the hand so that the connection from the valve to the finger cuff is minimized, thus reducing the time delay.
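The servo loop just described can be caricatured in discrete time: a proportional controller inflates the cuff whenever transmitted light drops below its set point, which drives the transmural (arterial minus cuff) pressure toward zero so that cuff pressure tracks arterial pressure. Every constant and the plant model below are invented for illustration; they are not physiological values or the actual Finapress control law.

```python
def unloading_servo(arterial_pressure, set_point=1.0, gain=5.0,
                    compliance=0.02):
    """Toy discrete-time model of the vascular unloading servo.

    Transmitted light falls as arterial volume grows, and arterial
    volume grows with the transmural pressure (arterial minus cuff).
    The proportional controller raises cuff pressure when light drops
    below the set point.  All units are arbitrary.
    """
    cuff = 0.0
    trace = []
    for p in arterial_pressure:
        # Simple plant: transmittance decreases with transmural pressure.
        light = set_point - compliance * (p - cuff)
        # Proportional control: low light -> inflate the cuff.
        cuff += gain * (set_point - light)
        trace.append(cuff)
    return trace

# Constant arterial pressure of 100 (arbitrary units): the cuff
# pressure should converge toward 100 as the loop nulls the
# transmural pressure.
trace = unloading_servo([100.0] * 50)
```

With a time-varying input, the same loop (run fast enough) makes the cuff pressure follow the arterial waveform, which is the basis of the continuous readout.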

Tonometry is a method of measuring internal pressure from the reaction force. When a flat plate is pressed onto a flexible, deformable boundary membrane on which internal pressure is exerted, the internal pressure can be measured from the outside regardless of the transverse tension developed in the membrane. This principle has been applied successfully to intraocular pressure measurement, and it is also applicable to arterial blood pressure measurement (7). As shown in Fig. 3(b), the tonometry transducer, the tonometer, is applied to the skin surface so that an artery lies just beneath the sensing element and part of the arterial wall is flattened. To detect the pressure at the center of the arterial flattening, a multiple-element transducer is used, and the value at the center of the pressure distribution is detected automatically. Measurement is always performed on the radial artery at the wrist. A tonometer is now commercially available (Jentow, Nihon Colin Co., Komaki-shi, Japan).
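One plausible way to pick "the value at the center of the pressure distribution" from the multiple-element array is to find the plateau of near-maximal readings over the flattened artery and take its middle element. The selection rule, tolerance, and array values below are hypothetical, not the Jentow algorithm.

```python
def select_tonometer_element(pressures, tol=0.02):
    """Pick the sensing element over the center of the flattened artery.

    pressures: readings from the linear array of sensing elements.
    Elements within `tol` (fractional) of the maximum reading are
    treated as the flattened plateau; the middle element of that
    plateau is returned.  A real tonometer uses more elaborate logic.
    """
    peak = max(pressures)
    plateau = [i for i, p in enumerate(pressures) if p >= peak * (1 - tol)]
    return plateau[len(plateau) // 2]

# Hypothetical array: pressure rises to a flat top over the artery.
array = [40, 55, 70, 88, 90, 89, 90, 72, 56, 41]
idx = select_tonometer_element(array)
```

The reading from the selected element is then taken as the arterial pressure, since at the flattened center the membrane's transverse tension no longer contributes.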

Sometimes, blood pressure is monitored in an ambulatory patient. For this purpose, a fully automated portable sphygmomanometry system is used. A pressure cuff is attached to the upper arm and is inflated intermittently at selected intervals. The Korotkoff sound is detected by a microphone, and systolic and diastolic pressures are determined and stored in a memory. Commercial models are now available (e.g., Medilog ABP, Oxford Medical Ltd., Oxford, UK) (8).

Exploitation of Elevation Data from IFSAR

IFSAR is a technique for generating high-resolution digital elevation models (DEMs) based on the phase difference between SAR signals received by two spatially separated antennas (11). There are drawbacks in height maps derived


Fig. 3. An IFSAR image.

from IFSAR data: The data are noisy, and the spatial resolution is much inferior to that of visual data. The spatial resolution is further degraded by the noise-removal step. Figure 3 shows a height map produced by a real IFSAR. A typical IFSAR elevation image is noisy and needs to be filtered before it can be used reliably. Also, there are regions with "no data" that result either from the fact that the original scene was not on a rectangular grid or from radar geometry effects, which cause some points not to be mapped. Interpolation and nonlinear filtering techniques are used to filter the elevation data.
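The interpolation and nonlinear filtering steps can be sketched on a toy grid: "no data" cells are filled from valid neighbors, and a median filter knocks down speckle-like spikes. This is a minimal pure-Python stand-in for the real processing chain, with an invented 4x4 patch; it assumes every hole is reachable from some valid cell.

```python
def fill_no_data(grid, no_data=None):
    """Fill `no_data` cells with the average of their valid 4-neighbors,
    iterating until every cell holds a value (a crude interpolation)."""
    h, w = len(grid), len(grid[0])
    g = [row[:] for row in grid]
    while any(v is no_data for row in g for v in row):
        nxt = [row[:] for row in g]
        for y in range(h):
            for x in range(w):
                if g[y][x] is no_data:
                    nb = [g[y + dy][x + dx]
                          for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                          if 0 <= y + dy < h and 0 <= x + dx < w
                          and g[y + dy][x + dx] is not no_data]
                    if nb:
                        nxt[y][x] = sum(nb) / len(nb)
        g = nxt
    return g

def median3(grid):
    """3x3 median filter (edge cells replicated) to suppress speckle."""
    h, w = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for y in range(h):
        for x in range(w):
            win = sorted(grid[min(max(y + dy, 0), h - 1)]
                             [min(max(x + dx, 0), w - 1)]
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = win[4]
    return out

# 4x4 elevation patch with a no-data hole and a single noise spike.
dem = [[2.0, 2.0, 2.0, 2.0],
       [2.0, None, 2.0, 2.0],
       [2.0, 2.0, 9.0, 2.0],
       [2.0, 2.0, 2.0, 2.0]]
clean = median3(fill_no_data(dem))
```

The median filter removes the isolated spike without smearing it into neighboring cells the way a linear average would, which is why nonlinear filtering is preferred here; the cost is the additional loss of spatial resolution noted above.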

Registration of IFSAR and visual data allows for the fusion of clues from both sensors for target recognition. Fusion is needed to overcome various difficulties resulting from the limitations of each sensor. For example, building detection requires the extraction and grouping of features such as lines, corners, and building tops to form buildings (12). The features extracted from visual data usually contain many unwanted spurious edges, lines, and so on that do not correspond to buildings. The grouping stage requires complex and computationally intensive operations. Further, the height of a building is typically estimated from extracted shadows and the sun angle when available, and this is not reliable when the shadows are cast on adjacent buildings. Another drawback of methods based exclusively on visual data lies in their sensitivity to imaging conditions.

IFSAR elevation data can be used in conjunction with visual data to overcome the aforementioned dif­ficulties. Current IFSAR technology provides sufficient elevation resolution to discriminate building regions from surrounding clutter. These building regions are not well defined from a visual image when the buildings have the same intensity level as their surrounding background. Similarly, a building having different colors may be wrongly segmented into several buildings. IFSAR data are not affected by color variations in buildings and therefore are better for building detection.

Figure 4 shows a visual image and edges detected by the Canny operator for the area shown in Fig. 3. The top part of Fig. 4 shows a building with two different roof colors and roof structures on many buildings. Many spurious edges not corresponding to the buildings appear in the edge map shown on the bottom right of Fig. 4. Using the IFSAR elevation map shown in Fig. 3, building and ground regions are labeled using a two-class


Fig. 4. Visual image and edges detected by the Canny operator.

classifier. The IFSAR and visual images are registered. Figure 5 shows the result of registration of a visual image and the segmented elevation image. Features corresponding to roads, parked cars, trees, and so on are suppressed from the visual images using the segmented buildings derived from the IFSAR image.
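The two-class labeling of the elevation map can be sketched with an ISODATA-style threshold: start from the global mean height and iterate the threshold toward the midpoint of the two class means. The choice of this particular classifier and the cell heights below are illustrative assumptions, not the method used in Ref. 12.

```python
def two_class_threshold(heights, eps=1e-6):
    """ISODATA-style two-class threshold for elevation cells.

    Start from the global mean and iterate
        t = (mean(below) + mean(above)) / 2
    until it settles.  Cells above the final threshold are labeled
    'building', the rest 'ground'.
    """
    t = sum(heights) / len(heights)
    while True:
        lo = [h for h in heights if h <= t]
        hi = [h for h in heights if h > t]
        if not lo or not hi:
            break
        t_new = 0.5 * (sum(lo) / len(lo) + sum(hi) / len(hi))
        if abs(t_new - t) < eps:
            break
        t = t_new
    return t

# Ground cells near 2 m, building roofs near 12 m (illustrative).
cells = [2.1, 1.9, 2.0, 2.2, 12.0, 11.8, 12.1, 2.0, 11.9]
t = two_class_threshold(cells)
labels = ["building" if h > t else "ground" for h in cells]
```

Because the classifier operates on height alone, a roof with two colors is still one building region, which is exactly the advantage over intensity-based segmentation noted above.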

The locations and the directions of edges in the segmented image are estimated and are used to locate edges of buildings in the visual image. In the visual image, an edge pixel corresponding to each edge pixel in the registered height image is searched in the direction perpendicular to the estimated direction in the height


Fig. 5. Buildings segmented from the IFSAR image overlaid to visual image.

image. If an edge is found within a small neighborhood, the edge pixel is accepted as a valid edge of a building. If such a pixel is not found in the neighborhood, the edge is not accepted. Figure 6 shows the refined edges obtained by searching in the neighborhoods of height edges. Most of the building edges in the height image are found, while the unwanted edges are removed.
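The perpendicular-search step can be sketched on toy binary edge maps: a height-image edge pixel is kept only if the visual edge map contains an edge within a small window along the normal to the estimated edge direction. The data structures (a dict of edge pixels with directions, a set of visual edge pixels) and the radius are simplifying assumptions for illustration.

```python
import math

def refine_edges(height_edges, visual_edges, radius=2):
    """Keep a height-image edge pixel only if the visual edge map has
    an edge within `radius` pixels along the edge normal.

    height_edges: dict mapping (x, y) -> edge direction in radians.
    visual_edges: set of (x, y) visual edge pixels.
    Returns the set of accepted (x, y) height-edge pixels.
    """
    accepted = set()
    for (x, y), theta in height_edges.items():
        # Unit normal: perpendicular to the edge direction.
        nx, ny = -math.sin(theta), math.cos(theta)
        for r in range(-radius, radius + 1):
            px = round(x + r * nx)
            py = round(y + r * ny)
            if (px, py) in visual_edges:
                accepted.add((x, y))
                break
    return accepted

# A horizontal height edge (direction 0 rad, so the normal is vertical);
# the corresponding visual edge runs one row below, within the radius.
h_edges = {(x, 5): 0.0 for x in range(3)}
h_edges[(10, 10)] = 0.0   # isolated height edge with no visual support
v_edges = {(x, 6) for x in range(3)}
good = refine_edges(h_edges, v_edges)
```

The search direction matters: scanning along the normal tolerates the small registration offsets between the two images while still rejecting edges with no visual counterpart.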


Environmental noise emanates from both natural and man-made sources and is collected by the receiving system antenna. The determination of noise parameters, such as the antenna noise figure Fa, requires careful measurement programs that must account for temporal, spatial, and frequency variations of the particular noise source. In this section, some of the more important sources of environmental noise are described. The statistical data presented are based on many such measurements.

w = kT dν (1/4π) ∫_{4π} y(Ω) dΩ = kT dν

which is the same as the total power available from a resistor at temperature T. Hence, the antenna temperature is simply T, and f_a = T/T_0 independent of the antenna gain.
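The relation f_a = T/T_0 (with T_0 = 290 K as the reference temperature) converts directly to the antenna noise figure in decibels, Fa = 10 log10(f_a). A one-line helper makes the conversion explicit:

```python
import math

T0 = 290.0  # reference temperature in kelvin

def fa_db(antenna_temp_k):
    """Antenna noise figure Fa = 10 log10(f_a), where f_a = T / T0."""
    return 10 * math.log10(antenna_temp_k / T0)

# An antenna temperature of 290 K gives Fa = 0 dB by definition;
# a 2900 K noise environment gives Fa = 10 dB.
```

This is why the noise curves in this article can be labeled interchangeably in Fa (dB) or in brightness temperature.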

Fa for Common Natural and Man-Made Radio Noise Sources

Both natural and man-made radio noise have been measured and carefully studied by many scientists and engineers in the latter half of the twentieth century. The results of these efforts have been published in various journals, conference proceedings, and reports and recommendations of radio engineering organizations, such as the International Telecommunication Union (ITU) (11-14). In this article, statistical data from these studies are presented for what are considered to be some of the more important environmental radio noise sources. For a more detailed treatment of these and other sources of radio noise, the reader is referred to the references.

Figure 2. Natural radio noise (1 Hz to 1 THz) (15). Curve legend:
A: Maximum expected value of atmospheric noise.
B: Minimum expected value of atmospheric noise.
C: Atmospheric noise: value exceeded 0.5% of time.
D: Atmospheric noise: value exceeded 99.5% of time.
E: D-region daytime noise temperature.
F: Noise from galactic center. Part below 10 MHz represents nighttime conditions.
G: Noise from galactic pole. Part below 10 MHz represents daytime conditions.
H: Emission by moist (17 g/m3) atmosphere for 0° elevation angle.
I: Emission by dry (1 g/m3) atmosphere for 90° elevation angle.
J: Cosmic background (2.7 K).
K: Quantum limit.
Lq: Quiet sun; Ld: Disturbed sun.
M: Heavy rain (50 mm/h over 5 km).
N: Light rain (1.25 mm/h over 5 km).

The antenna noise figure Fa for background natural radio noise from 1 Hz to 1 THz is illustrated in Fig. 2 (15). These data show that natural radio noise depends strongly on frequency over the radio spectrum (nominally 3 kHz to 300 GHz). In addition, several noise sources are nonstationary in time and space (e.g., atmospheric, sun, rain). Of particular interest for communications systems operating at or below about 30 MHz is atmospheric noise, where Fa is random and is characterized by its statistics. Atmospheric noise is also non-Gaussian. The other noise sources shown in this figure are essentially Gaussian.

For RF systems operating at frequencies of several hundred megahertz and below, man-made noise is an important source of radio noise. Like atmospheric noise, man-made noise is both nonstationary and non-Gaussian. Figure 3 (16) shows the median antenna noise figure Fam for man-made noise in four environments, together with galactic noise, as compared with the expected daytime and nighttime levels for atmospheric noise. Man-made noise is strongly dependent on frequency and, in general, the Fam curves have a slope of −27.7 dB per decade of frequency.
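The stated −27.7 dB/decade slope gives a simple extrapolation model for the median man-made noise figure from a single measured anchor point. The 20 dB at 100 MHz used in the example is a hypothetical anchor, not a measured value:

```python
import math

def fam_db(freq_mhz, fam_ref_db, ref_mhz=100.0, slope_db_per_decade=27.7):
    """Extrapolate the median man-made noise figure Fam with the
    -27.7 dB/decade frequency slope from a reference measurement.

    fam_ref_db is a measured anchor value at ref_mhz (hypothetical in
    the example below).
    """
    return fam_ref_db - slope_db_per_decade * math.log10(freq_mhz / ref_mhz)

# With a hypothetical 20 dB at 100 MHz, one decade higher in frequency
# gives 27.7 dB less.
```

Such a model is only as good as its anchor point, which is precisely the concern raised earlier about extrapolating from the decades-old Spaulding and Disney survey.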

Figure 4 (15) shows the details of natural radio noise over the frequency range of 100 MHz to 100 GHz. The estimated median business-area man-made noise has also been included. The E(90°) curve shows sky noise measured with a narrow-beam antenna at zenith. The water and oxygen absorption bands are clearly visible. The E(0°) curve is sky noise with a narrow-beam antenna directed along the earth's surface.

It was shown that when an antenna receives blackbody radiation at a uniform temperature from all directions, Fa does not depend on the receiving antenna gain. For most environmental noise sources, however, Fa does depend on the antenna gain and on several other factors. Appropriate corrections must be applied when the radio system's receiving antenna differs significantly from that used to measure the noise.

Figure 4. Fa versus frequency (100 MHz to 100 GHz), where A = estimated median business-area man-made noise, B = galactic noise, C = galactic noise (toward galactic center with infinitely narrow beamwidth), D = quiet sun (½° beamwidth directed at sun), E = sky noise due to oxygen and water vapor (very narrow-beam antenna; upper curve, 0° elevation angle; lower curve, 90° elevation angle), F = cosmic background, 2.7 K (15).



Curves Ld, Lq, F, H, and M in Fig. 2 all refer to very narrow-beam antennas pointing directly at the source. Noise from such sources (the sun, atmospheric gases, the earth's surface) is also expressed in terms of brightness temperature. These curves can be used to calculate the antenna temperature of a particular receiving antenna by integrating Eq. (25) in terms of temperature over the region occupied by the noise source:

t_a = (1/4π) ∫ T(Ω) γ(Ω) dΩ

where γ0 is the gain and p(Ω) is the pattern of the receiving antenna; that is, γ(Ω) = γ0 p(Ω). For example, the sun has a beamwidth of about ½°. If a receiving antenna with gain γ0 is aimed at the sun and the pattern is essentially constant over the intersection with the sun's beam, the antenna temperature is

t_a = (γ0/4π) ∫_Sun T(Ω) p(Ω) dΩ = γ0 T_s (Ω_Sun/4π)

where T_s is the brightness temperature of the sun at the desired frequency and Ω_Sun is the solid angle subtended by the sun.

In Fig. 4, there are two curves associated with galactic noise. Curve B is for an omnidirectional antenna, while curve C is for an infinitely narrow beam aimed toward the galactic center. Because of the relative motion of the earth and galaxy, galactic noise is not constant in time. A more accurate determination of galactic noise for other types of antennas can be obtained by using published radio sky data, which give the brightness temperature as a function of position in the sky. Such data are available in CCIR Report 720-2 (14), which contains maps of the brightness temperature of the radio sky at 408 MHz and an approximate expression for the frequency dependence of the temperature.
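The flat-pattern sun result can be checked numerically. The quiet-sun brightness temperature and the antenna gain below are hypothetical round numbers, and the cone solid-angle formula Ω = 2π(1 − cos(θ/2)) is an assumption of this sketch:

```python
import math

def sun_antenna_temp(gain, t_sun_k, sun_diameter_deg=0.5):
    """Antenna temperature t_a = gain * T_s * (Omega_sun / 4 pi) for an
    antenna aimed at the sun with an essentially constant pattern over
    the sun's disk.

    Omega_sun is the solid angle of a cone with the given angular
    diameter: 2 * pi * (1 - cos(theta / 2)).
    """
    half = math.radians(sun_diameter_deg / 2)
    omega_sun = 2 * math.pi * (1 - math.cos(half))  # steradians
    return gain * t_sun_k * omega_sun / (4 * math.pi)

# Hypothetical quiet-sun brightness temperature of 1e5 K and a
# pencil-beam antenna with gain 1e4.
t_a = sun_antenna_temp(1e4, 1e5)
```

Even a very hot source contributes only in proportion to the small fraction of the sphere it occupies, which is why the sun dominates only for high-gain antennas pointed at it.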

Atmospheric and Man-Made Noise

The most significant sources of environmental radio noise at frequencies below 1 GHz are man-made and atmospheric. For these sources, the noise data were measured with a grounded, electrically short monopole antenna. Since this type of noise most probably arrives at the receiver at relatively low elevation angles and from random directions, such an azimuthally omnidirectional antenna is well suited for noise measurements. Predicting the antenna noise figure for other types of receiving antennas requires an assessment of the differences between the ideal short monopole antenna and the desired receiving antenna. Factors that should be considered are antenna efficiency, directivity, polarization, and height above the ground.

The direction of arrival for both atmospheric and man-made noise has been shown to be nonuniform, varying by as much as 10 dB with direction (17). Since the noise is nonstationary, predicting Fa for high-gain antennas would likely be arduous if worst-case estimates based on the measured data do not provide sufficient accuracy. For azimuthally symmetric antennas such as a half-wave dipole, a correction factor based on the ratio of the desired antenna gain to the reference antenna gain can be applied to obtain the appropriate value of Fa.

Since these noise processes are nonstationary, the usual design parameter, SNR, is random, and the underlying statistics of the noise process as a function of time and geographical location must be understood to assess radio performance properly. These characteristics are discussed in more detail in the following sections.

Another important consideration is that both atmospheric and man-made noise are non-Gaussian. Typically, communication system performance is calculated based on Gaussian noise. A more detailed analysis incorporating the statistics of the actual non-Gaussian noise process may be required in radio design and performance evaluations. Several publications listed in the references provide information regarding the impulsive nature of these noise sources and its effect on radio receivers.

Statistics of Fa for Atmospheric Noise. Atmospheric noise is an important consideration for wireless communication sys­tems operating below 30 MHz. The main source of atmo­spheric noise is lightning. The electromagnetic energy emit­

ted by electrical storms couples into the earth-ionosphere waveguide, and hence, local noise levels can be significantly influenced by distant thunderstorms. Because of ionospheric interactions, overall atmospheric noise levels are greater at night, as shown in Fig. 3.

In Fig. 2, curves A, B, C, and D represent the expected range of Fa at the surface of the earth. These curves represent the average background, taking into account all times of the day, seasons, and the entire surface of the earth. Curves A and B give the maximum and minimum values of Fa from 1 Hz to 10 kHz. In this frequency range, there is very little seasonal, diurnal, or geographic variation. Note that the variation of Fa begins to increase significantly at about 100 Hz; this is due to the variability of the earth-ionosphere waveguide cutoff. Curves C and D give the atmospheric noise from 10 kHz to about 30 MHz, above which the noise levels are quite low. Curve C is the value of Fa exceeded 0.5% of the time, and curve D is the value of Fa exceeded 99.5% of the time. These results are derived from background atmospheric noise and do not include the effects of nearby electrical storms. A compilation of measurements showing the peak field strength of lightning at a distance of 1 mile as a function of frequency is given in Fig. 5 (18).

The variability of Fa, particularly in the medium frequency (MF) and high frequency (HF) communication bands (300 kHz to 30 MHz), is so large that the bounds given in Fig. 2 alone cannot be used to obtain a useful characterization of radio system performance. It is important, therefore, to know how Fa and other noise statistics vary with time and location. Starting in 1957, the average power levels and other relevant statistics were measured on a worldwide basis using a network of 15 stations. These measurements spanned 13 kHz to 20 MHz and considered both the time of day and the season. The results of several years of measurements were published in the National Bureau of Standards (NBS) Technical Note Series 18 (19) and later published in CCIR Report 322 (12). A numerical representation of the data contained in Report 322 is also available (20).

The published data give, for each frequency, location, season, and time of day (measured in 4-h increments), the month-hour median value of Fa along with the values exceeded 10% (upper decile, Du) and 90% (lower decile, Dl) of the time. As an example of these data, Fig. 6 shows worldwide values for the median antenna noise figure Fam at 1 MHz in the winter between 0000 and 0400 local time. The median noise figure at other frequencies, as well as Du, Dl, and related statistics, are obtained using the curves shown in Fig. 7.

The statistical distribution of Fa, and hence the radio system SNR, is readily obtained from the published data. For a given season and measurement time block (4 h), it has been shown that Fa is adequately represented by two log-normal distributions (21), one above the median value and one below. As an example, the distribution of Fa for 3 MHz at Boulder, Colorado, in the winter at 0000 to 0400 can be determined using the data from Figs. 6 and 7. First, the 1 MHz value of Fam at the geographic location of interest is obtained from Fig. 6 and corrected to 3 MHz using Fig. 7. Then Du and Dl, as well as their standard deviations, are obtained from Fig. 7. Using normal probability paper, these three points define the two intersecting lines that give the two desired log-normal distributions. The resulting distribution is shown in Fig. 8. Hence, if a radio system is operating at 3 MHz, the system performance can be conveniently specified in terms of the percent of time that the required SNR will be available at a particular geographic location, season, and time.
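The three-point construction described above (median plus two decile points, one log-normal branch on each side) can be sketched numerically. This is a minimal illustration, not a reproduction of the published data: the Fam, Du, and Dl values below are made-up placeholders for numbers that would actually be read from Figs. 6 and 7.

```python
from statistics import NormalDist

# z-score of the decile points: the upper decile is exceeded 10% of the
# time, i.e., it sits at the 90th percentile of the time distribution.
_Z10 = NormalDist().inv_cdf(0.90)  # approximately 1.2816

def fa_exceeded(p, fam, du, dl):
    """Value of Fa (dB above kT0b) exceeded p percent of the time.

    Two intersecting normal-in-dB (log-normal) branches share the median
    fam; the upper branch is scaled so that fam + du is exceeded 10% of
    the time, the lower branch so that fam - dl is exceeded 90% of the time.
    """
    if not 0 < p < 100:
        raise ValueError("p must be in (0, 100)")
    z = NormalDist().inv_cdf(1 - p / 100)  # standard-normal exceedance point
    sigma = (du if z >= 0 else dl) / _Z10  # pick the branch above/below median
    return fam + sigma * z

# Illustrative values only (not read from the figures):
fam, du, dl = 60.0, 9.7, 7.0
print(round(fa_exceeded(10, fam, du, dl), 1))  # fam + du -> 69.7
print(round(fa_exceeded(50, fam, du, dl), 1))  # median   -> 60.0
print(round(fa_exceeded(90, fam, du, dl), 1))  # fam - dl -> 53.0
```

By construction the function reproduces the median and both deciles exactly; intermediate percentages are interpolated along whichever normal branch applies.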

Figure 5. Lightning emission peak field strength, 1 mile distant. (Reprinted from p. 369 of Ref. 18, by permission, © 1982 IEEE.)

Figure 6. Expected values of atmospheric radio noise, Fam (dB above kT0b at 1 MHz) (winter, 0000-0400 LT) (12).

Statistics of Fa for Man-Made Noise. In 1974, Spaulding and Disney (22) presented results from many years of measurements of man-made radio noise. They devised methods for estimating the noise power and noise amplitude statistics that are important in the design of radio systems. These methods are described in the CCIR Reports (13) and have been widely used by industry. Figure 3 summarizes these results in terms of the median antenna noise figure Fam. As with atmospheric noise, man-made noise is both nonstationary and non-Gaussian and is a significant source of radio noise for frequencies below a few hundred megahertz. The antenna noise figure Fa varies both in time and location. The noise level depends on the type and extent of human activities, which are conveniently classified into four man-made noise environments (13) described in Table 1.

The within-the-hour time variability of Fa is commonly described by two log-normal distributions (21), as described previously for atmospheric noise. Values of Du and Dl are given in CCIR Report 258 as a function of frequency and environment. More recently, Spaulding and Stewart (21) analyzed the data used to obtain these decile values and found that it is appropriate to use the values Du = 9.7 dB and Dl = 7 dB, independent of environmental category and frequency. Other proposed noise models described in Report 258 include a simple Gaussian model, which does not describe the skewness observed in measured noise data, and a more complex chi-square model.

As an example, the distributions of Fa using Du = 9.7 dB and Dl = 7 dB at 137 MHz for business, residential, rural, and quiet rural noise environments are shown in Fig. 9. These data include the contribution of galactic noise, which is significant only in the quiet rural noise environment.
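Because Fa values are decibel quantities, independent contributions such as man-made and galactic noise combine as powers, not as sums of dB values. A minimal sketch of this power addition follows; the input values are illustrative placeholders, not figures taken from the measurement data.

```python
import math

def combine_fa(*fa_dbs):
    """Combine independent noise contributions given as Fa in dB.

    Each Fa is converted to a linear power ratio, the ratios are summed
    (independent noise powers add), and the total is converted back to dB.
    """
    total = sum(10 ** (fa / 10) for fa in fa_dbs)
    return 10 * math.log10(total)

# Two equal contributions raise the total by 3 dB; a contribution 10 dB
# below the dominant one adds only about 0.4 dB, which is why galactic
# noise matters only where man-made noise is very low.
print(round(combine_fa(20.0, 20.0), 1))   # -> 23.0
print(round(combine_fa(20.0, 10.0), 2))   # -> 20.41
```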

Location variability is also an important consideration when characterizing Fa. The usual assumption (22) is that Fam is the noise figure exceeded 50% of the time at 50% of the locations. Hence, the time distribution of Fa as shown in Fig. 9 is the noise power exceeded at 50% of the locations for a particular environment. If it is assumed that the location variability is Gaussian, then the value Fa that is exceeded at other than 50% of locations is obtained from

Fa = Fam + √2 σL erfc⁻¹(2q)

(q being the fraction of locations at which Fa is exceeded)


Figure 7. (a) Variation of radio noise with frequency (winter, 0000-0400 LT); expected values of atmospheric noise and expected values of galactic noise. (b) Data on noise variability and character (winter, 0000-0400 LT) (12). σFam: standard deviation of values of Fam. Du: ratio of upper decile to median value, Fam. σDu: standard deviation of values of Du. Dl: ratio of median value, Fam, to lower decile. σDl: standard deviation of values of Dl. Vdm: expected value of median deviation of average voltage; the values shown are for a bandwidth of 200 Hz. σV: standard deviation of Vd (12).

Figure 8. The distribution of Fa values for atmospheric radio noise at Boulder, Colorado, at 3 MHz, for the winter season, 0000-0400 hours (21).

Table 1. CCIR Report 258 Definitions of Man-Made Noise Environments






Business: Areas where predominant usage is for any type of business.
Residential: Areas used predominantly for single or multiple family dwellings (at least five single-family units per hectare); no large or busy highways.
Rural: Areas where dwelling density is no more than one every two hectares.
Quiet Rural: No definition given.

where erfc⁻¹ is the inverse complementary error function and σL is the standard deviation of the location distribution.

The location variability in terms of the standard deviation σL of the median value as a function of frequency and environment is given in Table 2 (13). As may be expected, σL for the business environment is much larger than for either the residential or rural environments.
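The location correction above can be evaluated with the identity √2 erfc⁻¹(2q) = Φ⁻¹(1 − q), where Φ⁻¹ is the standard normal quantile function. A minimal sketch, assuming Fam and σL are already known (the numbers used here are illustrative, not values from Table 2):

```python
from statistics import NormalDist

def fa_at_locations(fam, sigma_l, q):
    """Fa exceeded at a fraction q of locations.

    Implements Fa = Fam + sqrt(2) * sigma_l * erfc^-1(2q), using the
    identity sqrt(2) * erfc^-1(2q) == NormalDist().inv_cdf(1 - q).
    """
    if not 0 < q < 1:
        raise ValueError("q must be in (0, 1)")
    return fam + sigma_l * NormalDist().inv_cdf(1 - q)

# Illustrative placeholders: median Fam = 44 dB, sigma_l = 6 dB.
print(round(fa_at_locations(44.0, 6.0, 0.5), 1))  # median location -> 44.0
print(round(fa_at_locations(44.0, 6.0, 0.1), 1))  # noisiest 10% of sites -> 51.7
```

At q = 0.5 the correction vanishes and the median Fam is recovered, consistent with the convention that Fam is the value exceeded at 50% of locations.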

Multiple Sensors

This overview of the various measuring signals has shown that each individual sensor has specific advantages and disadvantages for rate adaptation. For that reason, some manufacturers pursue the strategy of using two or more sensors simultaneously, chosen so that their properties complement each other as much as possible. A particularly attractive combination is that of a motion sensor and a metabolic sensor: the former quickly detects a load but measures its level only imprecisely, while the latter responds with a delay but detects the degree of stress more precisely and is less sensitive to external disturbances (e.g., tremors). Currently, pacemakers are available that combine a motion sensor with QT-interval measurement; others utilize the motion signal together with minute ventilation measurement. The clinical results are promising (19,33), but it should not be overlooked that the integration of several sensors entails higher power consumption, an increased programming effort for the physician, and, quite relevantly, higher costs. Because of growing economic pressure and increasing time limitations for pacemaker follow-up, the measurement of yet more variables is questionable. Therefore, a different strategy is favored by other manufacturers.

If nervous activity, such as the sympathetic tone, can be determined with sufficient accuracy (e.g., with the intracardiac acceleration sensor or unipolar intracardiac impedance measurement), then access is established to the ANS and thus to a widely ramified and highly complex network of biological sensors (baroreceptors, chemoreceptors, etc.). Information from the body's intrinsic sensor system, already processed in the circulatory center, is thus used for rate adaptation. This intrinsic sensor system detects the various internal and external disturbances more comprehensively, more quickly, and more precisely than an artificial multisensor system. Moreover, it also captures nonmetabolic influences such as emotional stress. Consequently, it is to be expected that systems that reliably measure just one good indicator of nervous activity will be superior to those measuring several parameters with a lesser correlation.


In its comparatively short history, pacemaker therapy has undergone rapid development. Today's innovations allow the physician to treat complex heart rhythm disturbances with a therapy that is reliable and tailored to the individual. Another advantage over drug therapy is that pacemaker therapy is less prone to side effects. While in the early years the life-supporting function was the prominent focus, modern rate-adaptive dual-chamber pacemakers secure a high quality of life for the patient, owing to their ability to reestablish the synchrony of atrial and ventricular contraction and their provision of physiological rate adaptation. With minimal dimensions and weight, present implants possess a service lifetime that corresponds to the life expectancy of most patients.

Remote monitoring of pacemaker-dependent patients via telemedicine will further increase patient safety in the future. Extended Holter functions of the internal memory will provide the physician with more precise diagnostic information. Considering time and cost limitations, the focus is no longer solely on quantity; instead, the therapy-relevant information must be selectively extracted by a suitable choice of the parameters to be monitored, supported by expert systems. These enhanced diagnostic features will allow pacemakers to monitor other functional aspects of the heart, such as the effects of medication or ischemia. An application that has already proven successful is allograft rejection monitoring of patients who underwent heart transplantation, by means of the ventricular evoked response signal (51). To decrease programming and follow-up effort, automatic functions are increasingly used that guarantee consistent, optimal pacemaker function and thus enable physicians to focus more attention on the patient. Some of these functions include automatic monitoring and adjustment of the sensing threshold and pacing energy, which secures safe sensing and pacing with minimal power consumption. Further automatic functions will follow (e.g., automatic compensation of a possible sensor drift in a rate-adaptive system).

Paralleling this approach, several concepts are being pursued to bring tachycardic rhythm disturbances into the indication spectrum as well. Promising methods are antitachycardia pacing and multisite pacing. Direct stimulation of the afferent vagal nervous pathways in the myocardium can also contribute to reestablishing the neurohumoral balance. Together with the preventive measures mentioned, access to neurohumoral parameters makes possible the recognition of life-threatening tachycardias at an early stage, and their suppression, without having to trigger an electroshock from an implantable defibrillator. Access to the tone of the autonomic nervous system by intracardiac measuring methods will also contribute substantially to therapy optimization in other applications, such as ensuring cardiac activity after a cardiomyoplasty (52).