


Switchgear using the arc suppression properties of insulating liquids (oils) was invented in the early 1880s. In the early days, the structure of switchgear was simple: a pair of electrodes was placed in insulating oil. In such switchgear the arc suppression mechanism is also simple: as the electrode spacing increases, the arc lengthens and is suppressed. This suppression results from the cooling effect of hydrogen gas produced when the arc decomposes the insulating oil; this hydrogen plays the central role in arc suppression in insulating oil.

Arc Suppression by Hydrogen

The energy of the arc between a pair of electrodes in the insulating oil is dissipated through several channels: conduction and radiation at the electrodes; evaporation and decomposition of the insulating oil; heating and expansion of the gases produced by that decomposition; and dissociation of hydrogen. Fifty to seventy percent of the produced gas is hydrogen; the other gases are acetylene, methane, and ethane. As shown in Table 1, the thermal conductivity of hydrogen at room temperature is higher than that of the other gases. At 4000°C it is about 50 W/m·K, more than five times the value for the other gases, so the cooling effect of hydrogen is correspondingly larger. By this cooling, the arc is suppressed at the zero-current point of the alternating current, and the current is cut off. Switches that utilize this arc suppression mechanism are called plain-break oil circuit breakers.

Table 1. Thermal Conductivity of Gases (W/m·K)

J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.

As the current increases, it becomes more difficult to suppress the arc. Therefore, the breaking (cutoff) time of the current becomes longer. However, when the current exceeds a certain magnitude, a large amount of hydrogen, enough to suppress the arc, has been produced. At this point the breaking time is again reduced. This means that the breaking time shows a maximum value at a certain magnitude of current.

Arc Suppression by an Explosion Chamber

Cooling may not be sufficient to suppress a high-current arc and to lessen the breaking time. In this case explosion chambers are used. In a simple explosion chamber, a movable electrode of the circuit breaker (switchgear) acts as a stopper of the chamber. In the early stages of electrode separation, the arc is enclosed in the limited space of the chamber. Therefore, the pressure in the chamber rises owing to gases produced by decomposition of the insulating oil. As this process proceeds, the stopper is withdrawn, creating an exhaust hole.

Through this exhaust hole the gases in the chamber are released abruptly. By this release, flows of gases and oil are produced, and the arc is pressurized and blasted. These processes create efficient arc suppression. Furthermore, when the lengthened arc contacts insulating solids while enclosed in narrow gaps between them, more efficient arc suppression results.

Because gases are abruptly exhausted through the hole, adiabatic expansion occurs, and cooling is expected. In some oil circuit breakers this is the main effect utilized. Some scientists maintain that in oil circuit breakers with an explosion chamber, arc suppression can be entirely explained by the cooling effect owing to the adiabatic expansion. In fact, however, insulating oils exhibit not only the arc suppression property resulting from the cooling effect of hydrogen, but also substantially different suppression properties of the oils themselves. In high-current arc suppression, these two types of suppression properties are combined.

In circuit breakers with an explosion chamber, a large pressure rise is expected in the chamber in the case of high-current arc suppression, but not in the case of low-current suppression. In the latter case the breaking time is therefore longer, because the arc must be suppressed by the cooling effect of hydrogen alone. The breaking time thus shows a maximum at a certain magnitude of current (the critical current), but this breaking time is much shorter than that of the plain-break oil circuit breaker. A circuit breaker with an explosion chamber necessarily has plural arc suppression mechanisms.

In some circuit breakers, in the region of the critical current, an auxiliary flow of the oil is forced by a piston to supplement the pressure rise and the conduction cooling effect. By this means a constant breaking time is obtained over a wide range of current.

Plain-break oil circuit breakers are used for low voltages and low currents, such as 3.6 kV to 7.2 kV and 4 kA to 8 kA. Oil circuit breakers with an explosion chamber are used for high voltages and high currents. In the case of multibreak circuit breakers, 700 kV with currents of several tens of kiloamperes has been achieved (1).


The behavior of insulating liquids under highly stressed conditions and under conditions of partial discharge is among the most important items in screening tests for newly developed insulating liquids and also in the routine testing of liquids.

Gassing Rate

Methods of evaluating gas absorption and evolution of insulating oils under high stress after saturation with a gas are described in IEC 628 and ASTM D 2330. The fundamental approaches are similar to each other and amount to a modified Pirelli method.

The conditions used in such methods differ from actual field conditions, especially in the case of hermetically sealed equipment such as power cables, capacitors, and many power transformers.

Discharge Resistance

To evaluate the behavior of insulating liquids in a highly stressed impregnated system, and to obtain numerical results for recently developed impregnants with very high resistance to partial discharge, the above-mentioned methods are not sufficient. As new liquids, especially those with high aromaticity, are developed and applied voltage stresses are progressively increased, a new method is needed to characterize the ability of such insulating liquids to prevent or suppress partial discharge under high stress. One such method, determination of the partial discharge inception voltage with a needle and spherical ball oil gap, is described in IEC 61294. The partial discharge inception voltage obtained by this method is largely related to the chemical structure of the liquid and correlates with partial discharge in impregnated insulating systems such as capacitor elements.

Network Convergence

The evolution and convergence towards a common core infrastructure is sometimes called the New Generation Network (NGN) architecture (see Fig. 10).

This network evolution is supported by techniques for separation of control and switching, such as the media gateway control protocol H.248. Call session control functions and protocol collections such as H.323 enable call setup, including coded and compressed voice calls and choice of the coding standard to be used. The Session Initiation Protocol (SIP) has a similar but more limited scope: the setup of communication sessions between two parties and the selection of a coding standard using the Session Description Protocol (SDP).
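As an illustration of how SDP conveys the codec choice, a minimal, hypothetical SDP offer for a voice session might look as follows (the user name, host, address, and port are example values):

```text
v=0
o=alice 2890844526 2890844526 IN IP4 host.example.com
s=-
c=IN IP4 192.0.2.10
t=0 0
m=audio 49170 RTP/AVP 0 8
a=rtpmap:0 PCMU/8000
a=rtpmap:8 PCMA/8000
```

Here the m= line offers G.711 μ-law (payload type 0) and A-law (payload type 8) audio; the answering party selects among the offered formats, which is how the coding standard is negotiated between the two parties.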

The IP Multimedia Subsystem (IMS), defined by the European Telecommunications Standards Institute (ETSI) and the 3rd Generation Partnership Project (3GPP), allows video and other media forms to be exchanged and charged on a session-by-session basis from peer to peer, in a way similar to classic phone calls.

So-called “softswitches,” or media gateway controllers, include call session control functions for handling voice calls and other session-oriented services. They also ensure that sessions can be connected via a physical switch or media gateway (MGW), and they handle the signaling between network nodes and other networks. The call session control function establishes a call or session and further manages its reserved connection path resources end to end, for example through an ATM or IP backbone network, as well as media stream processing. The MGWs provide physical switching and interfaces to access nodes and other networks.

Figure 10. Converged network architecture with decoupling of access, transport, control and service functions.

EVOLUTION TRENDS

Technology Trends

Due to the effects of semiconductor process scaling, improved chip fabrication yield, and increasing numbers of connectivity layers, the storage capacity of memory and the execution speed of processors have doubled every 18 months during the last 40 years, following the so-called Moore's law. This exponential growth of transistors per chip will continue, but will force new hardware architectures, such as chip multiprocessors and systems on chip, in order to keep energy use within reasonable limits. The development of optical fibers, including the introduction of wavelength multiplexing, is perhaps even faster. Thus there are many factors that lead to cheaper nodes and higher bit rates in the network. At the same time, digital coding and compression techniques have improved to the point that voice with traditional telecom quality can be transmitted using only a fraction of the bandwidth used today. These developments are changing the design of both nodes and networks.
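The doubling claim above compounds dramatically; a quick back-of-the-envelope calculation (a sketch, not from the source) shows the growth factor it implies:

```python
# Capacity doubling every 18 months, sustained over 40 years:
doublings = 40 * 12 / 18      # about 26.7 doubling periods
growth = 2 ** doublings       # roughly 1e8, i.e. eight orders of magnitude
```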

It is also important to provide increased interoperability between network standards, since end users do not want to be concerned about where a person is physically located or to which network the person is connected. The introduction of universal personal numbers can solve this and lead to a convergence of fixed and mobile telephony. The possibility of accessing short message services (SMS), fax, and e-mail via fixed and mobile devices, and so-called Internet telephony, are examples of services that illustrate the need for interoperability and convergence of telecommunications and data communications.

High bit rates at a low price, combined with the demand for real-time multimedia services, indicate that the network must either become more flexible or consist of several different but interoperating networks. Packet, cell, and new, more flexible circuit switching techniques, supported by new signaling protocols (such as MPLS), can meet these needs. In order to integrate service execution, control, and connectivity horizontally across multiple access networks, a layered architecture can be used, with a common transport layer based on IP and Ethernet technology over fiber, rather than delivering single services such as voice telephony or data access through vertically integrated networks. This architecture supports efficient IP packet-based transport of both signaling and payload, which is not possible with classic switches. In this way a single IP infrastructure is introduced that can handle all network services, such as fixed and mobile communications.

Service Control and Content Access Control


High availability of the telecom network and associated services is the single most important operator and subscriber requirement. Normal requirements on maximum unavailability are on the order of one or a few minutes of subscriber unavailability per year. This includes downtime due to faults in the exchange and in the transmission equipment and software, but also unavailability due to planned software upgrades and often also accidents outside the control of the vendor, such as fires, damaged transmission cables, and incorrect operation of the exchange. Several methods are used to increase the availability of the exchange to the subscriber:

• Redundancy, including fault-tolerant processors

• Segmentation

• Diagnostics of hardware and software faults

• Recovery after failure

• Handling of overload

• Disturbance-free upgrades and corrections

• Robustness to operator errors

Each of these is treated briefly below.

Redundancy. In order to cope with hardware faults, redundant hardware is used for those parts of the switch that are critical for traffic execution. Requirements are on the order of 1000 years for the mean time between system failures (MTBSF).
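To relate the MTBSF requirement to the minutes-per-year figures quoted earlier, the steady-state unavailability can be sketched as follows; the one-hour mean repair time used here is an assumed value, not from the source:

```python
HOURS_PER_YEAR = 8766                  # average year, leap years included
MINUTES_PER_YEAR = HOURS_PER_YEAR * 60

def unavailability(mtbsf_hours: float, mttr_hours: float) -> float:
    """Steady-state unavailability = MTTR / (MTBSF + MTTR)."""
    return mttr_hours / (mtbsf_hours + mttr_hours)

# MTBSF of 1000 years, assumed mean time to repair of 1 hour:
u = unavailability(1000 * HOURS_PER_YEAR, 1.0)
downtime_min_per_year = u * MINUTES_PER_YEAR   # about 0.06 min per year
```

On these assumptions, complete system failures consume only a small share of the one-to-few-minutes yearly budget; the remainder is taken up by upgrades, network faults, and operational accidents.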

Specifically, current technology requires that a redundant processor be available in synchronized execution (hot standby), ready for a transparent takeover if a single hardware fault occurs in one processor. An intelligent fault analysis algorithm is used to decide which processor is faulty. In a multiprocessor system, n + 1 redundancy is normally used, where each processor can be made of single, double, or triple hardware. When one processor fails, its tasks are moved to the idle (cold standby) processor. A similar redundancy method is based on load sharing, where the tasks of the failed processor are taken over by several of the other processors that are not overloaded themselves.

The group switch hardware is also normally duplicated or triplicated, because it is so vital to the exchange functions. Less central hardware devices, such as trunk devices, voice machines, transceivers, signal terminals, and code receivers, are normally pooled, so that a faulty device is blocked from use and all users can instead access the remaining devices until the faulty device is repaired.

Segmentation. To avoid system failure, a fault must be kept isolated within a small area of the exchange. This is done by segmentation of hardware, with error supervision at the interfaces. In software, segmentation is achieved by partitioning of, and restricted access to, data structures; only the owner of data can change it, where the owner can be a call process or a function.

Diagnostics of Faults. After the occurrence of a fault in hardware or software, the fault must be identified and localized, its effect restricted, and the exchange moved back to its normal state of execution. For this to work, the diagnostics must be extensive and automatic. The exchange must be able to identify the faulty software and hardware and must be able to issue an alarm, usually to a remotely located operator.

Recovery After Failure. After a fault has been detected, its effect should be restricted to the individual call or process (for instance, an operator procedure or a location update by a mobile subscriber) or an individual hardware device. This call or process is aborted, while the rest of the exchange is not affected. The recovery must be automatic and secure. In a small fraction of events, the fault remains after the low-level recovery, or the initial fault is considered too severe by the fault-handling software, so that a more powerful recovery procedure must be used. The process abort can be escalated to temporary blocking of hardware devices or software applications and, if required, result in the restart of an entire processor or a number of processors. If the restart fails to recover the exchange into normal traffic handling, new data and software are loaded from internal or external memory.

Handling of Overload. The exchange is required to execute traffic literally nonstop, and when offered more traffic than it can handle, the rejection of overflow traffic should be made gracefully. ITU requires that an exchange offered 150% of the traffic it was designed for should still have 90% of its maximum traffic handling capacity. The exchange must also be able to function without failure during extreme traffic loads. Such extreme loads can be short peaks lasting a few milliseconds or sustained overload due to failures in other parts of the network. Overload handling is accomplished by rejecting excess traffic very early in the call setup, before it has used too much processor time or any of the scarce resources in the switching path. Figure 9 shows the overload performance with and without an overload control function.
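The benefit of early rejection can be seen in a toy model (an illustrative sketch with assumed cost figures, not the source's algorithm): completing a call costs one unit of processor time, while rejecting an attempt costs 0.4 units if done late in call setup but only 0.02 units if done at the first point of entry.

```python
def completed_calls(offered: float, budget: float, reject_cost: float) -> float:
    """Calls completed per second when each completion costs 1.0 unit of
    processor time and each rejected attempt costs `reject_cost` units.
    Solves x * 1.0 + (offered - x) * reject_cost = budget for x."""
    x = (budget - offered * reject_cost) / (1.0 - reject_cost)
    return max(0.0, min(offered, x))

BUDGET = 100.0  # processor time per second; engineered capacity = 100 calls/s

late_reject = completed_calls(offered=150.0, budget=BUDGET, reject_cost=0.4)
early_reject = completed_calls(offered=150.0, budget=BUDGET, reject_cost=0.02)
# late_reject is about 67 calls/s; early_reject is about 99 calls/s,
# comfortably above the 90% floor that the ITU requirement implies.
```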

Disturbance-Free Upgrades and Corrections. Upgrades and corrections of the exchange must not disturb ongoing traffic execution. This should be true both for fault corrections and when new functions are introduced. The architecture must thus allow traffic to be executed in redundant parts while some parts of the exchange are upgraded.

Robustness to Operator Errors. Security against unauthorized access is accomplished by the use of passwords and physically locked exchange premises. The user interface part of the exchange can verify that valid operation and maintenance instructions are followed, and it can issue an alert or prohibit other procedures. Logging of operator procedures and functions for undoing a procedure can be used. If a board is incorrectly removed from the exchange, the exchange should restrict the fault to that particular board, use redundant hardware to minimize the effect of the fault, and then indicate that the board is unavailable. A simple user interface with on-line support makes operator errors less probable.

Grade of Service. The real-time delays in the exchange must be restricted in order to transmit speech correctly. Packet-switched connections have problems achieving good real-time speech quality for this reason, especially during heavy usage. Circuit-switched networks have so far given the best real-time performance regarding delays and grade of service for voice and video, compared with packet data networks. ATM switching (owing to its short, fixed-size cells) also fulfills the grade of service requirements and defines service classes for different requirements.


Scalability

There is a need for exchanges that are scalable in capacity, from the very small (such as base stations and local exchanges in sparsely populated areas) to the very large, mainly the hubs (transit exchanges) and MSCs of the networks and the exchanges in metropolitan areas. Furthermore, there is sometimes a requirement for downward scalability regarding physical size and power consumption, particularly for indoor or inner-city mobile telephony.

The following are the common system limits for down­ward scalability of an exchange:

• The cost to manufacture and to operate in service will be too high per subscriber or line for small configurations.

• The physical size is limited by the hardware technology, by the requirements for robustness to the environment, and by what is cost-efficient to handle.

• The power consumption is limited by the hardware technology chosen.

The following are the common system limits for upward scalability of an exchange, each treated briefly below:

• (Dynamic) real-time capacity

• (Static) traffic handling capacity

• Grade of service (delays)

• Memory limits

• Data transfer capacity

• Dependability risks

Processing Capacity. New, more advanced services and techniques require more processing capacity. This trend has been valid for (a) the replacement of analog technology with digital processor-controlled functions and (b) the development of signaling systems from decadic and multifrequency to packet-mode digital signaling, including trunk signaling protocols such as the ISDN user part (ISUP), together with the mobile application part (MAP) and the transaction capability application part (TCAP), and the development of mobile telephony. In the charging area, the trend from pulse metering to detailed billing affects the call capacity.

The number of calls per subscriber has also increased due to lower call costs from deregulation and due to the use of subscriber redirection and answering services.

Traffic Capacity. It is very important to design and configure hardware and software correctly, in order to minimize hardware costs and at the same time ensure sufficiently low congestion in the exchange and in the network. Normally the switch fabric is virtually nonblocking, and congestion occurs in other resources, such as the access lines, trunk lines, and other equipment. The relation between congestion probability and the amount of traffic is well known if all devices are accessible and the traffic follows a Poisson process, that is, if the times between offered calls are independent and exponentially distributed. In some cases the probability can be calculated explicitly. In more complex device configurations with non-Poisson traffic, the congestion probabilities are most easily calculated by simulation techniques.
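For the Poisson case with fully accessible devices, the explicit calculation alluded to above is the classical Erlang B formula, which can be evaluated with a short recursion:

```python
def erlang_b(offered_erlangs: float, n_devices: int) -> float:
    """Erlang B blocking probability for Poisson traffic offered to a
    fully accessible group, via the standard recursion
    B(0) = 1,  B(k) = A*B(k-1) / (k + A*B(k-1))."""
    b = 1.0
    for k in range(1, n_devices + 1):
        b = offered_erlangs * b / (k + offered_erlangs * b)
    return b

# Example: 2 erlangs offered to a group of 4 trunks blocks about 9.5%
# of call attempts; adding trunks reduces the blocking rapidly.
blocking = erlang_b(2.0, 4)
```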

Memory. The amount of memory per subscriber line or trunk line is another way to measure the complexity of a telecom application. The trend in this area is similar to that of processing capacity, and the same factors are responsible for the large increase in memory needs. Due to the real-time requirements, fast memory is used extensively; secondary memory is used only for storage of backups, log files, and other data where time is not critical.

Transfer Capacity. A third part of the switching system capacity is the data transfer from the exchange to other nodes, for example, to other exchanges and network databases, billing centers, statistical postprocessing, and nodes for centralized operation. There has been a growing demand for signaling link capacity due to large STPs, for transfer capacity from the exchange to a billing center due to detailed billing and large amounts of real-time statistics, and for transfer capacity into the exchange due to the increased amount of memory to reload at exchange failure.

Dependability Risks. Although dependability has increased for digital exchanges, there is a limit to how large the nodes in the network can be built. First, the more hardware and software functions assembled in one exchange, the more errors there are. The vast majority of these faults will be handled by low-level recovery, transparent to the telecom function, or will affect only one process. However, a small fraction of the faults can result in a major outage that affects the entire exchange for some time.

As an example, assume that the risk of a one-hour complete exchange failure during a year is 1% for one exchange. If we add the functionality of an SCP to an HLR node, then we more than double the amount of software, and presumably the number of faults, in the node. The risk of a major outage should be larger in a new exchange introducing new software with new faults. Only if unavailability due to completely stopped traffic execution is much less than the total effect of process abortions and blocked hardware devices can we build exchanges of unlimited software complexity.

The second reason for limited complexity from a dependability point of view is that network redundancy is required and can be used only if there are several transit exchanges in the network.

Life-Cycle Cost

Since the 1980s, the operating cost has become larger than the investment cost of an exchange. Thus the emphasis on efficient operation and maintenance has increased, regarding both ease of use and the utilization of centers that remotely operate a number of unstaffed exchanges. For ease of use, the telecommunications management network (TMN) was an attempt by the ITU to standardize the operator interface. After several years this standard is still not widely used. Instead, the operator interface depends to a large extent on the exchange manufacturer as well as on the requirements of the telecom operator company. Several open and proprietary user interfaces are common.

For central operation, more robust methods for remote activities have evolved. Software upgrades and corrections, alarm supervision and handling, collection of statistics and charging data, and handling and definition of subscriber data are all performed remotely. Transmission uses a multitude of techniques and protocols. Open standard protocols have taken over from proprietary ones.

In addition, important parts of the life-cycle cost are (a) product handling for ordering and installation and (b) spare part supply.


The architecture of computer-controlled exchanges is influenced to a large extent by the architecture and technology of both the switching system and the (central) control system, and by how they are related via decentralized (regional) control systems (see Fig. 4).

A modular architecture can lower the costs of system handling and make it easier to adapt the system to the changing world of telecommunications. In a truly modular system each module is fully decoupled and independent of the internal structure of other modules. There are different forms of modularity, for example:

• Application modularity. Makes it easier to combine several larger applications in one node.

• Functional modularity. The system is defined in terms of functions rather than implementation units. It should be possible to add, delete, and change functions without disturbing the operation of the system.

• Software modularity. The software modules should be programmed independently of each other, and they should interact only through defined interfaces and protocols. In this way, new or changed modules can be added without changing existing software.

• Hardware modularity. Ensures that new hardware can be added or changed without affecting other parts of the exchange.

At the highest level, the system architecture of the exchange can be divided into various application modules, in analogy to how telecommunications nodes interact and communicate, using protocols that enable modules to be added or changed without affecting the other modules.

Typically the implementation of a telecommunications exchange can be divided into:

• Application modules. These implement various telecommunication applications, much like virtual nodes, using standardized interfaces to other application modules. Application modules act as clients to resource modules.

• Resource modules. These modules coordinate the use of common resources available to applications by means of well-defined interfaces to the users. Resource modules act as servers to application modules.

• Control modules. These modules are responsible for the operating system functions, input/output functions, basic call service functions, and so on.
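The client/server relation between application and resource modules can be sketched as follows; all class and method names here are hypothetical, chosen only to illustrate interaction through a defined interface:

```python
class GroupSwitch:
    """Resource module: coordinates a shared resource (switch paths)
    and acts as a server to application modules."""
    def __init__(self, n_paths: int) -> None:
        self._free = list(range(n_paths))   # internals hidden from clients

    def connect(self):                      # the defined interface
        """Allocate a free path, or return None if none is available."""
        return self._free.pop() if self._free else None

    def disconnect(self, path: int) -> None:
        """Return a path to the pool."""
        self._free.append(path)


class PstnServices:
    """Application module: acts as a client, using the resource module
    only through its interface, never through its internal data."""
    def __init__(self, switch: GroupSwitch) -> None:
        self._switch = switch

    def setup_call(self):
        return self._switch.connect()
```

Because the application module never touches `_free` directly, the resource module's implementation can be replaced without changing its clients, which is the point of the modularity principle described above.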

Application Modules

The application modules implement various telecommunication applications and have standardized interfaces to resource modules. In general an application consists of access and services. Examples of application modules are:

• Analog access module

• Digital access module

• Mobile access module

• PSTN user services module

• ISDN user services module

• MSC user service module

• Home location register (HLR) module

Resource Modules

The resource modules typically handle and coordinate the use of common resources and may contain both software and hardware. The most important part is the group switch. Trunks and remote and central subscriber switches (RSS and CSS, respectively) are connected to the group switch. The trunks are used to connect the switch to other switches, to data networks, to mobile base stations, and so on. The subscriber switch handles the subscriber calls and concentrates the traffic (see Fig. 5).

Group Switch. The main function of the group switch is the selection, connection, and disconnection of concentrated speech or signal paths. The group switch often has a general structure.

The overall control of the group switch is performed by the central processor system. The regional processors take care of simpler and more routine tasks, such as periodic scanning of the hardware, whereas the central control system handles the more complex functions. Associated functions included in the group switching resource module are network synchronization devices and devices to create multiparty calls.

Subscriber Switch. The subscriber switch handles selection and concentration of the subscriber lines. Its main functions are as follows:

Figure 4. Typical architecture of stored program controlled exchange.

Figure 5. Switch architecture.

• Transmission and reception of speech and signaling data to and from the subscriber equipment (for example, on- and off-hook detection).

• Multiplexing and concentration of the subscriber lines, to save hardware and make more efficient use of the communication links between the subscriber stage and the group switch.

The architecture should be modular and should enable combining PSTN and ISDN access in the subscriber stage. The subscriber switch can be colocated with the group switch in the exchange (central subscriber switch, CSS) or located at a distance from the exchange (remote subscriber switch, RSS).

Remote Subscriber Multiplexer. The remote subscriber multiplexer (RSM) is an add-on subscriber access node, used in the access network, which can serve small groups of subscribers. It provides both mobile and standard telephony connections. The RSM multiplexes and concentrates the traffic to the central or remote subscriber switch but does not carry out traffic switching functions.

Trunk and Signaling. This resource module includes the circuits for connecting trunks and signaling devices to the group switch. The module should handle the adaptation to different signaling systems, namely common channel signaling as well as various register and line signaling systems.

Traffic Control. This resource module contains the traffic handling and traffic control functions of the exchange. It is responsible for finding the most suitable route between calling and called subscribers and for verifying that call establishment is allowed.

Operation and Maintenance. This resource module enables tasks such as supervision of traffic, testing of transmission accessibility and quality, and diagnostics and fault localization of devices or trunks.

Common Channel Signaling. This resource module includes the signaling terminals and the message transfer part (MTP) functions for common channel signaling systems such as SS7.

Charging. This resource module is used in exchanges that act as charging points. Both pulse metering and specified billing (toll ticketing) can be offered. It should be possible to charge both calls and services, and the charging should be based on:

• Usage

• Provision/withdrawal of subscriber services and sup­plementary services

• Activation/deactivation of subscriber services and supplementary services

Control Modules

The primary function of the control modules is to provide the real-time processing and execution environment required to execute software in the application modules and resource modules used to perform traffic-handling functions and call services. The processing can be centralized, where one processor takes care of all tasks, or distributed, where the processing of information is spread over several processors.

Execution of telecom software imposes stringent real-time requirements on the control system. Calls appear stochastically, short response times are needed, and overload situations must be handled. The main control modules are the central processor(s), the data store for call data, and the program store for the actual programs.

In order to achieve an efficient overall control system, it can be divided into:

• Central control. One or several processors that perform the non-routine, complex program control and data-handling tasks, such as execution of subscriber services, collection of statistics and charging data, and updating of exchange data and exchange configuration.

• Regional control. A set of distributed processors that perform routine, simple, and repetitive tasks but also some protocol handling. They are of different types, optimized for their main tasks, for example input/output processing. They often have strict real-time and throughput requirements, for instance for protocol handling, and may have customized hardware support in the form of ASIC, FPGA, and DSP circuits for this purpose.

Switching Techniques

Until about 1970 most switches were analog, based on electromechanical rotary switches and crossbar switches. Since then the utilization of digital techniques has become dominant. Digital switches have been based on synchronous time- and space-multiplexed circuit switch technology. For pure data communication, packet switching technology is often used. In order to support voice and data sharing a common infrastructure, the asynchronous transfer mode (ATM) has been developed. Synchronous time- and space-multiplexed circuit switching is based on synchronous transfer mode (STM) carrier and synchronization technology (the STM transport technique is also often used as a carrier for ATM).

A typical STM-based switch architecture is made up of a combination of time and space switches (T and S, respectively). A space switch connects physical lines by changing positions in the space domain (see Fig. 6a), while a time switch changes the ordering sequence of data (voice) samples by changing positions in the time domain, as illustrated in Fig. 6b.

The elements T and S can be combined in several ways to realize a configurable switch. Usually time switching is used in the input and output stages, while space switching is often used in the central parts of a switch. This basic time-space-time (TST) switch structure can be used both in subscriber and in group switching stages. The first part of a TST switch is a time switch, which interchanges time slots between the external incoming digital paths and the space switch. The space switch connects the time switches at the input and the output. The last part of the TST switch is a time switch, which connects the time slots between the external outgoing digital paths and the space switch (see Fig. 7).

The time switch moves the data contained in each time slot from an incoming bit stream to an outgoing bit stream, but with a different time slot sequence. To accomplish this, each time slot is stored in data store (DS) memory (write) and later read out and placed in a new position (read). The reads and writes must be controlled, and the control information is held in a control store (CS) memory. The timing of the DS and CS is governed by a time controller (TC). Examples of control actions are marking a time slot busy or idle.
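The write and read phases just described can be sketched in a few lines. This is a minimal model assuming an 8-slot frame, with the DS and CS represented as plain lists; the naming mirrors the text, but the code is illustrative only, not a description of any particular exchange.

```python
# Minimal sketch of a time switch (time slot interchanger),
# assuming an 8-slot frame.  DS and CS are plain lists here;
# in hardware they are dedicated memories.

FRAME_SLOTS = 8

def time_switch(incoming_frame, control_store):
    """Reorder the time slots of one frame.

    incoming_frame : one sample per time slot, written
                     sequentially into the data store (DS).
    control_store  : CS[i] holds the DS address to read for
                     outgoing slot i.
    """
    data_store = list(incoming_frame)        # write phase (DS)
    return [data_store[control_store[i]]     # read phase, order
            for i in range(FRAME_SLOTS)]     # taken from the CS

# Example: connect incoming slot 5 to outgoing slot 0 and vice
# versa; all other slots pass straight through.
cs = [5, 1, 2, 3, 4, 0, 6, 7]
frame = ["A", "B", "C", "D", "E", "F", "G", "H"]
print(time_switch(frame, cs))   # ['F', 'B', 'C', 'D', 'E', 'A', 'G', 'H']
```

In practice the CS contents are set up once per connection and then replayed every frame, which is why the same reordering is applied to each successive frame of the bit stream.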

A typical space switch consists of a cross-point matrix made up of logic gates that realize the switching of time slots in space. The matrix can be divided into a number of inputs and a number of outputs and is synchronized with the time switching stages via a common clock and a control store.
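The action of the matrix during a single time slot can likewise be sketched. The per-slot connection map standing in for the control store is an assumed representation; a real matrix realizes the same selection with logic gates.

```python
def space_switch(inputs, connection_map):
    """Route one time slot through an N x N cross-point matrix.

    inputs         : the sample currently carried on each
                     incoming line during this time slot.
    connection_map : connection_map[out] is the index of the
                     input line gated onto output line `out`
                     (one such map per time slot, held in the
                     control store and advanced by the clock).
    """
    return [inputs[connection_map[out]] for out in range(len(inputs))]

# Example: a 3 x 3 matrix connecting input 2 -> output 0,
# input 0 -> output 1, and input 1 -> output 2 for this slot.
print(space_switch(["a", "b", "c"], [2, 0, 1]))   # ['c', 'a', 'b']
```

Because the map is indexed per time slot, the matrix can hold a completely different set of connections in each slot, which is what allows the space stage to serve all the time stages around it.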

Figure 8. Typical ATM switch structure with input and output ports and control space switch.

Switch architectures based on asynchronous transfer mode (ATM) handle small packets called cells of a fixed size (53 bytes), divided into a header (5 bytes) and a payload (48 bytes). The header contains virtual path identifiers (VPIs) and virtual channel identifiers (VCIs), where a virtual path (VP) is a bundle of virtual channels (VCs). Traffic can be switched at the VP or VC level, cell by cell. Associated with a VP or a VC is a quality of service (QoS) contract. To guarantee the switching of cells according to the contract without unacceptable cell loss, a number of queues are used at the input and output ports of the switch. Between the input and output ports (including buffer queues and cell multiplexers and demultiplexers) a space matrix is often used, as in Fig. 8. However, other types of central switching structures are sometimes used, such as fast fiber buses and Banyan networks.
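As an illustration of the cell format, the sketch below extracts the VPI and VCI from a cell, assuming the standard UNI header layout (a 4-bit GFC field precedes the VPI, and PT, CLP, and HEC fill the remaining header bits); the `parse_atm_header` helper is illustrative, not part of any library.

```python
def parse_atm_header(cell):
    """Extract the VPI and VCI from a 53-byte ATM cell.

    Assumes the UNI header layout: GFC (4 bits), VPI (8 bits),
    VCI (16 bits), then PT (3 bits), CLP (1 bit), and HEC
    (8 bits) complete the 5-byte header; the remaining
    48 bytes are payload.
    """
    if len(cell) != 53:
        raise ValueError("an ATM cell is exactly 53 bytes")
    h = cell[:5]
    vpi = ((h[0] & 0x0F) << 4) | (h[1] >> 4)
    vci = ((h[1] & 0x0F) << 12) | (h[2] << 4) | (h[3] >> 4)
    return vpi, vci

# Example: a cell carrying VPI 42, VCI 1000.
cell = bytes([0x02, 0xA0, 0x3E, 0x80, 0x00]) + bytes(48)
print(parse_atm_header(cell))   # (42, 1000)
```

A VP-level switch would rewrite only the VPI field and forward the cell, whereas VC-level switching inspects and rewrites both identifiers, which is why the two levels are distinguished in the text.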

Other switching techniques are used in a telecommunication exchange in addition to STM and ATM. These include Ethernet and RapidIO switches as well as IP routers, each with its own characteristics. RapidIO is highly efficient inside nodes with hard latency and real-time requirements and for small packets, while Ethernet switches perform well enough for high throughput with large packets. Embedded IP routers, for IP forwarding, are useful for nodes that border IP networks or are part of them. Sometimes several switches are needed inside a single node.