System X (telephony)

From Wikipedia, the free encyclopedia

System X was the second national digital telephone exchange system used in the United Kingdom. The first was a UXD5, installed at Glenkindie, Scotland, in 1979.[1]

History

Development

System X was developed by the Post Office (later to become British Telecom), GEC, Plessey, and Standard Telephones and Cables (STC), and was first shown in public in 1979 at the Telecom 79 exhibition in Geneva, Switzerland. In 1982, STC withdrew from System X and, in 1988, the telecommunications divisions of GEC and Plessey merged to form GPT, with Plessey subsequently being bought out by GEC and Siemens. In the late 1990s, GEC acquired Siemens' 40% stake in GPT and, in 1999, GPT's parent company, GEC, renamed itself Marconi.

When Marconi was sold to Ericsson in January 2006, Telent plc retained System X and continues to support and develop it as part of its UK services business.

Implementation

The first System X unit to enter public service, in September 1980, was installed in Baynard House, London: a 'tandem junction unit' which switched telephone calls among around 40 local exchanges. The first local digital exchange started operation in 1981 in Woodbridge, Suffolk (near BT's research headquarters at Martlesham Heath). The last electromechanical trunk exchange (in Thurso, Scotland) was closed in July 1990, completing the UK trunk network's transition to purely digital operation and making it the first national telephone system to achieve this. The last electromechanical local exchanges, Crawford, Crawfordjohn and Elvanfoot, all in Scotland, were changed over to digital on 23 June 1995, and the last electronic analogue exchanges, Selby, Yorkshire and Leigh-on-Sea, Essex, were changed to digital on 11 March 1998.

In addition to the UK, System X was installed in the Channel Islands and several systems were installed in other countries, although it never achieved a significant export market.

More recently, BT has started a programme of rationalisation of its System X estate to retire exchange cores (processors and switches) and re-parent concentrators onto other exchanges. Newer types of exchange known as "Super DLEs" are being implemented as part of this process. These switches are in fact disused Mark 2 trunk switches (DMSUs) that have been converted to host concentrators, and are being used to offload the older and more costly Mark 1 switches. System X software is being modified by Telent to cater for the higher numbers of concentrators now hosted on each exchange.

System X units

System X covers three main types of telephone switching equipment, deployed throughout the United Kingdom. Concentrators are usually housed in local telephone exchanges but can be sited remotely in less populated areas. DLEs and DMSUs operate in major towns and cities and provide call-routing functions. The BT network architecture designated exchanges as DLEs, DMSUs, DJSUs and so on, but other operators configured their exchanges differently depending on their network architecture.

With the focus of the design on reliability, the general architectural principle of System X hardware is that all core functionality is duplicated across two 'sides' (side 0 and side 1). Either side of a functional resource can be the 'worker', with the other an in-service 'standby'. Resources continually monitor themselves; should a fault be detected, the affected side marks itself 'faulty' and the other side takes the load instantaneously. This resilient configuration allows hardware changes, whether to fix faults or perform upgrades, to be carried out without interruption to service. Some critical hardware, such as switchplanes and waveform generators, is triplicated and works on an 'any 2 out of 3' basis. The CPUs in an R2PU processing cluster are quadruplicated, retaining 75% of performance capability with one out of service rather than the 50% that simple duplication would give. Line cards providing customer line ports, and the 2 Mbit/s E1 terminations on the switch, have no 'second side' redundancy, but a customer can take multiple lines, or an interconnect multiple E1s, to provide resilience.
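The 'any 2 out of 3' voting principle can be sketched as follows. This is a minimal illustration in Python, not System X code (the real system was written in PO CORAL), and the function name is invented for the example:

```python
# Sketch of '2 out of 3' majority voting as used for triplicated
# hardware such as switchplanes and waveform generators: one faulty
# replica is simply outvoted by the other two.
from collections import Counter

def majority_2_of_3(outputs):
    """Return the value agreed by at least two of the three replicas,
    or None if no majority exists (a double fault)."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count >= 2 else None

print(majority_2_of_3([1, 1, 0]))        # 1: the faulty third replica is outvoted
print(majority_2_of_3(["a", "b", "c"]))  # None: no two replicas agree
```

A single fault is therefore masked with no service interruption, which is why voting (rather than simple duplication) is reserved for the most critical hardware.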

Concentrator unit

The concentrator unit consists of four main sub-systems: line modules, digital concentrator switch, digital line termination (DLT) units and control unit. Its purpose is to convert speech from analogue signals to digital format, and concentrate the traffic for onward transmission to the digital local exchange (DLE). It also receives dialled information from the subscriber and passes this to the exchange processors so that the call can be routed to its destination. In normal circumstances, it does not switch signals between subscriber lines but has limited capacity to do this if the connection to the exchange switch is lost.

Each analogue line module unit converts analogue signals from a maximum of 64 subscriber lines in the access network to the 64 kbit/s digital binary signals used in the core network. This is done by sampling the incoming signal at a rate of 8 kS/s and coding each sample into an 8-bit word using pulse-code modulation (PCM) techniques. The line module also strips out any signalling information from the subscriber line, e.g. dialled digits, and passes this to the control unit. Up to 32 line modules are connected to a digital concentrator switch unit using 2 Mbit/s paths, giving each concentrator a capacity of up to 2048 subscriber lines. The digital concentrator switch multiplexes the signals from the line modules using time-division multiplexing and concentrates them onto up to 480 time slots carried on E1 links to the exchange switch via the digital line termination units. The other two time slots on each E1, timeslots 0 and 16, are used for synchronisation and signalling respectively.
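The figures above fit together arithmetically; a quick sketch (illustrative only, not exchange code):

```python
# PCM line rate: 8 kS/s sampling, 8-bit words per sample.
sample_rate = 8_000
bits_per_sample = 8
print(sample_rate * bits_per_sample)                  # 64000 bit/s per channel

# Concentrator line capacity: 32 line modules of 64 lines each.
lines_per_module = 64
modules_per_concentrator = 32
print(lines_per_module * modules_per_concentrator)    # 2048 subscriber lines

# Each 2 Mbit/s E1 carries 32 time slots; slot 0 (synchronisation) and
# slot 16 (signalling) are reserved, leaving 30 speech channels, so 480
# traffic time slots toward the exchange switch correspond to 16 E1s.
slots_per_e1 = 32
speech_per_e1 = slots_per_e1 - 2
print(480 // speech_per_e1)                           # 16 E1 links
```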

Depending on the hardware used, concentrators support the following line types: analogue lines (either single or multi-line groups), ISDN2 (basic rate ISDN) and ISDN30 (primary rate ISDN). ISDN can run either the UK-specific DASS2 or ETSI (Euro) protocols. Subject to certain restrictions, a concentrator can run any mix of line types; this allows operators to balance business ISDN users with residential users, giving a better service to both and greater efficiency for the operator.

Concentrator units can either stand alone as remote concentrators or be co-located with the exchange core (switch & processors).

Digital local exchange

The Digital Local Exchange (DLE) hosts a number of concentrators and routes calls to other DLEs or DMSUs depending on the destination of the call. The heart of the DLE is the Digital Switching Subsystem (DSS), which consists of Time Switches and a Space Switch. Incoming traffic on the 30-channel PCM highways from the Concentrator Units is connected to Time Switches, whose purpose is to take any incoming individual Time Slot and connect it to an outgoing Time Slot, so performing a switching and routing function. To allow access to a large range of outgoing routes, individual Time Switches are connected to each other by a Space Switch. The Time Slot inter-connections are held in Switch Maps, which are updated by software running on the Processor Utility Subsystem (PUS). The nature of the Time Switch-Space Switch architecture is such that the system is very unlikely to be affected by a faulty time or space switch unless many faults are present. The switch is a 'non-blocking' switch.
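The time-space-time principle can be sketched in a few lines. This is an illustrative Python model, not System X software; the class and variable names are invented for the example:

```python
# Sketch of time-space-time switching: each Time Switch holds a
# 'switch map' connecting an incoming time slot to an outgoing one,
# and a Space Switch cross-connects traffic between Time Switches.

class TimeSwitch:
    """Maps incoming time slots to outgoing time slots via a switch map."""
    def __init__(self):
        self.switch_map = {}   # in_slot -> out_slot, written at call setup

    def connect(self, in_slot, out_slot):
        self.switch_map[in_slot] = out_slot

    def route(self, in_slot):
        return self.switch_map[in_slot]

class SpaceSwitch:
    """Cross-connects a slot on a source Time Switch to a destination."""
    def __init__(self):
        self.cross = {}        # (src_name, slot) -> dst_name

    def connect(self, src_name, slot, dst_name):
        self.cross[(src_name, slot)] = dst_name

# Set up one call: incoming slot 5 on Time Switch "A" is switched to
# slot 9, routed across the Space Switch to Time Switch "B", which
# delivers it on outgoing slot 17.
a, b = TimeSwitch(), TimeSwitch()
space = SpaceSwitch()
a.connect(5, 9)
space.connect("A", 9, "B")
b.connect(9, 17)
print(b.route(a.route(5)))     # 17
```

Updating these maps at call setup and tear-down is exactly the role the text assigns to the software running on the Processor Utility Subsystem.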

Digital main switching unit

The Digital Main Switching Unit (DMSU) deals with calls that have been routed to it by DLEs or other DMSUs and is a 'trunk / transit switch', i.e. it does not host any concentrators. As with DLEs, DMSUs comprise a Digital Switching Subsystem and a Processor Utility Subsystem, among other things. In the British PSTN, each DMSU is connected to every other DMSU in the country, enabling almost congestion-proof connectivity for calls through the network. In inner London, specialised versions of the DMSU known as DJSUs exist; they are practically identical in hardware, both being fully equipped switches, but a DJSU carries inter-London traffic only. The DMSU network in London has gradually been phased out in favour of more modern "NGS" switches as demand for PSTN phone lines has decreased and BT has sought to reclaim floor space. The NGS switch is a version of Ericsson's AXE10 product line, phased in between the late 1990s and early 2000s.

It is common to find multiple exchanges (switches) within the same exchange building in large UK cities - DLEs for the directly-connected customers and a DMSU to provide the links to the rest of the UK.

Processor utility subsystem

The Processor Utility Subsystem (PUS) controls the switching operations and is the brain of the DLE or DMSU. It hosts the call processing, billing, switching and maintenance application software, among other software subsystems. The PUS is divided into up to eight 'clusters' depending on the amount of telephony traffic handled by the exchange. Each of the first four clusters contains four central processing units (CPUs), the main memory stores (STRs) and two types of backing store: primary (RAM) and secondary (hard disk). The PUS was coded in a version of the CORAL66 programming language known as PO CORAL (Post Office CORAL), later known as BTCORAL.

The original processor that went into service at Baynard House, London, was known as the MK2 BL processor. It was replaced in 1980 by POPUS1 (Post Office Processor Utility Subsystem). POPUS1 processors were later installed in Lancaster House in Liverpool and in Cambridge. Later, these too were replaced with a much smaller system known as R2PU (Release 2 Processor Utility); this was the four-CPU-per-cluster, up-to-eight-cluster system described above. Over time, as the system was developed, additional "CCP / Performance 3" clusters were added (clusters 5, 6, 7 and 8) using more modern hardware, akin to late-1990s computer technology, while the original processing clusters 0 to 3 were upgraded with, for example, larger stores (more RAM). This fault-tolerant system had many advanced features which help explain why it is still in use today: self fault detection and recovery, battery-backed RAM, mirrored disk storage, automatic replacement of a failed memory unit, and the ability to trial new software and, if necessary, roll back to the previous version. In recent times, the hard disks on the CCP clusters have been replaced with solid-state drives to improve reliability.

In modern times, all System X switches show a maximum of 12 processing clusters: 0–3 are the four-CPU System X-based clusters, and the remaining eight positions can be filled with CCP clusters, which deal with all traffic handling. Whilst the norm for a large System X switch is four main and four CCP clusters, there are one or two switches with four main and six CCP clusters. The CCP clusters are limited to call handling only; there was the potential for the exchange software to be rewritten to accept the CCP clusters, but this was scrapped as too costly a solution to replace a system that was already working well. Should a CCP cluster fail, System X will automatically re-allocate its share of call handling to another CCP cluster; if no CCP clusters are available, the exchange's main clusters will begin to take over the work of call handling as well as running the exchange.
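The failover order described above (surviving CCPs first, main clusters only as a last resort) can be sketched as follows. This is a hedged illustration in Python, not exchange software, and all names are invented:

```python
# Sketch of CCP failover: on a cluster failure, call handling moves to
# the surviving CCP clusters; only if none remain do the main clusters
# take on call handling in addition to running the exchange.

def reallocate(clusters, failed):
    """clusters: dict name -> {'type': 'ccp'|'main', 'up': bool}.
    Marks `failed` down and returns the clusters now handling calls."""
    clusters[failed]['up'] = False
    ccp_up = [n for n, c in clusters.items()
              if c['type'] == 'ccp' and c['up']]
    if ccp_up:
        return ccp_up            # surviving CCPs absorb the load
    return [n for n, c in clusters.items()
            if c['type'] == 'main' and c['up']]

clusters = {
    'main0': {'type': 'main', 'up': True},
    'ccp4':  {'type': 'ccp',  'up': True},
    'ccp5':  {'type': 'ccp',  'up': True},
}
print(reallocate(clusters, 'ccp4'))   # ['ccp5']: the other CCP takes over
print(reallocate(clusters, 'ccp5'))   # ['main0']: mains step in as last resort
```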

In terms of structure, the System X processor is a "one master, many slaves" configuration: cluster 0 is referred to as the base cluster and all other clusters are effectively dependent on it. If a slave cluster is lost, call handling for any routes or concentrators dependent on it is also lost; if the base cluster is lost, however, the entire exchange ceases to function. This is a very rare occurrence, as System X is designed to isolate problematic hardware and raise a fault report. During normal operation, the highest level of disruption likely is a base cluster restart: all exchange functions are lost for 2–5 minutes while the base cluster and its slaves come back online, after which the exchange continues to function with the defective hardware isolated.

During normal operation, the exchange's processing clusters sit at between 5% and 15% usage, with the exception of the base cluster, which usually sits between 15% and 25% and can spike as high as 45%; this is because the base cluster handles far more operations and processes than any other cluster on the switch.

Editions of System X

System X has gone through two major editions, Mark 1 and Mark 2. This refers to the switch matrix used.

The Mark 1 Digital Subscriber Switch (DSS) was the first to be introduced. It is a time-space-time switch with a theoretical maximum matrix of 96x96 Time Switches; in practice, the maximum size is a 64x64 Time Switch matrix. Each time switch is duplicated across two security planes, 0 and 1, which allows for error checking between the planes and multiple routing options if faults are found. Every timeswitch on a single plane can be out of service and full function of the switch maintained; however, if one timeswitch is out on plane 0 and another on plane 1, then links between the two are lost. Similarly, if a timeswitch has both planes 0 and 1 out, that timeswitch is isolated. Each plane of the timeswitch occupies one shelf in a three-shelf group: the lower shelf is plane 0, the upper shelf is plane 1 and the middle shelf is occupied by up to 32 DLTs (Digital Line Terminations). The DLT is a 2048 kbit/s, 32-channel PCM link in and out of the exchange.

The space switch is a more complicated entity. It is given a name ranging from AA to CC (or BB in general use), a plane of 0 or 1 and, owing to the way it is laid out, an even or odd segment designated by another 0 or 1; the name of a space switch in software can therefore look like SSW H'BA-0-1. The space switch provides the logical cross-connection of traffic across the switch, and the time switches are dependent on it. When working on a space switch it is imperative to make sure the rest of the switch is healthy, as powering off either the odd or even segment of a space switch will "kill" all of its dependent time switches for that plane. Mark 1 DSS is controlled by a triplicated set of Connection Control Units (CCUs), which run on a 2-out-of-3 majority for error checking, and is monitored constantly by a duplicated Alarm Monitoring Unit (AMU), which reports faults back to the DSS Handler process for appropriate action to be taken. The CCU and AMU also play a part in diagnostic testing of Mark 1 DSS.
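The plane-redundancy rule above reduces to a simple condition: a link between two timeswitches survives as long as at least one plane is working at both ends. A minimal sketch (the helper function is invented for illustration):

```python
# Sketch of Mark 1 DSS plane redundancy: each timeswitch has planes
# 0 and 1; a link between two timeswitches needs a common working plane.

def link_available(ts_a_planes_up, ts_b_planes_up):
    """Each argument is the set of working planes: {0}, {1}, {0, 1} or set()."""
    return bool(ts_a_planes_up & ts_b_planes_up)

print(link_available({0, 1}, {0, 1}))  # True:  fully healthy
print(link_available({1}, {1}))        # True:  all faults on plane 0
print(link_available({0}, {1}))        # False: faults on opposite planes
print(link_available(set(), {0, 1}))   # False: both planes out, switch isolated
```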

A Mark 1 System X unit is a vast construct, built in suites of eight racks and sometimes over 15 suites from end to end. This is far from ideal: each of those suites must be powered, and costs quickly add up. Moreover, all the powered equipment generates heat, which requires yet more power to remove from the room. These are the two main reasons that Mark 1 exchanges are being closed down in favour of Mark 2.

Mark 2 DSS ("DSS2") is the later revision, which continues to use the same processor system as Mark 1 but made serious and much-needed revisions to both the physical size of the switch and the way it functions. It is an optical-fibre-based time-space-time-space-time switching matrix connecting a maximum of 2048 2 Mbit/s PCM systems, much like Mark 1, but the hardware is much more compact.

The four-rack group of the Mk1 CCU and AMU is gone, replaced neatly by a single connection control rack comprising the Outer Switch Modules (OSMs), Central Switch Modules (CSMs) and the relevant switch/processor interface hardware. The Timeswitch shelves are replaced with Digital Line Terminator Group (DLTG) shelves, each containing two DLTGs made up of 16 Double Digital Line Termination boards (DDLTs) and two Line Communication Multiplexors (LCMs), one for each security plane. The LCMs are connected by optical fibre over a 40 Mbit/s link to the OSMs. In total, there are 64 DLTGs in a fully sized Mk2 DSS unit, analogous to the 64 Time Switches of the Mk1 unit. The Mk2 DSS unit is much smaller than the Mk1, and as such consumes less power and generates less heat. It is also possible to interface directly with SDH transmission over fibre at 40 Mbit/s, reducing the amount of 2 Mbit/s DDF and SDH tributary usage; theoretically, a transit switch (DMSU) could interface purely with SDH over fibre, with no DDF at all. Further, owing to the completely revised switch design and layout, the Mk2 switch is somewhat faster than the Mk1 (although the difference is negligible in practice). It is also far more reliable: with many fewer discrete components in each of its sections there is much less to go wrong, and when something does fail it is usually a matter of replacing the card tied to the failed software entity, rather than running diagnostics to narrow down the point of failure, as is the case with Mk1 DSS.

Message Transmission Subsystem

A System X exchange's processors communicate with its concentrators and other exchanges using the Message Transmission Subsystem (MTS). MTS links are 'nailed up' between nodes by re-purposing individual 64 kbit/s digital speech channels across the switch into permanent paths over which the signalling messages route. Messaging to and from concentrators uses a proprietary protocol; messaging between exchanges uses C7/SS7, with both UK-specific and ETSI variant protocols supported. It was also possible to use channel-associated signalling, but as the UK's and Europe's exchanges went digital in the same era this was hardly used.

Replacement system

Many of the System X exchanges installed during the 1980s are over 30 years old and still in use, a testament to their reliability. The system was originally designed for 15 years of service and has long exceeded that expectation, though in recent years the hardware has started to deteriorate, with old plastic shelf runners becoming brittle through heat exposure. Many exchanges have never been turned off in their entire lives, undergoing only process restarts for software upgrades every couple of years or so, giving in excess of 99.9998% reliability.
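To put that availability figure in perspective, a quick back-of-the-envelope calculation:

```python
# 99.9998% availability corresponds to roughly one minute of
# downtime per year.
availability = 0.999998
minutes_per_year = 365 * 24 * 60          # 525,600 minutes
downtime_min = (1 - availability) * minutes_per_year
print(round(downtime_min, 1))             # 1.1
```

That is, about a minute of unavailability per year, consistent with restarts only every couple of years for software upgrades.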

System X was scheduled for replacement with Next Generation softswitch equipment as part of the BT 21st Century Network (21CN) programme. Some other users of System X, in particular Jersey Telecom and Kingston Communications, replaced their circuit-switched System X equipment with Marconi XCD5000 softswitches (the NGN replacement for System X) and Access Hub multiservice access nodes. However, the omission of Marconi from the BT 21CN supplier list, the lack of a suitable replacement softswitch to match System X reliability, and the shift in focus away from telephony onto broadband all led to much of the System X estate being maintained. Only recently have other manufacturers started producing 'exchanges' offering the customer the feature-rich functionality that System X provides; for many years the alternatives could not replicate (let alone exceed) System X's functionality.

References

  1. ^ "History of ATM".