Clocking and timing in the public network is based on four levels of accuracy, traceable to worldwide time standards. These highly accurate clocking sources are generically referred to as stratum 1 through stratum 4, where stratum 1 is the most accurate. Before 1984, the US portion of the network was primarily the responsibility of AT&T. Network clocking was distributed through the network infrastructure itself: a single clock signal source was passed down through the network hierarchy. The most accurate clock signal was distributed to a set of switches that passed it on to lower levels in the hierarchy, with the lowest and least accurate clock signal reaching the end office switches serving subscribers.
This all changed in 1984, when the network was broken up into seven regions and equal access to the long distance network was declared. The seven regions were divided into more than 200 local access and transport areas (LATAs), and traffic originating and terminating in different LATAs had to be transported by the long distance carriers. By the time of the breakup, the other long distance carriers already had networks running from independent clocks. With AT&T’s network running on its own clock, each of the seven local exchange carriers was left with no choice but to design and build its own clock system into each LATA’s network.
Original T-carrier or T-1 clocking was a simple matter of two channel banks inter-operating with each other over a four-wire connection. As long as clock stability and recovery were within range, there was no problem. When the clock signal drifted beyond the bounds of clock recovery, a slip occurred: an out-of-lock condition followed by re-synchronization of the receiving clock. From a practical perspective, a slip causes a barely audible click in speech. The impact is hardly noticeable in a telephone conversation and will likely cause a re-send in data applications, but it can be a killer in content transport applications, especially where the application is something like an MPEG transport stream.
Over time, as digital components made their way into switching systems and higher-order multiplex equipment, clocking signals were distributed through the network itself. When that happened, network clocking suddenly depended on the network: if the network failed, then clocking failed. To solve this issue, the concept of clock hold-over was created. Hold-over simply put some design and operational rules in place that said, "When the clocking signal is lost, switch to a local source and hold the stability within a specified range until the master clock signal is restored, then switch back to the master clock signal."
Most pieces of equipment in use have at least one clock. Typically, the design includes an external reference capability and configuration parameters that allow the user to control the operation so that it either stays on the internal reference or locks to an external source. If it is programmed to lock to an external source and that source fails, the equipment automatically reverts to the internal clock source.
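The revert-to-internal behavior described above amounts to a small state machine: lock to the external reference while it is present and valid, drop into hold-over when it is lost, and re-lock when it returns. The following minimal sketch in Python illustrates the idea; the class, state names, and configuration flag are hypothetical and do not reflect any particular vendor's equipment.

```python
from enum import Enum, auto


class ClockState(Enum):
    LOCKED = auto()    # tracking the external reference
    HOLDOVER = auto()  # reference lost; holding frequency on the internal oscillator
    FREE_RUN = auto()  # configured to ignore external references entirely


class EquipmentClock:
    """Sketch of the clock-selection logic described in the text.

    `use_external` mirrors the configuration parameter that tells the unit
    whether it should lock to an external source at all.
    """

    def __init__(self, use_external: bool = True):
        self.use_external = use_external
        self.state = ClockState.LOCKED if use_external else ClockState.FREE_RUN

    def update(self, external_reference_ok: bool) -> ClockState:
        """Called periodically with the current health of the external reference."""
        if not self.use_external:
            self.state = ClockState.FREE_RUN
        elif external_reference_ok:
            # Reference present and within range: lock (or re-lock) to it.
            self.state = ClockState.LOCKED
        else:
            # Reference lost: hold stability on the local source until the
            # master clock signal is restored.
            self.state = ClockState.HOLDOVER
        return self.state


if __name__ == "__main__":
    clock = EquipmentClock(use_external=True)
    for ref_ok in (True, True, False, False, True):
        print(clock.update(ref_ok).name)
    # Expected output: LOCKED, LOCKED, HOLDOVER, HOLDOVER, LOCKED
```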
Communications network clocking architecture is multi-level, organized in strata. There are four levels: stratum 1, 2, 3, and 4. Stratum 1 is the most accurate and stable. Stratum 1 clocks, by definition, do not rely on external references and may not have or need them; they simply need to be calibrated on some periodic basis, say at 1-year intervals. Stratum 2, 3, and 4 clocks are less accurate and stable than stratum 1, and they typically have provisions for external reference input.
From a strict timing perspective, stratum 1 clocks are at the core of the network, while stratum 4 clocks are at the edge of the network, such as in a PBX, channel bank, data multiplex equipment, or router. Going back through the chain, stratum 4 clocks depend on a stratum 3 clock, which depends on a stratum 2 clock, which depends on a stratum 1 clock, the master clock for the network.
It’s a given that long-term, undisturbed connections carrying a digitized payload through a network depend on continuing, long-term, undisturbed distribution of a common clock signal to every network element. Numerous strategies can be chosen from to formulate clocking in a network; regardless of the choice, the network operates either synchronously or non-synchronously. T-carrier networks are said to run plesiochronously. Taken literally, and in practical terms, this means nearly synchronous. It is therefore not synchronous as an experienced radio or television engineer would understand the term.
Such operation is characterized by the fact that clock slips occur sooner or later. A clock slip means that a clock, and everything that uses it, has varied from its reference by enough that the payload falls out of coincidence with its original clock by one clock cycle. It is not a matter of whether clock slips occur, but when. Even when all the network elements run with an external reference to a common clock of higher-order accuracy and stability, slips can happen from various effects such as temperature changes and, in the case of wireless media, atmospheric variations. Truly synchronous transmission media became a reality with the SONET/SDH network transmission standards, and along with that came a significant improvement in network stability.
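To put rough numbers on the "when, not whether" point, the back-of-the-envelope sketch below estimates how long two free-running clocks can drift before one DS1 frame (125 µs) of phase error accumulates. The per-stratum free-run accuracy figures used here (on the order of 1×10⁻¹¹ for stratum 1 down to 3.2×10⁻⁵ for stratum 4) are commonly cited values from ANSI-era stratum specifications and are an assumption of this sketch, not taken from the text above.

```python
# Rough estimate of how long two free-running clocks can drift before a
# DS1 slip (one 125-microsecond frame of accumulated offset) occurs.
# The per-stratum accuracies below are commonly cited minimum free-run
# figures and are an assumption of this sketch.

DS1_FRAME_S = 125e-6  # one DS1 frame period, in seconds

STRATUM_ACCURACY = {
    1: 1.0e-11,
    2: 1.6e-8,
    3: 4.6e-6,
    4: 3.2e-5,
}


def time_to_slip(frequency_offset: float, slip_size_s: float = DS1_FRAME_S) -> float:
    """Seconds of operation before `frequency_offset` accumulates one slip."""
    return slip_size_s / frequency_offset


for stratum, accuracy in STRATUM_ACCURACY.items():
    # Worst case: the two ends are off in opposite directions, so the
    # relative offset is twice the single-clock accuracy.
    seconds = time_to_slip(2 * accuracy)
    print(f"stratum {stratum}: one slip roughly every {seconds:,.0f} s "
          f"({seconds / 86400:.2f} days)")
```

Under these assumptions, a pair of free-running stratum 1 clocks slips roughly once every 72 days, while free-running stratum 3 or 4 clocks can slip within seconds of each other, which is why real networks distribute a common reference rather than free-run.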
How to deal with clock distribution and stability is an overall, end-to-end network issue. Dealing with it at the design level requires obtaining a set of facts about the capabilities and limitations of the common network equipment, about the network service provider’s clocking and distribution infrastructure, and about the specific part of that infrastructure providing facilities and services to all the sites in the overall network. More specifically, decisions about network interface equipment should not be made without a clear definition and understanding of the performance characteristics of the equipment with and without an external clock reference. The second most important piece of information is a definition and understanding of the communications network timing source and its relationship to the network channel interface at each site.
Once a picture as outlined above has been obtained, it can be assessed in light of the requirements for content transport. At one extreme, it may be acceptable to rely totally on the service provider’s network timing sources as the master timing and clocking reference; at the other extreme, each site can be equipped with its own set of stratum 1 clock references. Obviously, somewhere in between is likely the practical approach for the enterprise. Don’t forget that network clocking and synchronization is an entirely different matter from station clocking and synchronization. As long as the communications network runs on an accurate enough clock and is configured to carry live or real-time program content, the content will be clocked from the originating site, with its embedded clock intact, into the communications network, carried from there to one or more sites, extracted from the telecom clock time base, and transferred to the time base of the receiving site.
One last tip: a communications network referenced to the Global Positioning System (GPS) or another source of universal time becomes attractive if the content being transported is also GPS referenced. A good middle-of-the-road, practical design trade-off is to reference the MPEG codec to station sync and to reference the network interface equipment to the service provider facility. If the network bit-error rate is sufficiently low, the network will be stable and always timed to the same source as the station. If the clock reference is lost, the network becomes subject to the internal references of the interface equipment, and clock slips become more of a problem because they cause disturbances in picture and sound quality and may cause loss or corruption of closed captions or other similar content.