Regardless of the type of transmission media (wire, radio, or fiber), bits are sent and bits are received. Every receive port in the network has to deal with an incoming serial bit stream that includes clocking and payload bits. Clocking and data, also called payload, have their timing and phasing relationships established at the point of creation. Along the way, the serial bit stream may be multiplexed with additional serial bit streams, cross-connected to a different carrier, or switched at the circuit, cell, or packet layer. Yes, this is layers 1 and 2 of the OSI stack. Separating clock information from payload, or mucking around with the time relationship between signal transitions, introduces errors. If, for whatever reason, a network element loses its synchronization reference and wanders outside holdover limits, anything and everything using it as a synchronization reference falls out of time with the larger network, converting valuable data to worthless trash.
Clock and data recovery is a critical function. Considering the clocking concept from the perspective of a receive port on a network element, or the receive end of a transmission path, lets you look backward toward the source and forward toward the other network elements and facilities that depend on the clock for proper operation and delivery of the associated payload.
If there is a single point in the entire end-to-end, top-to-bottom process that is the most critical in moving digitized information through a network, it has to be each and every receive point in the network. The bit stream must be received intact and the clocking information extracted before the payload can be recovered. Even if the receiver is clocked externally, clock and data signals in the incoming stream must be separated and defined. Payload framing depends on timing relationships, and guess what, timing depends on clocking. Serial data streams simply aren't usable without a mechanism delineating framed data and clock signals. Understanding clock and data recovery as a stand-alone function is one of the keys to understanding digital communications networks. Figure 1 is a block diagram of a typical clock and data recovery function found in almost every network element.
Figure 1: Clock and Data Recovery Functional Block Diagram
The functions in Figure 1 include receiving light or radio waves from the media and detecting and converting them to electrical voltage transitions. This requires a lens or optical interface in the case of light waves; the equivalent for radio waves is an antenna. The purpose of either is to collect and focus the received energy into a signal that preserves the time variations between the presence and absence of signal. That signal is then passed to a detector, which converts it into a series of transitions representing the serial bit stream originally sent by the transmitter.
The raw bit stream is then fed to a clock extraction circuit and a buffer. The extracted clock serves two purposes: it provides a clock signal representing the original clock used when the data was created or multiplexed, and it drives the data recovery circuit. The data recovery circuit extracts and re-clocks the data to restore, as precisely as possible, the transitions between 1s and 0s making up the data. At this point, the data is simply that: data. We don't know anything about framing or channelization, and nothing about protocols, byte size, packing, or error correction. These are all determined with additional processing that takes place in the upper layers of the OSI stack.
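To make the recovery step concrete, here is a minimal sketch, in Python, of one common digital approach: oversample the raw waveform, re-center the sampling phase on every transition, and sample each bit mid-cell. The function name and the 8x oversampling factor are illustrative assumptions, not a description of any particular network element.

```python
# Illustrative sketch of transition-based clock and data recovery.
# Assumes an NRZ waveform oversampled 8x per bit; all names are hypothetical.

OVERSAMPLE = 8  # samples per bit period (assumed)

def recover_bits(samples):
    """Recover a bit stream from an oversampled NRZ waveform.

    Each transition re-centers the sampling phase, mimicking a PLL
    that locks to signal edges; bits are sampled mid-cell.
    """
    bits = []
    phase = OVERSAMPLE // 2          # start sampling mid-bit
    last = samples[0]
    for sample in samples:
        if sample != last:           # edge detected: re-center phase
            phase = OVERSAMPLE // 2
            last = sample
        phase -= 1
        if phase == 0:               # mid-cell sampling instant
            bits.append(sample)
            phase = OVERSAMPLE       # schedule the next sampling point
    return bits

# Example: the pattern 1,0,1,1 oversampled 8x recovers cleanly.
waveform = [1] * 8 + [0] * 8 + [1] * 16
print(recover_bits(waveform))        # -> [1, 0, 1, 1]
```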
The clocking signal is representative of the original clock sent by the transmitter. It also comes with a lot of unknowns, but the important point to recognize is that this signal represents timing from another node or remote site outside the physical boundaries of the receiving equipment site. This signal could be important in the overall scheme of things, or it could be irrelevant. For example, it could be used as a timing reference for the entire site. In that case, many other considerations become important, because the clock will inevitably propagate further: it becomes the receive clock that turns up after clock and data recovery at every other site receiving signals from this one.
Naturally, this raises a few questions: "Where did that clocking signal originate?" "What is its level of accuracy?" "Is it a network clock?" Clocking in communications networks is as critical and necessary as synchronization and time code in television audio and video, and timing in communications networks can be as complicated or as simple as timing in digital audio and video systems. Clock accuracy in communications networks is built around a four-level hierarchy, with the most accurate clock at the first level and the least accurate at the fourth. Originally developed by Bell Labs for AT&T's digital network in the 1960s, then adopted and standardized by ANSI, the ITU, and other standards-making bodies, the system is referred to as stratum 1 through stratum 4. In the beginning, there was only one stratum 1 clock in operation for the entire network at any given time. This clock was distributed to regional switching and operating centers and used to time and synchronize network elements at that level of the hierarchy, and so on down to local central offices at the bottom. Each clock level depended on the one above it for synchronization. If it lost the reference signal from above, the local clock was permitted to operate within a wider tolerance than the higher-level reference until that reference was once again available.
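For a sense of scale, the free-run accuracy limits commonly cited for the four stratum levels (per ANSI T1.101) can be captured in a small lookup table. The helper below is a hypothetical sketch assuming those commonly quoted values; it is an illustration, not a normative test.

```python
# Commonly cited free-run accuracy limits per stratum level (ANSI T1.101);
# values are fractional frequency offsets. Treat as illustrative, not normative.
STRATUM_ACCURACY = {
    1: 1.0e-11,   # stratum 1: primary reference
    2: 1.6e-8,    # stratum 2: toll/tandem office clocks
    3: 4.6e-6,    # stratum 3: local office clocks
    4: 3.2e-5,    # stratum 4: customer premises equipment
}

def meets_stratum(offset, level):
    """Return True if a measured fractional frequency offset is within
    the free-run accuracy limit for the given stratum level."""
    return abs(offset) <= STRATUM_ACCURACY[level]

# A clock off by 2 parts in 10^6 qualifies as stratum 3 but not stratum 2.
print(meets_stratum(2e-6, 3), meets_stratum(2e-6, 2))  # True False
```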
As you can imagine, there's not much of a disturbance when the reference clock disappears. But what about when the reference clock returns, or if it is intermittent? Dealing with these situations led to the development of a hold-over specification, which basically requires the clock to hold its frequency of operation within limits for a given time and, in the interest of overall stability, not switch back to the higher-level reference clock until after some period of stable operation, and if possible, to switch gracefully, sometimes called hitless switching.
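The re-qualification behavior might be sketched as a small state machine: lose the reference and enter holdover; when the reference reappears, demand a stretch of stable operation before rejoining it. The state names, the 15-minute qualification period, and the method names below are all assumptions chosen for illustration.

```python
import time

class ClockReference:
    """Hypothetical sketch of holdover and re-qualification logic.

    LOCKED   : tracking the upstream reference
    HOLDOVER : reference lost; hold the last known frequency within spec
    The reference must look stable for REQUALIFY_SECONDS before we switch
    back, approximating a graceful ("hitless") re-acquisition.
    """
    REQUALIFY_SECONDS = 900  # assumed qualification period (15 minutes)

    def __init__(self):
        self.state = "LOCKED"
        self.stable_since = None

    def on_reference_lost(self):
        self.state = "HOLDOVER"      # keep the last frequency, start coasting
        self.stable_since = None

    def on_reference_seen(self, now=None):
        now = now if now is not None else time.monotonic()
        if self.state != "HOLDOVER":
            return
        if self.stable_since is None:
            self.stable_since = now  # start the qualification timer
        elif now - self.stable_since >= self.REQUALIFY_SECONDS:
            self.state = "LOCKED"    # stable long enough: rejoin the reference

    def on_reference_glitch(self):
        self.stable_since = None     # intermittent reference: restart the timer
```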
Primary reference clock (PRC) products compliant with ANSI and ITU standards are available from many sources. These clock sources can run independently, or they can be referenced and locked to other sources traceable to the world standard for time of day, Coordinated Universal Time (UTC).[2] UTC is the result of combining International Atomic Time (TAI) and Universal Time 1 (UT1). TAI is a timing reference derived by averaging the outputs of atomic clocks maintained by national timing laboratories around the world. These clocks keep their timing relationship to each other within 2 to 3 millionths of a second over a year. UT1 provides a correction to compensate for the difference between solar time and TAI caused by Earth's slightly elliptical orbit and polar inclination, both of which affect solar time.
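The bookkeeping between these time scales is plain arithmetic: UTC runs behind TAI by an integer number of leap seconds, inserted so that UTC stays within 0.9 second of UT1. A minimal sketch, assuming the 37-second offset in effect since January 2017:

```python
# Illustrative time-scale arithmetic: UTC = TAI - (accumulated leap seconds).
# The 37 s offset has been current since 1 January 2017; a future leap second
# would change it, so treat this constant as an assumption.
TAI_MINUS_UTC = 37  # seconds

def tai_to_utc(tai_seconds):
    """Convert a TAI timestamp (in seconds) to UTC by removing leap seconds."""
    return tai_seconds - TAI_MINUS_UTC

# Leap seconds exist to keep UTC within 0.9 s of UT1 (Earth-rotation time).
UT1_TOLERANCE = 0.9
print(tai_to_utc(1_000_000))  # -> 999963
```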
Distributing the PRC to all the elements in early digital network infrastructure was expensive, cumbersome, and complex. However, lower-cost clock reference products and the availability of the Global Positioning System (GPS) and Internet-based references running under the Network Time Protocol (NTP) have dramatically reduced cost on all fronts while improving the reliability and accuracy of the sources and their references.
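To see an Internet-based reference in action, a bare-bones SNTP query needs nothing beyond the standard library. The server name below is an assumption; the packet layout (48 bytes, client mode 3, transmit timestamp at offset 40, 1900 epoch) follows RFC 4330, and real deployments would use full NTP with filtering across multiple servers.

```python
import socket
import struct
import time

# Minimal SNTP (RFC 4330) client sketch. 'pool.ntp.org' is an assumed public
# server; production timing would use NTP proper with filtering and selection.
NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900 (NTP) and 1970 (Unix)

def sntp_time(server="pool.ntp.org", timeout=5.0):
    """Return the server's transmit time as a Unix timestamp."""
    packet = b"\x1b" + 47 * b"\0"   # LI=0, VN=3, Mode=3 (client request)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, 123))
        reply, _ = sock.recvfrom(48)
    # Transmit timestamp: 32-bit seconds + 32-bit fraction at byte offset 40.
    seconds, fraction = struct.unpack("!II", reply[40:48])
    return seconds - NTP_EPOCH_OFFSET + fraction / 2**32

if __name__ == "__main__":
    print("offset from local clock: %.3f s" % (sntp_time() - time.time()))
```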
Why all the fuss and bother? Essentially, signals derived from multiplexing low-speed digital bit streams into higher-order aggregate bit streams, some of which provide private-line services and others circuit-, cell-, and packet-switched services, must be synchronized and timed exactly the way digital audio and video signals are timed and synchronized, including SMPTE time code, to bit-level accuracy within a frame. The only differences are that telecom digital signals aren't subject to switching transitions such as a split screen or cross-fade, and that telecom and digital program content run on entirely different time bases. One other point: this timing accuracy has absolutely nothing to do with the transmission media. It makes no difference whether the transmission is satellite or terrestrial radio, optical transmission, or baseband electrical signal transport.
One thing to pay attention to in systems where the bit stream is modulated onto a carrier is whether a particular modulator can lock its carrier frequency generator to an external source such as the incoming digital signal or a PRC. On the receive end, it may be appropriate to lock the receiver's local beat frequency oscillator to either a PRC or the incoming digital signal clock. Including these capabilities in a piece of equipment is not without cost; however, equipment that provides them will deliver better bit error performance across the transmission facility because of the absence of asynchronous crosstalk causing clock slips.
Lastly, be careful not to confuse the network primary reference clock with the program clock reference (PCR) in Moving Picture Experts Group (MPEG) streams. The two are completely different animals and have nothing to do with each other. When MPEG-2 program streams are transported through networks, the PCR is just another set of bits in the payload.