SONET and SDH Demystified

STS path trace J1 — A 64-byte frame is used for the transmission of Path Access Point Identifiers. SDH uses this octet for the same purpose but uses a different message format, defined in G.707. Path BIP-8 B3 — This octet is used to provide parity over the payload.

Conceptually, this octet is similar to the B1 and B2 octets. SDH uses this octet for the same purpose, but excludes the fixed stuff octets (described later, when we talk about payload mappings) when calculating the parity.
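
As a rough illustration of how these interleaved parity octets are computed, the sketch below calculates an 8-bit interleaved parity (BIP-8) over an arbitrary run of octets in Python. The function name and the idea of treating the SPE as a plain byte string are illustrative assumptions, not anything defined by the standard.

    def bip8(octets: bytes) -> int:
        """Even bit-interleaved parity over 8 bit positions.

        Bit i of the result makes the total number of ones seen in bit
        position i of all input octets even; XOR-ing the octets together
        achieves exactly that.
        """
        parity = 0
        for octet in octets:
            parity ^= octet
        return parity

    # Example: parity over a hypothetical 783-octet STS-1 SPE (87 columns x 9 rows)
    spe = bytes((i * 7) % 256 for i in range(783))
    print(f"B3 (BIP-8) = 0x{bip8(spe):02X}")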

STS path signal label C2 — This octet indicates the type of traffic carried in the payload. There are many other values — see T1.105. Path status G1 — This octet is mainly used to send status back to the transmitter about the path. It conveys back to the originating STS path terminating equipment (PTE) the path-terminating status and performance. This feature permits the status and performance of the complete duplex path to be monitored at either end, or at any point along that path.

As illustrated in Figure 22, bits 1 through 4 convey the count of interleaved bit blocks that have been detected in error by the path BIP-8 code (B3). This count has nine legal values, namely 0 to 8 errors. The remaining seven possible values these four bits can represent can only result from some condition unrelated to the forward path and shall be interpreted as zero errors.
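
That rule is easy to capture in a few lines. The following sketch (function name invented, and SONET's convention of numbering bit 1 as the most significant bit assumed) extracts the error count from a G1 octet and clamps the seven illegal values to zero.

    def rei_p_count(g1: int) -> int:
        """Remote error indication count carried in bits 1-4 of G1.

        Legal values are 0..8 block errors; any other pattern is treated
        as zero errors, as the text describes.
        """
        count = (g1 >> 4) & 0x0F   # bits 1-4 = most significant nibble
        return count if count <= 8 else 0

    print(rei_p_count(0b0011_0000))  # reports 3 block errors
    print(rei_p_count(0b1111_0000))  # 15 is illegal -> interpreted as 0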

Bits 5, 6 and 7 provide codes to indicate both an old version (compliant with earlier versions of this standard) and an enhanced version of the STS path remote defect indication (RDI-P). The enhanced version of RDI-P allows differentiation between payload, connectivity, and server defects.

Bit 7 is set to the inverse of bit 6 to distinguish the enhanced version of RDI-P from the old version. Bit 8 is unassigned at this time. SDH uses this byte for the same purpose. Path user channel F2 — This octet is allocated for user communications, similar to the F1 octet in the transport overhead. Multiframe indicator H4 — The uses of this octet will be described in later sections. This octet provides a generalized multiframe indicator for payloads.
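
The bit-7-is-the-inverse-of-bit-6 convention can be checked mechanically; this tiny sketch assumes SONET bit numbering (bit 1 is the most significant bit of the octet) and an invented function name.

    def is_enhanced_rdi_p(g1: int) -> bool:
        """True if bits 6 and 7 of G1 differ, marking the enhanced RDI-P coding."""
        bit6 = (g1 >> 2) & 1
        bit7 = (g1 >> 1) & 1
        return bit6 != bit7

    print(is_enhanced_rdi_p(0b0000_0010))  # bit 6 = 0, bit 7 = 1 -> enhanced coding
    print(is_enhanced_rdi_p(0b0000_0000))  # bits 6 and 7 equal -> old coding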

This indicator is used for two purposes. The first is for VT-structured payloads, which will be described in a later section.

Growth octets Z3, Z4 — Ignore these two octets. They are reserved for future standardization. SDH defines Z3 as growth and refers to Z4 as K3; some bits of K3 are reserved for growth. Tandem connection maintenance Z5/N1 — In option 2, bits of N1 are used to provide maintenance information, including remote error indication, outgoing error indication, remote defect information, outgoing defect information, and the TC access point identifier.

For more information regarding Tandem Connection Maintenance, refer to T1.105. SDH uses this byte for a similar purpose. Because this octet is part of the path overhead, the B3 byte must be recalculated at each point where this channel is altered. Any errors received at that point must be included in the resulting newly calculated B3 parity. Due to their timing requirements, Tandem Connection Maintenance messages get precedence of use on the Tandem Connection Data Link, and may preempt other messages on that channel.

Since Tandem Connection Terminating Equipment is not required to perform store-and-forward or Layer 2 termination functions on non-Tandem Connection Maintenance messages, some or all of the preempted messages may be lost and require retransmission. We also talked about interleaving multiple STS-1 streams into a higher-rate stream on the line, e.g., three STS-1 streams interleaved into an STS-3.

But with interleaved streams, each STS-1 stream maintains its own identity and is limited to the payload capacity of a single STS-1. Suppose, however, that you need a data stream faster than a single STS-1 can carry. The answer is concatenated payloads.

With concatenated payloads, multiple STS-1 payloads are joined together and treated as a single payload. For example, a common concatenated payload joins three STS-1 payloads together. An STS-3c would look like Figure 8 — it would consist of 270 columns by nine rows, of which 9 columns are transport overhead. A reasonable question at this point is: how does the system know that the data stream is concatenated?

The remaining H1 and H2 octets are used to indicate that the payload is concatenated. The second H1 octet is paired with the second H2 octet to produce a bit string similar to that produced by the first H1, H2 octets.

To indicate concatenation, the second and third pointers are set up as follows: the first four bits are set to the new data flag value (1001), the next two bits can be anything, and the last 10 bits are set to all ones.

Since 10 ones form the value 1023, which is greater than the maximum legal pointer value of 782 and would normally be invalid, it, together with the NDF in the first four bits, provides the indication that the SPE which would normally be pointed to by this pointer is joined to the previous SPE.
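
To make the coding concrete, here is a small sketch that builds and recognizes the 16-bit concatenation indication formed by an H1/H2 pair: the NDF pattern 1001 in the first four bits, two don't-care bits, and ten ones in the pointer field. It models only the bit pattern, not the actual byte positions in the frame, and the names are invented.

    NDF_ACTIVE = 0b1001            # "new data flag" pattern
    CONCAT_VALUE = 0b1111111111    # ten ones = 1023, outside the legal 0..782 range

    def make_concat_indicator(ss: int = 0) -> int:
        """Build the 16-bit H1/H2 word that marks a concatenated SPE."""
        return (NDF_ACTIVE << 12) | ((ss & 0b11) << 10) | CONCAT_VALUE

    def is_concat_indicator(word: int) -> bool:
        """True if this H1/H2 word says its SPE is joined to the previous one."""
        return (word >> 12) == NDF_ACTIVE and (word & 0x3FF) == CONCAT_VALUE

    word = make_concat_indicator()
    print(hex(word), is_concat_indicator(word))   # 0x93ff True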

We could look at an STS-1 stream as providing 86 columns of octets, by nine rows, 8,000 times per second, for a payload rate of 49.536 Mbps. An STS-3c could be thought of as 260 columns of octets (270 total columns, minus 9 columns of transport overhead, minus one column of payload overhead), by nine rows, 8,000 times per second, for a payload rate of 149.76 Mbps. Within that payload, we could put any traffic we wanted.
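
Those payload rates are straightforward to verify; the snippet below simply redoes the multiplication (9 rows, 8 bits per octet, 8,000 frames per second).

    FRAMES_PER_SECOND = 8_000
    BITS_PER_OCTET = 8
    ROWS = 9

    def payload_rate_mbps(columns: int) -> float:
        """Payload rate in Mbps for a given number of payload columns."""
        return columns * ROWS * BITS_PER_OCTET * FRAMES_PER_SECOND / 1e6

    print(payload_rate_mbps(86))    # STS-1:  87 SPE columns minus 1 POH column -> 49.536
    print(payload_rate_mbps(260))   # STS-3c: 270 - 9 TOH - 1 POH columns       -> 149.76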

However, there are some additional complexities in the payload area. So we need to look at what facilities were built into the payload area to handle the different DS-N and E rates and how the differing clocks of the plesiochronous traffic are accommodated. The specific traffic which the designers were interested in carrying is shown in Table 2.

This was done to reduce problems with cross-border traffic. A customer might purchase an E1 leased line (2.048 Mbps). But when the circuit got to the US, it would either have to be reduced to a DS-1 (1.544 Mbps) or carried on a higher-rate circuit, and at the time SONET was designed, higher rates were too expensive to use for this type of application.

Hopefully, the reader who decides not to finish reading this section will still get some value from reading a portion of it. One column is taken by the payload overhead (POH), leaving 86 columns. Next, we break the 86 columns into seven groups of 12 columns. Now, seven groups of 12 columns is only 84 columns, leaving two extra columns of fixed stuff. These two columns are columns 30 and 59, where the POH is counted as column 1. Each group of twelve columns is known as a virtual tributary group (VTG), and the columns of the seven VTGs are interleaved across the payload.

It consists of 12 columns, equaling a gross bit rate of 6.912 Mbps. Remember that we want to carry a number of different plesiochronous rates, from the DS-1 at 1.544 Mbps up to the DS-2 at 6.312 Mbps. The gross bit rate is good for the DS-2 at 6.312 Mbps. We can accommodate all of the rates specified in Table 2, however, simply by subdividing the 12 columns. For example, three columns, which are called a VT1.5, give a gross bit rate of 1.728 Mbps. Four columns, which are called a VT-2, give a gross bit rate of 2.304 Mbps. Six columns, called a VT-3, give a gross bit rate of 3.456 Mbps. And twelve columns, called a VT-6, give a gross bit rate of 6.912 Mbps.
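
The same arithmetic gives the gross rates of the different VT sizes; this snippet just runs the column counts from the text through the rate formula.

    FRAMES_PER_SECOND = 8_000
    ROWS = 9

    vt_columns = {"VT1.5": 3, "VT-2": 4, "VT-3": 6, "VT-6": 12}

    for name, cols in vt_columns.items():
        rate_mbps = cols * ROWS * 8 * FRAMES_PER_SECOND / 1e6
        print(f"{name}: {cols} columns -> {rate_mbps:.3f} Mbps gross")
    # VT1.5: 1.728, VT-2: 2.304, VT-3: 3.456, VT-6: 6.912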

One restriction is that a VTG can only contain one type of mapping, i.e., all the VTs within a single VTG must be the same size. Each color represents one VT1.5. To begin, the C2 octet in the POH signals that the payload contents are virtual tributaries (VTs) by containing the hex code 0x02. But we immediately run into a problem.

The three columns used to transport a DS-1 have only 27 octets per frame, which occurs 8,000 times per second. A DS-1 has 24 octets per frame, which also occurs 8,000 times per second, plus an extra framing bit.

But how are we going to indicate the superframe? We use the last two bits in the H4 octet in the POH. These two bits count from 00 to 11 and then roll over to 00 again (00, 01, 10, 11, 00, and so on). There is one special thing about the count in the H4 octet. The value in the H4 octet indicates the superframe number for the next payload, not for the payload associated with this H4. So an H4 with the value 00 in the last two bits indicates that the first frame of a superframe will occur in the next SONET payload.
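
A toy model of that counter, with an invented helper name: the two low-order bits of H4 cycle 00, 01, 10, 11, and the value announces the superframe phase of the payload that follows, not the one carrying this H4.

    def next_payload_phase(h4: int) -> int:
        """VT superframe phase (0..3) of the NEXT SONET payload, per the text."""
        return h4 & 0b11

    # An H4 of 0b00 says the next payload holds the first frame of a superframe (V1).
    for h4 in (0b00, 0b01, 0b10, 0b11, 0b00):
        print(f"H4 low bits {h4:02b} -> next payload is superframe frame {next_payload_phase(h4)}")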

So now that we can identify the frames of the superframe, we take the first octet in the VTG (which is also the first octet of the first VT in the VTG) and use it for overhead. Since we have a superframe of four frames, we have four overhead octets, known as V1, V2, V3, and V4. The V1 and V2 octets combine to form a word that is interpreted in a very similar fashion to the word created by combining the H1, H2 octets in the transport overhead.

Each virtual tributary has its first octet taken as overhead (the Vx octets). Once the receiver has processed one superframe, it will have the V1, V2 octets for that VT. Finding the start of the VTs is easy, of course, because of the interleaving; they are together at the beginning of the VTG. The V3 octet is used in the same fashion as the H3 octet.

It normally does not contain data but will when a negative justification is done. The octet after the V3 octet is used for a positive justification, just like the octet after the H3 octet. The V4 octet has no meaning and is reserved for future standardization. And just what is this VT payload and where is it?

Well, we started with 27 octets in each VT1.5 frame. We now have 26 octets remaining in each payload frame of the superframe. So what we have left is four frames of 26 octets each, or a total of 104 octets. Since there are 104 payload octets, the pointer value for a VT1.5 can range from 0 to 103. The operation of the V1, V2 octets is exactly the same as the H1, H2 octets.

Likewise, the new data flag (NDF) can be used in exactly the same fashion as was described for the H1, H2 octets. The V1, V2 pointer points to the first octet of the 104-octet VT superframe. If this octet is located immediately after the V2 octet, the pointer value is zero. Since there are only 104 octets in the VT superframe, the highest value of the pointer is 103, which means that the first octet of this payload immediately precedes the V2 octet of the next superframe.
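
Here is a deliberately simplified sketch of how a VT1.5 pointer offset (0 through 103) could be turned into a frame-and-octet position within the 104-octet VT superframe, counting the octet immediately after V2 as offset 0. The real byte numbering in the standard is more involved; this only illustrates the range check and the four-frames-of-26-octets structure.

    VT15_SPE_OCTETS = 104      # 4 frames x 26 octets
    OCTETS_PER_FRAME = 26

    def locate_vt_spe_start(pointer: int) -> tuple[int, int]:
        """Map a VT1.5 pointer (0..103) to a (frame index, octet index) pair."""
        if not 0 <= pointer < VT15_SPE_OCTETS:
            raise ValueError("invalid VT1.5 pointer")
        return divmod(pointer, OCTETS_PER_FRAME)

    print(locate_vt_spe_start(0))     # (0, 0): first octet after V2
    print(locate_vt_spe_start(103))   # (3, 25): octet just before the next V2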

And since we have a four-frame multiframe, the VT payload is divided into four sections, or frames, of 26 octets each. Each payload frame of the VT superframe contains one overhead octet at its beginning, leaving 25 octets in each frame. These overhead octets are the V5, J2, Z6 and Z7 octets and are defined below. Much of the text for the definition of each octet below is taken from the T1.105 standard.

Remember that a value of 11 for bits 5 and 6 of the V1, V2 word indicates that we are mapping a DS-1 (a VT1.5). The bit assignments of the V5 byte are specified in the following paragraphs and are illustrated in the accompanying figure. Bits 1 and 2 of V5 are used for error performance monitoring. A bit-interleaved parity (BIP) scheme is specified.

Bit 1 is even parity calculated over all odd-numbered bits (1, 3, 5, and 7) in all bytes in the previous VT SPE. Similarly, bit 2 is even parity calculated over the even-numbered bits. Bit 4 of V5 is reserved for mapping-specific functions; one such use is a VT remote failure indication (RFI-V). RFI-V is generated by setting bit 4 of V5 to one and cleared by setting bit 4 of V5 to zero. Bits 5, 6, and 7 of V5 provide a VT path signal label; eight binary values are possible in these three bits.
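
Bits 1 and 2 of V5 therefore form a two-bit interleaved parity (BIP-2). A minimal sketch of that calculation over a previous VT SPE, treated here as a plain byte string with SONET bit numbering (bit 1 is the most significant bit):

    def bip2(octets: bytes) -> int:
        """BIP-2: bit 1 covers bit positions 1, 3, 5, 7; bit 2 covers 2, 4, 6, 8.

        Returns a 2-bit value, (bit1 << 1) | bit2, using even parity.
        """
        odd_parity = 0    # parity over bit positions 1, 3, 5, 7
        even_parity = 0   # parity over bit positions 2, 4, 6, 8
        for octet in octets:
            for pos in range(8):              # pos 0 corresponds to bit 1 (MSB)
                bit = (octet >> (7 - pos)) & 1
                if pos % 2 == 0:
                    odd_parity ^= bit
                else:
                    even_parity ^= bit
        return (odd_parity << 1) | even_parity

    print(bip2(bytes([0b1010_0000])))  # two ones in odd positions, none in even -> 0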

See the T1.105 standard for the defined values. Bit 8 of V5, in combination with bits 5, 6 and 7 of Z7, provides codes to indicate both an old version and an enhanced version of the VT path remote defect indication (RDI-V). Bit 8 of V5 is set equal to bit 5 of Z7. This allows old equipment to receive and interpret an RDI-V indication. VT path trace J2 — This octet is used to repetitively transmit a VT Path Access Point Identifier so that a path receiving terminal can verify its continued connection to the intended transmitter.

SDH uses this byte for the same purpose, but uses the format specified in G.707. Growth Z6, Z7 — Ignore these two octets. They actually have some complex meanings that will not be described here; see T1.105. But wait! A DS-1 can be used for carrying 24 voice calls, or it can be used to carry data, such as Internet traffic. Also, a DS-1 is not just a stream of octets. Additionally, the voice samples may have associated signaling to indicate the status of the voice call.

It turns out that we can communicate all of this by using one octet of each VT frame, leaving 24 octets to carry the DS-1 payload.

Note: Understanding the following couple of paragraphs requires that the reader have a good knowledge of DS-1 framing and signaling. You can skip them without great loss. For example, a DS-1 using superframe (SF) framing has a superframe of twelve 193-bit frames. The status of four channels can be sent in each frame.

See the relevant T1 standards for the details. For our discussion, just accept that they need to be transported as part of the DS-1. This type of mapping is known as byte-synchronous mapping. Byte-synchronous mapping preserves the format of the DS-1 signal, allowing any DS-0 to be extracted anywhere along the communication chain.

Asynchronous mapping, which I describe next, does not maintain the DS-1 payload octet identity — it just takes a group of bits, usually 193 bits, and puts them in a frame of the VT superframe.

Asynchronous mapping is very common. A great many DS-1 circuits do not carry channelized voice, i.e., the payload is not divided into individual DS-0s. The terminating equipment at the ends of the circuit will be able to obtain frame synchronization and extract the payload. Because of this, the bits in the second octet of each VT frame have a different meaning than they do in byte-synchronous mapping. The octet after the overhead octets is used to transport the framing bits and to accommodate jitter. The R bits are unused fixed stuff. The O bits are reserved for future standardization and should be considered the same as the R bits.

The I bits are simply information bits, needed because we normally carry 193 bits in a frame. Note that the last frame of the superframe has different bit assignments in the second octet. The purpose of these bits is to allow the last frame to carry either 192, 193, or 194 bits to adjust for clock differences. The three C1 bits are used to control the function of the S1 bit, while the three C2 bits are used to control the function of the S2 bit.

If all of the C1 bits are zero, it indicates that the S1 bit should be treated as data and its content included in the output data stream. If all of the C1 bits are one, it indicates that the S1 bit should be treated as a stuff bit and its content should not be included in the output data stream.

The C2 bits are used in the same way to control the use of the S2 bit. Majority voting (two out of three) is applied in case of a bit error. So why do we do this? Normally, the last frame of the VT superframe should contain 193 bits. By convention, bit S2 is used to carry an information bit under this normal situation. If the DS-1 clock is running fast, however, we use the S1 bit to carry an extra information bit for this one frame of the superframe. The last frame of the VT superframe will carry 194 bits in this case, or 773 bits in the VT superframe.

If the DS-1 clock is running slow, we can signal that the S2 bit is a stuff bit and carry only 192 bits in the last frame of the VT superframe, or 771 bits in the VT superframe.
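
A sketch of that stuffing logic for the asynchronous DS-1 mapping: a two-out-of-three majority vote over the C1 (or C2) bits decides whether S1 (or S2) carries data, and the superframe ends up carrying 771, 772, or 773 information bits. The bit counts follow the description above; the function names are invented.

    def majority(bits: tuple[int, int, int]) -> int:
        """Two-out-of-three majority vote, tolerating a single bit error."""
        return 1 if sum(bits) >= 2 else 0

    def superframe_info_bits(c1: tuple[int, int, int], c2: tuple[int, int, int]) -> int:
        """Information bits carried in one VT superframe of the async DS-1 mapping.

        Nominal load is 772 bits: S1 is stuff, S2 carries data.  C1 voted to
        zero makes S1 carry data too (773); C2 voted to one stuffs S2 (771).
        """
        bits = 772
        if majority(c1) == 0:
            bits += 1
        if majority(c2) == 1:
            bits -= 1
        return bits

    print(superframe_info_bits((1, 1, 1), (0, 0, 0)))  # nominal clock: 772
    print(superframe_info_bits((0, 0, 0), (0, 0, 0)))  # fast DS-1 clock: 773
    print(superframe_info_bits((1, 1, 1), (1, 1, 1)))  # slow DS-1 clock: 771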

This allows us to make one clock adjustment every VT superframe. The mapping of E1 (2.048 Mbps) signals follows the same principles. Higher-rate signals, e.g., the DS-3, do not use virtual tributaries at all. Now, we move on to examine the mapping of some signals which do not require virtual tributaries — these signals fill the STS-1 SPE.

This leaves 84 columns by nine rows for usable payload. The SPE is broken into nine frames, one per row (see the accompanying figure). Additionally, the C1 octet carries five information bits, providing a total of 621 information bits per row. Since there are nine of these frames per SPE and the SPE repeats 8,000 times per second, these bits would provide a bit rate of 44.712 Mbps. A DS-3 operates at 44.736 Mbps. How are we going to carry that rate when we only have a capacity of 44.712 Mbps? Each row also contains a stuff opportunity bit that can carry an extra information bit when needed. So the use of this stuff bit allows us to achieve the proper bit rate for a DS-3 and also allows us to accommodate clock differences between the DS-3 signal and the SONET signal.
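
The DS-3 bit budget can be checked the same way; the figures below assume 621 information bits per row plus one stuff opportunity bit per row, as described above.

    ROWS_PER_SPE = 9
    FRAMES_PER_SECOND = 8_000

    info_bits_per_row = 621   # 77 full octets plus the 5 information bits in the C1 octet
    no_stuff = info_bits_per_row * ROWS_PER_SPE * FRAMES_PER_SECOND / 1e6
    all_stuff = (info_bits_per_row + 1) * ROWS_PER_SPE * FRAMES_PER_SECOND / 1e6

    print(f"no stuff bits used:   {no_stuff:.3f} Mbps")    # 44.712
    print(f"every stuff bit used: {all_stuff:.3f} Mbps")   # 44.784
    print("the DS-3 rate of 44.736 Mbps falls inside this range")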

This leaves 84 columns by nine rows for payload. All three types of traffic require that the octet boundary of the traffic be available. If zero-bit insertion were done, octet alignment would be lost very quickly. Beyond that one requirement, the payload of the SONET frame is simply viewed as an octet transport mechanism. The traffic is not examined in any way, nor is there any requirement for any kind of alignment on SPE boundaries. As an example, ATM cells are taken one octet at a time with each octet placed in the next available octet in the SPE without regard for any boundaries in the cell or the SPE, other than maintaining octet alignment.

Eight columns of payload equal a little more than 4.6 Mbps. This is true for SDH, also. This value was specified in the latest version of G.707. And most equipment actually handles traffic in 16-bit words. While linear networks have some applicability, by far the most common topology in the network is the ring, as shown in Figure 34 and Figure 35. Rings are used because they provide an alternate path to communicate between any two nodes.

For example, in the linear network, even if two sets of fiber were used between the nodes, it is possible for all of the fibers to be cut at the same time, unless great pains are taken with routing the fiber. And in most cases, the limitations on rights-of-way do not permit this kind of route diversification.

A two-fiber ring can be operated in either of two ways: (1) as a unidirectional ring, or (2) as a bi-directional ring. With a unidirectional ring, traffic is limited to one fiber and always flows the same way around the ring. The second fiber is simply the protection fiber and is used in a special way to provide backup (explained later). The diagram shown here is for a unidirectional fiber ring.

In this type of ring, all the traffic could be carried on one fiber. A two-fiber ring can also operate as a bi-directional ring. With unidirectional traffic, there can be a difference in the transmit and receive propagation delay between two nodes. For example, in Figure 34, if node B sends traffic to node A, the propagation delay is one link. However, when node A sends traffic to node B, the traffic has to go through nodes D and C to reach node B, leading to a longer propagation delay.

When data is sent between nodes A and B, it simply flows over the two fibers connecting the two nodes. Bi-directional rings do not buy us any additional capacity, however. In order to be able to provide backup, each fiber in a bi-directional ring can only be used to half its capacity — the second half of the capacity is reserved for backup. The alternative is a four-fiber ring; this type of ring is always bi-directional.

Four-fiber rings are always operated as bi-directional rings. In this case, we get the full rate that can be put on the working fibers, but the protection fibers are not used under normal conditions. No matter how we slice it, rings always require twice as much bandwidth as the amount of traffic carried in the ring if we want to provide full backup. With four fibers, it is possible to do a link recovery if one, and sometimes two, of the fibers fail between two nodes.

The more common case is when all the fibers between two nodes are cut. In this situation, the bi-directional ring provides restoration by routing the traffic over the protection fibers, in the opposite direction around the ring. There are two backup systems used on fiber — path and line. In a UPSR system, all of the traffic is transmitted in both directions around the ring, in one direction on the working fiber, and in the other direction on the protection fiber. This is known as 1:n backup.

These types of systems cannot fully back up a ring, however. If all of the fibers are cut, some traffic must be discarded. The only restriction in path switching is that both the entry and exit nodes for a path are operating at the same level.

This monitoring is based on a number of things. The BIP octets, available at every level of the multiplexing hierarchy, provide insight into the number of bit errors on the path.

More serious errors can occur, such as the failure of one fiber or detection of an alarm indication signal (AIS) on a path.

Figure: Path switching on a unidirectional ring.

On the Tx side of the traffic, the path is sent over both counter-rotating rings. Since the receiver of each channel is monitoring both paths on both fibers, switching between fibers is immediate, with no loss of data.

Additionally, restoral is accomplished by the receiver, without any coordination with the transmitter — no APS communication channel is needed in UPSR.

When a node detects a failure, it inserts AIS on the affected paths. This is an immediate indication to the device on the other end of the path that the data in that path is bad. Another reason for doing this is that paths may be re-used in a unidirectional ring. Certain types of failures could occur which could cause the data received in a path to come from the wrong source.

The receiving device may not recognize the misconnection and deliver incorrect data. If AIS is put on the path, there is no question the data is invalid. Path switching on a unidirectional ring has a number of advantages. It is simple for fiber cuts.

No coordination is needed between the receiver and transmitter — it is fully implemented at the receiver for fiber cuts; for loss of ADMs, additional processing is required. It provides hitless restoral: no data is lost on a restoral unless an ADM fails or is separated from the ring by multiple fiber failures. On the negative side, it is expensive, because two sets of path equipment are needed wherever a circuit is dropped at an ADM. Also, unidirectional rings have the disadvantage of asymmetric delay — the time it takes to go one way around the ring is usually different from the time it takes to go the other way around the ring.

Because of this, buffering must be done at the path-terminating site. And as rings get larger and faster, more buffering must be done. This is the primary reason why unidirectional rings are primarily used in metropolitan networks and with lower line rates. In the backbone, especially in large rings, bi-directional rings are much more common. The defining characteristic of bi-directional rings is that the traffic between two nodes flows in two directions, rather than in only a single direction, as occurs in unidirectional rings.

For the simplest case of two adjacent nodes, e.g., nodes A and B, the traffic simply flows in both directions over the span connecting them. Bi-directional rings can be either two-fiber or four-fiber. In a two-fiber bi-directional ring, each fiber can only carry half its capacity, reserving the other half for backup (also known as protection). In a four-fiber bi-directional ring, two of the fibers are reserved for protection.

When a fiber failure occurs in a two-fiber bi-directional ring, the only recovery possible is a ring switch (or, as the standards call it, a line switch), sending data in the opposite direction over the two fibers.

In a four-fiber bi-directional ring, a single fiber failure can usually be recovered from by doing a span switch, simply switching to the protection fiber over that one link. Failure of multiple fibers will usually require a ring switch. Look at Figure 37 as we discuss what happens in a bi-directional ring in order to accomplish a ring switch.

The detail diagrams show how the nodes reroute the traffic. Loss of a single fiber is of course possible but the recovery is a bit more complex. Note what happens once the bridge occurs.

Look at ADM B first. The signals arriving on fiber 1 would ordinarily be transmitted to ADM A on fiber 1. Now, however, the signal is put on the protection fiber (fiber 4), which carries it in the reverse direction around the ring to ADM A. ADM A then takes the signal on fiber 4 and bridges it to fiber 1. Those two ADMs handle the fault; the rest of the ring keeps operating as before.

The specifications require that the ring switch and restoration of service occur within 50 ms. When all the fibers are cut, both ADMs see the failure at the same time and take the same actions. If only one fiber is lost, only one ADM will see the failure. In the case of a single fiber loss, the two ADMs will first attempt a link switch by switching to the protection fiber(s) across the link between them.

If that is not possible, they will do a ring switch as described above. Recovery on a two-fiber bi-directional ring is essentially the same as on a four-fiber bi-directional ring, except that a span switch is not possible — only a ring switch is possible.

Signaling with the K1, K2 octets is exactly the same. Traffic is routed back around the ring, not on a separate fiber, but in the unused capacity of the other fiber which carries traffic in the opposite direction.

Just as in the four-fiber bi-directional ring, the protection channels simply bring the traffic the long way around the ring until it gets to the node that it would have arrived at if the fiber failure had not occurred. I will not detail the meaning of the bits in the K1, K2 octets. The purpose of these octets has been described in the text, and the interested reader can obtain the details in the ANSI or ITU standards or the Telcordia documents.

Many issues were glossed over and many boundary conditions were ignored. High-speed communications is the future. Once you understand the naming conventions, however, you can read the standard and make perfect sense of it. But if you approach it without any prior knowledge, it can be very confusing and difficult to understand. And the standards people, being good engineers, appear to have provided the maximum flexibility, which leads to maximum confusion.

But enough talk; let us look at the SDH frame. The SDH section overhead is almost the same as the transport overhead in SONET, except that row four is excluded from the section overhead. So the section overhead is the first nine columns, less row four.

The figure is not to scale — the overhead is shown much larger than it actually is. However, working through the G.707 naming conventions is a truly frustrating task. As you might suspect, an AU-3 consists of 87 columns, plus three octets of pointers. An AU-4 consists of 261 columns, plus nine octets of pointers.
