IPv6, specified in RFC 2460, was created to address a few design issues with the previous IPv4. The major issue to be addressed is the limited number of IPv4 addresses. As the Internet grew, many devices that were never originally expected to have networking support were given it, and IP addresses were allocated to organizations in large chunks, many of whose later addresses went unused, being reserved for future growth. The people behind IPv6 decided, not without controversy, that more addresses were needed. As a result, they created the most defining feature of IPv6.
Each address in IPv6 is 128 bits. The address space is split up into very large ranges and subfields, with the understanding that these large fields simplify network allocation. IPv6 addresses are written in hexadecimal notation rather than decimal, as groups of up to four hex digits separated by colons. For example, one address might be 1080:0:0:0:8:800:200C:407A, where it is understood that leading zeros within a group can be omitted. There is a shortcut as well: a single long run of zero groups can be written with the double colon, ::. Thus, 1080::8:800:200C:407A specifies the same address as the earlier one.
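As a quick illustration (a sketch using Python's standard ipaddress module, nothing specific to voice mobility), both spellings above parse to the same 128-bit value:

```python
import ipaddress

# The two spellings of the example address from the text.
full = ipaddress.IPv6Address("1080:0:0:0:8:800:200C:407A")
shortened = ipaddress.IPv6Address("1080::8:800:200C:407A")

print(full == shortened)   # True: both spellings name the same 128-bit address
print(full.compressed)     # 1080::8:800:200c:407a
print(full.exploded)       # 1080:0000:0000:0000:0008:0800:200c:407a
```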
As with IPv4, there are a few ranges, specified in slash notation, which are set aside for other purposes. The address ::1 is the loopback address. Addresses of the form FE80::/10 are link-local addresses. Addresses of the form FC00::/7 are private (unique local) addresses. The multicast address space is FF00::/8. Finally, for backward compatibility, IPv6 specifies how to embed IPv4 addresses into this space. If the left 96 bits of the address are zero, the right 32 bits are the IPv4 address. This allows a machine using 192.168.0.10, say, to use the IPv6 address ::192.168.0.10 (the dotted decimal notation is allowed just for this). An address of this form means that the machine understands and can receive IPv6, but was assigned only an IPv4 address by the administrator. On the other hand, machines that speak only IPv4, and yet have had packets converted to IPv6 by some router, are also given an address. If 192.168.0.10 belonged to this group, it would receive the IPv6 address ::FFFF:192.168.0.10. The FFFF signifies that the machine cannot speak IPv6.
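The same ipaddress module can classify these reserved ranges; a small sketch, with the example addresses made up simply to fall in each range:

```python
import ipaddress

print(ipaddress.IPv6Address("::1").is_loopback)         # True: loopback
print(ipaddress.IPv6Address("FE80::1").is_link_local)   # True: FE80::/10
print(ipaddress.IPv6Address("FD00::1").is_private)      # True: unique local space
print(ipaddress.IPv6Address("FF02::1").is_multicast)    # True: FF00::/8

# IPv4-mapped addresses carry the embedded IPv4 address with them.
mapped = ipaddress.IPv6Address("::FFFF:192.168.0.10")
print(mapped.ipv4_mapped)                               # 192.168.0.10
```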
The IPv6 header is given in Table 1.
Version/Flow | Payload Length | Next Header | Hop Limit | Source | Destination | Options | Data |
---|---|---|---|---|---|---|---|
4 bytes | 2 bytes | 1 byte | 1 byte | 16 bytes | 16 bytes | optional | variable |
The Version/Flow field (Table 2) specifies important quality-of-service information about the flow. The version, of course, is 6. The Traffic Class specifies the priority of the packet. The Flow Label specifies which flow this packet belongs to. The Payload Length specifies the number of bytes from the end of the IPv6 header to the end of the packet; thus, this is the length of the options and the data. (Note that, in IPv4, the options are counted in the header, not the payload.) The Next Header field specifies the type of the header following the IPv6 header, or, if there is no IPv6 option following, the protocol of the higher-layer unit this packet carries. The Hop Limit is the TTL, but for IPv6. The Source and Destination addresses have the same meaning as in IPv4.
 | Version | Traffic Class | Flow Label |
---|---|---|---|
Bit: | 0-3 | 4-11 | 12-31 |
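As a sketch of how these fields sit on the wire, the following Python snippet (standard struct module only) unpacks the fixed 40-byte header according to Tables 1 and 2; the packet built at the end is synthetic, assembled here just to exercise the parser:

```python
import struct

def parse_ipv6_header(packet: bytes):
    """Parse the fixed 40-byte IPv6 header described in Tables 1 and 2."""
    ver_tc_flow, payload_len, next_header, hop_limit = struct.unpack("!IHBB", packet[:8])
    return {
        "version":        ver_tc_flow >> 28,           # bits 0-3
        "traffic_class":  (ver_tc_flow >> 20) & 0xFF,  # bits 4-11
        "flow_label":     ver_tc_flow & 0xFFFFF,       # bits 12-31
        "payload_length": payload_len,                 # options plus data
        "next_header":    next_header,                 # e.g., 17 = UDP, 6 = TCP
        "hop_limit":      hop_limit,
        "source":         packet[8:24],                # 16-byte source address
        "destination":    packet[24:40],               # 16-byte destination address
    }

# A synthetic header: version 6, 8-byte UDP payload, hop limit 64, ::1 to ::1.
hdr = struct.pack("!IHBB", 6 << 28, 8, 17, 64) + bytes(15) + b"\x01" + bytes(15) + b"\x01"
print(parse_ipv6_header(hdr)["next_header"])   # 17 (UDP)
```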
IPv6 is routed in the same way as IPv4 is, although there is far more definition of how devices learn of routes. In IPv6, devices are able to learn of routers from the routers' own advertisements, using the protocol for IPv6 administrative communications (ICMPv6, as opposed to the ICMPv4 used with IPv4).
IPv6 is a major factor in government and public organization networks, and has an impact on voice mobility in those environments. Many private voice mobility networks, however, can still safely use IPv4.
UDP
The User Datagram Protocol, or UDP, is defined in RFC 768. The purpose of UDP is to provide a notion of ports, or mailboxes, on each IP device, so that multiple applications can coexist on the same machine. A UDP port is a 16-bit value, assigned by the application opening the port. Packets arriving for a UDP port are placed into a queue used just for the application that has the port open: these queues, and the ports they are attached to, are generally called sockets. UDP-based applications often have well-known, assigned port numbers. Common UDP applications for voice mobility are SIP on port 5060, DNS on port 53, and RADIUS on port 1812.
Every socket has a port, even those that do not need a well-known one. Ports can be assigned automatically, according to whatever might be free at the time; these are called ephemeral ports.
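A brief sketch of the two cases using Python's socket module: binding to a chosen well-known port (SIP's 5060, reused from the text) versus binding to port 0 and letting the operating system pick an ephemeral one. Nothing needs to be listening on the other side for this to run, though it does assume port 5060 is free on the machine:

```python
import socket

# A socket bound to a specific, well-known port (SIP's 5060 from the text).
sip_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sip_sock.bind(("0.0.0.0", 5060))
print(sip_sock.getsockname())   # ('0.0.0.0', 5060)

# A socket bound to port 0: the OS hands out a free ephemeral port.
eph_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
eph_sock.bind(("0.0.0.0", 0))
print(eph_sock.getsockname())   # ('0.0.0.0', <some ephemeral port>)

sip_sock.close()
eph_sock.close()
```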
UDP embeds directly into an IPv4 or IPv6 packet. The format of the UDP header is shown in Table 3.
Source Port | Destination Port | Length | Checksum | Data |
---|---|---|---|---|
2 bytes | 2 bytes | 2 bytes | 2 bytes | variable |
The Source Port is the 16-bit port of the socket sending the UDP packet. It is allowed to be 0, although that is rarely seen; ephemeral ports are far more common. The Destination Port is that of the socket that needs to receive the packet. The Length field specifies the entire length of the UDP datagram, from the Source Port to the end of the Data. This is redundant, because IP already records the length of its payload, and the UDP datagram is the only thing that needs to fit in it. The Checksum is an optional field for UDP over IPv4 (it is required over IPv6), and covers the data of the packet, as well as the UDP header and a few fields of the IPv4 or IPv6 header (such as source, destination, protocol, and length).
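Here is a minimal sketch of the 8-byte header from Table 3 being built and parsed with Python's struct module; the checksum is left at zero, which over IPv4 simply means "not computed" (a real stack would fill it in):

```python
import struct

def build_udp_header(src_port: int, dst_port: int, payload: bytes) -> bytes:
    """Build the 8-byte UDP header of Table 3; Length covers header plus data."""
    length = 8 + len(payload)
    checksum = 0   # 0 means "no checksum" when UDP runs over IPv4
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

header = build_udp_header(49152, 5060, b"INVITE ...")
src, dst, length, checksum = struct.unpack("!HHHH", header)
print(src, dst, length, checksum)   # 49152 5060 18 0
```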
UDP suffers from the same problems as the underlying IP technology does. Packets can get dropped or reordered. Applications that depend on UDP, such as SIP and RTP, need to make plans for when packets do get lost. This is a major portion of voice mobility.
TCP
The Transmission Control Protocol (TCP) is the heavy-duty older sibling of UDP. TCP, specified in a number of RFCs and other sources, is a protocol designed to correct for the vagaries of IP's underlying delivery, for use in data applications.
Unlike UDP and IP, TCP presents itself to the application as a byte stream, not a packet or datagram service. Of course, TCP is implemented with packets. This means that TCP must ensure that packet loss and reordering are not revealed to the end application, and so some notion of reliable transport is necessary. Furthermore, because TCP is the dominant protocol for data, it must avoid overwhelming the network it is being used in. Therefore, TCP is also charged with congestion control: avoiding the sort of congestion that can bring down a network, while finding the best throughput it can.
The header structure for TCP is given in Table 4.
Source Port | Dest. Port | Sequence | Ack. | Flags | Window | Checksum | Urgent | Options | Data |
---|---|---|---|---|---|---|---|---|---|
2 bytes | 2 bytes | 4 bytes | 4 bytes | 2 bytes | 2 bytes | 2 bytes | 2 bytes | Optional | variable |
The Source and Destination ports are similar to those of UDP, and well-known ports are allocated in the same range. No TCP port can be zero, however, and a UDP port and a TCP port with the same number are actually independent; only convention suggests that an application use the same number for both. Examples of well-known TCP ports are SSH on 22 and HTTP on 80. The Sequence and Acknowledgement fields are used for defining the flow state. The Window field specifies how many bytes of room the receiver has to hold onto out-of-order data. The Checksum, which is mandatory, covers the data, the TCP header, and certain fields of the IP header. The Urgent field is almost always zero, but was conceived as a way for TCP to send important data in a side channel. Options may follow, and then the data comes. Unlike UDP, TCP does not provide the length explicitly, as IP already does.
The flags (see Table 5) share their two bytes with the Data Offset, which specifies where the first byte of data will appear, and therefore how long the options are. The CWR and ECE flags are not often used, and are for network congestion notification. The URG flag indicates whether the Urgent field is meaningful. The ACK flag is set on every packet that is a response to another. The PSH flag is set when this particular packet was the result of the application asking to flush its send buffer. Small writes to the sending TCP socket do not cause packets to come out right away, unless that feature is specifically requested. Rather, the sender's operating system holds on to the data for a bit, hoping to get a larger chunk, which is more efficient to send. The application can flush that buffering, however, and the resulting packet will have the PSH bit set. RST is set when the sender has no idea about the socket the packet is coming in for. SYN is used to set up a TCP flow, and FIN is used to tear it down.
 | Data Offset | Reserved | CWR | ECE | URG | ACK | PSH | RST | SYN | FIN |
---|---|---|---|---|---|---|---|---|---|---|
Bit: | 0-3 | 4-7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
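A sketch of how the fields of Tables 4 and 5 can be pulled out of a raw TCP header with Python's struct module; the SYN segment at the end is synthetic, constructed only to exercise the parser:

```python
import struct

# Flag names ordered from the least significant bit of the 16-bit field upward.
TCP_FLAGS = ["FIN", "SYN", "RST", "PSH", "ACK", "URG", "ECE", "CWR"]

def parse_tcp_header(segment: bytes):
    """Parse the fixed 20-byte TCP header of Table 4 and the flag bits of Table 5."""
    (src, dst, seq, ack, off_flags, window,
     checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    data_offset = off_flags >> 12          # header length in 32-bit words
    flags = [name for i, name in enumerate(TCP_FLAGS) if off_flags & (1 << i)]
    return {"src": src, "dst": dst, "seq": seq, "ack": ack,
            "data_offset": data_offset, "flags": flags,
            "window": window, "checksum": checksum, "urgent": urgent}

# A synthetic SYN segment: ephemeral source port to HTTP's port 80, no options.
syn = struct.pack("!HHIIHHHH", 49152, 80, 1000, 0, (5 << 12) | 0x02, 65535, 0, 0)
print(parse_tcp_header(syn)["flags"])   # ['SYN']
```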
TCP needs to keep track of flow state, in order to provide the appearance of a stream. The TCP stream is a two-way channel, symmetric in the sense that no one side is favored over the other. Each side keeps sender state and receiver state. A part of that state is that every byte in the TCP stream, since the stream began, is given an increasing sequence number. As packets are pushed out to the network by the sender, the sender keeps copies of those packets. When the receiver gets a packet, it acknowledges it by sending a packet with the ACK flag set and the Acknowledgment field set to the sequence number just past the highest byte it has received before a break in the sequence occurs. This acknowledgment can come back as a part of the next return-direction data packet, but if none are queued, then ACKs are generated in their own, otherwise empty, packets after a delay of up to 200 ms. This process is called delayed acknowledgment. If the acknowledgment is never received by the sender, the sender has to assume that either the original data packet, or the acknowledgment itself, got lost. The sender will then retry the packet some time later. Once a packet has had an acknowledgment received for it, the sender will finally free up its copy. On the receive side, the receiver cannot deliver data to the application unless there is a contiguous run of bytes at the head of the reassembly buffer. If not, the buffer holds onto the bytes, and advertises the hole in the next acknowledgment.
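The following toy model (not any real TCP implementation) sketches that receive-side bookkeeping: out-of-order data is held, and the cumulative acknowledgment only advances past a hole once the missing bytes arrive. Sequence numbers start at 0 purely for simplicity:

```python
class ToyReceiver:
    """Toy model of TCP's cumulative acknowledgment and reassembly buffer."""

    def __init__(self):
        self.next_expected = 0   # next sequence number that can be delivered
        self.out_of_order = {}   # seq -> bytes held behind a hole

    def receive(self, seq: int, data: bytes) -> int:
        """Accept a segment; return the cumulative ACK (next byte expected)."""
        self.out_of_order[seq] = data
        # Deliver any contiguous run starting at next_expected.
        while self.next_expected in self.out_of_order:
            segment = self.out_of_order.pop(self.next_expected)
            self.next_expected += len(segment)
        return self.next_expected

rx = ToyReceiver()
print(rx.receive(0, b"AAAA"))   # 4: bytes 0-3 delivered
print(rx.receive(8, b"CCCC"))   # 4: hole at bytes 4-7, so the ACK repeats (duplicate ACK)
print(rx.receive(4, b"BBBB"))   # 12: hole filled, ACK jumps past both held segments
```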
TCP uses sophisticated flow control techniques to prevent the sender from sending too much. The basic flow control technique is that the sender cannot have more outstanding data in flight than the window size it hears on any given TCP packet in return. This prevents the receive buffer from being overrun. On top of that, however, TCP engages in congestion control. The sender specifically tries to measure the round-trip time of the network, and its loss rate. Because TCP is a handshaking protocol, and the sender cannot send when the window is full unless it receives an acknowledgment first, TCP will perform most optimally if it can keep enough packets in the wire to fill the round-trip time. If, after a round-trip time elapses, an acknowledgment does not come in for a packet, the sender can assume the packet did not arrive and retransmit it. However, what if the network is congested? In that case, switches and routers can start dropping packets. TCP reacts to that packet loss. Its first method is to avoid flooding the line to begin with. With TCP, past success begets future success. To make that work, TCP starts off slowly, sending one packet at a time. Every acknowledgement gives it more confidence, and a reason to send one more packet than before in the next round trip. This process, called slow start, continues until the network finally drops a packet. Once a packet is dropped, the sender will notice, because subsequent packets that are sent to the receiver will cause duplicate acknowledgments, as the hole that the loss created prevents the receiver from acknowledging the later sequence numbers, and yet the receiver is required to send an acknowledgment. The back-to-back duplicates cause TCP to back off, by cutting its congestion window (the number of packets it thinks it can have outstanding every round trip) in half. The sender then tries to ease back in, by growing its congestion window by one packet every round-trip time. This process is finely tuned to ensure that the network does not become overly crowded by aggressive behavior. In the early days of the Internet, this did, in fact, happen, and was the motivation behind introducing congestion control.
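The window dynamics can be sketched with a toy model as well: the congestion window doubles each round trip during slow start, then grows by one packet per round trip, and is cut in half whenever duplicate acknowledgments signal a loss. The slow-start threshold and the loss pattern below are made up for illustration, not taken from any real trace:

```python
def simulate_cwnd(rounds: int, loss_rounds: set) -> list:
    """Toy model of slow start plus additive-increase/multiplicative-decrease."""
    cwnd = 1.0        # congestion window, in packets per round trip
    ssthresh = 64.0   # leave slow start once the window reaches this (assumed value)
    history = []
    for rtt in range(rounds):
        history.append(cwnd)
        if rtt in loss_rounds:
            cwnd = max(cwnd / 2, 1.0)   # duplicate ACKs: cut the window in half
            ssthresh = cwnd
        elif cwnd < ssthresh:
            cwnd *= 2                   # slow start: double every round trip
        else:
            cwnd += 1                   # congestion avoidance: one more per round trip
    return history

# A made-up run with losses signaled at round trips 8 and 14.
print(simulate_cwnd(20, {8, 14}))
```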
Because TCP refuses to allow any loss, it is required to block the sender and receiver until any lost packets are recovered. This makes TCP generally inappropriate for voice mobility. Interestingly, TCP can be used for the signaling protocols, such as with Secure SIP, as long as the applications that use it are prepared to handle cases on lossy networks where the application gets stuck. Also, TCP is being used increasingly for video, mostly because of applications such as consumer-oriented video sharing services, which make the assumption that simplicity is best.