Showing posts with label Voice Mobility with Wi-Fi. Show all posts

Wednesday, February 29, 2012

Active Voice Quality Monitoring



A large part of determining whether a voice mobility network is successful is in monitoring the voice quality for devices on the network. When the network has the capability to measure this for the administrator on an ongoing basis, the administrator is able to devote attention to other, more pressing matters.
Active voice quality monitoring comes in a few flavors. SIP-based schemes are capable of determining when there is a voice call, and are often used in conjunction with SIP-based admission control. Because RTP is generally used as the bearer protocol to carry the actual voice in SIP calls, SIP-based call monitoring schemes can measure the loss and delay rates for the RTP traffic that makes up the call, and report back on whether there are phones suffering poor quality. In these monitoring tools, call quality is measured using the standard MOS or R-value metrics.
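To make the relationship between loss, delay, and the MOS/R-value metrics concrete, here is a minimal sketch of how a monitoring tool might map measurements to a score. The impairment constants are simplified illustrations, not the full ITU-T G.107 E-model computation; only the R-to-MOS conversion formula is the standard one.

```python
def r_value(loss_pct: float, one_way_delay_ms: float) -> float:
    """Very simplified E-model sketch: start from the default R of 93.2
    and subtract impairments for delay and packet loss. The constants
    here are illustrative, not the full ITU-T G.107 computation."""
    r = 93.2
    # Delay impairment: roughly linear below ~160 ms, steeper beyond.
    r -= 0.024 * one_way_delay_ms
    if one_way_delay_ms > 160:
        r -= 0.11 * (one_way_delay_ms - 160)
    # Loss impairment (illustrative slope for G.711 without concealment).
    r -= 2.5 * loss_pct
    return max(0.0, min(100.0, r))

def mos_from_r(r: float) -> float:
    """Standard R-to-MOS mapping from G.107."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6
```

A zero-loss, zero-delay call maps to an R of 93.2 and a MOS of roughly 4.4; as loss and delay climb, both metrics fall together.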
SIP-based schemes can be found in a number of different manifestations. Wireline protocol analyzers are capable of listening in on a mirror port, entirely independent of the wireless network, and can report on upstream loss. Downstream loss, however, cannot be detected by these wireline mechanisms. Wireless networks themselves may offer built-in voice monitoring tools. These leverage the SIP-tracking functions already used for firewalling and admission control, and report on quality as measured by both uplink and downlink loss. Purely wireless monitoring tools that monitor voice quality can also be employed. Either located as software on a laptop, or integrated into overlay wireless monitoring systems, these detect the voice quality using over-the-air packet analysis. They infer the uplink and downlink loss rates of the clients, and use this to build out the expected voice quality. Depending on the particular vendor, these tools can be thrown off when presented with WPA- and WPA2-encrypted voice traffic, although that can sometimes be worked around.
Voice call quality may also be monitored by measurements reported by the client or other endpoint. RTCP, the RTP Control Protocol, may be transmitted by the endpoints. RTCP is able to encode statistics about the receiver, and these statistics can be used to infer the expected quality of the call. RTCP may or may not be available in a network, based on the SIP implementation used at the endpoints. Where available, RTCP encodes the percentage of packets lost, the cumulative number of packets lost, and interarrival packet jitter, all of which are useful for inferring call quality. At a lower layer, 802.11k, where it is supported, provides for the notion of traffic stream metrics. These metrics also provide for loss and delay, and may also be used to determine call quality. However, 802.11k requires upgrades to the client and access point firmware, and so is not as prevalent as RTCP, and nowhere near as simple to set up as overlay or traffic-based quality measurements.
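As a concrete illustration of the RTCP statistics mentioned above, the following sketch parses one receiver report block as laid out in RFC 3550. The field offsets follow that RFC; the helper name and the dictionary keys are just illustrative choices.

```python
import struct

def parse_rr_block(block: bytes) -> dict:
    """Parse one 24-byte RTCP receiver report block (RFC 3550, sec. 6.4.1):
    SSRC (4 bytes), fraction lost (1), cumulative lost (3), extended
    highest sequence number (4), interarrival jitter (4), LSR (4), DLSR (4)."""
    ssrc, = struct.unpack('!I', block[0:4])
    fraction_lost = block[4]        # fixed-point: (lost / expected) * 256
    cumulative_lost = int.from_bytes(block[5:8], 'big')
    if cumulative_lost & 0x800000:  # 24-bit signed quantity; sign-extend
        cumulative_lost -= 1 << 24
    highest_seq, jitter, lsr, dlsr = struct.unpack('!IIII', block[8:24])
    return {
        'ssrc': ssrc,
        'fraction_lost_pct': fraction_lost / 256 * 100,
        'cumulative_lost': cumulative_lost,
        'highest_seq': highest_seq,
        'jitter': jitter,           # expressed in RTP timestamp units
    }
```

A monitoring tool would feed the loss and jitter figures from these blocks into an R-value or MOS estimate for the call.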

[*]Of course, there had to be a catch. Some devices can carry two calls simultaneously, if they renegotiate their one admitted traffic stream to take the capacity of both. Because WMM Admission Control views flows as being only between clients and access points, the ultimate other endpoint of the call does not matter. However, this is not something you would expect to see in practice.

Sunday, February 26, 2012

Spectrum Management | Voice Mobility with Wi-Fi



Spectrum management is the technology used by virtualization architectures to manage the available wireless resources. Unlike radio resource management, which is focused on adjusting the available wireless resources on a per-access-point basis, to ensure that the clients of that access point receive reasonable service without regard to the neighbors, spectrum management takes a view of the entire unlicensed Wi-Fi spectrum within the network, and applies principles of capacity management to the network to organize and optimize the layout of channel layers. In many ways, spectrum management is radio resource management, applied to the virtualized spectrum, rather than individual radios.
Spectrum management focuses on determining which broad swaths of unlicensed spectrum are adequate for the network or for given applications within the network. One advantage of channel layering is that channels are freed from being used to avoid interference, and thus can be used to divide the spectrum up by purposes. Much as regulatory bodies, such as the FCC, divide up the entire radio spectrum by applications, setting aside one band for radio, another for television, some for wireless communications, and so on, administrators of virtualization architectures can use spectrum management to divide up the available channels into bands that maximize the mutual capacity between applications by separating out applications with the highest likely bandwidth needs onto separate channel layers.
The constraints of spectrum management are fairly simple. A deployment has only a given number of access points. The number and position of the access points limits the number of independent channel layers that can be provided over given areas of the wireless deployment area. It is not necessary for every channel layer to extend across the entire network—in fact, channel layers are often created more in places with higher traffic density, such as libraries or conference centers. The number of channel layers in a given area is called the network thickness. Spectrum management can detect the maximum number of channel layers that can be created given the current deployment of access points, and is then able to determine when to create multiple layers by spreading channel assignments of close access points, or when to maximize signal strength and SNR by setting close access points to the same channel. Thus, spectrum management can determine the appropriate thickness for each given square foot. For 802.11n networks, spectrum management is able to work with channel widths, as well as band and channel allocations, and is thus able to make very clear decisions about doubling capacity by arranging channels as needed.
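The notion of thickness can be illustrated with a toy model: count, at each point of interest, how many distinct channel layers cover it. Real spectrum management works on measured propagation rather than idealized circular cells, so treat this purely as a sketch of the bookkeeping.

```python
def thickness_map(aps, points, radius):
    """Toy model of network thickness: for each point, count the distinct
    channels whose access points cover it. aps is a list of (x, y, channel)
    tuples; points is a list of (x, y) tuples. Coverage is an idealized
    disc of the given radius -- a deliberate simplification."""
    result = {}
    for px, py in points:
        channels = {ch for x, y, ch in aps
                    if (x - px) ** 2 + (y - py) ** 2 <= radius ** 2}
        result[(px, py)] = len(channels)
    return result
```

A point covered by two access points on different channels has a thickness of two; spectrum management would then decide whether that square foot warrants two layers or a single stronger one.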
Spectrum management also applies the neighboring-interference-avoidance aspects that RRM uses to prevent adjacent networks from being deployed in the same spectrum, if it can at all be avoided. Because there is no per-channel performance compromise in compressing the thickness of the network, spectrum management can avoid some of the troublesome aspects of radio resource management when dealing with edge effects from multiple, independent networks. Furthermore, spectrum management is not required to react to transient interference, as the channel layering mechanism is already better suited to handle transient changes through RF redundancy. This allows spectrum management to reserve network reconfigurations for periods of less network usage and potential disruption (such as night), or to make changes at a deliberate pace that ensures network convergence throughout the process.

Thursday, February 23, 2012

Voice-Aware Radio Resource Management



The concept of voice-aware radio resource management is to build upon the measurements used for determining network capacity and topology, and integrate them into the decision-making process for dynamic microcell architectures.
Basic radio resource management is more concerned with establishing minimum levels of coverage while avoiding interference from neighboring access points and surrounding devices. This is more suitable for data networks. Voice-aware RRM shifts the focus towards providing a more consistent coverage that voice needs, often adjusting the nature of the RRM process to avoid destroying active voice calls. Voice-aware RRM is a crucial leg of voice mobility deployments based on microcell technology. (Layered or virtualized deployments do not use the same type of voice-aware RRM, as they have different means of ensuring high voice quality and available resources.)
The first aspect of voice-aware radio resource management is, ironically, to disable radio resource management. Radio resource management systems work by the access points performing scanning functions, rather similar to those performed by clients when trying to hand off. The access point halts service on a channel, and then exits the channel for a short amount of time to scan the other channels to determine the power levels, identities, and capacities of neighboring access points. These neighboring access points may be part of the same network, or may belong to other interests and other networks. Unlike client scanning, in which the client can go into power save to inform the access point to buffer frames, access point scanning has no good way for clients to be told to buffer frames. Moreover, whereas client scanning can go off channel between the packets of the voice call, only to return when the next packet is ready, an access point with multiple voice calls will likely not have any available time to scan in a meaningful way. In these cases, scanning needs to be disabled. In RRM schemes without voice-aware services, administrators often have to disable RRM by hand, thus nullifying the RRM benefits for the entire network. Voice-aware RRM, however, has the capability to turn off scanning on a temporary basis for each access point, when the access point is carrying voice traffic. There are unfortunately two downsides to this. The first is that RRM is necessary for voice networks to ensure that coverage holes are filled and that the network adapts to varying density. Disabling the scanning portion of RRM effectively disables RRM, and so voice-aware RRM scanning works best when each given access point does not carry voice traffic for uninterrupted periods of time. Second, RRM scanning is usually the same process by which the access points scan for wireless security problems, such as rogue access points and various intrusions.
Disabling scanning in the presence of voice leaves access points with voice more vulnerable, which is unacceptable for voice mobility deployments. Here, the solution is to deploy dedicated air monitors, either as additional access points from the same network vendor, but set to monitor rather than serve, or from a dedicated WLAN security monitoring vendor, as an independent overlay solution.
The second aspect of voice-aware radio resource management is in using coverage hole detection and repair parameters that are more conservative. Although doing so increases the likelihood of co-channel interference, which can have a strong downside to voice mobility networks as the network scale and density grows, it is necessary to ensure that the radio resource management algorithms for microcells do not leave coverage holes standing. Coverage holes disproportionately affect the quality of voice traffic over data traffic. Increasing the coverage hole parameters ensures that these coverage holes are reduced. Radio resource management techniques often detect the presence of a coverage hole by inferring it from the behavior of a client. RRM assumes that the client is choosing to hand off from an access point when the loss becomes too high. When this assumption is correct, the access point will infer the presence of a coverage hole by noticing when the loss rate for a client increases greatly for extended periods of time. This is used as a trigger that the client must be out of range, and informs the access point to increase its power levels. It is better for voice mobility networks for the coverage levels to be increased prior to the voice mobility deployment, and then for the coverage hole detection algorithm to be made less willing to reduce coverage levels. Unfortunately, the coverage hole detection algorithms in RRM schemes are proprietary, and there are no settings that are consistent from vendor to vendor. Consult your microcell wireless network manufacturer for details on how to make the coverage hole detection algorithm more conservative.
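The loss-rate inference described above can be sketched as a simple rule: flag a possible coverage hole when a client's loss rate stays high for a sustained stretch of measurement intervals. The threshold and window below are illustrative assumptions; actual RRM algorithms are proprietary.

```python
def detect_coverage_hole(loss_history, threshold=0.20, sustain=5):
    """Sketch of coverage hole inference: return True when the per-interval
    frame-loss rate exceeds `threshold` for `sustain` consecutive intervals.
    Both parameters are invented for illustration; a more conservative,
    voice-friendly tuning would lower the threshold or shorten the window."""
    run = 0
    for loss in loss_history:
        run = run + 1 if loss > threshold else 0
        if run >= sustain:
            return True
    return False
```

A brief loss spike does not trip the detector; only sustained loss, the signature of a client stuck out of range, does.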
The final aspect of voice-aware RRM applies when proprietary extensions are used by the voice client and are supported by the network. These extensions can provide some benefit to microcell deployments, as they allow the network to alter some of the tuning parameters that clients use to hand off. Unfortunately, the aspects of voice-aware radio resource management trade off between coverage and quality of service, and so operating these networks can become a challenge, especially as the density or proportion of network use of voice increases. Monitoring tools for voice quality are especially important in these networks.

Monday, February 20, 2012

Power Control | Voice Mobility with Wi-Fi


Power control, also known as transmit power control (TPC), is the ability of the client or the access point to adjust its transmit power levels for the conditions. Power control comes in two flavors with two different purposes, both of which can help and hurt a voice mobility network. The first, most common flavor of power control is vested in the client. This TPC exists to allow the client to increase its battery life. When the client is within close range of the access point, transmitting at the highest power level and data rate may not be necessary to achieve a similar level of voice performance. Especially as the data rates approach 54Mbps for 802.11a and 802.11g, or higher for 802.11n, the preamble and per-packet backoff overhead becomes comparable to the over-the-air resource usage of the voice data payload itself. For example, the payload of a voice packet at the higher data rates reduces to around 20 microseconds, on par with the preamble length for those data rates. In these scenarios, it makes sense for the client to back off on its power levels and turn off portions of the radio concerned with the more processing-intensive data rates, to extend battery life while in a call. To do this, the client will just directly reduce its transmit power levels, as a part of its power saving strategy. This mechanism can be used to good effect within the network, as long as the client is able to react to an increase in upstream data loss rates quickly enough to restore power levels should it have turned them down too low for the range, or should increasing noise begin to permeate the channel.
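A client-side TPC loop of the kind described above might look like the following sketch: creep the transmit power down while uplink loss stays low (to save battery), and step it back up quickly when loss rises. The step sizes and loss thresholds are invented for illustration.

```python
def adjust_tx_power(current_dbm, uplink_loss, min_dbm=0, max_dbm=15,
                    low_loss=0.02, high_loss=0.10):
    """Sketch of a client TPC control loop. All thresholds and step sizes
    are illustrative assumptions. Recovery is deliberately asymmetric:
    power rises in larger steps than it falls, so a too-aggressive
    reduction is corrected before the call degrades for long."""
    if uplink_loss > high_loss:
        return min(max_dbm, current_dbm + 3)   # recover aggressively
    if uplink_loss < low_loss:
        return max(min_dbm, current_dbm - 1)   # creep down slowly
    return current_dbm                         # hold steady in between
```

Run once per measurement interval, this loop embodies the trade the text describes: battery savings when conditions allow, with a fast path back to full power when loss says the client went too low.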
The other TPC is vested within the network. Microcell networks, specifically, use access point TPC to reduce the amount of co-channel interference without having to relocate or disable access points. By reducing power levels, cell sizes in every direction are reduced, keeping in line with the goals of microcell. Reducing co-channel interference is necessary within microcell networks, to allow a better isolation of cells from fluctuations in their neighboring cells, especially those related to the density of mobile clients.
Network TPC has some side effects, however, that must be taken into account for voice mobility deployments. The greatest side effect is the lack of predictability of coverage patterns for the access points. This can have a strong effect on the quality of voice, because voice is more sensitive than data to weak coverage, and areas where voice performs poorly can come and go with the changing power levels of both the access point the client is associated to and of its neighbors. Unfortunately, power levels in microcell networks usually fluctuate on the order of a few seconds or a few minutes, especially when clients are associated, as the network tries to adapt its coverage area to avoid causing the increase in packet rate and traffic caused by the clients from affecting neighboring cells. Site surveys, which are performed to determine the coverage levels of the network, are always snapshots in time and cannot take TPC into account. However, the TPC variation is necessary for proper microcell operation, and unfortunately needs to happen when phones are associated and in calls. Therefore, it can cause a strong network-induced variation in call quality. It is imperative, in microcell deployments, for the coverage and call quality to be continuously monitored, to ensure that the TPC algorithms are behaving properly. Follow the manufacturer's recommendations, as you may find in Voice over WLAN design guides, to ensure that problems can be detected and handled accordingly.

Thursday, February 16, 2012

Understanding the Balance | Load Balancing



Explicit in the concept of load balancing is that it is actually possible to balance load—that is, to transfer load from one access point to another in a predictable, meaningful fashion. To understand this, we need to look at how the load of a call behaves from one access point to another, assuming that neither the phone nor the access points have moved.
Picture the environment in Figure 1. There are two access points, the first one on channel 1 and the second on channel 11. A mobile phone is between the two access points, but physically closer to access point 1. The network has two choices to distribute, or balance, the load. The network can try to guide the phone into access point 1, as shown in the top of the figure, or it can try to guide the phone into access point 2, as shown in the bottom of the figure.
 
Figure 1: Load Balancing across Distances
This is the heart of load balancing. The network might choose to have the phone associate to access point 2. We can imagine that access point 1 is more congested—that is, it has more phone calls currently on it. In the extreme case, access point 1 can be completely full, and might be unable to accept new calls. The advantage of load balancing is that the network can use whatever information it sees fit—usually, loads—to guide clients to the right access points.
There are a few wrinkles, however. It is extremely unlikely that, in a non-channel-layered environment, the phone is at the same distance from each of the two access points. It is more likely that the phone is closer to one access point than the other. The consequence of the phone being closer to an access point is that it can get higher data rates and SNR, which then allows it to take less airtime and fewer resources. It may turn out that, if the network chooses to move the phone from access point 2 to access point 1, the increase in data rate (because the phone is closer to access point 1) shrinks the call's airtime so much that the airtime it would have consumed on access point 2 could have fit possibly two such calls. In this case, the same call produces unequal load when it is applied to different access points, all else being equal.
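The unequal-load effect can be made concrete with rough arithmetic: the same one-way 20 ms G.711 stream occupies very different fractions of airtime depending on the PHY rate the phone can sustain. The fixed per-packet overhead below lumps preamble, backoff, and acknowledgment into one assumed constant.

```python
def call_airtime_fraction(payload_bytes=160, packets_per_sec=100,
                          phy_rate_mbps=54.0, overhead_us=100.0):
    """Rough airtime budget for a two-way 20 ms G.711 call (160-byte
    payload, 50 packets/s per direction, so 100 packets/s total).
    overhead_us lumps preamble, backoff, and ACK into one per-packet
    constant -- an assumption for illustration, not a measured figure."""
    payload_us = payload_bytes * 8 / phy_rate_mbps   # serialization time
    return packets_per_sec * (payload_us + overhead_us) / 1e6
```

Under these assumptions the call costs roughly 1.2% of the channel at 54 Mbps but over 3% at 6 Mbps: the same call, well more than double the load, depending only on which access point carries it.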
For this reason, within-band load balancing has serious drawbacks for networks that do not use channel layering. Load balancing should be thought of as a way to distribute load across equal resources, but within-band load balancing tends to work rather differently and can lead to performance problems. If the voice side of the network is lightly used—such as having a small CAC limit—and if the impact of voice on data is not terribly important, then this sort of load balancing can work to ease the rough edges on networks that were not provisioned properly. However, for more dense voice mobility networks, we need to look further. The concept of load balancing among near equals does exist, however, with band load balancing. Band load balancing can be done when the phones support both the 2.4 GHz and 5 GHz bands (some newer ones do) and the access points are dual-radio, having one 2.4 GHz radio and another 5 GHz radio in the same access point. In this case, the two choices are collocated: the client can get similar SNR and data rates from either radio, and the choice is much closer to one-to-one. Figure 2 illustrates the point.

 
Figure 2: Load Balancing across Bands
A variant of band load balancing is band steering. With band steering, the access point is not trying to achieve load balancing across the two bands, but rather is prioritizing access to one band over the other—usually prioritizing access to the 5GHz band for some devices. The notion is to help clear out traffic from certain devices, such as trying to dedicate one band for voice and another for data. Using differing SSIDs to accomplish the same task is also possible, and works across a broader range of infrastructures.
There are differences between the two bands, of course, most notably that the 5 GHz band does not propagate quite as far as the 2.4GHz band. The 5GHz band also tends to be unevenly accessed by multiband phones: sometimes, the phones will avoid the 5GHz band unless absolutely forced to go there, leading to longer connection times. On the other hand, the 2.4GHz band is subject to more microwave interference. And, finally, this mechanism will not work for single-band phones. (The merits of each band for voice mobility are summarized later in this chapter.) Nevertheless, band load balancing is an option for providing a more even, one-to-one balance.
For environments with even higher densities, where two channels per square foot are not enough, or where the phones support only one band or where environmental factors (such as heavy microwave use) preclude using other bands, channel layering can be employed to provide three, four, or many more choices per square foot. Channel layering exists as a benefit of the channel layering wireless architecture, for obvious reasons, and builds upon the concept of band load balancing to create collocated channel load balancing. The key to collocated channel load balancing is that the access points that are on different channels are placed in roughly the same areas, so that they provide similar coverage patterns. Because channels are freed from their role as preventatives for co-channel interference and are instead deployed for coverage, channel layering architectures are best suited for this. In this case, the phone now has a choice of multiple channels per square foot, of roughly similar, one-to-one coverage. Figure 3 illustrates this.

 
Figure 3: Collocated Channel Load Balancing with Channel Layering
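A hypothetical steering decision for collocated channel load balancing could be as simple as the sketch below: restrict the candidates to bands the phone supports, then pick the least-loaded collocated layer. The dictionary fields are assumptions made for the sketch.

```python
def choose_layer(candidates, client_bands):
    """Sketch of a collocated-channel steering choice. candidates is a
    list of dicts with 'band', 'channel', and 'airtime_load' (0..1) keys,
    all invented field names for illustration. Because the radios are
    collocated, SNR is treated as equivalent and only load matters."""
    usable = [c for c in candidates if c['band'] in client_bands]
    if not usable:
        return None               # single-band phone with no matching layer
    return min(usable, key=lambda c: c['airtime_load'])['channel']
```

Note the degenerate case: a phone that supports none of the offered bands simply cannot be steered, which is the single-band limitation mentioned above.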
Bear in mind that the load balancing mechanisms are in general conflict with the client's inherent desire to gain access to whatever access point it chooses and to do so as quickly as possible (see Section 6.2.2). The network is required to choose an access point and then must ignore the client, if it should come in and attempt to learn about the nonchosen access points. This works reasonably well when the client is first powered up, as the scanning table may be empty and the client will blindly obey the hiding of access points as a part of steering the load. On the other hand, should the client already have a well-populated scanning table—as voice clients are far more likely to do—load balancing can become a time-consuming proposition, causing handoff delays and possible call loss. Specifically, what can happen is that the client determines to initiate a handoff and consults the information in its scanning table, gathered from a time when all of its entries were options, based on load. The client can then directly attempt to initiate a connection with an access point, sending an Authentication or Reassociation frame (depending on whether the client has visited the access point before) to an access point that may no longer wish to serve the client. The access point can ignore or reject the client at that point, but usually clients are far less likely to abandon an access point once they choose to associate than when they are scanning. Thus, the client can remain outside the access point, persistently knocking on the door, if you will, unwilling to take the rejection or the ignoring as an answer for possibly long periods of time. This provides an additional reason why load balancing in an environment where multiple handoffs are likely can have consequences for the quality of voice calls.

Sunday, February 12, 2012

Load Balancing | Voice Mobility with Wi-Fi



Load balancing is the ability of the network to steer or direct clients towards more lightly loaded access points and away from more heavily loaded ones. Ultimately, the client decides to which access point it will connect. However, the network has the ability to gently influence or guide the client's decision.
First, let's recap what is meant by wireless load. The previous discussion on admission control first introduced the concept of counting airtime or calls. This is one measure of load, a real-time one. However, it counts only phones in active calls. There are likely to be far more phones not in active calls, and these should be balanced as well. The main reason for balancing inactive phones is that the network has little ability, once a phone starts a call, to transfer the phone to another access point without causing the call to drop. To avoid that, load balancing techniques attempt to establish a more even balance up front. The thinking goes that if you can get the phones evenly distributed when they connect to the network, then you have a better shot at having the calls they place be equally distributed as well.


Mechanics of Load Balancing

Let's start with the basic mechanics of load balancing. Because the client chooses which access point to associate to, based on scanning operations, the only assured way to prevent a client from associating to an overloaded access point is for that overloaded access point to ignore the client. The access point can do this in a few ways. When the client sends probe requests, trying to discover whether the SSID it wants is still available on the access point, the access point can ignore the request, not sending out a probe response. Hopefully, the client will not enter on the basis of that alone. However, the client may have scanned before, at a time when it could have entered the access point (but chose not to), and may remember a prior probe response. Or it may see the beacon, and so it knows that the access point is, in fact, providing the service in any event.
To prevent the client from associating, then, the access point has no choice but to ignore or reject Authentication and Association Request messages from the client. This will have the desired effect of preventing a burdensome load from ending up on the access point, but may not cause the client to choose the correct access point quickly.
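The ignore-or-reject behavior described above can be sketched as a small state machine on the access point. The client-count threshold is an arbitrary stand-in for whatever load metric (call count, airtime) a real implementation would use.

```python
class LoadBalancingAP:
    """Sketch of AP-side load balancing: an overloaded access point
    withholds probe responses and rejects association attempts.
    max_clients is an illustrative threshold, not a real product knob."""

    def __init__(self, max_clients=30):
        self.max_clients = max_clients
        self.clients = set()

    def on_probe_request(self, mac):
        # Silence rather than rejection: simply send no probe response.
        if len(self.clients) >= self.max_clients:
            return None
        return "probe-response"

    def on_association_request(self, mac):
        # Association cannot be silently hidden, so reject outright.
        if len(self.clients) >= self.max_clients:
            return "reject"
        self.clients.add(mac)
        return "accept"
```

As the text notes, the rejection path is the weak point: a client that remembers the access point may keep knocking rather than move on.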
Assuming, for the moment, that load balancing is effective in causing clients to distribute their load evenly, we need to look at what the consequences of balancing load are.

Tuesday, January 31, 2012

Voice Mobility with Wi-Fi Capacity


How the Capacity is Determined


Through either admission control scheme, the network needs to keep track of how much capacity is available. From the previous discussions on the effects of RF variability and cellular overlap, you can appreciate that this is a difficult problem to completely solve. As devices get further away from the access points, data rates drop. Changing levels of interference, from within the network or without, can cause increasing retransmissions and easily overrun surplus bandwidth allowances.
In the end, networks today adopt one of two stands, and may even show both to the user. The more complicated stand for the network—but simpler for the user—is for the network to automatically take the variability of RF into account, and to determine its own capacities. In systems that do this, there is no notion of a static maximum number of calls. Instead, the system accepts as many calls as it can handle. If conditions change, and fewer calls can be handled in the system, the network reserves the right to proactively end a client's reservation, often in concert with load balancing.
The other stand, simpler for the network but far more complicated for the user, is for the administrator to be required to enter the maximum number of calls per access point (or some other static metric). The idea here is that the administrator or installer is assumed to have gone through a planning process to determine how many calls can be safely allowed per access point, while still leaving room for best effort data. That number is usually far lower than the best-case maximum capacity, and is designed to be a low water mark: barring external changes, the network will be able to achieve that many calls most of the time. This number is then manually input into the wireless network, which then counts the number of calls. If the maximum number of calls is reached on that access point, the system will not let any more in. These static metrics may be entered either as the number of calls, or a percentage of airtime. Systems that work as a percentage of airtime can sometimes take in a padding factor to allow for calls that are roaming into the network.
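The static scheme reduces to a counter, as this sketch shows; the only administrator input is the maximum call count per access point.

```python
class StaticCAC:
    """Sketch of static call admission control: the administrator enters
    a maximum call count, and the access point simply counts active calls
    against it. Class and method names are invented for illustration."""

    def __init__(self, max_calls):
        self.max_calls = max_calls
        self.active = 0

    def request_call(self):
        if self.active >= self.max_calls:
            return False            # at the limit: refuse the new call
        self.active += 1
        return True

    def end_call(self):
        self.active = max(0, self.active - 1)
```

The simplicity is the point, and also the weakness: the counter has no idea whether RF conditions currently support the configured limit.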
Setting these values can be fraught with difficulty. Pick a number that's too low, and airtime is being wasted. Pick a number that's too high, however, and sometimes call quality will suffer. Even percentage-of-airtime calculations are not very good, because they may not take into account airtime that is unusable because of variable channel conditions or co-channel interference that the access point cannot directly see, such as client-to-client interference.
All in all, you might find vendors recommending setting the values to a low, safe value that allows for voice to work even if there is plenty of variability in the network. This works well for networks that are predominantly data-oriented, but voice-only networks cannot usually afford that luxury.

Friday, January 27, 2012

WMM Admission Control | Voice Mobility with Wi-Fi



Building on even more of the specification in the 802.11e quality-of-service amendment is WMM Admission Control. This specification and interoperability program from the Wi-Fi Alliance, which is required to achieve Voice Enterprise certification, uses an explicit layer-2 reservation scheme. This scheme, in a similar vein as the lightly used RSVP protocol (RFC 2205), requires the mobile device to reach out and request resources explicitly from the access point, using a new protocol built on top of 802.11 management frames.
This protocol is heavily dependent on the concept of a traffic specification (TSPEC). The TSPEC is created by the mobile phone, and specifies how much of the air's resources either or both directions of the call (or whatever resource is being requested) will take. The access point processes the request as an admission controller (a function often placed literally on the controller, by coincidence), which is in charge of maintaining an account of which clients have requested what resources and whether they are available.
The overall protocol is rather simple. The mobile device, usually when it determines that it has a call incoming or outgoing, will send an Add Traffic Stream (ADDTS) Request message (a special type of Action management frame) to the access point, containing the TSPEC that will be able to carry the phone call. The access point will decide whether it can carry that call, based on whatever scheme it uses (see following discussion), and send an ADDTS Response message stating whether the stream was admitted.
WMM Admission Control can be set to mandatory or optional for each access category. For example, WMM Admission Control can be required for voice and video, but not for best effort and background data. What this would mean is that no client is allowed to transmit voice or video packets without first requesting and being granted admission for flows in those access categories, whereas all clients would be allowed to freely transmit best effort and background data as they see fit. Which access categories require admission control is signaled as a part of the WMM information element, which goes out in beacons and some other frames.
For WMM Admission Control, it is worth looking at the details of the concepts. The main concept is one of a traffic stream itself, and how it is identified and recognized. Traffic streams are represented by Traffic Identifiers (TID), a number from 0-7 (the standard allows up to 15, but WMM limits this to only 7) that represents the stream. Each client gets its own set of eight TIDs to use.
Each traffic stream, represented by its TID, maps onto real traffic by naming which of the eight priority values in WMM will belong to this traffic stream. Thus, if the phone intends to send and knows it is going to receive priority 7—recall that this is the higher of the two voice AC priorities—it can establish a traffic stream that maps priority 7 traffic to it, and get both sides of the call. In order for that to work, the client can specify whether the traffic stream is upstream-only, downstream-only, or bidirectional. It is possible for the client to request both an upstream-only and a downstream-only stream mapping to the same priority (different TIDs, though!), if it knows that the airtime used by the downstream side is different from the upstream side—useful for video calls—or it may request both at once in one TID, with the same airtime usage. All of this freedom leads to some complexity, but thankfully there is a rule preventing there from being more than one downstream and one upstream flow (bidirectional counts as one of each) for each access category. Thus, the AC_VO voice access category will only have one admitted bidirectional phone call in it at any given time.[*]
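The one-upstream-and-one-downstream-per-AC rule can be made concrete with a short sketch. This is purely illustrative bookkeeping, not code from any real driver; the priority-to-AC mapping follows the WMM specification, and the class and method names are made up for the example:

```python
# Sketch: enforcing the WMM rule that each access category may hold at most
# one upstream and one downstream admitted stream, with a bidirectional
# stream counting as one of each.

PRIORITY_TO_AC = {
    1: "AC_BK", 2: "AC_BK",   # background
    0: "AC_BE", 3: "AC_BE",   # best effort
    4: "AC_VI", 5: "AC_VI",   # video
    6: "AC_VO", 7: "AC_VO",   # voice
}

class StreamTable:
    def __init__(self):
        # For each AC, track whether the upstream and downstream "slots" are taken.
        self.used = {}  # ac -> {"up": bool, "down": bool}

    def can_admit(self, priority, direction):  # direction: "up", "down", "bidi"
        ac = PRIORITY_TO_AC[priority]
        slots = self.used.setdefault(ac, {"up": False, "down": False})
        needed = ["up", "down"] if direction == "bidi" else [direction]
        return all(not slots[d] for d in needed)

    def admit(self, priority, direction):
        if not self.can_admit(priority, direction):
            return False
        for d in (["up", "down"] if direction == "bidi" else [direction]):
            self.used[PRIORITY_TO_AC[priority]][d] = True
        return True

table = StreamTable()
assert table.admit(7, "bidi")        # the one AC_VO phone call is admitted
assert not table.admit(6, "up")      # AC_VO upstream slot already taken
assert table.admit(5, "down")        # AC_VI downstream is independent
```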
The client requests the traffic stream using the TSPEC.
Table 1 shows the contents of the TSPEC that is carried in an ADDTS message.
Table 1: WMM admission control TSPEC

    Field                          Size
    ------------------------------ --------
    TS Info                        3 bytes
    Nominal MSDU Size              2 bytes
    Maximum MSDU Size              2 bytes
    Minimum Service Interval       4 bytes
    Maximum Service Interval       4 bytes
    Inactivity Interval            4 bytes
    Suspension Interval            4 bytes
    Service Start Time             4 bytes
    Minimum Data Rate              4 bytes
    Mean Data Rate                 4 bytes
    Peak Data Rate                 4 bytes
    Maximum Burst Size             4 bytes
    Delay Bound                    4 bytes
    Minimum PHY Rate               4 bytes
    Surplus Bandwidth Allowance    2 bytes
    Medium Time                    2 bytes
There's quite a lot of information in a TSPEC, so let's break it down slowly, using the example of a 20 millisecond G.711 (nearly uncompressed) one-way traffic flow:
  • The TS Info field (see Table 2) identifies the TID for the stream, the priority of the data frames that belong to this stream, what direction the stream is going in (00 = up, 01 = down, 10 = reserved, 11 = bidirectional), and whether the AC the stream belongs to is to be WMM Power Save delivery enabled (1) or not (0). The rest of the fields are not used in WMM Admission Control, and have specific values that will never change (Access Policy = 01, the rest are 0).
    Table 2: The TS Info field

        Field               Bits
        ------------------- ------
        Traffic Type        0
        TID                 1-4
        Direction           5-6
        Access Policy       7-8
        Aggregation         9
        WMM Power Save      10
        Priority            11-13
        TSInfo Ack Policy   14-15
        Schedule            16
        Reserved            17-23
  • The Nominal MSDU Size field gives the expected packet size, with the highest-order bit set to signify that the packet size never changes. G.711 20ms packets are 160 bytes of audio, plus 12 bytes of RTP header, 8 bytes of UDP header, 20 bytes of IP header, and 8 bytes of SNAP header, creating a data payload (excluding WPA/WPA2 overhead) of 208 = 0xD0. Because the packet size for G.711 never changes, this field would be set to 0x80D0.
  • The Maximum MSDU Size field specifies the largest size a data packet in the stream can reach. For G.711, that's the same as the nominal size. There is no special bit for fixed sizes, so the value is 208 = 0x00D0. This can also be left as 0, as it is an optional field.
  • The Inactivity Interval specifies how long the stream can be idle—no traffic matching it—in microseconds, before the access point can go ahead and delete the flow. 0 means not to delete the flow automatically, and that's the common value.
  • The Mean Data Rate specifies, in bits per second, what the expected throughput is for the stream. For G.711, 208 bytes every 20 milliseconds (208 bytes × 8 bits ÷ 0.020 seconds) results in a throughput of 83,200 bits per second.
  • The Minimum Data Rate and Peak Data Rate specify the minimum and maximum throughput the traffic stream can expect. These are optional and can be set to 0. For G.711, these will be the same 83,200 bits per second.
  • The Minimum PHY Rate field specifies what the physical layer data rate assumptions are for the stream, in bits per second. If the client is assuming that the data rate could drop as low as 6Mbps for 802.11a/g, then it would encode the field at 6Mbps = 6,000,000bps = 0x005B8D80.
  • The Surplus Bandwidth Allowance is a fudge factor that the phone can request, to account for the possibility that packets will be retransmitted. It's a multiplier, in units of 1/8192nds. A value of 1.5 times as an allowance would be encoded as 0x3000 = 001.1000000000000, in binary.
  • The other fields are unused by the client, and can be set to 0.
In other words, the client simply requests the direction, priority, packet size, data rate, and surplus allowance.
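Putting the field breakdown together, the G.711 example can be packed into the 55-byte TSPEC body laid out in Table 1. This is an illustrative sketch, not code from any real client: it assumes little-endian packing and uses the TS Info bit positions from Table 2, and the function name is invented for the example:

```python
import struct

# Sketch: packing the TSPEC body from Table 1 for the 20 ms G.711 example.
# Fields the client leaves unused are set to 0, as the text describes.

def g711_tspec(tid=6, priority=7, direction=0b11):
    # TS Info (Table 2): bit 0 traffic type (1 = fixed-size periodic),
    # bits 1-4 TID, bits 5-6 direction (11 = bidirectional),
    # bits 7-8 access policy (01), bits 11-13 priority; the rest stay 0.
    ts_info = ((1 << 0) | (tid << 1) | (direction << 5)
               | (0b01 << 7) | (priority << 11))
    rate = 83_200                 # 208 bytes every 20 ms, in bits/second
    return (struct.pack("<I", ts_info)[:3]        # TS Info is only 3 bytes
            + struct.pack("<HH", 0x80D0, 0x00D0)  # nominal (fixed bit set), max MSDU
            # service intervals, inactivity, suspension, start time: unused
            + struct.pack("<IIIII", 0, 0, 0, 0, 0)
            # min/mean/peak rate, burst, delay bound, min PHY rate
            + struct.pack("<IIIIII", rate, rate, rate, 0, 0, 6_000_000)
            # surplus allowance 1.5x; Medium Time left 0 for the AP to fill in
            + struct.pack("<HH", 0x3000, 0))

body = g711_tspec()
assert len(body) == 55  # total of the field sizes in Table 1
```

Summing the sizes in Table 1 (3 + 2 + 2 + eleven 4-byte fields + 2 + 2) gives the 55 bytes the assertion checks.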
The access point gets this information, and churns it using whatever algorithms it wants—this is not specified by the standard, but we'll look at what sorts of considerations tend to be used. Normally, we'll assume that the access point knows what percentage of airtime is available. The access point will then decide how much airtime the requested flow will take, as a percentage, and see whether admitting it would exceed its maximum allowance (say, 100% of airtime used). If so, the flow is denied, and a failing ADDTS Response is sent. If not, the access point updates its measure of how much airtime is being used, and then allows the flow. The succeeding ADDTS Response has a TSPEC in it that is a mirror of the one the client requested, except that now the Medium Time field is filled in. This field specifies exactly how much airtime, in 32-microsecond units per second, the client can take for the flow.
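Because the standard leaves the admission algorithm to the vendor, any concrete example is necessarily a sketch. The following illustrates one plausible scheme—a simple airtime budget that ignores per-frame overhead—using the G.711 numbers from the example; the class and the airtime formula are assumptions, not taken from any real product:

```python
# Sketch of a vendor-style admission decision: track admitted airtime and
# grant or deny an ADDTS request, returning Medium Time in 32 us units.

class AdmissionController:
    def __init__(self, budget_fraction=1.0):
        # budget_fraction: share of each second the AP will hand out.
        self.budget_us = budget_fraction * 1_000_000  # microseconds per second
        self.used_us = 0.0

    def estimate_airtime_us(self, mean_rate_bps, min_phy_rate_bps, surplus):
        # Crude estimate: fraction of air = data rate / PHY rate, inflated
        # by the surplus allowance; real products also count overhead.
        return surplus * (mean_rate_bps / min_phy_rate_bps) * 1_000_000

    def addts(self, mean_rate_bps, min_phy_rate_bps, surplus):
        need = self.estimate_airtime_us(mean_rate_bps, min_phy_rate_bps, surplus)
        if self.used_us + need > self.budget_us:
            return None                 # failing ADDTS Response
        self.used_us += need
        return int(need // 32)          # Medium Time field, 32 us units

ap = AdmissionController()
# The running example: 83,200 bit/s at a 6 Mb/s PHY floor, 1.5x surplus.
grant = ap.addts(83_200, 6_000_000, 1.5)
assert grant == 650                     # 20,800 us of air per second
```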
The definition of how much airtime a client uses is based on what packets are sent to it or that it sends as a part of a flow. Both traffic sent by the client to the access point and sent by the access point to the client are counted, as well as the times for any RTSs, CTSs, ACKs, and interframe spacings that are between those frames. Another way of thinking about it is that the time from the first bit of the first preamble to the last bit of the last frame of the TXOP counts, including gaps in between. In general, you will never need to try to count this. Just know that WMM Admission Control requires that the clients count their usage. If they exceed their allowance in the access category they are using, they have to send all subsequent frames with a lower access category—one that is not admission control enabled—or drop them.
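The client-side bookkeeping just described can be sketched as follows. This is an illustrative model only—the class name, the per-second refresh, and the return strings are invented for the example—but it captures the rule: charge each full frame exchange against the admitted Medium Time, and fall back to a non-admitted category once the budget is spent:

```python
# Sketch: client-side accounting against the admitted Medium Time.

class ClientUsage:
    def __init__(self, medium_time_units):
        # Medium Time arrives in 32-microsecond units per second.
        self.budget_us = medium_time_units * 32
        self.spent_us = 0

    def charge(self, exchange_us):
        """Charge one full exchange (preamble through final ACK, gaps included)."""
        if self.spent_us + exchange_us > self.budget_us:
            # Over the allowance: subsequent frames must use a lower,
            # non-admission-controlled access category, or be dropped.
            return "downgrade or drop"
        self.spent_us += exchange_us
        return "send at admitted AC"

    def new_second(self):
        self.spent_us = 0  # the allowance refreshes each second

usage = ClientUsage(650)  # 650 * 32 us = 20,800 us of air per second
assert usage.charge(20_000) == "send at admitted AC"
assert usage.charge(1_000) == "downgrade or drop"
```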
One advantage of WMM Admission Control is that it works for all traffic types, without requiring the network to have any smarts. Rather, the client is required to know everything about the flows it will both send and receive, and how much airtime those flows will take. The network just plays the role of arbiter, allowing some flows in and rejecting others. Thus, if the client is sufficiently smart, WMM Admission Control will work whether the protocol is SIP, H.323, some proprietary protocol, or even video or streaming data. The disadvantage, however, is that the client is required to be smart, and all of its pieces—from wireless to phone software—have to be well integrated. That pretty much eliminates most softphones, and brings the focus squarely on purpose-built phones. Furthermore, the client needs to know what type of traffic the party on the other side of the call will send to it. Some higher-level signaling protocols can convey this, such as with SDP within SIP, but doing so may be optional and may not always be followed. For a phone talking to a media gateway, for example, the phone needs to know exactly how the media gateway will send its traffic, including knowing the codec and packet rate and sizing, before it can request airtime. That can lead to situations in which the call needs to be initiated and agreed to by both parties before the network can be asked for permission to admit the flow, meaning that the call might have to be terminated by the network midway through ringing, if airtime is not available. Because WMM Admission Control is so new—by the time of publication, WMM Admission Control should be launching shortly and large numbers of devices may not yet be available—it remains to be seen how well all of the pieces will fit together. It is notoriously difficult to build general-purpose devices that run the gamut of technologies correctly, and so these new programs may prove most useful for highly specific, purpose-built phones.

Tuesday, January 24, 2012

SIP-Based Admission Control | Voice Mobility with Wi-Fi



The first method is to rely on the call setup signaling. Because the most common mechanism today is SIP, we can refer to this as SIP-based admission control. The idea is fairly simple. The access point, most likely in concert with a controller if the architecture in use has one, uses a firewall-based flow-detection system to observe the SIP messages as they are sent from the phones to the SIP servers and back. Specifically, when the call is initiated, either by the phone sending a SIP Invite, or receiving one from another party, the wireless network determines whether there is available capacity to take the call. If there is available capacity, then the wireless network lets the messages flow as usual, and the call is initiated.
On the other hand, if the wireless network determines that there is no room for the call, it will intercept the SIP Invite messages, preventing them from reaching the other party, and interject its own message to the caller (as if from the called party, usually), with one of a few possible SIP busy statuses. The call never completes, and the caller will get some sort of failure message, or a busy tone.
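The interception path can be sketched in a few lines. This is not code from any real controller: the function is a stand-in for a firewall-style SIP inspector, and 486 Busy Here is just one of the busy statuses a network might choose to forge:

```python
# Sketch: a controller intercepting a SIP INVITE when no capacity remains,
# answering the caller itself so the INVITE never reaches the far end.

def handle_sip_message(message: str, capacity_available: bool):
    start_line = message.split("\r\n", 1)[0]
    if start_line.startswith("INVITE") and not capacity_available:
        # Forge a busy response as if it came from the called party.
        return ("reject", "SIP/2.0 486 Busy Here")
    return ("forward", message)   # normal case: let the call set up

invite = "INVITE sip:bob@example.com SIP/2.0\r\nCSeq: 1 INVITE\r\n\r\n"
action, response = handle_sip_message(invite, capacity_available=False)
assert action == "reject"
```

A real implementation would also have to track the SIP dialog state (Via and CSeq headers, transactions) so the forged response is accepted by the caller, which is exactly the flow-tracking state the text says these systems already maintain.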
Other, more advanced behaviors are also possible, such as performing load balancing, once the network has determined that the call is not going to complete.
The advantage of using SIP flow detection to do the admission control is that it does not require any more sophistication on the mobile devices than they would already have with SIP. Furthermore, by having that awareness from tracking the SIP state, the network can provide a list of both calls in progress and registered phones not yet in a call. The disadvantage is that this system will not work for SIP calls that are encrypted end-to-end, such as calls carried over a VPN link.