by Ted Romer
Compared to Windows XP’s TCP/IP stack, I would say the Vista stack has been redesigned from the ground up rather than merely patched. The result is a number of features in the new stack (which Microsoft calls “The Next Generation TCP/IP Stack”) that are genuinely exciting. (Windows Server “Longhorn” also runs the same stack as Vista.)
There have been some pretty interesting changes with Receive Window Auto-Tuning and Compound TCP, which provide more aggressive scaling of the TCP window. Window scaling is now enabled by default and is configured automatically.
If we think about a TCP conversation right now, a sender is given a window dictating how much it can send without acknowledgement. The sender ramps up from a small window (doubling it each round trip during slow start, then growing it linearly during congestion avoidance), and if something happens, like a packet loss, it drops back to slow start and ramps up all over again.
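The ramp-up-and-back-off behavior above can be sketched as a toy simulation. This is a simplified model of classic (Tahoe-style) TCP congestion control for illustration only, not Windows’ actual implementation; the function name and parameters are my own.

```python
# Toy model of classic TCP congestion control: exponential slow start,
# linear congestion avoidance, and a reset to slow start on packet loss.
# Window sizes are in segments; one loop iteration is one round trip.

def classic_tcp(cwnd=1, ssthresh=64, rounds=20, loss_at=(12,)):
    """Return the congestion window per round trip for a toy TCP flow."""
    history = []
    for rtt in range(rounds):
        if rtt in loss_at:
            ssthresh = max(cwnd // 2, 2)       # remember half the window
            cwnd = 1                           # back to slow start
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)     # slow start: double per RTT
        else:
            cwnd += 1                          # congestion avoidance: +1 per RTT
        history.append(cwnd)
    return history

print(classic_tcp())
```

Running it shows the pattern described above: fast growth, slow linear climb, then a collapse to 1 at the simulated loss and a fresh ramp-up.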
The good thing about Compound TCP is that after a packet loss it ramps the window back up more aggressively, making more efficient use of the bandwidth we have. Compound TCP is enabled by default in Longhorn but disabled by default in Vista (it can be enabled with netsh, though!)
As for Receive Window Auto-Tuning: under Windows XP and Server 2003 you could always configure the window size, but you had to go into the registry, know what you were doing, and it didn’t work in all situations. (A good analogy would be custom-tuning an automobile for racing and then driving it offroad.) Receive Window Auto-Tuning, by contrast, adapts to network changes based on bandwidth, delay, and even the rate at which the application retrieves data, all of which are factored into the receive window. In theory you get better usage of your link and better throughput without ever touching the registry. In fact, the Next Generation TCP/IP stack no longer uses those registry values, so the registry-tweaking utilities people like to use today are not going to work.
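Why the window size matters so much comes down to the bandwidth-delay product: to keep a link full, the receive window has to cover bandwidth times round-trip time. A quick back-of-the-envelope sketch (the function names are mine, for illustration):

```python
# The window a connection needs is roughly the bandwidth-delay product
# (BDP): link bandwidth times round-trip time. A fixed window, like the
# ~64 KB ceiling an untuned stack effectively has, caps throughput on
# fast, high-latency links no matter how big the pipe is.

def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Window (in bytes) needed to keep a link of this bandwidth/RTT full."""
    return int(bandwidth_bps * rtt_seconds / 8)

def max_throughput_bps(window_bytes, rtt_seconds):
    """Throughput ceiling imposed by a fixed window over this RTT."""
    return window_bytes * 8 / rtt_seconds

# A 100 Mbps link with 100 ms RTT needs about 1.25 MB of window:
print(bdp_bytes(100e6, 0.100))           # 1250000 bytes
# ...while a fixed 64 KB window caps the same link at ~5.2 Mbps:
print(max_throughput_bps(65535, 0.100))  # 5242800.0 bps
```

Auto-tuning’s job is to grow the window toward that first number instead of leaving it stuck at the second.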
You’d have increased utilization in some cases as links are used more efficiently.
Receive-side scaling is also pretty interesting. Every inbound packet costs CPU cycles to process, and at 10 Gigabit Ethernet speeds there’s no way a single CPU core can keep up. The way the TCP/IP stack was written in XP, we couldn’t take advantage of multiple CPUs; receive processing was all done serially (due to limitations in the NDIS 5.1 driver model). The nice thing about Vista’s receive-side scaling is that it takes advantage of additional processors, lessening the burden on the main CPU core. It basically enables packet receive-processing to scale with the number of available processors, and processing can now be done in parallel while preserving in-order delivery.
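The trick that preserves in-order delivery is that packets are steered by flow, not round-robin. Here is a sketch of the idea, assuming a simple CRC hash for clarity; real NICs use a Toeplitz hash, and the names here are my own:

```python
# Sketch of the receive-side scaling idea: hash each packet's flow
# 4-tuple to pick a CPU. All packets of one flow land on the same core
# (so per-flow ordering is preserved), while different flows spread
# across cores and are processed in parallel.

import zlib

NUM_CPUS = 4

def rss_queue(src_ip, src_port, dst_ip, dst_port):
    """Map a TCP/UDP flow to a receive queue (and thus a CPU)."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % NUM_CPUS

# The same flow always hits the same CPU; other flows can land elsewhere.
a = rss_queue("10.0.0.1", 5000, "10.0.0.2", 80)
b = rss_queue("10.0.0.1", 5001, "10.0.0.2", 80)
print(a, b)
```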
Another feature is the ECN (Explicit Congestion Notification) support they offer on the host. It’s not enabled by default, but it can be turned on. RFC 2474 redefined the IP header’s 8-bit TOS field as the DS field: six bits carry the Differentiated Services Code Point (DSCP), and the remaining two bits are the ECN field. Routers that support ECN can mark those two bits when the network is congested; when the marking is echoed back to the sender, it tells the sender to slow down, proactively preventing packet loss. This can help avoid tail drop in router queues.
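The bit layout of that byte is easy to see with a little bit-twiddling. A minimal sketch (the helper names are mine; the layout is from RFC 2474 and RFC 3168):

```python
# The 8-bit TOS/DS byte: the upper six bits are the DSCP, the lower
# two are the ECN field.

ECT0 = 0b10  # sender declares an ECN-capable transport
CE   = 0b11  # router sets this: "congestion experienced"

def make_tos(dscp, ecn):
    """Pack a 6-bit DSCP and a 2-bit ECN value into the DS byte."""
    return (dscp << 2) | ecn

def split_tos(tos):
    """Unpack a DS byte back into (dscp, ecn)."""
    return tos >> 2, tos & 0b11

# DSCP 46 (Expedited Forwarding) plus ECT(0) gives 0xBA:
tos = make_tos(46, ECT0)
print(hex(tos))        # 0xba
print(split_tos(tos))  # (46, 2)
```

A congested ECN-aware router would flip that trailing `0b10` to `0b11` (CE) instead of dropping the packet.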
Changes to the network are a certainty, as traffic patterns change. More aggressive utilization of the links means there will certainly be changes in the way traffic flows on the network; hopefully Microsoft has fully factored in delay, otherwise this could cause traffic congestion. Since receive windows are scaled automatically, we can potentially consume high-bandwidth, high-latency links more efficiently. Utilization will increase (in some cases consuming links), and we will finally be getting our money’s worth out of our high-bandwidth links. It’s also important to keep in mind that although the utilization graphs may look scary, sessions will be on the wire for a shorter amount of time: a file download can be faster and will no longer take as much time on the wire as it once did. A side effect is that we could see increased packet loss due to queuing delays caused by the influx of traffic. I think this change in traffic patterns makes the push for QoS much more important. Because of that, Vista comes with policy-based QoS (Quality of Service). This is a nice addition because it marks packets independently of the application, meaning your apps don’t have to be coded to mark the packets (as they did on 2003 and XP).
When you have more efficient use of your network, you really need to take QoS into account. Vista can use centrally configured Group Policy to push QoS policies to specific users or servers and to tag packets with DiffServ code point values, so that our network infrastructure can see the marking and react to it in different ways, whether it’s VoIP traffic, business-critical TCP traffic, or web-surfing traffic. (Granted, this QoS doesn’t guarantee anything; it just marks the packet in Windows, and it is up to your network infrastructure to honor those tags.) It also lets us throttle outbound traffic at a client or server. For example, you can throttle the bandwidth from a particular subnet to a particular server, giving some departments more access to the servers they need. You can even restrict outgoing bandwidth for certain peer-to-peer applications like BitTorrent. This shaping can also be handy when applied to servers, allowing less bandwidth for certain users or departments and more for others.
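To appreciate what policy-based QoS replaces: on XP and 2003, each application had to mark its own packets in code. A minimal sketch of that do-it-yourself approach (shown in Python for brevity; `IP_TOS` is a standard socket option, and the constant name `EF_TOS` is mine):

```python
# How an application had to mark its own traffic before policy-based
# QoS existed: set the IP TOS/DS byte on the socket itself. DSCP 46
# (Expedited Forwarding, typical for VoIP) sits in the upper six bits,
# so the byte value is 46 << 2 = 0xB8 = 184.

import socket

EF_TOS = 46 << 2  # DSCP 46 (EF) shifted into the DS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184
```

With Vista’s policy-based QoS, a Group Policy rule applies that marking for you, so unmodified applications get tagged traffic.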
While consumers may debate whether Vista is a worthwhile upgrade, I believe it is important for enterprise customers, who will be best positioned to put Vista’s networking capabilities to their fullest use.
Of course, I’m getting it for DirectX 10 games, but that’s just me.
Technorati Tags: Vista TCP/IP networking enterprise Microsoft
Ted Romer is a QA Network Engineer at NetQoS