The Broadband Guide

Should I enable TCP Offloading?

Newer Windows versions and network adapter drivers include a number of "TCP Offloading" options. Windows 8 / Server 2012, for example, includes:
Chimney Offload
Checksum Offload
Receive-Side Scaling State (RSS)
Receive Segment Coalescing State (RSC)
Large Send Offload (LSO)

In addition to the OS-level TCP offloading options, network adapter drivers expose some of these as well, such as "Checksum Offload" and "Large Send Offload (LSO)". Even if offloading is turned off at the OS level, the NIC driver can still use its own variant of offloading, so check the driver properties as well!
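To see both levels in practice, you can inspect the OS-level offload state with netsh and the driver-level properties with PowerShell. This is a sketch for Windows 8 / Server 2012-era systems; the adapter name "Ethernet" is an example, substitute your own (and note the "chimney" option was removed in later Windows 10 builds):

```shell
# View the OS-level TCP offload state (run from an elevated prompt)
netsh int tcp show global

# Disable individual OS-level offloads if needed
netsh int tcp set global chimney=disabled
netsh int tcp set global rsc=disabled

# List the driver-level offload properties for one adapter (PowerShell);
# these stay in effect even when the OS-level options are disabled
Get-NetAdapterAdvancedProperty -Name "Ethernet" |
    Where-Object { $_.DisplayName -match "Offload" }
```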

Whether you should use TCP offloading is a tricky question; the answer depends on your usage and on which specific offload you plan to use. It is generally recommended to keep some of them on for client machines (except LSO), because of the improved throughput and lower CPU utilization, and to turn more of them off for servers, for buggy NIC drivers, or when experiencing problems.

This recommendation stems from some buggy NIC drivers that, when combined with TCP offloading and multi-threaded applications, can wreak havoc on the NIC driver. The issue seems to be related to applications switching threads, causing the NIC driver to switch its active TCP offload connection in the NIC hardware; that switching process is prone to failure or excessive delay.

In conclusion: yes, TCP offloading speeds up the connection and reduces CPU utilization when it works. Use it on client machines and with newer OS versions where bugs have been corrected, but be very careful in server environments, especially with LSO and with multi-threaded applications. Test, then test again!

For server issues, see:
TCP Offloading again?!
Symantec Clearwell server recommendations

Specific hardware recommendations:
On Realtek Gigabit network adapters, disable Flow Control (set both Rx & Tx to Disabled). Disabling Flow Control can reduce timeouts and considerably improve throughput under Windows 8, most likely due to a buggy implementation at the driver level.
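On Windows 8 and later this can be scripted with the built-in NetAdapter PowerShell module. The adapter name "Ethernet" and the exact "Flow Control"/"Disabled" display strings are assumptions that vary by Realtek driver version, so verify them first:

```shell
# Show the current Flow Control setting and its valid values for the adapter
Get-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Flow Control"

# Disable Flow Control (Rx & Tx) at the driver level; the display value
# must match a string the driver actually exposes, commonly "Disabled"
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Flow Control" -DisplayValue "Disabled"
```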

For Intel/Broadcom adapters, Large Send Offload (LSO) can cause issues; disable it at the adapter driver level, and possibly in the OS TCP/IP network stack as well.
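A minimal sketch of disabling LSO at the driver level with the NetAdapter cmdlets (Windows 8 / Server 2012 and later; the adapter name "Ethernet" is an assumption):

```shell
# Check the current LSO state (IPv4 and IPv6) for the adapter
Get-NetAdapterLso -Name "Ethernet"

# Disable LSO for both IPv4 and IPv6 at the driver level
Disable-NetAdapterLso -Name "Ethernet" -IPv4 -IPv6

# Optionally restart the adapter so the change takes effect immediately
Restart-NetAdapter -Name "Ethernet"
```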

  User Reviews/Comments:
by anonymous - 2017-04-23 13:47
In my case, Intel drivers and Windows 8.1 support this feature very well. In addition, the offload reduces CPU usage by approximately 10% and increases sustained throughput by approximately 7-8% for large file transfers on a LAN. I have also used this feature for over a year with no negative drawbacks.

For these reasons I would recommend people try this feature rather than simply disabling it without any testing in their setup.

Also for this reason, I feel it is a mistake for the TCP/IP Optimizer utility to disable this feature as part of the optimal profile. I feel it should keep this feature enabled and offer advice if people hit issues using it.
by Philip - 2017-04-23 14:09
I have a few questions on specifics...
Which Intel NIC?
Is it a separate card or motherboard on-board one?
Which TCP Offloading options specifically (Intel NICs have a few of them)?

Large Send Offload (LSO)? (IPv4/IPv6)
TCP Checksum Offload? (IPv4/IPv6)
UDP Checksum Offload? (IPv4/IPv6)
Something else?
Which ones make the difference for you?
by anonymous - 2017-04-23 17:01
These are my settings; it's an Intel NIC:
- D54250WY
- bios INTEL - 2C, WYLPT10H.86A.0044.2016.1214.1710, American Megatrends - 4028E
- Haswell (so a couple years old):
- OS 6.3.9600 (win8.1 pro 64-bit)

Intel(R) Ethernet Connection I218-V
Intel Driver Date 2016-07-26 Version NDIS 6.30

Flow Control Rx & Tx Enabled
Interrupt Moderation Disabled
IPv4 Checksum Offload Rx & Tx Enabled
Jumbo Packet Disabled
Large Send Offload V2 (IPv4) Enabled
Large Send Offload V2 (IPv6) Enabled
Maximum Number of RSS Queues 2 Queues
Protocol ARP Offload Enabled
Protocol NS Offload Enabled
Packet Priority & VLAN Packet Priority & VLAN Enabled
Receive Buffers 256
Receive Side Scaling Enabled
Speed & Duplex 1.0 Gbps Full Duplex
TCP Checksum Offload (IPv4) Rx & Tx Enabled
TCP Checksum Offload (IPv6) Rx & Tx Enabled
Transmit Buffers 512
UDP Checksum Offload (IPv4) Rx & Tx Enabled
UDP Checksum Offload (IPv6) Rx & Tx Enabled
Wake on Magic Packet Disabled
Wake on Pattern Match Disabled
Adaptive Inter-Frame Spacing Disabled
Energy Efficient Ethernet On
Enable PME Enabled
Interrupt Moderation Rate Off
Legacy Switch Compatibility... Disabled
Log Link State Event Enabled
Gigabit Master Slave Mode Auto Detect
Locally Administered Address --
Reduce Speed On Power Down Enabled
System Idle Power Saver Enabled
Wait for Link Auto Detect
Wake on Link Settings Disabled

* Jumbo is disabled and I haven't tested these settings with 9k or above.

SettingName : Internet * This is the profile in use for all connections.
MinRto(ms) : 300
InitialCongestionWindow(MSS) : 4
CongestionProvider : CTCP
CwndRestart : False
DelayedAckTimeout(ms) : 50
DelayedAckFrequency : 2
MemoryPressureProtection : Disabled
AutoTuningLevelLocal : Normal
AutoTuningLevelGroupPolicy : NotConfigured
AutoTuningLevelEffective : Local
EcnCapability : Enabled
Timestamps : Disabled
InitialRto(ms) : 2000
ScalingHeuristics : Disabled
DynamicPortRangeStartPort : 49152
DynamicPortRangeNumberOfPorts : 16384
AutomaticUseCustom : Disabled
NonSackRttResiliency : Disabled
ForceWS : Disabled
MaxSynRetransmissions : 2
AutoReusePortRangeStartPort : 0
AutoReusePortRangeNumberOfPorts : 0

I haven't tried many combinations of offloads on/off; I tend to have them all on and find this gives very good performance, in the range of ~110 MB/s sustained transfer rate. I can't get much more than this.
by anonymous - 2017-04-23 19:07
Should have said - it might be worth making people aware of jumbo frame issues too. I guess a lot of folks might try enabling this and either have huge issues, esp. with home kit, or if it does work for them they might be using it for the wrong reasons and not realize things like the latency implications, etc. We successfully use jumbo frames to squeeze a few more MB/s from NFS for large file transfers in our hypervisor environment, but most people should probably steer clear of this for home use.
by anonymous - 2017-04-23 19:12
Also, I've had some success with SR-IOV in virtualisation environments; however, some checksum offloads have caused issues on occasion. Things like FTP on Windows start failing in these kinds of cases, and looking at Wireshark data clearly shows that the checksum calculations have failed.
by anonymous - 2017-09-11 21:59
Can anyone recommend disabling this on a Broadcom NIC on a Hyper-V host? Or just in the VMs?
by anonymous - 2020-02-08 01:37
Flow Control helped a lot on my new system. It's February 2020 and the drivers are from January 2020, so this feature is tried and tested. It keeps my connections from timing out, along with all kinds of drop-out issues when multiple threads are downloading from multiple servers; it helps so much. My connection was basically unusable when downloading at 10 megabytes per second without Flow Control enabled. Also, connecting to my router's web UI was timing out with Flow Control disabled. This feature is working great on Realtek devices that support the latest drivers; Realtek also updates their drivers on a month-to-month basis, sometimes sooner. They do not neglect their products, they are excellent.