TCP Optimizer 4 Documentation - Windows 7, 8, 10, 2012 Server
2015-04-12 (updated: 2017-03-14) by Philip
This documentation is for version 4 of the SG TCP Optimizer. Some of the settings may be specific to Windows 8, 8.1, 2012 Server (including R2), and Windows 10, and not present in earlier versions of Windows. Please also see the TCP Optimizer FAQ for answers to frequently asked questions.
The TCP Optimizer documentation for Windows XP/2000/2003/NT (and TCP Optimizer 2.x) is available -here-.
The TCP Optimizer is a program designed to provide an easy, intuitive interface for tuning broadband-related TCP/IP parameters under all current (and some past) Windows versions. Version 4 of the TCP Optimizer supports all Windows variants, from XP/NT/2000/2003 through Windows Vista/7/2008 Server, to the newer Windows 8, 2012 Server, and Windows 10. Some of the settings under those operating systems are quite different, and the program shows only the options supported by the operating system it is running on. The TCP Optimizer takes into account all related RFCs and the Microsoft TCP/IP implementation oddities, verifies all relevant Registry locations for the same TCP/IP parameters, uses PowerShell cmdlets with newer Windows versions, implements all tweaks listed in our speed tweak articles, and, in general, makes the whole "tweaking for speed" experience a breeze.
Below, we will cover all the settings available in the TCP Optimizer. Some of the settings may only be available under Windows 8 and newer operating systems.
If you do not feel like reading the entire documentation below, or you simply need the tweaks NOW, just follow these short instructions:
Start the program with administrative permissions: right-click on the program, choose "Properties" -> Compatibility tab -> tick "Run this program as an administrator" -> OK
The TCP Optimizer can do all the rest of the work for you and optimize your internet connection. A preview of all relevant changes is available before they are actually applied. The program can be used to easily apply custom values, and test with different settings, if you'd prefer. To do so, you may have to read the rest of the documentation and our tweaking articles to understand what the different settings mean, and what their exact effect is.
Please read the following chapters for information on all the specific parameters in the program.
Note: You should be logged in with your main account (some settings are account-specific), and run the program with administrative privileges so that it has sufficient permissions to make all the necessary changes.
Below is a short description of all the settings in the "General Settings" tab of the TCP Optimizer under current Windows versions.
This slider is intended for choosing your maximum possible internet connection speed, as advertised by your Internet Service Provider (ISP). You should not use your current speed, or any speed test results here, rather what the maximum theoretical speed of your connection is. Note that speed is expressed in Mbps, denoting Megabits per second (not to be confused with Megabytes).
Changing the value in the connection speed slider will have some effect on the optimal TCP Window value. Under older Windows variants, it directly calculates the RWIN value optimal for the connection speed. Under newer Windows OSes, it may change the auto-tuning algorithm (restricted for speeds under 1Mbps, normal for most broadband connections, experimental for speeds over 90Mbps, currently). Note that the "experimental" TCP Window auto-tuning setting should be used with caution.
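As a rough illustration of why the algorithm matters at higher speeds, the sketch below (an assumption for illustration, not code from the program) computes the RFC 1323 window scale factor a receiver would need before its window can cover a given connection's bandwidth-delay product:

```python
import math

def window_scale_needed(bdp_bytes, max_unscaled=65535):
    """RFC 1323 window scale shift needed before the receive
    window can grow to cover a given bandwidth-delay product."""
    if bdp_bytes <= max_unscaled:
        return 0
    # Each scale step doubles the largest window that can be advertised.
    return math.ceil(math.log2(bdp_bytes / max_unscaled))

# Example: a 90 Mbps link with 100 ms round-trip time.
bdp = int(90e6 / 8 * 0.100)          # 1,125,000 bytes in flight
print(window_scale_needed(bdp))      # scale factor 5 (max window ~2 MB)
```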
A list of all present/active network interfaces recognized by the system. If a specific network adapter is selected using the pull-down menu, its IP address will be displayed in the lower-right portion of this section. You can also choose to modify all network adapters at the same time, rather than the settings of any individual adapter.
This section allows you to set a custom MTU value. Generally MTU should be set at 1500, with the exception of PPPoE connections, and some DSL modems/ISPs. It is only necessary to edit the MTU value in such special cases. For example, the maximum MTU value for Windows PPPoE encapsulation is 1480 (or, as high as 1492 in some cases).
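For reference, the usable TCP payload per packet (the MSS) follows directly from the MTU; this small sketch (illustrative only, not part of the program) shows the arithmetic, including the 8 bytes that PPPoE encapsulation adds to each frame:

```python
def tcp_mss(mtu, ip_header=20, tcp_header=20):
    """Largest TCP payload per packet for a given MTU."""
    return mtu - ip_header - tcp_header

# PPPoE encapsulation adds 8 bytes per frame, which is why a PPPoE
# MTU cannot exceed 1500 - 8 = 1492 (1480 with Windows PPPoE).
print(tcp_mss(1500))   # 1460
print(tcp_mss(1492))   # 1452
```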
Note: In some rare cases, it is possible that your desired network device is not correctly identified by the Optimizer. That does not affect the program performance much, and you should simply choose "Modify All Network Adapters" in such cases. We'd also appreciate your feedback with such devices, so that we can improve the program.
This setting tunes the TCP Receive Window auto-tuning algorithm in Windows. A small TCP Receive Window can limit high-speed, high-latency transfers, such as most broadband internet connections. We recommend setting this to "normal" for most connections, and making sure that "Windows Scaling heuristics" below is disabled, so that Windows does not automatically modify this parameter.
There are a couple of exceptions to setting TCP auto tuning to "normal":
1. If your connection speed is less than 1 Megabit per second, you can set it to "highlyrestricted".
If this is left enabled, Windows can restrict the TCP Receive Window whenever it decides that network conditions justify it. When Windows restricts the TCP Receive Window, it does not always return it to normal. It is highly recommended to set this parameter to "disabled", so that user-set TCP auto-tuning settings are retained over time.
Traditionally, TCP avoids network congestion by gradually increasing the TCP Send Window at the beginning of connections. With broadband connections, these algorithms do not increase the TCP Window fast enough to fully utilize the available bandwidth. Compound TCP (CTCP) is a newer congestion control method that increases the TCP Send Window more aggressively for broadband connections (with large RWIN and BDP). CTCP attempts to maximize throughput by monitoring delay variations and packet loss.
This should be set to "CTCP" in most common scenarios.
CTCP (Compound TCP) increases the TCP Send Window and the amount of data sent. It can improve throughput on higher-latency broadband internet connections.
RSS enables parallelized processing of received packets on multiple processors, while avoiding packet reordering. It separates packets into "flows" and uses different processors for processing each flow.
This should be enabled if you have two or more processor cores, and only has an effect if the Network Adapter can handle RSS.
Receive Segment Coalescing allows the Network adapter to coalesce multiple TCP/IP packets that arrive within a single interrupt into larger packets (up to 64KB) so that the network stack has to process fewer headers. This reduces I/O overhead and CPU utilization.
This should be enabled for pure throughput, and disabled for pure gaming/latency.
Direct Cache Access (DCA) allows a capable I/O device, such as a network controller, to deliver data directly into a CPU cache. The objective of DCA is to reduce memory latency and the memory bandwidth requirement in high bandwidth (Gigabit) environments. DCA requires support from the I/O device, system chipset, and CPU(s).
Recommended: enabled with Gigabit network adapters and hardware that supports it.
Note: The impact of DCA is more significant with older CPUs.
This setting specifies the default time-to-live (TTL) value set in the header of outgoing IP packets. The TTL determines the maximum time in seconds (and, in practice, the number of hops) that an IP packet may live in the network without reaching its destination. It is effectively a limit on the number of routers that an IP packet is allowed to pass through before being discarded. It does not directly affect speed; however, a value that is too small can cause packets to be unable to reach distant servers at all, while a very large value may take too long to recognize lost packets.
Recommended value is 64.
ECN (Explicit Congestion Notification, RFC 3168) is a mechanism that provides routers with an alternate method of communicating network congestion. It is aimed to decrease retransmissions. In essence, ECN assumes that the cause of any packet loss is router congestion. It allows routers experiencing congestion to mark packets and allow clients to automatically lower their transfer rate to prevent further packet loss. Traditionally, TCP/IP networks signal congestion by dropping packets. When ECN is successfully negotiated, an ECN-aware router may set a bit in the IP header (in the DiffServ field) instead of dropping a packet in order to signal congestion. The receiver echoes the congestion indication to the sender, which must react as though a packet drop were detected. ECN is disabled by default in modern Windows TCP/IP implementations, as it is possible that it may cause problems with some outdated routers that drop packets with the ECN bit set, rather than ignoring the bit.
Recommended: disabled in general. Enable with caution, because some routers may drop packets with the ECN bit set and introduce packet loss/issues. Enabling ECN can reduce latency in some games with ECN-capable routers, and improve throughput in the presence of packet loss. ECN is also recommended when the CoDel queue management algorithm is used to combat bufferbloat on congested links.
Note: Known issue with profile logon to some EA games (likely router ECN-support issue).
This setting allows the network adapter to compute the checksum when transmitting packets and verify it when receiving packets, freeing up the CPU and reducing PCI traffic. Checksum offloading is also required for some other stateless offloads to work, including Receive Side Scaling (RSS), Receive Segment Coalescing (RSC), and Large Send Offload (LSO).
TCP Chimney Offload allows for offloading the TCP processing work from the host computer's CPU to the network adapter. This helps improve the processing of network data on your computer without the need for additional programs or any loss to manageability or security. Programs that are currently bound by network processing overhead will generally scale better when used with TCP Chimney Offload. Enabling this setting had some negative effects in the past because of buggy network adapter drivers, however its implementation has gotten much better with time. It is useful for CPU-bound client computers and very fast broadband connections, not recommended in server environments.
Recommended: disabled (because of buggy implementations and issues with it; it is also now considered deprecated by Microsoft).
When enabled, the network adapter hardware is used to complete data segmentation, theoretically faster than operating system software. In theory, this feature may improve transmission performance and reduce CPU load. The problem with this setting is buggy implementation at many levels, including network adapter drivers. Intel and Broadcom drivers are known to have it enabled by default, and may have many issues with it.
Timestamps are an RFC 1323 option intended to improve reliability on fast networks: they allow more accurate round-trip time measurement for calculating the retransmission timeout (RTO), and they protect against wrapped sequence numbers. The downside is that timestamps add 12 bytes to the 20-byte TCP header of each packet, so turning them on adds some per-packet overhead.
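To put that overhead in perspective, here is a quick back-of-the-envelope calculation (illustrative only):

```python
def timestamp_overhead(payload=1460, option_bytes=12):
    """Extra bytes per full-sized segment with TCP timestamps enabled,
    as a fraction of the whole 1500-byte packet (20 B IP header +
    20 B TCP header + payload)."""
    return option_bytes / (20 + 20 + payload)

print(f"{timestamp_overhead():.2%}")  # 0.80% per full-sized packet
```

For bulk transfers of full-sized packets the cost is small; it matters more for streams of many tiny packets, where 12 bytes is a larger share of each packet.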
Note: Under Windows Vista/7, under TCP 1323 Options we recommend leaving only TCP "Window Scaling" enabled.
NetDMA (TCPA) enables support for advanced direct memory access. In essence, it provides the ability to more efficiently move network data by minimizing CPU usage. NetDMA frees the CPU from handling memory data transfers between network card data buffers and application buffers by using a DMA engine. It must be enabled/supported by your BIOS and your CPU must support Intel I/O Acceleration Technology (I/OAT).
Recommended: use either TCP Chimney Offload or NetDMA, but not both.
NetDMA is not supported under Windows 8 and newer.
This section covers the "Advanced settings" tab of the program under current Windows versions.
By default, the HTTP 1.1 specs in RFC 2616 recommend no more than 2 concurrent connections between a client and a web server. Similarly, HTTP 1.0 recommends up to 4 concurrent connections (HTTP 1.0 does not support persistent connections, so it benefits from more concurrent connections). Traditionally, Internet Explorer used the RFC recommendations, however, since IE8, Firefox 3, and Chrome 4, most major browsers have departed from the recommendations in search of faster web page loading speed by increasing the number of parallel connections to web servers for both HTTP 1.0 and 1.1 to 6.
We recommend pushing this further to 8-10 concurrent connections per web server, because the complexity of web pages and the number of page elements justify opening multiple connections, especially with broadband internet connections. Note that increasing the number of connections past 10 is not recommended, as some web servers limit the number of concurrent connections per IP, and may throttle or drop excessive connections, causing incomplete pages and a worse user experience, among other issues.
This is intended to increase the priority of DNS/hostname resolution by raising the priority of four related processes from their defaults. Note that this raises the priority of all four related processes compared to the hundreds of other running processes, while keeping their relative order. The "optimal" values we recommend are chosen so as not to conflict with the priorities of other processes, so, while other numbers may work, you should be careful when departing from those values.
Refer to our Host Resolution Priority Tweak article for more details.
The two values in this section control the way the system attempts to reestablish a connection.
Max SYN Retransmissions - sets the number of attempts to reestablish a connection using SYN packets.
Retransmit timeout (RTO) determines how many milliseconds the system waits for an acknowledgement of transmitted data before retransmitting and, ultimately, before the connection is aborted. Lowering it can help reduce delays in retransmitting data. The default Initial RTO of 3000 ms (3 seconds) can usually be lowered to ~2 s for low-latency modern broadband connections, unless you are in a remote location. Decreasing this number too aggressively on connections with higher latency (satellite, remote locations) can cause premature retransmissions. The RTO limit should not be triggered on a regular basis. The Min RTO default/recommended value is 300 ms.
See: RFC 6298
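Since the RTO roughly doubles after each unanswered SYN (the exponential backoff described in RFC 6298), the two values combine into a worst-case connection timeout. A small sketch of the arithmetic (an illustration, not the program's internals):

```python
def syn_timeout_ms(initial_rto_ms=3000, max_syn_retransmissions=2):
    """Worst-case wait before a connection attempt is abandoned:
    the original SYN plus each retransmission, with the RTO
    doubling after every unanswered attempt."""
    return sum(initial_rto_ms * 2 ** i
               for i in range(max_syn_retransmissions + 1))

print(syn_timeout_ms())          # 3000 + 6000 + 12000 = 21000 ms
print(syn_timeout_ms(2000, 2))   # lowering initial RTO to 2 s -> 14000 ms
```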
This is designed to prevent caching of failed DNS lookups.
MaxNegativeCacheTtl: determines how long an entry recording a negative answer to a query remains in the DNS cache (Windows XP/2003 specific)
NetFailureCacheTime: determines for how long the DNS client stops sending queries when it suspects that the network is down. During that time, the DNS client returns a timeout response to all queries. If the value of this entry is 0, this is disabled and DNS continues to send queries to an unresponsive network.
NegativeSOACacheTime: determines how long an entry recording a negative answer to a query for an SOA (Start of Authority) record remains in the DNS cache.
This section deals with QoS policy and the Windows "QoS Packet Scheduler".
NonBestEffortLimit: The Windows QoS Packet Scheduler under Windows 7/8/8.1 reserves 20% of bandwidth by default for QoS-aware applications that request priority traffic. Note that this only has an effect in the presence of running QoS applications that request priority traffic, such as Windows Update. Setting this to zero prevents Windows from reserving 20% of bandwidth for such applications.
Do not use NLA: This undocumented setting is part of tcpip.sys that allows you to set QoS DSCP values. Microsoft requires that Windows 7/8 systems have joined a domain, and that the domain is visible to the particular network adapter in order to be able to use local group policy to set DSCP values. Setting this to one removes the limitation, allowing you to set DSCP without being part of a domain, and for all network adapters. DSCP can be entered via local group policy using gpedit.msc
Network Throttling Index: Windows uses a throttling mechanism to restrict the processing of non-multimedia network traffic. The idea behind such throttling is that processing network packets can be a resource-intensive task, and it may need to be throttled to give prioritized CPU access to multimedia programs. In some cases, such as Gigabit networks and some online games, it is beneficial to turn off such throttling altogether to achieve maximum throughput.
SystemResponsiveness: Multimedia applications use the "Multimedia Class Scheduler" service (MMCSS) to ensure prioritized access to CPU resources, without denying CPU resources to lower-priority background applications. However, MMCSS also reserves 20% of the CPU by default for background processes, so multimedia streaming and some games can only utilize up to 80% of the CPU. The Optimizer can reduce that reserved CPU for background processes from the default of 20% to free up more CPU resources for games.
Note: In some server operating systems (Windows 2008 Server), the SystemResponsiveness may be set to 100, instead of 20 by default. This is by design, giving higher priority to background services over multimedia.
Nagle's algorithm is designed to allow several small packets to be combined together into a single, larger packet for more efficient transmissions. While this improves throughput efficiency and reduces TCP/IP header overhead, it also briefly delays transmission of small packets. Disabling "nagling" can help reduce latency/ping in some games. Keep in mind that disabling Nagle's algorithm may also have some negative effect on file transfers. Nagle's algorithm is enabled in Windows by default.
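The registry tweaks apply system-wide; individual applications can make the same trade-off per socket using the standard TCP_NODELAY option, as in this small Python sketch:

```python
import socket

# Create a TCP socket and disable Nagle's algorithm on it, so small
# packets are sent immediately instead of being coalesced into
# larger ones (lower latency at the cost of more packet overhead).
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0)  # True
sock.close()
```

This is why latency-sensitive programs (games, SSH clients) often disable nagling themselves, regardless of system settings.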
TcpAckFrequency: 1 for gaming and Wi-Fi (disables delayed ACKs), small values of 2 or above for pure throughput.
See also: Gaming Tweaks article.
When using Windows to serve many/large files over the local network, it is possible to sometimes run into memory allocation errors related to the Windows share, especially with clients that use different operating systems. When this happens, you can usually see the following error in the Event Viewer System log:
Event ID: 2017 "The server was unable to allocate from the system nonpaged pool because the server reached the configured limit for nonpaged pool allocations." It is also possible to get an error indicating that "Not enough server storage is available to process this command".
To avoid those errors, you need to change the way Windows allocates memory for network services and file sharing. The settings in this section of the program optimize the machine as a file server so it would allocate resources accordingly.
LargeSystemCache: we recommend setting this to 1 for large LAN file transfers, to allow the cache to expand beyond 8MB. However, some ATI video cards may have a driver issue causing corrupt cache and degraded application performance with this enabled, so, for gaming, we recommend leaving it at zero/off.
Size: 1 minimizes used memory, 2 balances used memory, 3 is the optimal setting for file sharing and network applications
Short lived (ephemeral) TCP/IP ports above 1024 are allocated as needed by the OS. The Windows 8/2012 defaults are usually sufficient under normal network load. However, under heavy network load it may be necessary to adjust these two registry settings to increase port availability and decrease the time to wait before reclaiming unused ports.
MaxUserPort: sets the maximum port number used for ephemeral ports, recommended: 16384 up to 65534 (decimal), as necessary.
TcpTimedWaitDelay: the time to wait before reclaiming closed ports, in seconds. The default is 120-240 seconds, depending on your version of Windows; we recommend a value of 30 seconds.
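The interplay of these two values determines how fast a busy machine can open new outbound connections to the same endpoint before running out of ports. A rough calculation (an illustrative assumption, using the default Windows dynamic port range of 49152-65535):

```python
def max_connection_rate(port_count, timed_wait_delay_s):
    """Sustainable new-connections-per-second to one remote endpoint
    before ports stuck in TIME_WAIT exhaust the ephemeral range."""
    return port_count / timed_wait_delay_s

ports = 65535 - 49152 + 1                      # 16384 ephemeral ports
print(round(max_connection_rate(ports, 240)))  # ~68/s at a 240 s delay
print(round(max_connection_rate(ports, 30)))   # ~546/s with a 30 s delay
```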
This section contains a Bandwidth * Delay calculator. The BDP is a very important concept in TCP/IP Networking. It is directly related to the TCP Window (RWIN) value, in that it represents a limit to the possible throughput. BDP plays an especially important role in high-speed / high-latency networks, such as most broadband internet connections. It is one of the most important factors that affect TCP/IP throughput.
The Bandwidth*Delay Product, or BDP for short determines the amount of data that can be in transit in the network. It is the product of the available bandwidth and the latency, or RTT.
The BDP simply states that:
BDP (bytes) = total available bandwidth (bits/sec) x round-trip time (sec) / 8
What does it all mean? The TCP Window is a buffer that determines how much data can be transferred before the server waits for an acknowledgement. It is in essence bound by the BDP. If RWIN is lower than the BDP (the product of the latency and available bandwidth), we can't fill the line, since the client can't send acknowledgements back fast enough. A transmission can't exceed the (RWIN / latency) value, so RWIN needs to be large enough to fit the maximum_available_bandwidth x maximum_anticipated_delay.
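The arithmetic can be checked with a few lines of Python (illustrative only; the program has this calculator built in):

```python
def bdp_bytes(bandwidth_bps, rtt_ms):
    """Bandwidth*Delay Product: how much data must be 'in flight'
    to keep the pipe full at a given speed and round-trip time."""
    return int(bandwidth_bps / 8 * rtt_ms / 1000)

# A 100 Mbps connection with 50 ms RTT needs a ~625 KB window:
print(bdp_bytes(100e6, 50))        # 625000 bytes
# Conversely, an unscaled 64 KB window caps throughput at RWIN/RTT:
print(65535 / 0.050 * 8 / 1e6)     # ~10.5 Mbps, regardless of line speed
```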
Even though the TCP Receive Window value can't be modified directly in modern Windows variants, you can still adjust how aggressively the TCP auto-tuning algorithm increases the RWIN value.
This section of the program helps test the latency of your internet connection. You can choose a number of hosts, a number of pings per host, and ICMP packet size. After clicking start, the tool will consecutively ping all hosts, then provide maximum and average latency measurements in milliseconds, as well as packet loss indication (if present).
This tool can be used to effectively estimate the maximum anticipated latency for BDP/RWIN calculations. In order to do that, we recommend using a larger number of hosts than the default 5, and a larger packet size (since larger packets tend to have a bit higher latency). Then, as an estimate of your maximum anticipated latency, rather than using the Maximum RTT, use the average RTT, multiplied by two.
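Putting that suggestion into numbers (an illustrative sketch):

```python
def max_anticipated_latency_ms(avg_rtt_ms):
    """Estimate the maximum anticipated latency for BDP/RWIN
    purposes as twice the measured average RTT."""
    return 2 * avg_rtt_ms

# If pinging many hosts with large packets averaged 45 ms:
print(max_anticipated_latency_ms(45))  # plan for 90 ms in the BDP formula
```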
The File pull-down menu contains a number of options for backing up, as well as exporting and importing all the related TCP Optimizer settings. Those exported settings can be shared between users; they contain information about empty keys, values to remove/add/edit, and all relevant parameters needed to clone the exact state of the related settings to another machine, or to save them for your own reference later. The menu also allows for resetting TCP/IP and Winsock, to repair network connections that are experiencing problems.
The Preferences menu has two sections. The first one, "Maximum Latency", is a number in milliseconds that is used in calculating the optimal RWIN value. It affects the "Optimal settings" recommendations of the program, so if you're not sure what it does, leave it at the default of 300ms. Basically, the larger this number, the larger the RWIN values the program will recommend under "Optimal settings" for the same connection speed, and vice versa. This is more relevant under Windows versions that support setting the RWIN value directly.
The second section in the Preferences menu, "Latency tab: hosts to ping" contains a list of URLs, used in the Latency section of the program for measuring current RTT (round trip time, delay, ping, latency) to multiple hosts.
The Help Menu of The Optimizer simply contains a link to this documentation, as well as the Software License Agreement, and some general information about the program.
This screen offers a preview of all changes before they are actually applied. The lower-left corner also allows users to create a backup before applying any changes; backing up current settings for easy reversal is recommended for new users. If you experience any issues with the TCP Optimizer, ticking the "Create Log" option in the bottom-left corner logs all executed commands along with the operating system's responses to them, which can greatly help us troubleshoot and debug program issues.
Some changes may work without rebooting, however, the majority will only take effect after rebooting your computer.
By downloading, using, copying and (re)distributing the TCP Optimizer, you agree to the Software End User License Agreement, incorporated here by reference.