Saturday, May 28, 2011

Optimization Tips: Using HTTP Compression


This week we're going to take a look at HTTP compression and how this simple technique can deliver a valuable performance boost to your website.

What is HTTP compression?
Since the turn of the century, almost all browsers have supported compressed content via the "content-encoding" (or "transfer-encoding") mechanisms defined in the HTTP 1.1 spec. When a browser requests a page from a server, it tells the server whether, and what kind of, content encoding it can handle. This information is communicated in the "accept-encoding" request header. If the server is configured correctly, it can respond to this header value and automatically compress (think Zip) the HTTP content before sending it to the browser. When the content arrives, the browser decompresses it and renders the page.
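To make that exchange concrete, here's a minimal sketch of the same idea in ASP.NET code: inspect the browser's "accept-encoding" header, wrap the response in a GZipStream or DeflateStream, and label the result with a "content-encoding" header. This isn't how IIS or any particular product implements compression; it's just the bare-bones version of the technique that the tools discussed below package up for you.

    // Global.asax.cs -- a minimal sketch, not production code.
    // Compress the response when the browser advertises gzip or deflate support.
    using System;
    using System.IO.Compression;
    using System.Web;

    public class Global : HttpApplication
    {
        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            HttpContext context = HttpContext.Current;
            string acceptEncoding = context.Request.Headers["Accept-Encoding"];
            if (string.IsNullOrEmpty(acceptEncoding))
                return; // the browser didn't ask for compressed content

            acceptEncoding = acceptEncoding.ToLowerInvariant();
            if (acceptEncoding.Contains("gzip"))
            {
                context.Response.Filter = new GZipStream(context.Response.Filter, CompressionMode.Compress);
                context.Response.AppendHeader("Content-Encoding", "gzip");
            }
            else if (acceptEncoding.Contains("deflate"))
            {
                context.Response.Filter = new DeflateStream(context.Response.Filter, CompressionMode.Compress);
                context.Response.AppendHeader("Content-Encoding", "deflate");
            }

            // Tell proxies that the response varies by the encoding the client accepts.
            context.Response.AppendHeader("Vary", "Accept-Encoding");
        }
    }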

The overall goal of HTTP compression is to reduce the number of bytes that must be transmitted over the tubes between your server and the user's machine. Since transmission time is often the biggest bottleneck in loading a page, and since bandwidth directly relates to your costs as a site operator, reducing the bytes you transmit to the user can save you money and improve your site's performance.

How do I use it with ASP.NET?
There are a number of ways to use HTTP compression with ASP.NET. If you have full access to your server, IIS 6 provides some basic compression support. Compression of static content is turned on by default in IIS 6, but you'll need to take extra steps to enable compression of dynamic content. If you're already in a Windows Server 2008 environment, IIS 7 provides great HTTP compression support, letting you define which files get compressed, based on their MIME type, in your configuration files. Still, to use IIS' compression support you need complete access to your server (specifically IIS), and to get the best support you need to be running Windows Server 2008. (NOTE: For a good overview of compression features in IIS 7, see this blog post on the IIS blogs.)
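To give you a feel for the IIS 7 approach, here's a rough sketch of the relevant configuration sections. The <urlCompression> switch can usually be set per site in web.config, but the <httpCompression> MIME-type rules normally live in applicationHost.config and may be locked at the server level, so treat this as an illustration of the shape rather than a drop-in config:

    <!-- IIS 7 compression settings (system.webServer); a sketch, not a drop-in config. -->
    <system.webServer>
      <!-- Turn static and dynamic compression on for this site. -->
      <urlCompression doStaticCompression="true" doDynamicCompression="true" />
      <!-- MIME-type rules; these usually live in applicationHost.config and may be locked there. -->
      <httpCompression>
        <dynamicTypes>
          <add mimeType="text/*" enabled="true" />
          <add mimeType="application/x-javascript" enabled="true" />
          <add mimeType="*/*" enabled="false" />
        </dynamicTypes>
        <staticTypes>
          <add mimeType="text/*" enabled="true" />
          <add mimeType="*/*" enabled="false" />
        </staticTypes>
      </httpCompression>
    </system.webServer>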

Fortunately, there are alternatives for developers that want to deploy HTTP compression support with their applications (or for those devs who don't have access to IIS, such as in shared hosting environments). Several open source projects and commercial products exist on the interwebs that provide HttpModules to compress your content based on configuration rules. One such commercial tool that I've used successfully in the past is ASPAccelerator.NET.
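Whatever tool you choose, the plumbing looks about the same: drop the assembly in your bin folder and register its HttpModule in web.config. The module and type names below are placeholders rather than ASPAccelerator.NET's actual names, so check your tool's documentation for the real values:

    <!-- Registering a third-party compression HttpModule (the names here are placeholders). -->
    <system.web>
      <httpModules>
        <!-- IIS 6 / classic pipeline -->
        <add name="CompressionModule" type="Vendor.Compression.CompressionModule, Vendor.Compression" />
      </httpModules>
    </system.web>
    <system.webServer>
      <modules>
        <!-- IIS 7 integrated pipeline -->
        <add name="CompressionModule" type="Vendor.Compression.CompressionModule, Vendor.Compression" />
      </modules>
    </system.webServer>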

For today's tests, that is the approach that will be used to compress our test site. And since we're interested in seeing how HTTP compression helps "real world" site performance, we're going to be doing our testing a little differently this week than we have in the past.

Help Desk demo testing with Gomez
If you work for a large Fortune 500 company, chances are you're already using Gomez's services to actively monitor and measure your company's web performance. Gomez is one of the largest providers of website performance measurement and tracking, and as such, it has some of the best tools available for measuring a site's performance.

What you may not know is that Gomez provides a free "instant site test" that you can use to take one-off measurements of your site's performance as seen by one of Gomez's 12,000 test computers. The free service allows you to supply a URL and select a test city (L.A., New York, Chicago, Beijing, or London), and then Gomez generates a detailed report showing you how responsive your site is. This is a great (free) way to see the real world effects of latency and bandwidth on your site, which makes it a perfect way to measure the effects of HTTP compression on our test site.

And to conduct today's tests, we'll be setting aside our fun InterWebs demo in favor of the slightly bulkier Telerik Help Desk demo. This demo displays a fair amount of data pulled from a SQL Server 2005 database, and it uses many RadControls, such as RadGrid, RadTreeView, RadMenu, RadComboBox, RadSplitter, RadAjax, and RadCalendar. RadWindow and RadEditor are there, too, but they won't have an impact on today's tests of the primary Help Desk landing page.

How will the tests be run?
To conduct today's tests, we'll start by applying the techniques we covered in parts 1 and 2 of this series to the Help Desk demo, collecting 5 Gomez test results from their Chicago test server (the demo today will be hosted on a DiscountASP server in California). All tests will be run as if on an empty browser cache (thanks to Gomez), and the average of the 5 tests will be reported. Once we've layered on our previous performance-improving techniques, we'll enable HTTP compression with ASPAccelerator and see what kind of performance benefit it provides.

Make sense? Then let's run our tests and analyze the results.

The test results
With the tests run and the averages collected, let's take a look at the effect our different performance-improving techniques had on our live test site:

While the absolute numbers here aren't great, they're also not important. This is a completely unoptimized application running on shared hosting with very limited server resources, communicating from California to Chicago. What's important are the trends, and they reveal some very interesting facts:
  1. If you didn't believe the impact of leaving compilation debug = true from part two of this series, believe it now. Simply setting debug to false reduced the time it took for our page to load by almost 30% (7 seconds)! ALWAYS set debug to false in production (there's a web.config reminder after this list).
  2. The same advice is true for the RadManagers (RadStyleSheetManager and RadScriptManager). By simply adding these to my page (and doing nothing else), I further reduced page load time by over 50%. Combined with the first step, 60 seconds of work shaved almost 70% of the loading time off of my page.
  3. Finally, adding HTTP compression shaved another 30% off my page load time, producing a page that takes almost 80% less time to load than the original version! And I haven't changed any code.
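And for anyone who missed part two, that debug switch lives in web.config and should look like this before you deploy:

    <!-- web.config: never ship with debug="true" -->
    <system.web>
      <compilation debug="false" />
    </system.web>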
The bandwidth comparison
Just as important as page load time in today's tests is measuring how much HTTP compression saves me on total bandwidth. If there is anything in this largely free marketplace we call the Internet that increases our costs as our sites become more popular, it's bandwidth (and the servers required to fill it, of course). So if HTTP compression, in combination with our other performance-improving techniques, can reduce the bandwidth we consume to send the same pages to our users, we'll directly impact our bottom line. Let's see how bandwidth varied over each stage of our test:

As you can see, HTTP compression had a huge effect on the number of bytes we needed to send over the line to load the exact same page.
  • With no "treatments" applied, we sent over 1 MB of data to our user's browser on the first visit! That's a lot of data.
  • Setting debug to false reduced our bandwidth usage by about 15%, and adding the RadManagers only slightly lowered bytes sent from there.
  • When we added the HTTP compression, though, we reduced the number of bytes sent over the wire by over 75%! That's a huge bandwidth savings that should directly translate into higher throughput on our servers and improved page response times.
We can validate the improved page response times with the overlaid page load time stats. As the bytes are reduced with HTTP compression, we see page load time decrease. But notice how much page load time drops when the RadManagers are added to the page, even though they don't significantly impact the page's footprint. Remember, all the Managers are doing is combining resources, not compressing them. They deliver huge performance gains by reducing the number of external references on the page, and thus the number of requests the browser needs to make to load all of our stylesheets and JavaScript.

Gotchas
As with all performance tips, you do have to be smart in your application. While HTTP compression is generally a very efficient operation that doesn't significantly task the CPU, you should be careful on high-volume sites. The extra CPU cycles required to perform the compression could make site performance worse than it would be with uncompressed traffic. In most low- to medium-volume scenarios, though, you can compress without concern. In the coming weeks, I'll highlight a hardware solution that can solve the woes of compression on high-volume sites (in addition to providing a number of other helpful performance boosters).

You also need to be careful with HTTP compression and Ajax. Some HTTP compression tools do not play nice with Ajax requests, breaking callback functionality on the page. ASPAccelerator provides support for the ASP.NET AJAX framework, but not all tools are as helpful. Choose wisely.
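If your tool of choice doesn't handle Ajax for you, one common workaround is to simply skip compression for ASP.NET AJAX partial postbacks, which announce themselves with an "X-MicrosoftAjax: Delta=true" request header. Here's a hedged sketch that builds on the Global.asax example from earlier (it assumes the same class and usings):

    // Helper for the Global.asax sketch above: detect ASP.NET AJAX partial postbacks.
    // The MicrosoftAjax client script sends "X-MicrosoftAjax: Delta=true" with async requests.
    private static bool IsAsyncPostback(HttpRequest request)
    {
        string ajaxHeader = request.Headers["X-MicrosoftAjax"];
        return !string.IsNullOrEmpty(ajaxHeader) &&
               ajaxHeader.IndexOf("Delta=true", StringComparison.OrdinalIgnoreCase) >= 0;
    }

    // ...then bail out at the top of Application_BeginRequest:
    // if (IsAsyncPostback(context.Request)) return;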

Optimization Tip Summary
What have you learned today? Quite simply this:
  • The rules from parts 1 and 2 of this series are still among the most important for improving page performance.
  • Adding HTTP compression to your page with the RadControls for ASP.NET AJAX can measurably improve page performance and significantly lower bandwidth requirements.
Hopefully these tests help you understand and visualize the role HTTP compression can play in squeezing every bit of performance out of your site. In the coming weeks, I'll try to do some comparison tests of different HTTP compression methods (IIS 6/7, port80, ASPAccelerator, open source) to give you more guidance if you're trying to choose a compression provider. Until then, compress away and do your part to help reduce the bits clogging the Internet's tubes!
