We have an ASP.NET app that behaves strangely under IIS6. The app itself is a straightforward ASP.NET 2.0 WebForms deal; nothing too weird is going on in there (there are a couple of HTTP modules in the pipeline, but I wouldn't consider those weird :) ). The thing I don't understand is the page execution times, or, more specifically, the difference between the time reported by ASP.NET tracing (trace.axd) and the time observed by the client (Fiddler). When the app is run on a developer's box (WinXP, IIS5.1), the times reported by ASP.NET and Fiddler are very close:
    Page exec time:              0.0919834
    Fiddler Total Sequence time: 0.1560980
I can understand 60ms being spent getting 5KB worth of data from IIS to Fiddler (both of which run on the same machine, BTW). Now, when we move the code to the server (Win2k3, IIS6), the picture changes dramatically:
    Page exec time:              0.1702014
    Fiddler Total Sequence time: 0.5156283
This is the same page, and Fiddler is again running on the same machine as the code. Why does it suddenly take ~350ms to deliver the same 5KB?
PS. On both machines the results are obtained by accessing the URL via the actual machine's hostname, e.g. http://machinename/app/page.aspx (as opposed to http://localhost/app/page.aspx).
PPS. Configuration-wise, the setups of the dev box and the server are as close as we could make them -- both use the exact same web.config. Both hit the DB (SQL Server) with integrated auth, and, consequently, the app runs under a domain account. The app uses forms authentication and does NOT impersonate (i.e. it always runs under the same account). Now, the way this works on IIS5 is different from IIS6 -- on IIS5 the account is specified in the <processModel> tag in machine.config, and on IIS6 it's the AppPool identity setting. The setup seems pretty typical for both environments, and I can't imagine it causing 350ms delays...
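For concreteness, this is roughly what the relevant bits look like (the values below are illustrative placeholders, not our real settings):

    <!-- web.config (identical on both boxes): forms auth, no impersonation -->
    <system.web>
      <authentication mode="Forms" />
      <identity impersonate="false" />
    </system.web>

    <!-- IIS5 box only: the worker process account lives in machine.config -->
    <system.web>
      <processModel userName="DOMAIN\appaccount" password="..." />
    </system.web>

On IIS6 there is no per-app config equivalent for this; the account is simply the AppPool identity set in IIS Manager.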
-
Do a trace route on the URL you are calling from each box and compare the results. I am betting that on the developer machine you are staying internal to the machine, but on the production machine you are going external and then coming back in through the IP address.
If this is the case, try adding this to your hosts file (c:\windows\system32\drivers\etc\hosts):

    127.0.0.1    www.mysite.com
This will make sure your request doesn't venture outside of the machine. You should see the response times start to come in line with each other.
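If you want to double-check what the name actually resolves to once the hosts entry is in place, a tiny console snippet will do (this is just a sketch; "machinename" is a placeholder for whatever host you browse to):

    // Print every address the hostname resolves to; with the hosts-file entry
    // in effect you should see 127.0.0.1 here.
    using System;
    using System.Net;

    class ResolveCheck
    {
        static void Main()
        {
            foreach (IPAddress addr in Dns.GetHostAddresses("machinename"))
            {
                Console.WriteLine(addr);
            }
        }
    }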
Update
Given the new updates: if the server is under load while you are testing on production, that could account for the difference, because it is actively trying to deliver more requests than the development machine, which is only trying to deliver one.
Or it could be because you are testing two different versions of IIS: 5.1 on XP and 6.0 on 2003. You really can't account for the differences unless the two environments are running the same software.
[asker]: Nope. We are using IE7 to drive Fiddler in both cases, and since IE7 bypasses proxies on requests to localhost, we have to use the actual hostname in both cases.
[asker]: BTW, what do you mean by "venture outside"? The only "venturing" I can think of is a DNS request, and it's all cached long before the time of the test...
Nick Berardi: So you are actually calling http://localhost on your webserver and dev machine and getting these response differences? Because that changes your whole question and probably should have been mentioned.
[asker]: We are calling http://machinename in both cases. However, even if it was http://localhost, I don't see how it makes a shred of a difference either way.
Nick Berardi: Well, it makes a difference if you are testing localhost vs your URL. That was my question to you. Since you are doing both by the machine name, are you doing them while the server is processing real requests?
[asker]: No, the server is idle. It's a dedicated "perf" box that does nothing but run performance tests (we actually like to know how the app performs before we push it into production).
Nick Berardi: Makes sense, it just wasn't in the question so I had to ask. Have you tried setting up Windows 2003 on the developer machine, so that you are using the same version of IIS, to see how that performs?
[asker]: What's puzzling is not the raw performance per se, but the large difference between the times reported by ASP.NET and observed by Fiddler on W2k3.
-
Is the app running in identical release configurations on both boxes?
EDIT: The request pipeline changed enormously between IIS5 and IIS6; trace.axd is only going to see the ASP.NET portion of it, not the new app pool and HTTP.sys components.
I'd imagine that configuration can be adjusted a bit on IIS6, but you are probably looking at the difference between a lightweight, non-production web server (IIS5) and a robust web server with individual application pools to manage and more layers of abstraction.
[asker]: Added PPS to the question. Basically, the setups are "as close as we could make them".
[asker]: If this were the case, then EVERY .aspx page would take at least 350ms to come back, which is obviously not the case.
-
After expending one of the precious few support incidents we've got with our MSDN subscription, I finally know the correct answer to the "where is all this time spent" question. In short, the time is spent in the HTTP modules we have in our pipeline. The time measurements reported by ASP.NET trace.axd record only the time spent in the .aspx page itself; modules are NOT included.
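One crude way to see that time for yourself, without any extra tooling, is to wrap the whole pipeline in a Stopwatch from a module of its own and compare the result with what trace.axd reports. This is only a sketch -- the module and header names here are made up for illustration, not something from our actual app:

    // Hypothetical timing module: measures the full pipeline time for each request,
    // which includes the other HTTP modules that trace.axd does not account for.
    // Register it first in <httpModules> so that it wraps the others.
    using System;
    using System.Diagnostics;
    using System.Web;

    public class PipelineTimingModule : IHttpModule
    {
        public void Init(HttpApplication app)
        {
            app.BeginRequest += delegate(object sender, EventArgs e)
            {
                // Start the clock as early in the pipeline as possible.
                HttpContext ctx = ((HttpApplication)sender).Context;
                ctx.Items["PipelineStopwatch"] = Stopwatch.StartNew();
            };

            app.EndRequest += delegate(object sender, EventArgs e)
            {
                HttpContext ctx = ((HttpApplication)sender).Context;
                Stopwatch sw = ctx.Items["PipelineStopwatch"] as Stopwatch;
                if (sw != null)
                {
                    sw.Stop();
                    // Surface the total pipeline time to the client so it can be
                    // compared against the trace.axd page execution time.
                    ctx.Response.AppendHeader("X-Pipeline-Time-Ms", sw.ElapsedMilliseconds.ToString());
                }
            };
        }

        public void Dispose() { }
    }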
One easy way to ascertain this (and see how long every module takes to do its thing) is to use ETW (Event Tracing for Windows). Here is the explanation (I strongly suspect that this post was written after they looked at our case :)). One thing I can add to the excellent description above is that I used SvcTraceViewer instead of LogParser to analyze the trace output.
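If you want to try the same thing, the rough shape of it is below; this is from memory, so treat the provider name as something to verify against the output of logman query providers on your own box:

    rem List the registered ETW providers and confirm the exact ASP.NET provider name.
    logman query providers

    rem Start a trace session against the ASP.NET provider, writing events to an .etl file.
    logman start aspnet_trace -p "ASP.NET Events" -o aspnet.etl -ets

    rem ...reproduce the slow request, then stop the session and open the .etl file in a viewer.
    logman stop aspnet_trace -ets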
Update: the above approach also works on Windows Server 2008; just make sure that you have Tracing installed.