I did. It is the lowest hanging fruit :-)

From: Thorbjoern Ravn Andersen
If there is one thing experience has taught me, it is that you must measure instead of guessing when analyzing performance problems.
Maybe we spent too much time talking about session limits and expiration. Hmmm, who came up with that idea in the first place ;-)
Your point about measurement is well taken. Walden pointed it out too. Unfortunately, my information in this case comes from a user's perspective and a developer's perspective, not a site administrator's. I don't have profiling data.

First of all, what is the actual application written in? Apache itself serves static pages so fast that it can easily saturate a 100 Mbps network connection.
But if you're willing to indulge in a somewhat hypothetical discussion, what specific measurements would you look at? Under IBM i, I think most operators would check CPU utilization and page faults first, just because that's easy. Suppose the Tomcat server is consuming 95%. Would that mean much? What would you check after that?

Generally, a modern PC (I'm not familiar enough with the AS/400 to say whether this applies there too) has so much computing power that if Tomcat uses 95%, something is doing unnecessary work (busy looping, for example). Page faults are interesting: the Classic JVM heap is basically unbounded, so it may grow very large, which doesn't go well with small memory pools.
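One cheap way to see whether the heap is growing unbounded, before reaching for a full profiler, is to have the JVM report on itself. This is a minimal sketch using the standard java.lang.management API (the APIs are real; the sampling loop and interval are just illustrative):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Sample the JVM's own heap usage periodically. If "committed" keeps
// climbing toward an enormous "max", the heap is growing without bound,
// which on a machine with small memory pools tends to show up as
// page faulting rather than high CPU.
public class HeapSampler {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        for (int i = 0; i < 5; i++) {
            MemoryUsage heap = mem.getHeapMemoryUsage();
            System.out.printf("heap used=%d KB committed=%d KB max=%d KB%n",
                    heap.getUsed() / 1024,
                    heap.getCommitted() / 1024,
                    heap.getMax() / 1024);
            Thread.sleep(1000);
        }
    }
}
```

If the numbers confirm unbounded growth, the usual fix is to cap the heap with -Xmx when starting Tomcat so it stays sized to the pool it runs in.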
This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].