From: Thorbjoern Ravn Andersen
It is quite normal to use http 1.1 connections ...
HTTP 1.1 connections are commonly called "persistent connections":
http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html
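Connection reuse is easy to observe from the client side with nothing but Python's standard library. The sketch below (mine, not from the thread) spins up a throwaway local server with keep-alive enabled and shows that several HTTP/1.1 requests travel over a single socket:

```python
# Sketch: several requests over one persistent HTTP/1.1 connection.
# Uses a throwaway local server so the example is self-contained.
import http.client
import http.server
import threading

handler = http.server.SimpleHTTPRequestHandler
handler.protocol_version = "HTTP/1.1"   # enable keep-alive on the server side

server = http.server.HTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
sockets_seen = set()
for _ in range(3):
    conn.request("GET", "/")
    resp = conn.getresponse()
    resp.read()                          # drain the body so the socket can be reused
    sockets_seen.add(id(conn.sock))      # same socket object each time if persistent

print("requests: 3, distinct sockets used:", len(sockets_seen))
conn.close()
server.shutdown()
```

With keep-alive in effect, the three requests share one socket; with HTTP/1.0 (the http.server default) the connection would be torn down and rebuilt each time.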
I was thinking in terms of the Apache KeepAlive and KeepAliveTimeout directives. Some years ago, conventional wisdom was to set the timeout value to a few seconds. But now I wonder if sites are setting the timeout value higher to accommodate an increase in asynchronous requests - triggered by keyboard input and timers, for example.
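For reference, the directives in question look like this in httpd.conf; the values shown are illustrative, not recommendations:

```apache
# httpd.conf (Apache 2.x) - illustrative values only
KeepAlive On              # allow persistent connections
KeepAliveTimeout 5        # seconds to hold an idle connection open for reuse
MaxKeepAliveRequests 100  # requests allowed over one persistent connection
```

A higher KeepAliveTimeout keeps idle connections (and the worker or memory they occupy) alive longer, which is the trade-off behind the old advice to keep it at a few seconds.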
There were a few comments in previous messages that referred to distributing workloads across multiple hardware tiers, and how that was a requirement of the Internet.
Google comes to mind. It wouldn't surprise me if they retired more than 1,000 servers every month - perhaps shipping them off to China as toxic e-waste.
You hear reports of server farms running CPUs at 15-20% utilization, which seems to be confirmed by the comments in this thread.
It makes me wonder how long it will be before social consciousness is aroused to the point where we start relying more on intelligent software, and less on commodity hardware, to run servers at 60-90% utilization while still providing good response times to users.
-Nathan
This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page.