> But why are they down? Are their servers down? Their ISP connections?
> Hacked DNS? Human error in updating the pages or configuration? When
> sites like IBM or MSN go down, it is usually not due to the servers
> but some other factor. I doubt that either uses a single server. If it
> is a mission critical application that can never be down, you should
> have at least two servers and two internet connections with two ISPs,
> which is impossible. A load balanced server farm by design would give
> you better uptime than any single server solution. Nothing can be
> running all the time.

This is the way that telephone switches generally work. There are two processors (LPARs?) with separate software loads, often at different release levels. That way neither a hardware issue nor a software issue can bring the switch down.

To bring this back on topic for the 400: would an LPARed machine running V5R1 and V4R5 be able to avoid the single point of failure problem, or would you really need two separate cabinets? Take destruction of the machine room by fire, flood or other catastrophe out of the equation for the purposes of this discussion.

I believe that the number one cause of failure in modern computing systems is not hardware (cheap or not) but software. No, I cannot back this up with published numbers, but I seem to recall far more fixpacks issued to repair a given issue than hardware replacements. The recent disk drive scare is an exception to the norm.

--buck
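Just to illustrate the active/standby idea in the abstract (this is not specific to OS/400, LPAR, or any real HA product), here is a minimal Python sketch of a health-check failover loop. The hostnames, port, and check interval are made up for the example; a real setup would involve heartbeats, quorum, state replication, and so on.

# Minimal active/standby failover sketch -- illustrative only.
# Hostnames and port below are hypothetical placeholders.
import socket
import time

PRIMARY = ("primary.example.com", 8080)   # assumed primary node
STANDBY = ("standby.example.com", 8080)   # assumed standby node
CHECK_INTERVAL = 5                        # seconds between health checks

def is_alive(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_active():
    """Use the primary while it answers; otherwise fail over to the standby."""
    if is_alive(*PRIMARY):
        return PRIMARY
    if is_alive(*STANDBY):
        return STANDBY
    return None

if __name__ == "__main__":
    while True:
        active = pick_active()
        if active:
            print("routing traffic to %s:%d" % active)
        else:
            print("both nodes unreachable -- no service available")
        time.sleep(CHECK_INTERVAL)

The point of the sketch is only that the single point of failure moves from the server itself to whatever does the checking and routing, which is part of why the two-cabinet versus one-LPARed-box question above matters.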