"grid computing" was a buzzword, but that doesn't mean it's gone
away. Virtualization and "cloud computing" are the latest buzzwords
that mean pretty much the same thing. My reading is that the main
difference is that instead of virtualizing/distributing work packets,
the goal is to virtualize and distribute entire virtual systems.

It doesn't make sense to go with a grid solution in a single
enterprise except for supercomputing applications. The value of
a grid would be for offering grid services to multiple clients,
driving fewer boxes at higher levels of utilization (and in this
respect, it is a greener solution).
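As a rough back-of-envelope sketch of that utilization argument (the client count and load figures below are invented for illustration, not taken from any real workload):

clients = 10
peak_per_client = 100   # assumed peak load per client, in arbitrary work units
avg_per_client = 10     # assumed average load per client

# Dedicated boxes: each client buys hardware sized for its own peak.
dedicated_capacity = clients * peak_per_client
dedicated_utilization = (clients * avg_per_client) / dedicated_capacity   # 0.10

# Shared grid: if peaks rarely coincide, capacity can be sized closer to
# the sum of the averages plus some headroom.
shared_capacity = clients * avg_per_client * 2
shared_utilization = (clients * avg_per_client) / shared_capacity         # 0.50

print(dedicated_utilization, shared_utilization)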


-----Original Message-----
From: web400-bounces@xxxxxxxxxxxx on behalf of Joe Pluta
Sent: Sat 8/9/2008 3:09 PM
To: Web Enabling the AS400 / iSeries
Subject: Re: [WEB400] What's the latest thinking of the best two or three web development languages/environ...

Thanks, Ralph. I was going to reply here, but I'm glad you beat me to it.

The idea that having multiple servers somehow magically provides
redundancy is pretty silly. Instead, as you point out, it actually
provides *multiple* failure points. That is, unless each of those
servers also has redundancy, at which point you're beginning to get
silly amounts of hardware.
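To put a rough number on the multiple-failure-points argument (the per-server availability below is an assumption for the sake of the example, not a measured figure): if an application is split across N servers and each one is a single point of failure for its own piece, the whole thing is up only when every server is up, so overall availability is the product.

per_server_availability = 0.99   # assumed, not measured

for n in (1, 5, 10):
    print(n, round(per_server_availability ** n, 4))
# 1 server   -> 0.99
# 5 servers  -> ~0.951
# 10 servers -> ~0.9044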

And if those multiple applications are sharing data, then either every
request for data goes over the network, or the data is redundant across
all the servers. The former is a performance nightmare; the latter is an
administration (and cost) overhead.
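A crude way to picture those two options (the helper names and the millisecond figures below are invented for illustration, not benchmarks):

servers = 5
network_read_ms = 1.0    # assumed LAN round trip to the server that owns the data
local_read_ms = 0.05     # assumed read against a local replica
write_ms = 1.0           # assumed cost to apply one write on one server

def cost_remote(reads, writes):
    # Option 1: every read and write crosses the network to the owning server.
    return (reads + writes) * network_read_ms

def cost_replicated(reads, writes):
    # Option 2: reads are local, but every write must be applied on all replicas.
    return reads * local_read_ms + writes * write_ms * servers

print(cost_remote(1000, 100))      # 1100.0 ms spent on the wire
print(cost_replicated(1000, 100))  # 550.0 ms, plus the replication machinery to run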

The server farm model is a pretty poor idea from an enterprise
standpoint, except in the case of something like Google, with a single
application spread among a gazillion servers. Notice how the term "grid
computing" has pretty much disappeared? That's because it's a bad idea
for most applications. (Not to mention it's not "green" <grin>).

Joe

And if any one of multiple servers goes down for some reason, you are in
better shape to continue operations because... why?

Because you have redundancy for multiple servers? Why is that easier /
better than redundancy for one server?

There is no balance; there are only multiple points of failure, often
with a mix of more fragile OSes when Windows is involved.

rd


Walden H. Leverich wrote:

One of the advantages of having an as/400 is that you only need one box.

True. But you need to balance that against the fact that one of the
disadvantages of having an as/400 is you only have one box. Upgrading?
Patching? Crashed? Your entire enterprise is offline. Smart?




