Actually, CGI does scale better on an i than on *nix systems *if* you take advantage of named activation groups and service programs. The reason is that, as long as you don't run in the *NEW activation group, ILE programs and service programs only take the hit of preparing the program to run the first time they are used in a job, vs. the typical *nix way of spawning a new process that has to go out and load everything for each request.

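To make that concrete, here is a minimal CGI sketch in C. The expensive_setup() helper and the initialized flag are purely illustrative (they are not part of CGIDEV2 or any IBM API); the point is just where the one-time cost lands. Under classic *nix CGI the server forks a fresh process for every request, so the static flag is always zero and the setup runs every time. On the i, with the program compiled into a named activation group, the CGI job and its static storage are reused, so the setup runs only on the first request that job handles.

    /* cgi_demo.c - minimal CGI program; the one-time setup is hypothetical */
    #include <stdio.h>
    #include <stdlib.h>

    static int initialized = 0;        /* survives only if the activation survives */

    static void expensive_setup(void)  /* e.g. open files, connect to the database */
    {
        initialized = 1;
    }

    int main(void)
    {
        if (!initialized)
            expensive_setup();         /* classic *nix CGI: runs every request;
                                          i + named ACTGRP: first request in the job */

        const char *query = getenv("QUERY_STRING");

        printf("Content-type: text/plain\r\n\r\n");
        printf("Hello from CGI. QUERY_STRING=%s\n", query ? query : "");
        return 0;
    }
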
*nix systems do have something called FastCGI, which basically pre-starts several worker processes (or threads) and pre-loads the CGI programs into them. I have a little bit of experience with that, and besides having to write to a specific API, it doesn't seem to be anywhere near as reliable as running a job on an i (if the CGI program dies, the worker doesn't always die), and when you do need an extra worker, it takes much longer to start up than a normal thread would. The app I have this experience with was a bear to tune, because you had to balance the memory usage of the pre-started workers against overall performance and the ability to handle additional workers starting up.

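For reference, the "specific API" is the FastCGI accept loop. Here is a minimal sketch in C using the fcgi_stdio.h wrapper from the FastCGI development kit (built with something like cc fcgi_demo.c -lfcgi); the worker is started once, loads whatever it needs up front, and then handles one request per pass through the loop, which is what lets the pre-started workers keep their loaded state between requests.

    /* fcgi_demo.c - minimal FastCGI worker using the fcgi_stdio wrapper */
    #include "fcgi_stdio.h"

    int main(void)
    {
        int requests = 0;

        /* FCGI_Accept() blocks until the next request arrives and
           returns < 0 when the server tells this worker to shut down. */
        while (FCGI_Accept() >= 0) {
            requests++;
            printf("Content-type: text/plain\r\n\r\n");
            printf("Request %d handled by this worker\n", requests);
        }
        return 0;
    }
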
One other key difference is that the i handles mixed workloads a lot better than pretty much anything else short of a mainframe, and with CGI programs you end up needing something that is fast at both I/O and running programs. With the *nix systems I've been around, that seems like a challenge, whereas on the i it pretty much just works without a lot of tuning.

Matt
------------------------------
date: Wed, 6 Oct 2010 15:08:28 -0600 (MDT)
from: James Rich <james@xxxxxxxxxxx>
subject: Re: [WEB400] Which scales better? J2EE, PHP, or CGIDEV2?

<snip>

There is nothing unique or special about the CGIDEV2 environment described
above on the IBM i vs. CGI programming on *nix. In both cases available
resources are automatically allocated to separate jobs. It seems to me
that a possible differentiator in performance is how well the operating
system performs a context switch (changes the active process on a CPU).

I believe I have read that CGI programming doesn't scale particularly
well. If that is the case, I would expect no particular advantage to
doing CGI programming on the i versus anything else, since the i isn't
doing anything particularly different from what other systems do. If
CGI programming in general doesn't scale well, then I would suppose
that CGI programming on the i doesn't scale well, either. Conversely,
if CGI programming scales perfectly well on other systems, I would
expect the same on the i.

James Rich

if you want to understand why that is, there are many good books on
the design of operating systems. please pass them along to redmond
when you're done reading them :)
- Paul Davis on ardour-dev


------------------------------

