From: Evan Harris
> No, it's your job to define the scope of what is being done and what
> audience is being served.

Okay, I'll try. Let's say the scope includes broadly scoped, highly complex
business applications, like ERP suites. And the audience includes all HTTP
clients (browsers, mobile devices), which may include employees, vendors, or
patrons of the enterprise, plus web service clients.

> The only thing really clear so far is that you believe the i scales
> better because you can "add cores".

Perhaps I've failed to write clearly, or you haven't read my posts. One can
"add cores" to any system. But if that's your first answer, that may be a sign
that your system does NOT scale well. I would suggest that IBM i CGI scales
better than JEE, PHP, and MS .Net because:

- You can configure the IBM i HTTP Server for *NOMAX threads; the threads are
lightweight, and client connections can remain alive.
- CGI programs may be loaded into HTTP server worker jobs once, then remain
active, or not, depending on the activation group option selected for each.
- You can configure IBM i CGI worker jobs to a defined maximum limit.
- You can configure and run multiple separate HTTP server instances for
separate clients.
- RPG programs perform better than the others; they inherently handle more
users and increasing workloads.
- There are more and better work management options under IBM i than under
other virtual machine environments.
- CGI programs run in the same address space as the database, as opposed to
establishing a connection with a remote DB server; there's a lot less latency.
- CGI programs use streamlined interfaces, as opposed to multi-tier
architectures that transfer data from one virtual container to another, to
another, to another...
- CGI programs scale to the capacity of the server automatically; you never
have to deploy them across multiple application server instances, virtual
partitions, or server farms to maximize throughput. But you still have the
option of dividing workloads across multiple physical and logical tiers if
you want.
- Under CGI there is absolutely no need to separate complex business logic
into separate business-logic servers, which add more latency, potential
failure points, and bottlenecks.
- There is a persistent CGI option, which may apply to specific applications.
- I use an architecture where users can release files and free all resources
they may be using by clicking an "Exit" link, which ends a *JOB. That can be a
hidden gem for improving scalability. The system is not dispatching a garbage
collector.
- I use an architecture where developers can deploy a single instance of a
program to serve multiple concurrent users, or choose to launch a separate
instance for individual users, depending on the type of application.
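To make the configuration points concrete, here is a sketch of what an IBM i
HTTP Server (powered by Apache) instance config might look like. This is an
untested illustration, not a working config from my system; the IBM i-specific
CGI directive names and the values shown are from memory, so verify them
against the IBM documentation for your release before using any of this:

```apache
# Sketch of an IBM i HTTP Server (powered by Apache) instance config.
# Directive names/values are illustrative; check the IBM docs for your release.

# Threads are lightweight; size the thread pool per server job.
ThreadsPerChild 100
# Let client connections remain alive between requests.
KeepAlive On

# CGI worker jobs: pre-start a few, and cap the total at a defined maximum.
StartCGI 10
MaxCGIJobs 200

# Persistent CGI, for the applications that use it.
MaxPersistentCGI 50
PersistentCGITimeout 300

# Map a URL path to the library holding the CGI (e.g. RPG) programs.
# MYAPPLIB is a hypothetical library name.
ScriptAlias /cgi-bin/ /QSYS.LIB/MYAPPLIB.LIB/
```

A separate instance config like this can be created per client, which is how
you run multiple isolated HTTP server instances on one system.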

-Nathan




