I don't quite follow this train of thought. Let me explain.

Normally there is one request per job. If you're using persistence with
client session IDs, it's the same thing. If you're using persistence built
into the server, it's one job per "visitor". (I've never liked that type
of persistence.)


One request per job? To be clear, I think you mean that each request is
handled by an HTTP BCI job. But which job? There may be thousands of BCI
jobs running in a high-volume site. You may have thousands of concurrent
users requesting any *PGM at any moment. HTTP threads indiscriminately
route requests to the next available job. Over time, every *PGM becomes
active in every job.


The question I have is, how is one job per request a "memory leak"? When
you say something like "all of your CGI programs become activated in all
jobs," it makes it sound like you're leaving *INLR off all the time. :)


In regard to me leaving *INLR off, remember that I'm not using CGI. But
yes, I believe it's common for CGI programmers to leave *INLR off.

(I've never been a fan of that either when its sole purpose is "speed".)
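For readers outside RPG, *INLR is the "last record" indicator the thread keeps returning to. A minimal sketch of the trade-off, in free-form ILE RPG (the program structure and the file name CUSTPF are invented for illustration):

```
**free
// Hypothetical cycle-main RPG CGI program; CUSTPF is an invented
// keyed file used only to illustrate the open/close trade-off.
dcl-f CUSTPF usage(*input) keyed;

// ... parse the CGI input, read CUSTPF, write the HTTP response ...

// Returning with *INLR on closes CUSTPF and frees the program's
// static storage, so the next call pays the open/initialize cost
// again. Leaving it *OFF (the common "speed" choice discussed
// above) keeps the file open between requests in whichever BCI
// job happens to run the program.
*inlr = *on;
return;
```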


I understand that you're not a fan of persistent CGI. Neither am I. I
gather that you set *INLR on (occasionally?) so the *PGM's files may be
closed? You never run into requirements to keep files or SQL cursors open?

If you're like most CGI programmers, you compile your programs to run in
the *CALLER activation group. So they remain active in the job even though
you may explicitly close files.
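As a sketch of the compile-time choice being described (the library, program, and member names here are hypothetical), the activation group is fixed when the program is created:

```
/* Typical CGI setup: the program runs in its caller's activation  */
/* group, so it stays activated in the BCI job after it returns.   */
CRTBNDRPG PGM(MYLIB/MYCGI) SRCFILE(MYLIB/QRPGLESRC) SRCMBR(MYCGI) +
          DFTACTGRP(*NO) ACTGRP(*CALLER)

/* Alternative: a *NEW activation group is created per call and    */
/* destroyed at return, trading per-request overhead for cleanup.  */
CRTBNDRPG PGM(MYLIB/MYCGI) SRCFILE(MYLIB/QRPGLESRC) SRCMBR(MYCGI) +
          DFTACTGRP(*NO) ACTGRP(*NEW)
```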

Or are you referring to access paths, activation groups, etc. staying
open/active in each job? If so, is calling that a memory leak really fair?
It makes it sound so... so... dirty! Haha... Most IT folks think of a
memory leak as poor pointer handling in C (and other languages), not job
resources staying available.


It has nothing to do with poor programming. It's just that you, as a
developer or operator or administrator, have no control over your programs
being activated in every BCI job, other than compiling your programs to
run in *NEW activation groups. The CGI interface does not provide a way of
deactivating them except to end the HTTP server (some people kill BCI jobs
and later regret it).
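For reference, the "end the HTTP server" escape hatch mentioned above looks like this in CL; the instance name APACHEDFT is an assumption, not something from the thread:

```
/* Ending the server instance ends its BCI jobs, which is the only */
/* supported way to clear programs activated in those jobs.        */
ENDTCPSVR SERVER(*HTTP) HTTPSVR(APACHEDFT)
STRTCPSVR SERVER(*HTTP) HTTPSVR(APACHEDFT)
```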

I've put together some large systems using CGI and Apache over the years,
and rarely if ever have they experienced anything that would be called a
memory leak, even with the servers running for months on end without
restarting them or an IPL. The only things we end up "rebooting" are PCs
in the chain that handle things the IBM i doesn't (for example, redacting
documents for the web; I don't like it, but it's a necessity in this case
and a huge bottleneck, and I don't have a say in that part of the
application).


ERP-class systems? How do you find the time to build and support utilities
and large-scale systems ;-)



This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].
