Richard,
first you have to learn the basic difference between a socket, a core and a
thread.
A socket is the physical socket (hardware) the processor plugs into.
In the old days we only had one core (the processor) per socket; today we may
have many. A core is, in other words, a physical unit on the chip in the
socket, and it processes instructions in parallel with the other cores. The
speed at which a core processes instructions is given as its clock frequency
in GHz, regardless of how many threads it is able to process.
A program never shares a core with another program. A program is given
a timeslice during which it can use the core, until either the timeslice is
reached or the core is given an instruction for peripheral I/O, at which
point the core is freed from the program so other waiting programs can be
delegated the core.
To avoid the latter, a program may feed instructions to the core in a number
of threads, each of which is basically a pipeline of instructions. The core
will then process these instructions in arbitrary order, but never at the
same time, and the program may keep its timeslice active, since one thread
may wait for I/O while another continues to process.
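The point that one thread can wait on I/O while another keeps the core busy
can be sketched in Python (a minimal illustration, not IBM i specific; the
sleep stands in for a peripheral I/O wait):

```python
import threading
import time

results = {}

def io_bound():
    # Stands in for a peripheral I/O wait; the core is free meanwhile.
    time.sleep(0.2)
    results["io"] = "data from disk"

def cpu_bound():
    # Keeps the core busy while the other thread waits on I/O.
    results["total"] = sum(range(1_000_000))

start = time.monotonic()
threads = [threading.Thread(target=io_bound), threading.Thread(target=cpu_bound)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start

# The two activities overlap, so the total time is close to the longest
# single wait (0.2 s), not the sum of both.
print(results["total"])  # 499999500000
```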
Very few programs are written for multithreaded processing, since the
programmer has to decide which thread to use basically at statement level,
where the result of one program statement (which may translate into thousands
of processor instructions) has to deliver its result to the next statement in
order for it to be processed.
We also have programming languages that are single-threaded but try to
simulate several threads by processing complex instructions asynchronously -
node.js is an example.
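That single-threaded asynchronous style can be sketched with Python's
asyncio, which works on the same event-loop principle as node.js (the delays
here are simulated I/O, not real requests):

```python
import asyncio

async def fetch(name, delay):
    # Simulated I/O; while this 'await' is pending, the single
    # thread is free to run other coroutines.
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    # Both simulated requests run concurrently on one thread.
    return await asyncio.gather(fetch("a", 0.1), fetch("b", 0.1))

print(asyncio.run(main()))  # ['a done', 'b done']
```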
In isolation this technique may result in better performance for the node.js
job, but node.js will 'steal' processing time from other programs that are
waiting for a core in the main loop of active jobs.
RPG has a built-in function for freeing the core and will never hold a core
longer than one main cycle takes.
For some unknown reason the default timeslice on IBM i (2 seconds for inter-
active jobs and 5 seconds for batch jobs) has never been changed in its
classes, but if a job takes too many processor resources it may be changed
with CHGJOB TIMESLICE(nnnn), in milliseconds, after which the OS will
terminate the lease of the core regardless of how clever the programmer
thought he was in stealing core time from others.
Regarding the processing of CGI programs, it is first important to know that
there is no such thing - just show me an object type that indicates *CGI.
There are only ILE programs that run under the QZSRCGI jobs, and each
QZSRCGI job holds one QTEMP for all the programs it runs.
Normally the problem you describe appears the opposite way, because people
think that all QZSRCGI jobs share the same QTEMP, which they don't.
And since a request to run a program through the CGI interface selects the
QZSRCGI job that handles the request at random, nothing guarantees that the
content of QTEMP is the same between HTTP calls.
So the only way - and the only secure way - to store server-side variables
is in files that may be shared by all programs running under any QZSRCGI
job.
Cookies are not the solution either, since cookies are small text values that
may be changed by anybody and can thereby give access to data the user isn't
allowed to see - unless, of course, you calculate a 'salted' SHA-256 hash
value of the entire content.
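A minimal sketch of that idea in Python (the secret key and cookie fields are
made up for illustration; a keyed HMAC-SHA-256 digest serves as the 'salted'
hash, so tampering on the client side is detectable on the server):

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # hypothetical key, never sent to the client

def sign(cookie_value: str) -> str:
    # Append a keyed SHA-256 digest of the entire content.
    digest = hmac.new(SECRET, cookie_value.encode(), hashlib.sha256).hexdigest()
    return f"{cookie_value}|{digest}"

def verify(signed: str) -> bool:
    # Recompute the digest and compare in constant time.
    value, _, digest = signed.rpartition("|")
    expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(digest, expected)

cookie = sign("user=RICHARD;role=admin")
print(verify(cookie))                               # True
print(verify(cookie.replace("admin", "qsecofr")))   # False
```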
Besides that - in our modern world of microservices etc. a program library
may have 2-500 programs that may be called through the CGI interface, where
any access to a service is controlled by a set of user rules, and there is no
way you are able to build that into a cookie.
In fact, since modern web programs run based on REST/CRUD, the security
should be made as a combination of access control on the REST/CRUD services
and a row-security facility.
On Sat, Mar 4, 2017 at 6:57 AM, Richard Schoen <
Richard.Schoen@xxxxxxxxxxxxxxx> wrote:
We have an interesting scenario where a CGI job is using a data area in
QTEMP for a temp value in a web app and there appears to be a random issue
where that value gets clobbered by another job. (At least that's what we
think is happening.)
My understanding is that each CGI call is synchronous and each
thread/program active job instance has its own copy of QTEMP library so one
call on a particular CGI job thread would have to complete before it can be
re-used by another call. Or if there is only 1 CGI thread/job subsequent
calls will block until the executing call has completed.
Is my understanding of this correct ?
Would there be any way for two CGI calls at the same time to overlap the
QTEMP libs ?
Hope this makes sense.
Any thoughts appreciated.
Thanks.
Regards,
Richard Schoen
Director of Document Management
e. richard.schoen@xxxxxxxxxxxxxxx<mailto:richard.schoen@xxxxxxxxxxxxxxx>
p. 952.486.6802
w. helpsystems.com<http://www.helpsystems.com/>
http://www.linkedin.com/in/richardschoen
http://www.twitter.com/richardschoen
--
This is the Web Enabling the IBM i (AS/400 and iSeries) (WEB400) mailing
list
To post a message email: WEB400@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: http://lists.midrange.com/mailman/listinfo/web400
or email: WEB400-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives
at http://archive.midrange.com/web400.
As an Amazon Associate we earn from qualifying purchases.