Please see my inline response.
On Sun, Mar 18, 2018 at 5:39 AM, Henrik Rützou <hr@xxxxxxxxxxxx> wrote:
Nathan,
first of all, HTTP doesn’t maintain open connections in a session-like
fashion; it opens a connection when a request comes, processes it and
closes the connection.
The designers of Node.js make a point that the one thread that manages the
event loop is the only thread that "opens" sockets and maintains a list of
all the "descriptors" that are returned, whereas other HTTP servers launch
a new thread for each open socket. They claim that Node.js therefore
uses fewer resources and is thus more "scalable" than other HTTP servers.
They claim that Node.js can thus maintain more connections than other HTTP
servers, because it uses just one thread. That's the context for their
definition of "scalability".
In general, if we talk about RPG/CGI, one QZSRCGI job may serve hundreds of
“active concurrent” users in a stateless environment, and so can node.js.
Yes, and I wasn't suggesting otherwise. I was just explaining the rationale
for the Node.js scalability claim: that ALL connections are made to sockets
that are managed by a single Node.js thread. That is quite a narrow
definition of "scalability", and different from how the term "scalability"
has been used traditionally.
“stateless” is different than “stateful” like 5250 sessions, but in terms of
server resources there is no difference. 10,000 5250 users may have 10,000
jobs, but most of the time their jobs are idle waiting for I/O that doesn’t
require server resources.
Yes, I understand that stateless is the opposite of stateful. Idle
processes (Jobs) may not be consuming CPU, but they do consume memory. And
that is the basis of Node's scalability claim in regard to maintaining
thousands of open sockets in a single thread. That's not to say that all of
the open sockets have accepted clients at the same time.
If all 10,000 5250 users press enter at the same time, you would see very
long response times because all their jobs would become active at the same
time, but in the real world most jobs do nothing.
The average response time would depend on the number of cores the server
has. The IBM i platform would automatically utilize all the cores, which is
a different type of scalability than the one that Node.js is known for.
Again, the basis for Node's scalability claim is the number of sockets that
can be maintained overall - not that a Node instance scales to utilize all
available cores. I hope that the difference is clear.
Do the math: if every 5250 request took 1 sec of server resources, the last
5250 user would have to wait approx. 3 hours to get an answer.
A 5250 request would likely consume about a millisecond (rather than 1
second) of CPU time. If the server had 64 cores, the math suggests that the
overall response time would be 10,000 × 1 ms ÷ 64 cores ≈ 156 milliseconds.
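Spelling out that back-of-envelope arithmetic (the 1 ms per request and 64 cores are the assumptions from the paragraph above, not measured figures):

```javascript
// Back-of-envelope: 10,000 simultaneous 5250 requests, each
// consuming ~1 ms of CPU, spread evenly across 64 cores.
const requests = 10000;
const cpuMsPerRequest = 1;
const cores = 64;

const totalCpuMs = requests * cpuMsPerRequest; // 10,000 ms of CPU work
const elapsedMs = totalCpuMs / cores;          // ideal wall-clock time

console.log(elapsedMs); // 156.25
```

This ignores dispatch overhead and contention, so it is a lower bound, not a prediction.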
It is correct that node.js runs single threaded, but here async/await kicks
in, because many of the asynchronous modules are able to run in other
threads while the main thread in node.js continues to process other requests.
The "other threads" you are referring to is called the Node.js thread pool,
which is used for things that require "blocking", such as file I/O. The
default is to configure 4 threads in that pool. There is generally good
reason to not go above that number. But rather to activate more than one
Node.js instance, and use a load-balancer to route requests to other
instances.
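A small sketch of that division of labor: file I/O goes to libuv's pool threads (size 4 by default, adjustable with the UV_THREADPOOL_SIZE environment variable set before starting node), while the event-loop thread keeps running.

```javascript
// Sketch: "blocking" work such as file I/O is dispatched to libuv's
// thread pool (default size 4; set UV_THREADPOOL_SIZE in the
// environment before starting node to change it), leaving the
// event-loop thread free to accept new requests.
const fs = require('fs');

fs.readFile(__filename, (err, data) => {
  // Runs later, once a pool thread has finished the read.
  if (err) throw err;
  console.log('read', data.length, 'bytes via the thread pool');
});

console.log('event loop continues immediately'); // prints first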
In other words, when you fire a SQL statement against DB2 this runs in
another job (and thereby uses other free resources on the system) while
node.js continues to process other pending requests.
I agree.
In order to get the maximum out of node you are also able to cluster the
one thread into a number of threads running in a single instance of node.js
and thereby spread the processes over a given number of cores in your
system. This will scale the performance of node to a maximum of (number of
cores * single thread performance), while the system isn’t brought down,
because each node.js thread still will compete with other processes on the
system for core resources.
That is like starting a pool of PHP server instances, or a pool of Java web
application server instances. Most shops limit the number of server
instances to the number of CPU cores or core threads. These types of jobs
consume a lot of memory. Each Job is essentially hosting a virtual machine
and a language runtime environment.
A 5250 Job runs in the same address space and utilizes the same language
environment as the IBM i native virtual machine, which may support 10,000
of those types of Jobs without any problem. You'd probably want to limit
the number of Node.js Jobs to the number of available core threads.
I have made an example that shows how node.js can take full advantage of
all available cores and resources on my i7 4 core/8 logical core 4.5 GHz
machine. It comes with a little **warning** because I had to install water
cooling to run the test with a reasonable CPU package temperature.
http://powerext.com/nodeCluster.PNG
Interesting, your screen shot indicates that you're running 8 instances of
Node.js, the same number as your logical CPU cores.
--
This is the Web Enabling the IBM i (AS/400 and iSeries) (WEB400) mailing
list
To post a message email: WEB400@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: https://lists.midrange.com/mailman/listinfo/web400
or email: WEB400-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives
at https://archive.midrange.com/web400.