


BTW, a TCP connection isn't limited to serving dynamic pages; the same applies
to loading static files, where each file fetch represents a TCP connection to
the HTTP server.

On Wed, Oct 14, 2015 at 2:51 PM, Henrik Rützou <hr@xxxxxxxxxxxx> wrote:

Let’s try to see how Apache handles incoming CGI requests.



When a new connection is required for a CGI request, Apache searches for the
first idle QZSRCGI job and transfers the request to it. If the maximum number
of QZSRCGI jobs is reached, the request is queued until a QZSRCGI job is
freed or the maximum wait time is reached, causing the server to issue an
HTTP error.



In other words, Apache has a number of single-threaded QZSRCGI jobs, which
lets it run concurrent requests (in different jobs) and thereby take
advantage of all the cores in the machine.
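The dispatch policy described above (not Apache's actual code, just a sketch of the idea) can be modeled in a few lines of JavaScript: hand each request to the first idle job, queue when all jobs are taken, and pull from the queue when a job frees up.

```javascript
// Sketch of the dispatch policy described above (not Apache's actual code):
// give each request to an idle worker job, queue it when all are busy.
function makeDispatcher(maxJobs) {
  let busy = 0;
  const queue = [];
  return {
    dispatch(request, handle) {
      if (busy < maxJobs) {
        busy++;             // an idle job was found; it is now busy
        handle(request);
      } else {
        queue.push({ request, handle }); // all jobs taken: queue the request
      }
    },
    release() {             // a job finished; pull the next queued request
      busy--;
      const next = queue.shift();
      if (next) {
        busy++;
        next.handle(next.request);
      }
    },
    queued() { return queue.length; },
  };
}
```

A real server would also enforce the maximum wait time on queued entries and answer them with an HTTP error when it expires, as the paragraph above describes.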



Node.js does it another way. When a request comes in, it starts to process
it when it has time. Since the node.js instance is single-threaded but works
asynchronously, the real processing of a request first starts when another
request goes into a wait, e.g. issues a request to a backend process such as
XMLSERVICE, or finishes and its connection is closed.
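This single-threaded, asynchronous interleaving can be seen in a tiny illustration, where setImmediate stands in for an asynchronous backend call (e.g. to XMLSERVICE): each handler returns immediately, a second request starts before the first backend reply arrives, and the replies are processed later by the event loop.

```javascript
// Illustration of the single-threaded, asynchronous model described above.
// setImmediate stands in for an async backend call (e.g. XMLSERVICE).
const events = [];

function handleRequest(name) {
  events.push(`start ${name}`);
  // The "backend reply" is processed later, when the event loop has time.
  setImmediate(() => events.push(`backend reply for ${name}`));
  events.push(`yield ${name}`); // handler returns; the one thread is free again
}

handleRequest('A');
handleRequest('B'); // B starts before A's backend reply has been handled
```

Both requests start and yield first; only then does the event loop deliver the two backend replies, in order.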



The trick is to limit node.js to accepting only a certain number of new
requests and to queue the rest, since it is possible to overload a single
node.js instance, which basically has only one core at its disposal.



Let’s say you mix NGiNX and node.js and set each of your node.js instances
to accept only 10 concurrent clients at a time. A node.js instance can then
signal back to NGiNX “I’m busy” simply by turning its TCP connection
listener off, and NGiNX will then try another node.js instance until all are
busy, at which point it queues the incoming TCP requests. Besides that, the
load balancer in NGiNX will distribute load up front across a number of
node.js instances, thus not overloading a single instance and thereby
minimizing the chance that any single instance reaches its threshold.
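A hypothetical NGiNX configuration along these lines might look as follows; the instance addresses and ports are made up, and note that the max_conns upstream parameter only became available in open source nginx from version 1.11.5.

```nginx
# Hypothetical sketch: spread load across several node.js instances
# and cap in-flight connections per instance (addresses are made up).
upstream node_backend {
    least_conn;                          # prefer the least-busy instance
    server 127.0.0.1:3001 max_conns=10;  # per-instance cap, nginx 1.11.5+
    server 127.0.0.1:3002 max_conns=10;
    server 127.0.0.1:3003 max_conns=10;
}

server {
    listen 80;
    location / {
        proxy_pass http://node_backend;
    }
}
```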



This method of course only applies to a node.js HTTP server, not to node.js
used as a websocket server, where the TCP sockets, unlike in traditional
HTTP connections, are kept open.



In any case you will need some load-balancing feature that can balance load
across different server instances and thereby balance load between cores.

On Wed, Oct 14, 2015 at 2:45 PM, Nathan Andelin <nandelin@xxxxxxxxx>
wrote:

I am sure Kelly is talking about the top-level routing to the app and how
to handle that, as opposed to the question of internal routing to "screens"
within the app.


Kelly is asking about a mix of both types of routing - External routing
within a reverse-proxy / load-balancer as well as internal routing within
Node's own HTTP service.

How would one configure both so that a broadly-scoped system performs well,
scales well, and doesn't introduce too many HTTP restarts?
--
This is the Web Enabling the IBM i (AS/400 and iSeries) (WEB400) mailing
list
To post a message email: WEB400@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: http://lists.midrange.com/mailman/listinfo/web400
or email: WEB400-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives
at http://archive.midrange.com/web400.




--
Regards,
Henrik Rützou

http://powerEXT.com








This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].
