Fred,

Thanks for this.  The numbers make absolute sense as to why
my results are as they are.  And it is what I figured as
well.

In my case I want as many simultaneous tasks running at one
time as possible.  I know that a single processor machine
really can't act as 50 separate CPUs (whether jobs or
threads), but it's close.

Thanks again!

Brad

On Wed, 27 Feb 2002 08:27:49 -0600
 Fred Kulack <kulack@us.ibm.com> wrote:
>
> DISCLAIMER: These numbers are very much approximations from one
> particular release under one particular circumstance, and even then I
> may be misremembering them. I do not have the numbers in front of me
> and cannot immediately recreate them.
>
> Ok. From some of the testing we have done in the past, here's what I
> recall:
>
> o    Thread creation and waiting for the thread (ignoring the started
>      thread's runtime itself) takes about 32,000 RISC instructions and
>      doesn't vary too much.
>
> o    Job creation takes somewhere between 350,000 and more than
>      1,000,000 RISC instructions depending on what mechanism you're
>      using to start the job. i.e. spawn is relatively lightweight
>      because it simply copies lots of stuff from parent to child. But
>      spawn can inherit descriptors and other stuff, making it more
>      expensive. SBMJOB is relatively heavy. Adding various jobq, jobd
>      and other stuff to modify behavior adds more.
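
A minimal sketch of the two creation paths above, assuming POSIX threads
and the ILE C spawn() API. The spawned program path, library, and worker
routine below are hypothetical placeholders, and the comments only restate
the rough figures from the post.

/* Sketch only: contrasts thread creation with job creation via the
 * spawn() API.  The spawned program path and the worker routine are
 * hypothetical placeholders. */
#include <pthread.h>
#include <spawn.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>

static void *worker(void *arg)
{
    /* The started thread's own runtime is ignored in the figures
     * above; creating and joining it is the ~32,000-instruction part. */
    (void)arg;
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pid_t pid;
    struct inheritance inherit;
    char *child_argv[] = { "WORKER", NULL };   /* hypothetical child program */
    char *child_envp[] = { NULL };

    /* Thread creation: relatively cheap and fairly constant. */
    if (pthread_create(&tid, NULL, worker, NULL) == 0)
        pthread_join(tid, NULL);

    /* Job creation via spawn(): heavier, since descriptors, environment
     * and other state get copied or inherited from parent to child.
     * SBMJOB, job queues, job descriptions, etc. add more on top. */
    memset(&inherit, 0, sizeof(inherit));
    pid = spawn("/QSYS.LIB/MYLIB.LIB/WORKER.PGM",   /* hypothetical path */
                0, NULL, &inherit, child_argv, child_envp);
    if (pid != -1)
        waitpid(pid, NULL, 0);

    return 0;
}
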
>
> Ultimately, however, the choice of using jobs or threads does not
> really come down to these measurements.
> WHY? Because, in any server that is more than a toy, you should be
> pooling threads or jobs (whichever you use) just like you pool JDBC
> connections.
>
> o    A pool of threads or jobs waiting for work from a queue.
>      Enqueueing work onto the queue and dispatching the thread/job
>      using a condition variable can be 1000 or LESS RISC instructions
>      if you optimize it. i.e. lock a mutex, enqueue data on a shared
>      list, signal a condition variable to wake the child thread/job,
>      unlock the mutex.
>      This is REAL cheap.
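
A minimal sketch of that dispatch path, assuming POSIX threads. The work
structure, function names, and the simple LIFO list are illustrative
choices, not something from the original post.

/* Sketch of the pool dispatch path: the producer locks a mutex, puts
 * work on a shared list, signals a condition variable to wake one
 * pooled worker, and unlocks the mutex. */
#include <pthread.h>
#include <stdlib.h>

struct work {                        /* one queued unit of work */
    struct work  *next;
    void        (*run)(void *);
    void         *arg;
};

static struct work    *work_list  = NULL;
static pthread_mutex_t work_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  work_cond  = PTHREAD_COND_INITIALIZER;

/* Producer side: the roughly-1000-instruction path described above. */
void enqueue_work(void (*run)(void *), void *arg)
{
    struct work *w = malloc(sizeof(*w));
    if (w == NULL)                        /* sketch: no real error handling */
        return;
    w->run = run;
    w->arg = arg;

    pthread_mutex_lock(&work_mutex);      /* lock the mutex             */
    w->next = work_list;                  /* enqueue on the shared list */
    work_list = w;
    pthread_cond_signal(&work_cond);      /* wake one child thread/job  */
    pthread_mutex_unlock(&work_mutex);    /* unlock the mutex           */
}

/* Worker side: each pooled thread loops here waiting for work. */
void *pool_worker(void *unused)
{
    struct work *w;
    (void)unused;

    for (;;) {
        pthread_mutex_lock(&work_mutex);
        while (work_list == NULL)                      /* nothing queued */
            pthread_cond_wait(&work_cond, &work_mutex);
        w = work_list;                                 /* take one item  */
        work_list = w->next;
        pthread_mutex_unlock(&work_mutex);

        w->run(w->arg);                /* run the work outside the lock */
        free(w);
    }
    return NULL;
}
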
>
> It really comes down to things like how much shared data there is
> between the tasks and how much locking contention there is between
> the tasks (from both an application and OS perspective).
> And of course, what the overall architecture, application management
> requirements, and system memory utilization of your whole environment
> need to look like.
>
> From your description it sounds like you're not sharing much between
> the tasks. Without knowing more details, that sounds relatively well
> suited for threads.
> The backend server is probably written using processes? In that case,
> the threaded client would probably go faster than the server.
>
>
>

