But you sure as hell can use multiple cores in Java.
Thanks for the reference, John. And I agree that Java can run pools of
parallel tasks via the "Callable" interface and "consume" CPU on multiple
cores. But it appears that even your reference illustrates the futility of
that interface.
In the example cited:
A "Task" appends a character to a string in a loop 20K times in order to
consume CPU. When a pool of 50 such tasks is run sequentially, each instance
completes in an elapsed time of 1.27 seconds, which includes 1 second of
"sleep" time. When the pool is run in parallel, each instance completes in
approximately 11 seconds.
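For readers who haven't seen the reference, here is a minimal sketch of the kind of pool the example describes: 50 Callable tasks submitted to an ExecutorService, each doing busy-work string appends and then sleeping. The class and method names are my own; the 20K appends and the 1-second sleep are the figures from the cited example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class TaskDemo {
    // Hypothetical reconstruction of the cited "Task": append a character
    // to a String 20K times (busy work), then sleep for the given time.
    static class Task implements Callable<Integer> {
        private final long sleepMillis;
        Task(long sleepMillis) { this.sleepMillis = sleepMillis; }

        public Integer call() throws InterruptedException {
            String s = "";
            for (int i = 0; i < 20_000; i++) {
                s += 'x'; // deliberately wasteful String concatenation to consume CPU
            }
            Thread.sleep(sleepMillis); // the 1 second of "sleep" in the example
            return s.length();
        }
    }

    // Submit nTasks Tasks to a fixed-size pool and return how many completed.
    static int runAll(int poolSize, int nTasks, long sleepMillis) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        List<Future<Integer>> futures = new ArrayList<>();
        for (int i = 0; i < nTasks; i++) {
            futures.add(pool.submit(new Task(sleepMillis)));
        }
        int done = 0;
        for (Future<Integer> f : futures) {
            if (f.get() == 20_000) done++; // get() blocks until each task finishes
        }
        pool.shutdown();
        return done;
    }

    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        int done = runAll(50, 50, 1000); // the example's pool of 50 tasks
        System.out.printf("%d tasks in %.2f s%n",
                done, (System.nanoTime() - start) / 1e9);
    }
}
```

The per-instance elapsed times quoted above would of course vary with hardware and pool size; the sketch is only meant to show the shape of the benchmark being discussed.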
Why would a programmer consciously "throttle" tasks that ordinarily require
only about 0.27 seconds of CPU time each, making them take 40+ times longer
to complete, just to prove a point about Java's ability to allocate work to
multiple cores?
In the example cited, it took a pool of 50 submitted Callable Tasks to
drive 8 cores to 100% utilization. Why couldn't Java drive 8 cores to 100%
with a pool of just 8 Callable Tasks?
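One plausible answer is that each task in the cited example spends most of its elapsed time asleep, so a thread per core sits idle most of the time. A minimal sketch of the alternative, purely CPU-bound case would look like the following; the class name, the spin workload, and the iteration count are my own assumptions, not from the reference.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CoreSaturation {
    // Pure CPU-bound busy work: no sleeps, no I/O.
    static long spin(long iterations) {
        long acc = 0;
        for (long i = 0; i < iterations; i++) {
            acc += i ^ (i << 1);
        }
        return acc;
    }

    public static void main(String[] args) throws Exception {
        // One task per available core, in a pool sized to the core count.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        List<Future<Long>> results = new ArrayList<>();
        for (int i = 0; i < cores; i++) {
            results.add(pool.submit(() -> spin(200_000_000L)));
        }
        for (Future<Long> f : results) {
            f.get(); // wait for every task
        }
        pool.shutdown();
        System.out.println("All " + cores + " CPU-bound tasks completed");
    }
}
```

Whether this actually pins every core at 100% on a given machine is something each reader would have to verify with a system monitor; the point of the sketch is only that nothing in the Callable interface itself forces a 50-to-8 ratio of tasks to cores.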
Should application programmers take responsibility for allocating work to
multi-core servers? Isn't that the responsibility of the OS?
Regarding Ronald Luijten's comment about Java not supporting multi-cores at
all, no that didn't have anything to do with IBM i. It was just an
observation about Java.
I understand that the total elapsed time to complete 50 Task instances is
greater when run sequentially than in parallel (submitted). But how might
that apply to the question at hand, in Tim's original post?
All benchmarks of Java web workloads indicate that you must run multiple
application server instances to fully utilize multiple cores. The ratio is
pretty much one-to-one, even though the application server may be
configured with, say, 100 active threads.
Nathan.