My take on the paragraph: I believe they had an application server defined within WebSphere. As they increased the load, the server took good advantage of the second processor but received little benefit from subsequent processors. Perhaps because of multithreading or some other circumstance, it had two 'jobs' that consumed most of the processor time, so additional processors offered no benefit. At that point, a logical next step is to create another software instance of that same WebSphere server on the same machine. For whatever reason, multiple instances must have created integration issues they wished to avoid, so they went with loading completely separate copies of WebSphere on physically separate servers. WebSphere is designed to expand via multiple instances on the same hardware, so I don't know what the circumstances were here.

That's my guess,
Andy Nolen-Parkhouse

> > >Rick
> > >
> > >The issue of maxing out on capacity was faced early on. Passer said
> > >stress tests were performed at the IBM iSeries headquarters, in
> > >Rochester, Minnesota. From that, it was determined that adding a
> > >second processor would be "an easy fix." The next round of increasing
> > >capacity is more complicated. "After two CPUs in the iSeries," Passer
> > >explained, "the application server has too much overhead and doesn't
> > >really offer a big advantage over multiple application server
> > >instances. In all cases, it means more CPUs, but there are major
> > >software issues one gets bogged down with."
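For what it's worth, the "multiple instances on the same hardware" approach Andy describes is typically done through WebSphere's wsadmin scripting client. The sketch below shows the general shape in Jython; the node and server names are hypothetical, the exact AdminTask syntax varies by WebSphere version, and this is only an illustration of the technique, not the configuration the site in the article used.

```python
# Run inside WebSphere's scripting client, not standalone Python:
#   wsadmin -lang jython -f create_instance.py
# Node and server names below are hypothetical examples.

nodeName = 'myNode'      # node already hosting the first instance
newServer = 'server2'    # second instance sharing the same hardware

# Define a new application server on the same node from the default template
AdminTask.createApplicationServer(nodeName,
    '[-name %s -templateName default]' % newServer)

# Persist the configuration change to the master repository
AdminConfig.save()

# Start the new instance alongside the existing one
AdminControl.startServer(newServer, nodeName)
```

Each such instance is a separate JVM, which is what lets the workload spread across more processors once a single server process stops scaling.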