Joe Pluta wrote:
From: Hans Boldt
But yes, for legacy applications, if your pointers have to be large
to begin with, it does make it easier to scale up applications to
monolithic proportions. In other words, you pay a penalty in advance
for the prospect of easier upgrades in the future.
This to me is truly a red herring. What penalty? More memory? Hardly a
price to pay for easier upgrades. Trust me on this, Hans. From an
application vendor's standpoint, I'd much rather sell a few extra MB of
RAM than tell my client it's a two man-week job to upgrade.
You're right that today it is a small price to pay for easier future
upgrades. The choice is now a no-brainer.
But I remember what it was like in the S/38 days, when RAM and disk
store were vastly more expensive, and that had a big influence on
systems programming on that machine. For example, if you wanted a
list of fullwords, using a linked list took up four times as much
storage as on a more conventional architecture. And so (among other
things) you avoided pointers, even when they were the natural
solution.
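
(A minimal C sketch, purely illustrative and not actual S/38 code, of
where that factor of four comes from: on a 32-bit build, a node holding
a fullword plus a 4-byte pointer is 8 bytes, while a 16-byte space
pointer that has to sit on a 16-byte boundary pushes the same node to
32 bytes. The struct layouts below are assumptions made only for the
arithmetic.)

/* Illustrative only: compare the storage cost of a linked-list node
 * holding one fullword under two pointer sizes. */
#include <stdio.h>
#include <stdint.h>

/* Conventional layout: 4-byte value plus a machine pointer
 * (4 bytes when built for a 32-bit target, so 8 bytes per node). */
struct node32 {
    int32_t value;
    struct node32 *next;
};

/* Stand-in for the S/38 case: a 16-byte pointer that must be aligned
 * on a 16-byte boundary, so the 4-byte value is padded out as well.
 * Total: 32 bytes per node, four times the conventional size. */
struct node38 {
    int32_t value;
    char pad[12];            /* alignment padding before the pointer */
    unsigned char next[16];  /* stand-in for a 16-byte space pointer */
};

int main(void)
{
    printf("conventional node: %zu bytes\n", sizeof(struct node32));
    printf("16-byte-pointer node: %zu bytes\n", sizeof(struct node38));
    return 0;
}
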
I suppose you could say that the S/38 really was 15-20 years ahead
of its time. Now, adding extra RAM and disk is no big deal. But on
the other hand, a linked list of fullwords *still* uses up 4 times
as much memory as on more conventional architectures, and that still
sometimes grates on my programming sensibilities. On my home
computer, I know I should add a few hundred more megs to improve the
performance of the image manipulations that I often do. But really
now, why isn't 128MB RAM enough on a home computer???
Having lots of cheap RAM and disk has interesting consequences that
have yet to play out. Some researchers propose doing away with
databases on hard disks and keeping gigabytes of data in RAM memory.
The rationale seems to be the impedance mismatch between objects and
database rows. By keeping everything in memory, you can maintain the
data more easily in an object model. This also meshes nicely with
the single level store model. But then again, you still have to
address the issue of communicating with other systems and saving the
objects to persistent store for backup purposes.
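
(A hypothetical C sketch of that object/row impedance mismatch, using
made-up customer/order types: in memory the data is a linked object
graph, but for backup, or to talk to another system, it has to be
flattened into self-contained rows.)

/* Hypothetical illustration: in-memory object graph vs. flat rows. */
#include <stdio.h>

/* In-memory object model: an order points directly at its customer. */
struct customer {
    int  id;
    char name[32];
};

struct order {
    int              id;
    struct customer *cust;   /* direct pointer, no join needed */
    double           amount;
    struct order    *next;   /* orders kept as a simple linked list */
};

/* Row form for persistent store: the pointer becomes a foreign key. */
struct order_row {
    int    order_id;
    int    customer_id;
    double amount;
};

/* Flatten the in-memory list into rows and write them to a backup file. */
static void backup_orders(const struct order *head, const char *path)
{
    FILE *f = fopen(path, "w");
    if (!f)
        return;
    for (const struct order *o = head; o != NULL; o = o->next) {
        struct order_row row = { o->id, o->cust->id, o->amount };
        fprintf(f, "%d,%d,%.2f\n", row.order_id, row.customer_id, row.amount);
    }
    fclose(f);
}

int main(void)
{
    struct customer acme = { 1, "Acme" };
    struct order o2 = { 102, &acme, 75.00, NULL };
    struct order o1 = { 101, &acme, 19.95, &o2 };
    backup_orders(&o1, "orders-backup.csv");
    return 0;
}
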
On a more practical level, lots of RAM does offer interesting
opportunities for programmers. For example, these days when I write
a program to make a number of systematic changes to a set of files,
the easiest and most straightforward approach is just to read each
whole file into a program variable, rather than process the files
one record at a time.
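
(A minimal C sketch of that whole-file approach; the file name and the
fixed-length record layout are made up for illustration: read the
entire file into one buffer, make the systematic change in memory, and
write the buffer back, instead of a record-at-a-time loop.)

/* Sketch of the "read the whole file into a variable" approach. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *f = fopen("customer.dat", "rb");
    if (!f)
        return 1;

    /* Find the file size, then read the whole thing in one call. */
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    rewind(f);

    char *buf = malloc(size);
    if (!buf || fread(buf, 1, size, f) != (size_t)size) {
        fclose(f);
        free(buf);
        return 1;
    }
    fclose(f);

    /* Make the systematic change in memory; here, upper-case a flag
     * byte in each fixed-length 80-byte record (a made-up layout). */
    for (long i = 0; i + 80 <= size; i += 80)
        if (buf[i] == 'y')
            buf[i] = 'Y';

    /* Write the whole buffer back out. */
    f = fopen("customer.dat", "wb");
    if (f) {
        fwrite(buf, 1, size, f);
        fclose(f);
    }
    free(buf);
    return 0;
}
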
Anyways, enough rambling for now. What was the topic again? ;-)
Cheers! Hans