

  • Subject: Re: Limits on some object sizes <= 16M
  • From: "Mark S. Waterbury" <mark_s_waterbury@xxxxxxxxx>
  • Date: Tue, 22 May 2001 11:08:24 -0600

Hello, all:

Although I do not have many problems with the inherent limitation of
16 MB for the "segment size" for most objects above the MI layer,
I do have a few "pet peeves" about "arbitrary" limits in OS/400.

For example, why are *DTAARAs limited to a measly 2000 bytes?
Since the *DTAARA is implemented by a "space" object at the MI layer,
there is no good reason not to allow users to create a *DTAARA of up to
16 MB in size, and to allow us to use the RTVDTAARA command, or
the QWCRDTAA API, to "substring" anywhere within that (up to 16MB)
space.  (This is apparently something that goes way back to the S/3, S/32,
S/34 or S/36 architectures.)
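
(For what it is worth, "substring" retrieval does already exist, just
confined to today's 2000-byte ceiling.  Below is a rough ILE C sketch of
pulling bytes 101-300 out of a made-up data area MYLIB/BIGDTA with the
QWCRDTAA API; the prototype, linkage pragma, and receiver layout are
written from memory of the Retrieve Data Area documentation, so check the
QSYSINC qwcrdtaa header before trusting the exact names and offsets.)

    /* Rough sketch: retrieve bytes 101-300 of a (made-up) data area
     * MYLIB/BIGDTA with the QWCRDTAA API.  Prototype and receiver
     * layout are from memory; the official versions are in QSYSINC. */
    #include <stdio.h>
    #include <string.h>

    void QWCRDTAA(void *receiver, int receiver_len,
                  char *qualified_name, int start_pos, int length,
                  void *error_code);
    #pragma linkage(QWCRDTAA, OS)   /* QWCRDTAA is a *PGM API */

    typedef struct {
        int  bytes_available;
        int  bytes_returned;
        char type[10];            /* *CHAR, *DEC, *LGL, ...           */
        char library[10];         /* library the data area was found in */
        int  length_returned;     /* length of the value returned      */
        int  decimal_positions;
        char value[200];          /* the requested substring           */
    } Rdtaa_Receiver_t;

    int main(void)
    {
        Rdtaa_Receiver_t rcvr;
        char errcode[16] = {0};   /* bytes-provided = 0: signal exceptions */
        char name[21]    = "BIGDTA    MYLIB     ";  /* 10-char name + 10-char library */

        QWCRDTAA(&rcvr, sizeof(rcvr), name, 101, 200, errcode);

        printf("Got %d bytes: %.*s\n",
               rcvr.length_returned, rcvr.length_returned, rcvr.value);
        return 0;
    }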

There is an old saying in Computer Science that goes something like this:
"More sins are committed in the name of compatibility...".  You get the
idea...

Of course, for years, we have suffered with the limit of only 25
libraries in the user portion of the "library list."  Now, finally,
after all these years, in OS/400 V5R1, IBM has addressed this "limit"
by increasing the limit by a factor of 10, to 250 libraries in the
library list.  Of course, there are some coding changes that will be
required for applications to co-exist with the new, longer library
lists, and it is not entirely clear how "smooth" the transition will
be.  However, here too, I would like to ask the question: why "250"
libraries?  Why not raise the bar to, say, 500 or 1000 libraries?
Why place "arbitrary" limits on things like the "library list"?

Of course, I can still remember back when, around OS/400 V1R2, the
maximum size of any database file (physical file member) was limited
to 2 GB of data.  That is 2 gigabytes of data in total, not a limit on
the number of records in the file.  Of course, in those days, circa
1989, disk drives were much smaller in capacity and much more
expensive, per byte, than today.  IBM began to relax many of these
"limits" starting in V2 of OS/400, and continuing in V3 and V4.  So
now, the limits on file size are much larger; in effect, a file can
grow to a practically "unlimited" size, limited only by how much disk
space you want to buy, and how much room you have to plug in more
DASD, etc.

Recently, there has been much discussion on this list about the 16 MB
"limit" on space objects (such as the *USRSPC object).

I recall that, in OS/400 V1R1 and all the way until V2R3, the size of the
EPM "heap" (AS/400 Pascal and the original C/400 ran in the Extended
Program Model or EPM environment) was also limited to a maximum of
16 MB. This imposed some rather harsh restrictions on many applications that
use dynamic linked lists and similar modern data structures.

What I did to "get around" this limitation was, in effect, to write my
own heap storage manager, using an MI routine to create an MI space
object (which I later converted to use *USRSPC objects, when these
became available), and my own "new" or "malloc" routine to allocate
from within the 16 MB user space.  One "trick" I used was to NEVER try
to "dispose" or "free" any allocated memory, but rather to use "sunset
garbage collection," which means that when the sun sets (the job ends),
we "collect all the garbage" (delete all of the *USRSPC objects).
Thus, the "new" or "malloc" routine had only to increment a space
pointer by the size of the amount requested, rounded up to a 16-byte
boundary so that pointers in data structures would be properly aligned,
and to check that the allocation would still "fit" within the current
16 MB *USRSPC.  If not, it would create another *USRSPC, chain the two
together, and then start allocating from this new *USRSPC object.
Thus, at any one time, there is only one "current" *USRSPC object from
which current allocations are performed.
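
To make the scheme concrete, here is a minimal sketch of that "sunset"
bump allocator in plain, portable C.  It is not the original MI/*USRSPC
code: malloc() stands in for creating a 16 MB space object, and all of
the names are made up for the example.

    /* Minimal sketch of the bump ("sunset") allocator described above.
     * malloc() stands in for creating a 16 MB *USRSPC; nothing is ever
     * freed individually -- all segments are released in one sweep
     * when the job (here, the process) ends. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <stddef.h>

    #define SEG_SIZE (16u * 1024u * 1024u)  /* one "*USRSPC" of storage */
    #define ALIGN    16u                    /* keep pointers 16-byte aligned */

    typedef struct segment {
        struct segment *next;    /* chain of segments, newest first */
        size_t          used;    /* bytes already handed out        */
        unsigned char   data[];  /* the storage itself              */
    } segment;

    static segment *current = NULL;   /* the one "current" segment */

    static segment *new_segment(void)
    {
        segment *s = malloc(sizeof(segment) + SEG_SIZE);
        if (s == NULL) { perror("malloc"); exit(EXIT_FAILURE); }
        s->next = current;            /* chain onto the previous one */
        s->used = 0;
        return s;
    }

    /* The "new"/"malloc" routine: round up, check fit, bump the pointer. */
    void *sunset_alloc(size_t n)
    {
        n = (n + ALIGN - 1) & ~(size_t)(ALIGN - 1);   /* round to 16 bytes */
        if (n > SEG_SIZE) { fprintf(stderr, "too big\n"); exit(EXIT_FAILURE); }
        if (current == NULL || current->used + n > SEG_SIZE)
            current = new_segment();                  /* start a fresh segment */
        void *p = current->data + current->used;
        current->used += n;
        return p;
    }

    /* "Sunset": when the job ends, delete every segment in one pass. */
    void sunset_collect(void)
    {
        while (current != NULL) {
            segment *next = current->next;
            free(current);
            current = next;
        }
    }

    int main(void)
    {
        /* Build a little linked list without ever freeing a node. */
        struct node { int value; struct node *next; } *head = NULL;
        for (int i = 0; i < 1000; i++) {
            struct node *n = sunset_alloc(sizeof *n);
            n->value = i;
            n->next  = head;
            head = n;
        }
        printf("last value allocated: %d\n", head->value);
        sunset_collect();             /* the sun sets */
        return 0;
    }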

Later, with V2R3, IBM introduced the Integrated Language Environment
(ILE), although at that time, the only ILE language was ILE C/400. However,
IBM also introduced several new OPM MI instructions to support the new
ILE heap model, including CRTHSS (Create Heap Space), which created a
heap space (of up to 16 MB), ALCHSS (Allocate Space from a "heap"),
DLCHSS ("Deallocate space from a heap"), and DESHS (Destroy Heap
Space), which deleted an entire ILE heap space.  So, I converted my own
heap space manager from using *USRSPC objects to using the new ILE
heap spaces and instructions.  This has the added advantage that, at the
"end of job" (when the "sun sets"), OS/400 automatically cleans up all of
the heap spaces.

The bottom line is that I have never had a problem with allocating
dynamic data structures (linked lists, trees, etc.) whose aggregate
size far exceeds the 16 MB "limit" imposed by the size of any single
"segment" in OS/400.  You just have to take that into consideration at
the appropriate "level" in the design of your application.  Most
applications that use dynamic storage will usually have their own
"storage management" layer anyway, to isolate the rest of the
application(s) from the vagaries of the particular platform and OS
that the application is running on.

For those who really need a single contiguous large address space, where
you can increment pointers or index large arrays to your heart's content,
you now have the "teraspace", as of V4R4 and above (or was it V4R5?).

And, if you really need a "persistent" contiguous "data space" (array),
you can, of course, store the data in an IFS stream file, which is what
most applications in the Unix world would normally do anyway.  For
example, in Unix, using the standard Unix file system I/O APIs, you can
directly "seek" to any byte offset within the file, and then read or
write any number of bytes desired.  (Recall that the Unix file system
is byte-stream oriented, rather than "record oriented," as most IBM
operating system file systems are.)  Thus, the Unix-style APIs make it
pretty easy to envision the contents of the file as some sort of a big
"byte space," similar to the way the OS/400 APIs for accessing the
contents of a *USRSPC allow you to store and retrieve "substrings"
within the *USRSPC object space.
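
As a rough sketch (the path and offset are made up), that style of
access with the standard IFS/POSIX APIs looks something like this:

    /* Treating an IFS stream file as one big persistent "byte space"
     * with the standard POSIX-style IFS APIs.  The path and offset
     * are made-up examples. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/home/mydir/bigspace.dat";
        int fd = open(path, O_RDWR | O_CREAT, 0666);
        if (fd < 0) { perror("open"); return 1; }

        /* "Substring" at byte offset 5,000,000: seek to the offset,
         * then write or read however many bytes we want. */
        const char out[] = "hello from offset 5,000,000";
        if (lseek(fd, 5000000, SEEK_SET) < 0) { perror("lseek"); return 1; }
        if (write(fd, out, sizeof(out)) < 0)  { perror("write"); return 1; }

        char in[sizeof(out)];
        lseek(fd, 5000000, SEEK_SET);          /* back to the same offset */
        if (read(fd, in, sizeof(in)) < 0)     { perror("read");  return 1; }
        printf("read back: %s\n", in);

        close(fd);
        return 0;
    }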

And now, apparently with V5R1, you can use the "mmap" API (from the
Unix and Linux worlds) to "map" the contents of the file into a
teraspace, so that you can address it linearly, if that is what you
really want to do.
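
A minimal sketch of that, assuming the same made-up file as above and
the standard POSIX mmap() interface (as I understand it, on OS/400 the
program must be teraspace-enabled for the mapping to work):

    /* Mapping the (made-up) stream file into the address space with
     * the POSIX mmap() API so it can be addressed linearly. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/home/mydir/bigspace.dat", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* Map the whole file; after this it is just a big array of bytes. */
        char *space = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
        if (space == MAP_FAILED) { perror("mmap"); return 1; }

        space[0] = 'X';                      /* ordinary pointer/array access */
        printf("byte at 5,000,000: %c\n", space[5000000]);

        munmap(space, st.st_size);
        close(fd);
        return 0;
    }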

And, as of V4R5 and above, you even have "journaling" of IFS files, so
you can "mirror" changes to a byte-stream file in the IFS to another
AS/400, for example, for "high availability", etc.

Well, that's all for now. I would like to see this whole "thread" or topic
put to rest.  So, what's the "big deal"?

Regards,

Mark S. Waterbury



