  • Subject: RE: Based variables
  • From: "Sims, Ken" <KSIMS@xxxxxxxxxxxxxxxx>
  • Date: Tue, 12 Dec 2000 12:46:36 -0500

Hi Mark -

>>The ALLOC and REALLOC operations allocate memory in multiples of 4K.  So
>>if you allocate 240 bytes, then turn around and REALLOC for 480 bytes,
>>presumably the memory allocation handler is smart enough to realize that
>>the memory is actually already allocated and just return a success code,
>>keeping the process very fast until an additional 4K is actually allocated.
>
>  Have you tested this?  I'm curious if ALLOC has the smarts for that.

I was in error when I said that memory was allocated in 4K chunks.  The
entire heap is allocated in 4K chunks (for the default heap, user heaps can
have different sizes).  Individual allocations are in 16-byte chunks (for
the default heap, user heaps can be different).
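So, presumably, a single request for a 148-byte record (the size discussed
below) would actually consume 160 bytes, the next 16-byte multiple, not
counting any bookkeeping overhead.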

Unfortunately the heap storage management routines appear not to have any
smarts when it comes to reallocation.  Even if the size is exactly the same
as it was before, it copies the data to a new location.
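In free-form terms (just a rough sketch using the %ALLOC and %REALLOC
built-ins, the equivalents of the ALLOC and REALLOC opcodes; the sizes here
are arbitrary), the behavior looks like this:

   dcl-s p    pointer;
   dcl-s oldP pointer;

   p = %alloc(148 * 500);          // room for 500 records of 148 bytes
   oldP = p;
   p = %realloc(p : 148 * 1000);   // grow the allocation
   if p <> oldP;
      // the data has been copied to a new location, so any saved
      // pointers into the old storage are now stale
   endif;
   dealloc p;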

>>So for efficiency you should never make a request that would be satisfied
>>within the same multiple of 4K as you have already requested.
>
>  If the system is smart enough, then it shouldn't make much difference,
>  right?

Since the system appears to be pretty dumb, reallocations should obviously
be kept to a minimum.

>  The record length in this case is 148.  I'm expecting (at this time) a 
>maximum of 4000-5000 records.  Minimum is one record.  So I picked an 
>increment of 500 as a reasonable number.  I'm using a based DS, so I can 
>potentially allocate a little over 113,000 records at one time. Then I'm 
>"shifting around" another DS as a "view" to each iteration of the 
>data.  That should give us plenty of elbow room.

You shouldn't need a DS based over the whole area, just the DS that you
shift around.  Since the maximum single allocation in the default heap is
16MB - 64KB, the maximum number of records you can hold is 112,916.  If you
ever needed more, you could use multiple allocations; it would just take
some extra coding to switch between allocations as needed.
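Here is a rough sketch of that single "view" DS shifted over the allocated
block (free-form syntax; the field names and the 148-byte layout are made up
for illustration):

   dcl-c RECLEN 148;

   dcl-s basePtr pointer;                // start of the allocated block
   dcl-s viewPtr pointer;                // positioned over one record at a time
   dcl-s count   int(10);                // number of records currently stored
   dcl-s i       int(10);

   dcl-ds rec qualified based(viewPtr);  // 148-byte "view" of one record
      custNo zoned(7: 0);
      name   char(30);
      rest   char(111);                  // filler to make up the 148 bytes
   end-ds;

   for i = 1 to count;
      viewPtr = basePtr + (i - 1) * RECLEN;
      // rec.custNo, rec.name, etc. now refer to record i
   endfor;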

>  My concern, as far as performance goes, if I were to only allocate one 
>additional record at a time, is that (4K allocation chunk notwithstanding - 
>it's only 27 records!) the system will probably need to keep copying the 
>existing data and add on the requested space.  A lot of unnecessary data 
>movement.

Very true.  Personally I think I would run some tests to see what size would
satisfy 80% of the requests and use that as the initial allocation, then
allocate in 500 record chunks after that, rather than just starting with
500.  If the processing always handles the entire data set, I would probably
set the initial allocation big enough to handle everything.
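Roughly, in free-form (the 4000-record initial size and the 500-record
growth chunk are just the numbers from this thread, not recommendations):

   dcl-c RECLEN   148;
   dcl-c INITRECS 4000;                  // sized to cover the typical data set
   dcl-c GROWRECS 500;                   // grow in 500-record chunks after that

   dcl-s basePtr   pointer;
   dcl-s allocRecs int(10);              // records currently allocated
   dcl-s usedRecs  int(10);              // records currently in use

   basePtr = %alloc(INITRECS * RECLEN);
   allocRecs = INITRECS;

   // ... then, for each record to be added:
   if usedRecs = allocRecs;
      allocRecs = allocRecs + GROWRECS;
      basePtr = %realloc(basePtr : allocRecs * RECLEN);
   endif;
   usedRecs = usedRecs + 1;
   // copy the new record to basePtr + (usedRecs - 1) * RECLEN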

Ken
Southern Wine and Spirits of Nevada, Inc.
Opinions expressed are my own and do not necessarily represent the views of
my employer or anyone in their right mind.

+---
| This is the RPG/400 Mailing List!
| To submit a new message, send your mail to RPG400-L@midrange.com.
| To subscribe to this list send email to RPG400-L-SUB@midrange.com.
| To unsubscribe from this list send email to RPG400-L-UNSUB@midrange.com.
| Questions should be directed to the list owner/operator: david@midrange.com
+---
