OK - I'll try and address that.

You're only looking at it in terms of a couple of simple requests. But you also have to take into account requests to increase the size of an allocation, as well as what happens when a number of the reservations are relinquished. Over the life of a job (or even a program, for that matter) there can be a lot of these.

Other than simple "rounding" to some multiple (typically a binary-related boundary like 16, 64 or whatever) the system will normally respond to requests by supplying what was asked for. So (assuming no rounding) a series of requests for 50, 100, 20, 60, 40, 80 would be met with exactly the requested amounts. Suppose that the 20-byte allocation then requests expansion to 40. The 20 is now unused and a new 40 allocation is added at the end, leaving a "gap" of 20, i.e. 50, 100, (20), 60, 40, 80, 40. Some time later the 60 reservation is released. The next request received is for 70 bytes. At this point the system's algorithm may decide to reuse the 20 and 60 blocks freed previously, which are now adjacent and give 80 contiguous bytes. The algorithm itself will have some rule on the minimum "gap" it is prepared to leave behind - suppose it is 20 in this case - so rather than leave a gap of 10 it will allocate all 80 bytes to the 70-byte request. While in the context of a single procedure it may seem reasonable to deal in blocks of memory of 5 or 10 bytes, from an OS perspective units that small would be sure to cause problems with the ability to allocate large amounts of contiguous memory.
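To make the "minimum gap" idea concrete, here is a small C sketch. It is not the actual OS algorithm - the GRANULE and MIN_GAP numbers and the carve() helper are purely hypothetical - it just illustrates the rule that a free block is handed out whole rather than leaving behind a fragment too small to be useful.

#include <stdio.h>
#include <stddef.h>

/* Hypothetical tuning values, chosen to mirror the example above.   */
#define GRANULE  1     /* "assuming no rounding", as in the text     */
#define MIN_GAP  20    /* smallest free fragment worth leaving       */

/* How many bytes would actually be carved out of a free block of
   free_size bytes to satisfy a request?  0 means "does not fit".    */
static size_t carve(size_t request, size_t free_size)
{
    size_t needed = ((request + GRANULE - 1) / GRANULE) * GRANULE;

    if (needed > free_size)
        return 0;                 /* block too small, try another one */
    if (free_size - needed < MIN_GAP)
        return free_size;         /* leftover gap too small to keep   */
    return needed;                /* normal split                     */
}

int main(void)
{
    /* The freed 20 + 60 blocks form one 80-byte hole.  A 70-byte
       request would leave only a 10-byte gap, so all 80 are handed
       out.  A 40-byte request leaves 40, which is worth keeping.    */
    printf("ask 70 of 80 -> given %zu\n", carve(70, 80));  /* 80 */
    printf("ask 40 of 80 -> given %zu\n", carve(40, 80));  /* 40 */
    return 0;
}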

I'm no OS expert, but my guess would be that the smallest allocation unit is probably a function of the type of process requesting it (OS or user, for example). It may also be dynamically calculated based on allocation requests, but I suspect the overhead of doing so is not worth the effort. Who cares if 200 bytes is allocated when 10 was asked for? Most of the time it is irrelevant, and if it means less work is required to determine whether a contiguous block of a particular size is available, then it is probably a good trade-off.

I haven't tested this for a long time but I recall finding that if I ran RPG code that allocated a chunk of memory and then allocated another, the addresses returned demonstrated that the memory was not contiguous but rather had been rounded out to some preferred allocation size. Maybe one of these days I'll get round to testing it with a number of different memory allocation methods and see what happens. For example I suspect that little (if any) rounding would be added to allocations for subprocedure variables since it is known that they will all be released at the same time - so I would expect them to be contiguous. But who knows.
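For anyone who wants to repeat that little experiment without writing RPG, here is the same idea as a C sketch using malloc. The 90/10 sizes are just the ones from the question below; what distance you actually see depends entirely on the platform and the allocator, which is really the whole point.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(void)
{
    /* Ask for two smallish blocks back to back and look at how far
       apart the returned addresses are.  Anything beyond the 90 bytes
       of the first block is the allocator's rounding plus its own
       bookkeeping overhead.                                          */
    char *p1 = malloc(90);
    char *p2 = malloc(10);

    if (p1 == NULL || p2 == NULL)
        return 1;

    printf("first  block at %p\n", (void *)p1);
    printf("second block at %p\n", (void *)p2);

    /* Compare via uintptr_t so we don't rely on relational pointer
       comparison between separate allocations.                       */
    uintptr_t a = (uintptr_t)p1, b = (uintptr_t)p2;
    printf("distance between start addresses: %llu bytes\n",
           (unsigned long long)(a > b ? a - b : b - a));

    free(p1);
    free(p2);
    return 0;
}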

Interesting academic exercise - but of no real practical use.


Jon Paris

www.Partner400.com
www.SystemiDeveloper.com



On Nov 21, 2010, at 5:50 AM, rpg400-l-request@xxxxxxxxxxxx wrote:

"Why is it better to given extra ram to a call to malloc, when you know it
doesn't need it, when the next call might need it."

i.e. if call 1 asks for 90 bytes why give it 100 bytes when the next call
might be for 10 bytes?

