


This is an interesting point. You COULD theoretically determine the
pointer size based on the maximum defined length of the field.

Pointer size? What the heck are you talking about? Are you talking
about the difference between the standard 128-bit pointers, and the
64-bit pointer that C programs can use for teraspace?


Which brings us to an even trickier question: how would you define a
field with a length of 128TB anyway? The actual length is
140737488355328, and I'm pretty confident that you can't define that
on a D spec. With only seven positions for length, you're limited to
about 8MB.

The ALLOCUNIT idea is pretty good. Another idea would be to put the unit in the actual length field:

D bigGuy          S            128mbA

Though, ALLOCUNIT might be better. Also, I don't think we'll have 128TB fields anytime soon since teraspace doesn't allow for allocations that big yet.

But I'm sure we'd want the capability of supporting fields larger than 8 million bytes, so something like ALLOCUNIT, or putting the units in the length field, would be a good idea.


Well, I'm not worried about typing. Having values that are readable
makes sense; let your IDE do your typing for you.

You're talking about content assist? Have you really found that useful? I find it more cumbersome than just typing the data. (Unless of course I don't know what to type next...)

Or are you talking about a macro or something like that? (Which is probably what I'd use)

Please, let's not make things more complicated to teach than they
already are.

The only issue is people who have already hardcoded -2 and +2 a lot;
you've said yourself that you're one of those.

Actually, seems to me that coding something like VARYING(*MAXwhatever) solves that problem rather neatly. It just makes things more complicated to teach and learn, which was the very first point I made. Though, you didn't comment on that part of it...


IBM could just size the pointer based on the defined size of the
field, which would currently leave all code backwards compatible,
since by definition all fields must be less than 32K today, and so
use a two-byte pointer.

The 2-byte integer that keeps track of the length can actually handle up to 64k, not 32k. We've had the ability to use 64k strings since V4R4 (8 years ago).

As I explained in a previous post, I want the ability to specify the size of the integer. That way I can have a subprocedure that accepts a 10000 byte VARYING CONST string today, and I can upgrade it to handle a 1mb string tomorrow without breaking compatibility.

Whereas, if I can't control the size of the integer, the 10000 byte VARYING would use a prefix of 2 bytes, and the 1mb would use a prefix of 4 bytes, thus breaking backward compatibility and requiring me to recompile everything.

To further clarify: I'm not referring to compatibility with programs written for V5R4 and earlier. I'm referring to the ability to make my V5R5 (or whenever we get the long string support) programs future-compatible. So I can make them 10000 bytes with a 4-byte integer in V5R5, then 6 months later when the business rules change, I can make them 1mb with a 4-byte prefix. I can't do that if IBM picks the prefix size for me.

Next, add a new BIF (%indexsize?) to determine the number of bytes in
the index portion of a varying field. Programmers like you need to
use this new BIF in all new code, and must remember to fix any
existing code when a varying field size is changed.

Yes, I like the idea of a BIF so I don't have to hard-code the size of the prefix. But, I still want the ability to specify the prefix size on the D-spec.


For maximum backwards compatibility, IBM could implement a
*ALLOWBIGVARYING compile flag. This would default to *NO and if you
unknowingly changed a field's buffer size to a value that required a
larger pointer (say through a LIKE define or a /COPY) without
actually changing the *ALLOWBIGVARYING flag on the program, then the
program wouldn't compile.

I don't see the value in this. Since you have to explicitly tell it to change to a different size length prefix, I don't see the point behind this keyword.

Err... also, I'm not sure that we need a *MAX32KB. Is that just
for compatibility with older releases, or what?

It's for completeness.

You misunderstand. I was asking why we need a *MAX32KB assuming that we already have a *MAX64KB keyword. *MAX64KB gives us the "completeness" that you're referring to. Why do we also need *MAX32KB? The only reason I can think of is for compatibility with V4R2's compiler, though that seems silly. Of course, you misinterpreted this because (apparently) you didn't realize that the limit had been upgraded above 32k.



This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page.
