<snip>
Alan, I think you might want to do some basic research. Defining your data with DDL is a completely different topic than accessing it via SQL. You can read DDL-defined files just as easily using native I/O as you can with SQL.
</snip>

Not sure I understand your point. How is DDL different from SQL, since DDL is part of SQL?
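To keep the terms straight: within SQL, CREATE TABLE and friends are DDL (definition), while SELECT, INSERT, UPDATE, and DELETE are DML (access). A minimal illustration, using a made-up ITEMMAST table; both statements are plain SQL and go through the same pre-compiler:

    dcl-s itemNo char(10);
    dcl-s descr varchar(50);

    // DDL: defines the table. ITEMMAST and its columns are hypothetical.
    exec sql create table ITEMMAST
               (ITEMNO char(10) not null primary key,
                DESCR  varchar(50));

    // DML: accesses the data in that same table.
    exec sql select DESCR into :descr
               from ITEMMAST
              where ITEMNO = :itemNo;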

Interesting question about doing record I/O against a table defined in DDL. Does it use the new I/O path, or does it drop back to the old one because you are using record I/O instead of SQL? I don't know the answer to that question.

<snip>
All the time!  Do you actually write business applications?  I have to chain to get an item master, then chain to get a customer record.  I need to see if a warehouse override exists.  And even when I do a loop, it's usually a small number of records.
</snip> 

No, only for thirty years, with 16 of those years using SQL.

I guess my question would be: what is the key to the warehouse table, and where does it come from? My guess would be that the warehouse code comes from the item master, so why would I chain into the item master and then chain into the warehouse file when I could do it in one SQL statement?
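For example, with hypothetical ITEMMAST and WHSOVR files (the names and columns here are made up just to show the shape of it), one SELECT with a join replaces both chains:

    dcl-s itemNo char(10);

    dcl-ds dsItem qualified;
      descr    varchar(50);
      override char(10);
    end-ds;

    // One statement instead of two chains: the join walks from the item
    // master to its warehouse override. LEFT JOIN keeps the item row even
    // when no override record exists.
    exec sql select i.DESCR, coalesce(w.OVERRIDE, '')
               into :dsItem
               from ITEMMAST i
               left join WHSOVR w
                 on w.WHSCODE = i.WHSCODE
              where i.ITEMNO = :itemNo;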

When I code I/O in SQL (assuming a good database), I find the statements start disappearing. I need fewer and fewer.

<snip>
have NEVER created a 32000+ array of data structures.  I can't see why you would except in utility programs or MAYBE some queries.  Either one of those is a good reason to use SQL.
</snip>

The array is mapped to a user space, as I indicated. It takes zero storage until records begin to come in. 32766 is just the maximum number of records to read per fetch, which is the IBM limit; that goes back, again, to IBM continuing to use the pre-compiler from RPG III. The practical limit is really only the maximum size of a user space, 16MB. The row size on the SQL I displayed is 16 bytes, so the real limit is 16MB / 16 bytes = 1,048,576 records.
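Roughly like this, with a made-up SALESHIST table whose row happens to be 16 bytes (a char(10) plus a packed(11:0)). I base the array on the user space; it is shown here as a plain array to keep the sketch self-contained:

    dcl-ds dsHist qualified dim(32766);
      itemNo char(10);     // 10 bytes
      qty    packed(11:0); //  6 bytes
    end-ds;
    dcl-s rowsRead int(10);

    exec sql declare c1 cursor for
               select ITEMNO, QTY from SALESHIST;
    exec sql open c1;

    // Block fetch: up to 32766 rows land in the array in one call.
    exec sql fetch next from c1 for 32766 rows into :dsHist;
    exec sql get diagnostics :rowsRead = ROW_COUNT;

    exec sql close c1;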

<snip>
Either one of those is a good reason to use SQL.  But this DS approach has other issues: any time the data you are fetching doesn't exactly match the physical file layout, you have to manually create a data structure, and make sure it matches your SQL statement.
</snip>

Using data structures is a decision of the programmer. You can also just read directly into variables, but I prefer using qualified data structures. I like to see where the data came from, and it eliminates having two values mapped to one variable name.

Fieldx = dsWarehouse.Override; // Whatever.

This same issue comes up if you are doing record I/O. Nowadays I always go to a data structure when doing I/O, and of course I have to define a data structure even if it is LikeRec.
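Roughly like this, with a hypothetical WHSOVR file and WHSOVRR record format standing in for the real thing:

    dcl-f WHSOVR keyed qualified;               // hypothetical file
    dcl-ds dsWarehouse likerec(WHSOVR.WHSOVRR); // DS matching the format
    dcl-s fieldX char(10);

    // Record I/O lands in the same qualified style as an SQL fetch.
    chain ('WH1') WHSOVR.WHSOVRR dsWarehouse;
    if %found(WHSOVR);
      fieldX = dsWarehouse.Override;
    endif;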
 
I don't see creating the data structure as much of an issue. I only need to do it for the fields I am bringing in, and usually that is only a few fields. If I am designing my code correctly, then the data structure should be in a procedure with only the SQL to do that function, so I am not building a huge program with all my structures at the top.

IBM checks the data structure or variables at compile time and again at runtime, so if the data structure or variables change or are incorrect, you get an error.

Also, I don't think I am saying that SQL is easier than record I/O, just that the extra work is worth it. The biggest single hassle when defining variables, whether in a data structure or standalone, is that you have to either define an external data structure or just put the sizes in manually. Royal pain in the butt. IBM has not given us a way to simply define a field based on a table unless the table is in the program. It would be easy enough to do; IBM has just not done it.
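For reference, the external data structure route looks something like this (ITEMMAST and DESCR are made-up names). TEMPLATE gives you the column definitions without allocating any storage, and EXTNAME does not require the file to be declared in the program:

    // Pull the column definitions in from the table at compile time.
    dcl-ds itemTemplate extname('ITEMMAST') qualified template end-ds;

    // Size a standalone field from the table's column definition.
    dcl-s workDescr like(itemTemplate.DESCR);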

<snip>
If you read into an array of data structures as you propose, the data structure would contain both header and detail information.  You must copy the header data multiple times.
</snip>

If you are talking about moving a field from the SQL buffer to the RPG program, yes, that does occur, but I can move a few bytes billions of times in a second on the slowest AS/400 out there.

<snip>
But you haven't proven a single thing.  That's why I despise these sort of mental gymnastics.  Prove your point.  Write a benchmark.  Until then, it's just smoke.
</snip>

Joe, let me take it back the other way: you need to do some reading on SQL. You are making statements but clearly do not understand how SQL works.

And I think almost anyone would tell you that when designing systems, spending your time worrying about a few hundred-millionths of a second (or in this case, a few hundred-billionths of a second) is just not worth the time. The larger issue is thinking and designing for a database, not a file I/O system. As long as we continue to focus on using record I/O instead of SQL, that is where the focus is going to remain.

This whole discussion began because I suggested that newcomers learn SQL as their primary I/O method. If you learn to do just record I/O, that is probably all you will ever learn. How many AS/400 programmers are expert in SQL?

Change the focus and the mind will follow. Keep using a hammer and you
will continue to see hammers as the solution for everything. 

Also, keeping the focus on SQL means putting more pressure on IBM to fix the problems that SQL has. As long as we continue to focus on record I/O, how much pressure are we putting on IBM to fix, or just eliminate, the pre-compiler? It should be a whole lot easier and a whole lot faster than it is.

Oh well. Off the soap box. 

