From: Alan Campin

Not sure I understand your point. How is DDL different from SQL, since DDL is part of SQL?
There's a huge difference between using SQL to define your database and using it to access your database.
Interesting question about doing record I/O against a table defined in DDL. Does it use the new I/O system, or does it drop back because you are using record I/O instead of SQL? I don't know the answer to that question.
I do know, because I've actually taken the time to do the research. It uses the new I/O techniques.
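For reference, by a table defined in DDL I mean something along these lines; the table and column names here are invented for the example:

    -- invented table; record I/O against it still works
    create table custmast (
      custno numeric(7, 0) not null primary key,
      cname  char(30)      not null,
      credit decimal(9, 2) not null default 0
    );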
I guess my question would be: what is the key to the warehouse table, and where does it come from? My guess is that the warehouse code comes from the item master, so why would I chain to the item master and then chain to the warehouse file when I could do it in one SQL statement?
Because there are times when you don't have to read the warehouse record, so it would be overhead to read it when you didn't need it. This is the sort of conditional, navigational I/O that happens all the time in enterprise applications. For example, I may not need to go to the warehouse master if my item is lot controlled. So I can either conditionally read the warehouse, or I can create two separate SQL statements, one that includes the warehouse and one that doesn't.
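Here's a rough sketch of that conditional read. Every file and field name below is invented for illustration; IMWHS and IMLOTCTL stand in for fields from the item master record format:

    **free
    dcl-f itemmast keyed usage(*input);   // item master, keyed by item
    dcl-f whsmast  keyed usage(*input);   // warehouse master, keyed by code

    dcl-s inItemNo char(15);   // would be set from the transaction in hand

    chain (inItemNo) itemmast;
    if %found(itemmast);
      if imLotCtl <> 'Y';         // lot-controlled items skip the read
        chain (imWhs) whsmast;    // touch the warehouse only when needed
      endif;
    endif;

    *inlr = *on;
    return;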
When I code I/O in SQL (assuming a good database), I find the statements start disappearing. I need fewer and fewer.
Yet I find myself typing more. I find myself having to specify individual fields rather than simply reading a record. Suddenly I have to do all the work in my program that I SHOULD be doing in my data definition.
<snip> have NEVER created a 32000+ array of data structures. I can't see why you would except in utility programs or MAYBE some queries. Either one of those is a good reason to use SQL. </snip> The array is mapped to a user space, as I indicated. It takes zero storage until records begin to come in. 32766 is just the maximum number of records to read in one fetch, which is the IBM limit (again going back to them continuing to use the pre-compiler from RPG III). The practical limit is really only the maximum size of a user space at 16MB; the record size in the SQL example I showed is 16 bytes, so the real limit is 1,048,576 records.
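Roughly like this; the table, columns, and host variables are invented for the example. (My real version BASEs the array on a pointer into the user space, set up with QUSCRTUS/QUSPTRUS; I've left those API calls out for brevity.)

    **free
    // invented table ORDDTL; 32766 echoes the per-fetch limit above
    dcl-ds row qualified dim(32766);
      ordNo packed(7: 0);
      qty   packed(9: 2);
    end-ds;

    dcl-s inOrdNo  packed(7: 0);
    dcl-s rowsRead int(10);

    exec sql declare c1 cursor for
      select ordno, qty
        from orddtl
       where ordno = :inOrdNo;

    exec sql open c1;

    // one trip to the database brings back up to 32766 rows
    exec sql fetch next from c1 for 32766 rows into :row;
    exec sql get diagnostics :rowsRead = ROW_COUNT;

    exec sql close c1;

    *inlr = *on;
    return;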
You haven't answered the question: WHY would you do this? What business reason do you have to read a million records into a memory buffer?
I don't see creating the data structure as much of an issue. I only need to do it for the fields I am bringing in, and usually that is only a few fields. If I am designing my code correctly, the data structure should be in a procedure along with only the SQL for that function, so I am not building a huge program with all my structures at the top.
And having managed projects containing millions of lines of code, I do see double maintenance as an issue. In fact, this whole concept of having to define the fields in a data structure AND in an SQL statement (twice if you are doing "INTO") is a backwards step to me. It hearkens back to the old days of internally defined files.
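Here's the duplication I mean, with made-up names. The same fields have to be maintained in the data structure and again in the statement:

    **free
    // invented CUSTMAST table; the fields are defined here...
    dcl-ds cust qualified;
      custNo packed(7: 0);
      cname  char(30);
      credit packed(9: 2);
    end-ds;

    dcl-s inCustNo packed(7: 0);

    // ...and listed all over again here
    exec sql select custno, cname, credit
               into :cust
               from custmast
              where custno = :inCustNo;

    *inlr = *on;
    return;

Yes, an externally described data structure (EXTNAME) can generate the RPG side for you, but the field list in the statement itself still has to be kept in step by hand.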
<snip> If you read into an array of data structures as you propose, the data structure would contain both header and detail information. You must copy the header data multiple times. </snip> If you are talking about moving a field from the SQL buffer to the RPG program, yes, that does occur, but I can move a few bytes billions of times a second on the slowest AS/400 out there.
That's not the point. The point is that needless overhead accumulates. The whole philosophy of "SQL for everything" seems to me destined to produce lots of useless bloat. And yet your argument is that it is faster. I don't understand the double standard.
Joe, let me take it back the other way. You need to do some reading on SQL. You are making statements but clearly do not understand how SQL works.
In what way? I've been coding SQL since the early 90s. I use SQL every day for one thing or another, and I'm pretty proficient at embedded SQL. However, I only use it where it's appropriate.
And I think almost anyone would tell you that when designing systems, spending your time worrying about a few hundred-millionths of a second is just not worth it, or in this case, a few hundred-billionths of a second. The larger issue is thinking and designing for a database, not a file I/O system. As long as we continue to focus on using record I/O instead of SQL, that is where the focus is going to remain.
I don't understand anything in this statement, except the fact that you're not worried about performance. Most of the people who say THAT work for Microsoft. The rest of the statement ("thinking and designing for a database") has no concrete semantic meaning. Please give me an example of how you would design a database differently using SQL as opposed to DDS.
This whole discussion began because I suggested that newcomers learn SQL as their primary I/O method. If you learn to just do Record I/O, that is probably all you will learn. How many AS/400 programmers are expert in SQL?
Quite a few RPG programmers are very, very good at SQL. Me, I'm probably average, neither good nor bad. I refuse to build big, fat CASE statements but I'm comfortable with COALESCE and I can kick ass with a common table expression.
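For instance, something along these lines; the tables and columns are made up:

    -- invented tables ORDERS and CUSTMAST
    with totals as (
      select custno, sum(amount) as total
        from orders
       group by custno
    )
    select c.custno, c.cname,
           coalesce(t.total, 0) as order_total
      from custmast c
      left join totals t on t.custno = c.custno;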
Change the focus and the mind will follow. Keep using a hammer and you will continue to see hammers as the solution for everything.
Actually, that's what I'm telling you. I'm saying use native I/O where appropriate and SQL where appropriate. And SQL is NOT appropriate for single-record fetches of the type that most standard enterprise transaction processing programs use.
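A concrete example of what I mean by a single-record fetch, with invented file and field names:

    **free
    dcl-f custmast keyed usage(*input);
    dcl-s inCustNo packed(7: 0);
    dcl-s wkName   char(30);
    dcl-s wkCredit packed(9: 2);

    // native I/O: one keyed read and the whole record is available
    chain (inCustNo) custmast;

    // the SQL equivalent of the same single-record fetch
    exec sql select cname, credit
               into :wkName, :wkCredit
               from custmast
              where custno = :inCustNo;

    *inlr = *on;
    return;

Joe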