Hi, Joe

This has been a very interesting discussion. Like you and Zak and Art, I have the habit and pleasure of learning new stuff on my own. My career on the 400 has been one of seat-of-the-pants learning, sometimes on the company's nickel. It's sometimes been hard on both me and my employers, but it's mostly been a positive experience. Why, right now I'm supposedly on vacation!

It's been long enough since I learned debugging in RPG or CL or C that I don't remember the pain of learning how. I rely on it now - changes to code are always accompanied by debugging small changes.

There are things all of us who have used RPG have learned that help performance - like avoiding the once-common trick of multiplying a date to change its format, or using SETLL *EQ instead of a CHAIN to test for existence. What we need is a set of common guidelines for SQL. John Sears gives a good session on this, and there is a good performance manual (an appendix in earlier releases). We've all heard about indexes to support JOINs, WHEREs, and ORDER BYs.

I do remember discovering the use of STRDBG after some time working with SQL. This has been the basic tool I've used. But I needed to read up in the above-mentioned performance sections to learn about access methods - that was necessary to understand what the joblog messages were telling me.

Then there's the database monitor. We don't get much help there, except for one white paper. The data is spread across multiple related record types, and the various fields are heavily overloaded in usage - as bad as PEX files, sometimes.

Of course, the optimizer is a black box (I know you love these, Joe!) that takes a lot of things into consideration that RPG doesn't. We have a more static environment in RPG and COBOL. We know ahead of time that we will probably read all the records, so we choose arrival sequence access, or a certain keyed logical file.
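As a rough SQL analog of the SETLL-versus-CHAIN trick mentioned above: SETLL *EQ positions to a key without copying the record, where CHAIN fetches the whole row. In SQL the same idea is an EXISTS test versus a full SELECT. This is only a sketch - sqlite3 stands in for the database engine here, and the table and column names are invented for illustration:

```python
# SETLL vs. CHAIN, translated to SQL terms (sqlite3 as a stand-in
# engine; the "orders" table is a made-up example).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, total REAL)")
conn.execute("INSERT INTO orders VALUES (1001, 250.0)")

# CHAIN-style: fetch the whole row just to see whether it is there.
row = conn.execute(
    "SELECT * FROM orders WHERE order_id = ?", (1001,)).fetchone()
found_by_fetch = row is not None

# SETLL-style: ask only whether a matching key exists.
found_by_exists = conn.execute(
    "SELECT EXISTS(SELECT 1 FROM orders WHERE order_id = ?)", (1001,)
).fetchone()[0] == 1

print(found_by_fetch, found_by_exists)
```

Either way the answer is the same; the difference is how much data has to move to get it.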
We know we'll work with individual records related to some other file, so again we have keyed logicals with parent-child relationships. When we first use SQL, we don't worry so much about indexes - it's so easy to just join files together and put an order on any result set. But we can bring a system to its knees very easily this way.

One problem with the 400 is that IBM set its sights on static, embedded SQL, while the rest of the industry went to dynamic SQL. It's safe to say, I think, that IBM was wrong, and Rochester is trying to change this. So the black box is no longer the same as it was.

With SQL, the optimizer looks at memory, file size, and other characteristics that may affect how best to get the data. It makes assumptions about how much data will be returned, based on the kinds of selection criteria you specify. The new optimizer will have better information about cardinality and distribution on which to base its decisions.

Hope these have been useful observations.

Vern

At 04:08 PM 12/27/02 -0600, you wrote:
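The "index to support the WHERE and ORDER BY" idea can be seen directly in any SQL engine's query plan. This sketch uses sqlite3 purely as a stand-in for the iSeries optimizer (the table and index names are invented), showing how the same query's plan changes once an index covering both the selection and the ordering exists:

```python
# Compare query plans before and after creating a supporting index.
# sqlite3 is a stand-in engine; "cust" and the index name are examples.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cust (cust_id INTEGER, region TEXT, name TEXT)")

def plan(sql):
    # The fourth column of each EXPLAIN QUERY PLAN row is the detail text.
    return " | ".join(r[3] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT name FROM cust WHERE region = 'NE' ORDER BY name"

before = plan(query)   # full table scan, plus a separate sort step
conn.execute("CREATE INDEX cust_region_name ON cust (region, name)")
after = plan(query)    # the one index satisfies both WHERE and ORDER BY

print(before)
print(after)
```

On the 400 the equivalent evidence comes from the joblog messages under STRDBG or from the database monitor, but the principle is the same: the optimizer can only use an access path that actually exists.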
> From: Art Tostaine, Jr.
>
> We do the same thing here. Since I went to wireless networking in my
> house, as soon as the wife and kids are asleep (and sometimes before), I
> grab the notebook and start "learning"

Yup. That's me, too. I learned Java on my own time, with a few books and lots of Internet time. I've done the same with SQL, but haven't had nearly the time to spend, and it shows in the naivete of many of my questions. And it probably contributes to my lack of comfort.

Joe