Do you use change management to control your compiles? We use TurnOver here, and store our DDL source in QDDSSRC. The only difference between DDL source and DDS source (besides the contents) is the object type attribute we use for checkout and worklist.

Now that DDL is being introduced in your shop, you might want to build the case for updating your standards to include (or even prefer) DDL. There are some advantages to DDL, both for performance and for data integrity, that make it attractive.

Why DDL rocks (imo):
1. Data integrity and validation are performed on WRITE instead of READ. Most applications read more than they write, so moving this validation should improve overall throughput. Also, since the record is validated before being written, it is much harder (impossible?) to store invalid data in the table. No more decimal-data errors when we somehow got blanks instead of zoned zeros...
2. Better data management, larger buffers, etc. DDS objects are catching up here, so this is less important...
3. New data types like BLOB, CLOB, DATALINK and the rest, which extend our database.
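
Point 1 above can be sketched with a check constraint: the database rejects a bad row at write time, so readers never see it. A minimal sketch using Python's sqlite3 (SQLite standing in for DB2 for i; the table and column names here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE emp (
        empnbr  INTEGER PRIMARY KEY,
        salary  NUMERIC NOT NULL CHECK (salary >= 0)
    )
""")

# A valid row is accepted at write time.
conn.execute("INSERT INTO emp VALUES (1, 50000)")

# An invalid row is rejected on WRITE, so it can never reach a reader.
try:
    conn.execute("INSERT INTO emp VALUES (2, -1)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

On DB2 for i the equivalent would be a CHECK constraint in the DDL CREATE TABLE; a DDS PF has no way to enforce this at write time.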

hth,
Eric

-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx
[mailto:midrange-l-bounces@xxxxxxxxxxxx]On Behalf Of Mike Cunningham
Sent: Tuesday, March 04, 2008 1:18 PM
To: 'Midrange Systems Technical Discussion'
Subject: RE: Join Logical file using the same physical file twice


Thanks Eric, Rob and Chuck. I got a view created (after learning some things about SQL) that does what I need. Now, since I went outside our shop standard (DDS), we need to think about how to handle SQL view creation source and naming conventions so we know what's what. I did not try Rob's idea of creating my own DDM file pointing back to the local system to fool the LF DDS into thinking it was two different files, but I might do that just to see if it would work. Even if it did, it might be better to go the SQL route.

-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of rob@xxxxxxxxx
Sent: Tuesday, March 04, 2008 1:35 PM
To: Midrange Systems Technical Discussion
Subject: Re: Join Logical file using the same physical file twice

With these tables:

select * from qtemp.emp

EMPNBR  MASTER
     1       2
     2       3
     3       0
* End of data *

select * from qtemp.empemail

EMPNBR  EMAIL
     1  one@xxxxxxxxx
     2  two@xxxxxxxxx
     3  three@xxxxxxxxx
* End of data *


create view qtemp.empemailv as (
  select emp.empnbr, emp.master,
         em1.email as employeeemail, em2.email as masteremail
  from qtemp.emp as emp
  left outer join qtemp.empemail as em1
    on emp.empnbr = em1.empnbr
  left outer join qtemp.empemail as em2
    on emp.master = em2.empnbr and emp.master <> 0
)

select * from qtemp.empemailv

EMPNBR  MASTER  EMPLOYEEEMAIL    MASTEREMAIL
     1       2  one@xxxxxxxxx    two@xxxxxxxxx
     2       3  two@xxxxxxxxx    three@xxxxxxxxx
     3       0  three@xxxxxxxxx  -
* End of data *

You might want to throw in an IFNULL or COALESCE on that last row and give
it a default value.
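
For example, COALESCE can supply a default for employees whose master is 0. A sketch of the same double self-join using Python's sqlite3 (SQLite standing in for DB2 for i; the '(none)' default value is just an illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp (empnbr INTEGER, master INTEGER);
    INSERT INTO emp VALUES (1, 2), (2, 3), (3, 0);

    CREATE TABLE empemail (empnbr INTEGER, email TEXT);
    INSERT INTO empemail VALUES
        (1, 'one@xxxxxxxxx'), (2, 'two@xxxxxxxxx'), (3, 'three@xxxxxxxxx');

    -- Same two left outer joins to empemail as above, with COALESCE
    -- supplying a default when the employee has no master.
    CREATE VIEW empemailv AS
    SELECT emp.empnbr, emp.master,
           em1.email AS employeeemail,
           COALESCE(em2.email, '(none)') AS masteremail
    FROM emp
    LEFT OUTER JOIN empemail AS em1 ON emp.empnbr = em1.empnbr
    LEFT OUTER JOIN empemail AS em2
         ON emp.master = em2.empnbr AND emp.master <> 0;
""")

for row in conn.execute("SELECT * FROM empemailv ORDER BY empnbr"):
    print(row)
```

The last row now shows '(none)' instead of a null for MASTEREMAIL.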

Rob Berendt
--
Group Dekko Services, LLC
Dept 01.073
Dock 108
6928N 400E
Kendallville, IN 46755
http://www.dekko.com





Mike Cunningham <mcunning@xxxxxxx>
Sent by: midrange-l-bounces@xxxxxxxxxxxx
03/04/2008 12:12 PM
Please respond to: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxx>
To: "'Midrange Systems Technical Discussion'" <midrange-l@xxxxxxxxxxxx>
cc:
Subject: Join Logical file using the same physical file twice

I have a need to join two files as if they were three. We have an employee
file that has an employee ID and the employee's supervisor's employee ID. We
have a second file, part of our menu system, that has an employee's
employee ID and their system user ID and e-mail address. I want to create a
single join file that has the employee's user ID and e-mail and the
supervisor's user ID and e-mail.

I can't do this
R EMFLE JFILE(UTPUNAME LWPEMFLE UTPUNAME)
J JOIN(1 2)
JFLD(USS# LWPCTID)
J JOIN(2 3)
JFLD(LWEMPS USS#)
The create fails saying there are too many "J" specs. It acts like the
second UTPUNAME is being ignored.

So I tried creating an alias pointing to UTPUNAME named UTAUNAME and tried
this:

R EMFLE JFILE(UTPUNAME LWPEMFLE UTAUNAME)

J JOIN(1 2)
JFLD(USS# LWPCTID)
J JOIN(2 3)
JFLD(LWEMPS USS#)
This create failed because it said DDM file UTAUNAME could not be accessed
at remote location *LOCAL. We do have a *LOCAL defined under WRKRDBDIRE
that net.data needed that points to the local system name. (Interesting
that the SQL CREATE ALIAS used DDM files).

Can anyone offer any suggestions on how to fix the DDM issue or how to
set up the kind of join I want to create?

p.s. The app that needs this uses a JDBC connection from a PC and SQL
commands to access the data. It could do two SELECT statements, but I would
prefer to keep the logic for how this join is defined, the fields it
contains, and what the application sees on the iSeries side, not in the PC
application.
--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing
list
To post a message email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: http://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives
at http://archive.midrange.com/midrange-l.



