Thanks,
I do have journaling turned on, and I will turn it off
(this is a simple transaction file, written from only one place and
updated from only one other place...)
It does have a logical file over it (selecting "unprocessed" records only)
G
Charles Wilt wrote:
1) If the table is journaled, make sure you use commitment control (or
stop journaling).
2) Don't write one row at a time; insert multiple rows per statement:
INSERT INTO <lib.file> (field1, field2, etc)
VALUES (value1a, value2a, etc),
(value1b, value2b, etc),
(value1c, value2c, etc),
<....>
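A concrete sketch of that multi-row INSERT, with hypothetical names (MYLIB.TRANSLOG and its columns are made up for illustration, not from the original thread):

```sql
-- Hypothetical table and column names, for illustration only.
-- One multi-row INSERT replaces three single-row statements, so the
-- statement is prepared once and rows are inserted in a single pass.
INSERT INTO MYLIB.TRANSLOG (TRANID, TRANAMT, PROCESSED)
VALUES (1001, 25.00, '0'),
       (1002, 17.50, '0'),
       (1003, 99.99, '0')
```

The win comes from amortizing per-statement overhead (parsing, journal entries, index maintenance on the logical file) across many rows instead of paying it once per row.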
Charles
On Tue, Feb 17, 2009 at 4:06 PM, Gqcy <gmufasa01@xxxxxxxxx> wrote:
I have a process that just inserts rows: INSERT INTO <lib.file>
(field1, field2, etc) VALUES (value1, value2, etc).
Things were great... until I got about 400k records in the file;
then, when I wanted to do a lot of writes at nearly the same time,
performance died.
I see something in the SQL monitor about doing a "table scan" before
each INSERT.
How do I NOT do this? (if I can)
I have no key on this physical file.
Should I create this file differently?
Does someone have the "perfect example" of a process that only writes to
a BIG FILE?
Thanks
Gerald
--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list
To post a message email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: http://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives
at http://archive.midrange.com/midrange-l.
As an Amazon Associate we earn from qualifying purchases.