On 5/16/2013 2:48 PM, John Jones wrote:
My former employer uses an app to create PDFs from SPLFs. With the number
of concurrent jobs and the raw number of conversions that would occur
during a typical workday, MIMIX could not keep up, even though the WAN
link ran at full Gb speed.

The actual issue was that all of the object creates and updates during
the PDF creation process would bog MIMIX down. I think it was trying to
lock each object as soon as it was created, while the conversion process
was not yet done with it.

In their case, a workable solution was simply not to replicate during the
day. They separated the IFS folder (with about 1,000 subdirectories) into
its own MIMIX replication group, enabled that group at around 11 PM, and
disabled it around 5 AM. That was plenty of time for MIMIX to catch up.

So the files were replicated daily. Not instant, but for their use that
was good enough, and it didn't require any additional software or system
reconfiguration.


I just got done dealing with this issue. I had two completely different systems using the IFS for XML document storage (one for order management, the other for EDI). It turns out that the IFS in particular and the IBM i in general really aren't set up for handling millions of documents.

Two issues. First, any directory with more than 16,000 entries begins to fail on most QShell or PASE commands, although the native commands such as MOV and DEL still work. But that does you no good when you want to JAR up a directory with 150,000 files. Second, user profiles don't like owning millions of objects. They just don't. They get bigger and bigger, and eventually your SAVSECDTA starts taking hours. Not exactly the best situation.
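One way around the QShell/wildcard limit is to build the archive programmatically, touching one directory entry at a time, so no shell glob (and no argument-list limit) is ever involved. A minimal sketch in Python; the function and file names here are made up for illustration:

```python
import os
import tempfile
import zipfile

def zip_directory(src_dir: str, zip_path: str) -> int:
    """Archive every regular file in src_dir, iterating entries one at a
    time so no shell wildcard expansion is involved. Returns the number
    of files archived."""
    count = 0
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        with os.scandir(src_dir) as entries:
            for entry in entries:
                if entry.is_file():
                    zf.write(entry.path, arcname=entry.name)
                    count += 1
    return count

# Demo: a small temporary directory standing in for a 150,000-file IFS folder.
src = tempfile.mkdtemp()
for i in range(100):
    with open(os.path.join(src, f"doc{i:05d}.xml"), "w") as f:
        f.write("<order/>")

zip_out = os.path.join(tempfile.mkdtemp(), "docs.zip")  # keep the zip out of src
archived = zip_directory(src, zip_out)
print(archived)  # 100
```

The same iterate-don't-glob idea applies whether the target is a ZIP, a JAR, or a save file.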

So what I did was move everything back to the database. I use CLOBs and store everything in a file. That lets me manage the data with traditional keys rather than crazy nested directory structures, and archival is as simple as copying records from one file to another. It's a beautiful thing. As I said, I'm using it for XML, but it's just as applicable to PDFs or indeed any other stream file.
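The table-of-CLOBs approach above can be sketched roughly like this. SQLite (via Python's standard library) stands in for DB2 for i here, a TEXT column stands in for a CLOB, and the table and column names (docs, ordnum) are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# "docs" holds the live documents; ordnum is the traditional key that
# replaces a deep directory path such as /orders/2013/05/16/12345.xml
cur.execute("CREATE TABLE docs (ordnum INTEGER PRIMARY KEY, doc TEXT)")
cur.execute("CREATE TABLE docs_archive (ordnum INTEGER PRIMARY KEY, doc TEXT)")

cur.execute("INSERT INTO docs VALUES (?, ?)", (12345, "<order id='12345'/>"))
cur.execute("INSERT INTO docs VALUES (?, ?)", (12346, "<order id='12346'/>"))

# Keyed retrieval replaces an open() on a nested IFS path.
doc = cur.execute("SELECT doc FROM docs WHERE ordnum = ?", (12345,)).fetchone()[0]

# Archival is just copying records from one file to another.
cur.execute("INSERT INTO docs_archive SELECT * FROM docs WHERE ordnum < ?", (12346,))
cur.execute("DELETE FROM docs WHERE ordnum < ?", (12346,))
```

A side benefit of this layout is that the documents ride along with normal database backup and journaling, instead of needing a separate IFS save.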

TIP: For XML you can use the XML data type rather than BLOB or CLOB. It checks that the data is well formed, and hopefully some day it will also allow pureXML querying as DB2 for LUW does.
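A rough sketch of the well-formedness check the XML type performs at insert time, using Python's XML parser as a stand-in for the database's:

```python
import xml.etree.ElementTree as ET

def is_well_formed(doc: str) -> bool:
    """Roughly what an XML column enforces on INSERT: the value must
    parse as well-formed XML, or the insert is rejected."""
    try:
        ET.fromstring(doc)
        return True
    except ET.ParseError:
        return False

print(is_well_formed("<order><item sku='A1'/></order>"))  # True
print(is_well_formed("<order><item></order>"))            # False (mismatched tags)
```

With a plain CLOB you would have to do this check yourself (or discover the bad data later when a consumer tries to parse it).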

Joe

This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].
