<snip>
Several years back
</snip>
Right now it's about 50M objects per user.
Unless it wasn't the number of objects, but rather the limit that one user cannot own more than 8.5M TB of data.
https://www.ibm.com/support/knowledgecenter/ssw_ibm_i_73/rzamp/rzampsecurity.htm
-----Original Message-----
From: MIDRANGE-L <midrange-l-bounces@xxxxxxxxxxxxxxxxxx> On Behalf Of Richard Schoen
Sent: Thursday, April 11, 2019 1:23 PM
To: midrange-l@xxxxxxxxxxxxxxxxxx
Subject: Re: IFS limits
Boy have I got an IFS story for you.
Found out the hard way on IFS limits several years back with a customer.
Customer was generating 1,000,000 documents per month. Yep, per month.
We discovered the ownership limit was about 1,000,000 documents.
Started getting user issues and things blowing up.
Ended up creating a new directory each week, I believe, with a new user profile to go with it.
Now talk about painful management of objects.
250,000 is probably not bad, but it's probably time to open a new directory if possible.
Either way, with that many objects, IFS backup time is probably slow.
And eventually you may see object ownership issues unless ownership limits have been bumped up.
Many customers I have worked with have migrated to using NFS and SAN storage these days, mounted so that you treat it like the IFS even though it's not.
Object ownership issues go away, and of course you have to back up the SAN or NAS instead. But that disk is much, much cheaper.
Hope this helps
Regards
Richard Schoen
Web:
http://www.richardschoen.net
Email: richard@xxxxxxxxxxxxxxxxx
Phn: (612) 315-1745
----------------------------------------------------------------------
message: 1
date: Thu, 11 Apr 2019 10:21:32 -0500
from: Joe Pluta <joepluta@xxxxxxxxxxxxxxxxx>
subject: Re: IFS limits
I agree with that sentiment, John. The last time I had a situation where we had a large number of files in a directory, we started seeing various performance issues. Lists took much longer than simple linear math would have suggested, some utilities couldn't handle the number of files, various commands in QShell had problems, and so on.
Generally speaking, I prefer to limit the number of files to the low thousands. After that, I try to come up with a subdirectory management structure.
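One common way to build such a structure (a generic technique, not necessarily Joe's specific design) is to fan files out by a hash prefix so that each subdirectory stays in the low thousands no matter how many files accumulate:

```python
import hashlib
import os

def fanout_path(base, filename, levels=2, width=2):
    """Map a filename to a stable subdirectory via its hash prefix.

    With two levels of two hex digits (256 * 256 buckets), even tens of
    millions of files average only a few hundred per directory, and the
    same filename always maps to the same place.
    """
    digest = hashlib.sha256(filename.encode("utf-8")).hexdigest()
    parts = [digest[i * width:(i + 1) * width] for i in range(levels)]
    return os.path.join(base, *parts, filename)
```

The trade-off versus date-based bucketing is that hash fan-out spreads load evenly but loses the "browse by week" property.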
On 4/11/2019 10:12 AM, John Yeung wrote:
On Thu, Apr 11, 2019 at 10:08 AM Doug Englander
<denglander@xxxxxxxxxxxxxxxxxxxxxxxx> wrote:
We have one IFS folder with over 261,000 PDFs in it. I am wondering what the limit is so I can be proactive and avoid a problem.
In my opinion, quarter of a million objects in one directory is way,
way, way, way, way too many already. Obviously, it's best if you're
proactive right from the beginning. I have a rule of thumb: If I
stumble upon a directory and I have to wonder if it has too many files
in it, then it has too many files in it.
What counts as "too many" for my sensibilities is so far below any
hard limits that I keep forgetting that there can even BE hard limits,
other than total disk space.
John Y.
------------------------------
message: 2
date: Thu, 11 Apr 2019 15:43:38 +0000
from: Rob Berendt <rob@xxxxxxxxx>
subject: RE: IFS limits
Good point, Joe. Just because IBM supports it doesn't necessarily mean that all the open source tools do.
-----Original Message-----
From: MIDRANGE-L <midrange-l-bounces@xxxxxxxxxxxxxxxxxx> On Behalf Of Joe Pluta
Sent: Thursday, April 11, 2019 11:22 AM
To: midrange-l@xxxxxxxxxxxxxxxxxx
Subject: Re: IFS limits
<snip>
--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list.
To post a message, email: MIDRANGE-L@xxxxxxxxxxxxxxxxxx
To subscribe, unsubscribe, or change list options, visit:
https://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxxxxxxxx
Before posting, please take a moment to review the archives at
https://archive.midrange.com/midrange-l.
Please contact support@xxxxxxxxxxxx for any subscription related questions.
Help support midrange.com by shopping at amazon.com with our affiliate link:
https://amazon.midrange.com
------------------------------
message: 3
date: Thu, 11 Apr 2019 12:23:05 -0400
from: "midrange" <franz9000@xxxxxxxxx>
subject: RE: IFS limits
While I agree it is best to understand the limits before designing a system, in 40 years of coding it has been a constant battle dealing with limitations imposed by the big OS companies: IBM, Microsoft, and others. The IFS object limit is only one of many. The mother of all limits was Y2K.
That it is a performance hit to have more than a few thousand objects in a directory, when the maximum is 999,998 objects, kind of bites.
Having to build complex application structures to work around such limits adds more overhead.
I have worked on many applications where IFS use is critical; just because your particular business might not have that need doesn't mean it is not fairly common.
It's also true that big volumes may be 10 years down the road from when the app is designed.
In my current work we import, generate, and export thousands of documents in a day or two, and have regulatory requirements to keep data for a long time, so we work around the limit.
For the record, we have been able to live with the performance of serving docs from directories with 200,000+ objects (and I think some directories have much more).
We have had to be careful when generating a doc (PDF, TXT, CSV) and then immediately moving or FTPing it: the workaround is either a 2-second DLYJOB, or monitoring the message about the object's existence and retrying.
-----Original Message-----
From: MIDRANGE-L [mailto:midrange-l-bounces@xxxxxxxxxxxxxxxxxx] On Behalf Of Joe Pluta
Sent: Thursday, April 11, 2019 11:22 AM
To: midrange-l@xxxxxxxxxxxxxxxxxx
Subject: Re: IFS limits
<snip>
------------------------------
End of MIDRANGE-L Digest, Vol 18, Issue 592
*******************************************