Hello John,

thanks for taking the time to assemble such a nice read!


On 28.12.2021 at 03:10, John Yeung <gallium.arsenide@xxxxxxxxx> wrote:

> Well, I am not too familiar with old IBM midrange stuff. I started paying attention to versions at around V5R2.

No problem, I can contribute my experience. ;-)

> the customizations that had to be made to run on AS/400 were very visible.

Of course. Many things are quite different from Unix, even more different than just comparing Windows vs. Unix. And when you dare to expand into what I'd call a "true port", I'd add the requirements to convert man pages to help panel groups, to build a converter from the IBM i style braced parameter passing to the Unix style parameters understood by getopt (or be lazy like IBM, see the exportfs command), and, most important, to have some way to at least read from and write to the text part of source files in QSYS.LIB: record I/O. Expanding on that, also using DSPFs and PRTFs would be a nice thing to have, as would some automatic redirect from stdout to the job log, etc.
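
To make the parameter idea concrete, here's a quick self-invented C90 sketch: it converts one IBM i style "KEYWORD(value)" token into the "--keyword value" pair a getopt based tool would expect. Quoting rules, value lists, and *SPECIAL values are deliberately left out.

#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Convert one "KEYWORD(value)" token into "--keyword" plus "value".
 * The caller provides buffers large enough for the token; real CL
 * parsing (quoting, lists, *SPECIAL values) is omitted on purpose.
 */
static int cl_to_getopt(const char *tok, char *opt, char *val)
{
    const char *lp = strchr(tok, '(');
    const char *rp = strrchr(tok, ')');
    size_t i, klen;

    if (lp == NULL || rp == NULL || rp < lp)
        return -1;                       /* not KEYWORD(value) shaped */
    klen = (size_t)(lp - tok);
    opt[0] = opt[1] = '-';
    for (i = 0; i < klen; i++)           /* getopt options are lower case */
        opt[2 + i] = (char)tolower((unsigned char)tok[i]);
    opt[2 + klen] = '\0';
    memcpy(val, lp + 1, (size_t)(rp - lp - 1));
    val[(size_t)(rp - lp - 1)] = '\0';
    return 0;
}

int main(void)
{
    char opt[64], val[64];

    if (cl_to_getopt("OBJ(MYLIB/MYPGM)", opt, val) == 0)
        printf("%s %s\n", opt, val);     /* prints: --obj MYLIB/MYPGM */
    return 0;
}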

True porting is an awful lot of work, compared to just using some AIX box running somewhere else, running configure && make, copying over the stuff, and hoping it will not crash. :-)

A good beginning for porting things would be a complete, hand-crafted config.h for GNU autoconf based projects, #define'ing all the functions available in the C90 standard library; the documentation states that V4R5 adheres to C90. (Later, things became more blurry, and no official statement about C level compatibility was available anymore.) Unfortunately, this is a serious undertaking, so I haven't done it yet. Maybe extracting the available function names from the libc service program could be automated…
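
Roughly what I have in mind (the macro names follow the usual autoconf conventions; every single one would still have to be verified against the actual libc service program):

/* config.h -- hand-crafted sketch for an autoconf based project,
 * assuming only the C90 library documented for V4R5.  Each macro
 * needs verification against the actual libc service program.
 */
#define STDC_HEADERS 1          /* C90 guarantees stdlib.h, string.h, ... */
#define HAVE_STRING_H 1
#define HAVE_STDLIB_H 1
#define HAVE_LIMITS_H 1
#define HAVE_MEMCPY 1           /* all part of the C90 library */
#define HAVE_STRCHR 1
#define HAVE_STRTOL 1
/* Anything beyond C90 has to be probed by hand: */
/* #undef HAVE_UNISTD_H */      /* POSIX, not C90 */
/* #undef HAVE_FORK */          /* no fork() on OS/400 anyway */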

> The biggest and most fundamental was that the internal representation of strings was EBCDIC.

The native representation on IBM i, and thus the best choice when it comes to text string I/O with other system components. :-) Also, it means no additional porting work, because it's the "natural" encoding for IBM i. ;-)

Yes, the work starts at the latest when interfacing with the outside world, e.g. via the network.

> Further, not all of the standard library (supposedly included with every implementation of Python) was ported, so software which claimed to "run anywhere that Python 2.3 runs" would not necessarily run on iSeriesPython.

Same with the Perl port, which by now is also somewhat dated.

> If your hobbyist rules allow the use of iSeriesPython 2.3.3, I can point you to a version compiled for V4R5. (You have to use the Wayback Machine!)

Rules… LOL! :-) TL;DR: No, thanks.

Long version:

They would allow it, but there is one more, related, artificial restriction: the Model 150 I'm using is clocked at a bit less than 50 MHz. This is an incentive for me to make the most of these restricted resources. And what's more in the spirit of the old AS/400s than squeezing the biggest amount of productive work out of a limited machine? :-)

Remember my struggle to avoid SQL wherever possible? This isn't because I don't like SQL, but because it is inefficient as hell when viewed from a "processor cycles vs. actual work being done" standpoint, at least in V4R5. It takes multiple seconds to get everything prepared, which is rather annoying for interactive applications. This matters less for typical batch jobs chewing on big files with huge numbers of records.

See example here: https://github.com/PoC-dev/asterisk-translate-clid

This daemon ("never ending batch job") provides a reply to a request within around 250 ms, even when the machine hasn't actively served connections for over 24 hours. And I'm sure there are other scenarios where such a *very* simple but efficient and *fast* way to expose tiny amounts of data buried in a PF to the outside world is very welcome.
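
For anyone who has never touched record I/O from ILE C, here's a rough sketch of the kind of keyed lookup such a daemon performs. Library, file, and record layout (MYLIB/CLIDPF with a 15-byte key and a 30-byte name) are made up for illustration; the actual project linked above differs in detail.

#include <recio.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical record layout of MYLIB/CLIDPF: a 15-byte phone
 * number as key, followed by a 30-byte display name.
 */
typedef struct {
    char number[15];
    char name[30];
} clidrec_t;

int main(void)
{
    _RFILE   *fp;
    _RIOFB_T *fb;
    clidrec_t rec;
    char key[15];

    fp = _Ropen("MYLIB/CLIDPF", "rr");   /* open for record input */
    if (fp == NULL) {
        perror("_Ropen");
        return 1;
    }

    memset(key, ' ', sizeof(key));       /* fixed length, blank padded */
    memcpy(key, "4953835551212", 13);

    /* Keyed read: no prepare phase, no optimizer, just the access path. */
    fb = _Rreadk(fp, &rec, sizeof(rec), __KEY_EQ | __NO_LOCK,
                 key, sizeof(key));
    if (fb->num_bytes == sizeof(rec))
        printf("%.30s\n", rec.name);
    else
        printf("not found\n");

    _Rclose(fp);
    return 0;
}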

But seriously… my very, very limited experience with Python (extending Fail2Ban) on Linux with not-too-old hardware didn't exactly show stuff running fast. This might have been related to some side effects, or to my own inexperience with Python itself. All in all, not exactly an incentive to learn more about it. Even more so when I suspect I'd invest time getting Python to run on OS/400, only to learn that the CPU is at 100% while the actual work to be done trickles through very slowly.

Finally, add the fact that Python means learning a new programming language for me. While that isn't bad in itself, it made me realize: hey, shouldn't a hobby be something to enjoy? Looking at all the things I planned to do and try out, all those ideas that seemed cool to build and publish, stacked upon software dependencies I'd need to fulfill first, became frustrating. (Example: Wouldn't it be nice to port xxx? Yes, xxx would be a nice thing to have, especially as a server job. But OS/400 has no fork, and the older versions have no SSL support worth mentioning. So there is no easy and fast way to have, say, an IMAP server for OS/400 that I could use from my modern Mac.) To stay sane, I have to start sorting things out. What is enjoyable? What is likely to fail because getting to the first "it basically does what it should" requires so much prior work that I lose patience and interest?

When something finally works, preferably efficiently, without my losing interest through being entangled in an endless chain of dependency hell… that is my hobbyist's reward. :-)

> For making business charts, you could conceivably leverage Excel. The xlwt package I mentioned earlier doesn't expose the charting capabilities, as far as I'm aware, so it would be another hobbyist's challenge to backport XlsxWriter (a much better and more modern Excel file writer) to Python 2.3 (from Python 2.7).

Not exactly what I had in mind. :-) Maybe my "boss says" example wasn't a good one. ;-)

I could probably use Excel (or Numbers on my Mac) more easily by generating CSV and importing that into a ready-made sheet + graph, which would involve semi-manual steps. Expanding on the idea of using some "external" tool for graphing, I could also reuse my already existing rrdtool setup on Linux and fetch data from my 150 at regular intervals via cron. I'd love to have a self-contained solution, though. But see above about dependencies…
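
Just to illustrate the export side, a minimal made-up example: appending one timestamped sample as a CSV row that cron could later fetch and feed to rrdtool (path and value are placeholders):

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Path and measured value are placeholders for the example. */
    FILE *fp = fopen("/home/PoC/stats.csv", "a");
    time_t now = time(NULL);
    double value = 42.0;

    if (fp == NULL)
        return 1;

    /* One row per sample: Unix timestamp, value.  Easy to import
     * into a spreadsheet, or to transform into the timestamp:value
     * form that rrdtool update expects.
     */
    fprintf(fp, "%lu,%.2f\n", (unsigned long)now, value);
    fclose(fp);
    return 0;
}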

> I know, it's all pretty circuitous, but given your constraints, there might not be any short, straight paths.

I know, and I very much appreciate your thinking seriously about possible solutions. In fact, reusing rrdtool as an external graphing solution is really something I'll consider before digging into the rabbit hole of GDDM. That's something I'd love to postpone to a later point in time, when other, easier things are done and running.

Thumbs up!

:wq! PoC

