Aaron, I'm looking at you :)
http://archive.midrange.com/rpg400-l/200204/msg00438.html

...As long as Aaron is okay with it ... he seems to have the 'patent' on RPGDocs :P

Whew, the pressure of it all. I have had one good idea my whole life and now you are asking me to give it up to the community!? ;-)

At my previous place of employment we had a programmer working on such a tool: it would take an RPG program and, based on various meta-data, produce a whole slew of information in a DB2 table. Thinking it through more now, it would seem best to put that information in an XML file to be processed by a second step (i.e. XSLT), which could produce any desired end result (e.g. a single HTML page, or a group of them, that could be automatically published to a wiki site).

Here's how I would do RPGDocs if I started today.

1. Determine what meta-data I was going to collect and how I was going to get it.
   1a. Files used
   1b. H-spec options used
   1c. Variable name declarations (good for search if you want to find all programs where INV# is used)
   1d. Procedure names
   1e. Subroutine names
   1f. Etc...

2. Determine what data you want to collect that is more on the comments side. For instance, many programmers put comment blocks above each sub procedure and subroutine, and also at the top of each program (see below). Each shop has different naming conventions for denoting or describing the info in the comment block (e.g. @Program, Program Name:, Program Name...:, etc.), so these values would have to be configurable; again, I would use XML to configure it. The hard part with this is knowing when the data stops (i.e. what happens when "@Description -" spans multiple lines?).

//***************************************************************************
// @Program - BFXML (Build From XML)
// @Author - Aaron Bartell
// @Creation Date - 2006-07-07
// @Description - Dynamically build an RPG program that will parse the
//                specified xml file.
// @Notes - ALB 2006-07-07 This should go out in the next release (v1.2).
//***************************************************************************

3. Determine how to catalog the information after it has been "searched" or "crawled". For this I would use an XML file like the following. The beauty of putting all this information in an XML document is that XML is easily extended and can be consumed by MANY other tools out there (thinking of Lucene: http://lucene.apache.org/java/docs/).

<library name="PROD1">
  <program name="BFXML" memberName="BFXML">
    <copybooks>
      <copybook name="ErrorCp" qualifiedName="QSOURCE,ErrorCP" />
      <copybook name="UtilCp" qualifiedName="QSOURCE,UtilCP" />
    </copybooks>
    <HspecOptions>
      <Hspec keyword="dftactgrp" value="*no"/>
    </HspecOptions>
    <tables>
      <table name="CONFIGPF" [put attributes for all F-spec column data] />
    </tables>
    <variables>
      <variable name="gError" type="datastructure" based="RXS_Error"/>
      ...
    </variables>
    <subprocedures>
      <subprocedure name="parseXML" exported="false">
        <parm name="pXmlFile" type="varying" length="256"/>
      </subprocedure>
    </subprocedures>
    [there would be much more information, but you get the idea. Basically log anything and everything about the program]
  </program>
</library>

4. Build an XSLT (Extensible Stylesheet Language Transformations) stylesheet to turn the generated XML into a grouping of HTML files that could easily be published as a static site, similar to JavaDocs. Of course we would want to make it not so static, because JavaDocs are poor for searching (check out this next-generation JavaDoc called JavaRef: http://mowyourlawn.com/blog/?p=22). No reason we couldn't create an application just like it for RPGDocs.

The above only covers searching the source and hasn't even gotten into processing the APIs that would give you interrelations, like which other modules are making use of module xyz. The best part about this is that a source-crawl tool to do something like this has most likely already been written; we would just have to modify it to go over RPG.
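To make step 2 concrete, here is a minimal sketch of a comment-block parser. It is in Python purely for brevity (a real RPGDocs crawler would more likely be Java, as noted below), and the function name, the dict-based tag configuration, and the continuation rule are all my own assumptions, not an existing tool. It shows one answer to the "when does the data stop?" question: a value runs until the next configured marker or the end of the block.

```python
def parse_comment_block(lines, tags):
    """Collect tagged values from an RPG-style '//' comment block.

    `tags` maps a shop-configurable marker (e.g. "@Description") to the
    key it should be stored under.  A line starting with a known marker
    opens a new value; any following comment line that starts no marker
    is treated as a continuation of the current value, so a multi-line
    "@Description -" simply stops at the next marker or end of block.
    """
    result = {}
    current = None
    for line in lines:
        text = line.lstrip("/ ").rstrip()
        if text.startswith("*"):        # border line of asterisks
            continue
        matched = False
        for marker, key in tags.items():
            if text.startswith(marker):
                # value follows the marker and an optional "-"/":" separator
                result[key] = text[len(marker):].lstrip(" -:")
                current = key
                matched = True
                break
        if not matched and current and text:
            result[current] += " " + text   # continuation line
    return result

# Demo using the example block from above.
block = [
    "//*********************************************************",
    "// @Program - BFXML (Build From XML)",
    "// @Author - Aaron Bartell",
    "// @Description - Dynamically build an RPG program that will",
    "//                parse the specified xml file.",
    "//*********************************************************",
]
tags = {"@Program": "program", "@Author": "author",
        "@Description": "description"}
info = parse_comment_block(block, tags)
```

The `tags` dict is where the per-shop configuration (which in practice would come from the XML config file mentioned above) plugs in.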
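And to show how little code the step 3 catalog takes once the crawl data is in hand, here is a sketch that assembles the `<library>`/`<program>` layout above with Python's standard-library ElementTree. The element and attribute names follow the sample; the helper function and its input shapes (lists of dicts) are illustrative assumptions.

```python
import xml.etree.ElementTree as ET

def catalog_program(library, name, variables, subprocedures):
    """Assemble crawl results into the catalog layout sketched above."""
    lib = ET.Element("library", name=library)
    prog = ET.SubElement(lib, "program", name=name, memberName=name)

    vars_el = ET.SubElement(prog, "variables")
    for v in variables:
        ET.SubElement(vars_el, "variable", **v)

    procs = ET.SubElement(prog, "subprocedures")
    for p in subprocedures:
        proc = ET.SubElement(procs, "subprocedure",
                             name=p["name"], exported=p["exported"])
        for parm in p.get("parms", []):
            ET.SubElement(proc, "parm", **parm)
    return lib

lib = catalog_program(
    "PROD1", "BFXML",
    variables=[{"name": "gError", "type": "datastructure",
                "based": "RXS_Error"}],
    subprocedures=[{"name": "parseXML", "exported": "false",
                    "parms": [{"name": "pXmlFile", "type": "varying",
                               "length": "256"}]}],
)
xml_text = ET.tostring(lib, encoding="unicode")
```

Because the output is plain XML, the same document feeds the step 4 XSLT, a Lucene indexer, or anything else downstream without changes.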
Note that it would probably be written in Java, but all the better IMO, because then you can more easily take advantage of tools like Lucene.

Anyways, those are my thoughts on the matter. My forecast for free time to do something like this in the near future is rainy at best. Right now I am working on an open source tool called IFSARCH that will allow an iSeries shop to easily schedule jobs to archive files in the IFS (e.g. .xml, .pdf, .html, .xls, .doc, .txt, etc.) with wild-card features, zipping and a whole bunch more!

HTH,
Aaron Bartell

Add my blog to your rss reader: http://mowyourlawn.com/blog
This mailing list archive is Copyright 1997-2025 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].