A system of measurement that cuts into the time available for knowledge workers to do their real work needs to be included in the measurement itself, so that the people who want the paperwork can see whether the time spent doing the paperwork fits within their grand scheme of things.

As for the question of WHO is responsible for the most bugs: our programming standards include lines at the front of our programs giving a history of modifications ... a tracking code, date, programmer initials, and a brief statement of the change. Many of these are related to fixing some problem introduced by an earlier mod, and the entry points at that mod. Someone could go through this & total up the problems fixed & where they came from. 99% of ours came from our ERP vendor. A metric might be based on our latest information: of the bugs that did not come from our ERP vendor or our consultants, how many did Macintyre introduce to our total systems for each year he was on our payroll? If this were tied to future pay rates, then Macintyre might become less religious about documenting this.

> How does one count lines of code? Let's say the job is to write a program
> and I do what I usually do: find a program like the one wanted, copy
> it, modify it with changes, and clean up some old messy sections. In the
> end the new program is 300 lines shorter than the one I'd copied. Did I
> have a good day, a bad day, or a horrible day?

Can we count lines of code, or machine cycles associated with the execution of bug-free code? I should think that productivity might be considered inversely proportional to the amount of code needed to accomplish a given task. Let's exclude the lines that come into the program from file layouts with many fields not used by the program.

> A more interesting
> measurement is the number of edit/compile cycles to complete the project. Sort
> of makes you think before compiling. And isn't it the thinking part we want
> to reward?
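Someone could mechanize that tally of who introduced what. A rough sketch in Python, assuming a hypothetical header convention ("* MOD#### YYYY-MM-DD INITIALS description, introduced by MOD####"), which is the spirit of our standard but not its exact layout:

```python
import re
from collections import Counter

# Hypothetical modification-history comment convention (illustration
# only, not the actual shop standard):
#   "* MOD#### YYYY-MM-DD INITIALS description"
# optionally ending with "introduced by MOD####".
MOD_LINE = re.compile(
    r"^\*\s+(?P<id>MOD\d+)\s+(?P<date>\d{4}-\d{2}-\d{2})\s+"
    r"(?P<who>[A-Z]+)\s+(?P<desc>.*)$"
)
BLAME = re.compile(r"introduced by (MOD\d+)")

def tally_fixes(header_lines):
    """Count, per programmer, how many later mods fixed a problem
    that one of that programmer's earlier mods introduced."""
    author_of = {}     # mod id -> initials of who made it
    fixes = Counter()  # initials blamed -> number of problems fixed
    for line in header_lines:
        m = MOD_LINE.match(line.strip())
        if not m:
            continue
        author_of[m.group("id")] = m.group("who")
        b = BLAME.search(m.group("desc"))
        if b and b.group(1) in author_of:
            fixes[author_of[b.group(1)]] += 1
    return fixes

header = [
    "* MOD0101 2000-11-02 AWM Add freight column to report",
    "* MOD0123 2001-05-14 JQP Fix rounding, introduced by MOD0101",
    "* MOD0130 2001-06-01 JQP Fix page break, introduced by MOD0101",
]
print(tally_fixes(header))  # AWM gets blamed for two fixed problems
```

Of course this only works to the extent that people keep documenting the "introduced by" part, which is exactly the incentive problem above.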
There are different styles of software development, and different aspects of it that non-technical people want to reward.

I think my accounting department will consider it a successful year if they can cut costs without impairing output, or without having to admit that any manager made a mistake. I have made several suggestions about what can be done to cut costs, but if they agree with my position then it will be a tacit acknowledgment that management, or someone, made a past decision to spend more money than was necessary, and it is not always practical to blame former employees ... so perhaps my suggestions were ill-conceived or ill-timed.

For myself, my productivity perspective is: never make a mistake that affects my co-workers, such as foot-in-mouth or bugs that are not caught by QA, and fulfill end-user software modification requests before they have a chance to change their minds about what they want. Then they have in their hands exactly what they asked for, and can ask for something new ... instead of asking for something new before I have finished creating what they no longer want, so that my time has gone unrewarded.

One style: run a compile but do not print it; go into edit, include the last compile, search that listing for error-message lines, get to the corresponding line in the program and fix it; repeat the cycle until there are no more error messages. I was shown this by one of our consultants, and it struck me that it made programming mechanical, with not much provision to think about how to make the product better for end users. This approach is liked by accountants who are interested in the amount of computer paper consumed by programmers. I do not like this approach, because I like to look at all the other code that is affected by the stuff where the error was, and that is easier when I have a compiled printout in my hands.

Another style: create a program that we know is not yet complete.
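That mechanical search-the-listing-for-errors cycle could itself be scripted, which only underlines how little thinking it involves. A minimal sketch in Python; the "SEV / LINE" listing format here is invented for illustration, not an actual RPG or CL compiler listing layout:

```python
import re

def error_lines(listing_text):
    """Pull error-message lines out of a compiler listing, paired
    with the source line number each one points at. The
    '* SEV nn LINE nnnn message' layout is hypothetical."""
    errors = []
    for line in listing_text.splitlines():
        m = re.search(r"\*\s*SEV\s+(\d+)\s+LINE\s+(\d+)\s+(.*)", line)
        if m:
            errors.append((int(m.group(2)), m.group(3).strip()))
    return errors

listing = """\
0001  MOVE  AMT   TOTAL
* SEV 30 LINE 0001 Field TOTAL not defined
0002  EXCPT DETAIL
"""
print(error_lines(listing))  # one error, pointing at source line 1
```

This jumps you to each error without ever printing the listing, which is the accountant-pleasing part; what it cannot do is show you the surrounding code the fix might affect.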
Run a test of the so-far stuff, trying to think like a user: what else does this new program need that would make it more versatile and smooth to operate? What features are needed in the flow to make it more useful to the company? Go back to the code and include those ideas. Run another test, with the same kind of analysis, and also review the notes on what it was they originally asked for ... is it looking good? When it seems good enough, pass it to QA to see if they are happy with how it works in all relevant scenarios.

That's what I was doing Friday night. We have a particular model:

   Outer CL from menu
   Prompt screen with choices
   Program "A" gathers up all the data needed for each report "line" & puts it in a work file
   Program "B" pretty-prints the results from a different logical view of the work file

So they want more options on the prompt, and a restructuring of the final report ... there are perhaps 20-30 distinct changes needed. I do some of the changes, then run a compile and a test to make sure they are working right, and might miss a nuance and fix it, then do some more changes that involve a different aspect of the logic. When I ran out of time-steam-brain-gas for the evening, what I had was something that worked part way ... new prompt, new selection criteria ... the end report works, but does not yet have all the bells & whistles they want. I will tackle that next work day, but if I get interrupted with some other priority, they have a working something until I get back to it.

That is another measure of productivity ... when we leave projects in mid-stream, is anyone able to use our work output until we manage to return to those projects?

> The problem with metrics on source code is that they don't measure what
> I am producing; they only measure the path I took to get there. It would
> be like measuring the productivity of an artist by how much paint he
> used or how many different colors. You might be able to determine how
> HARD he was working, but it wouldn't have anything to do with what he
> produced.
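The gather-then-pretty-print model above is what makes a half-finished change still usable. A sketch of the same shape in Python rather than CL/RPG; all the names, fields, and sample records are invented:

```python
# Sketch of the prompt -> Program "A" -> work file -> Program "B"
# model. Program "B" reads only the work file, so the report keeps
# working even while "A" is mid-change.

def gather(orders, criteria):
    """Program 'A': apply the prompt-screen selection criteria and
    build one work-file row per report line."""
    work_file = []
    for o in orders:
        if o["plant"] == criteria["plant"] and o["qty"] >= criteria["min_qty"]:
            work_file.append({"item": o["item"], "qty": o["qty"]})
    return work_file

def pretty_print(work_file):
    """Program 'B': format the work file in item order, the way a
    logical view over the work file would present it."""
    rows = sorted(work_file, key=lambda r: r["item"])
    return "\n".join(f"{r['item']:<10}{r['qty']:>8}" for r in rows)

orders = [
    {"item": "HARNESS-1", "plant": "IN1", "qty": 120},
    {"item": "SUBASSY-7", "plant": "IN1", "qty": 5},
    {"item": "HARNESS-2", "plant": "IN2", "qty": 300},
]
report = pretty_print(gather(orders, {"plant": "IN1", "min_qty": 10}))
print(report)
```

Adding the 20-30 requested changes then means touching the prompt criteria and "A" independently of the formatting in "B", one testable piece at a time.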
There was an article in the last Supply Chain Technology News (the one on Ford & Firestone) regarding RETURN ON DATA. A company invests in software, and in masses of data captured & stored. How much of this is actually used by anyone?

With DB2/UDB we can capture, for queries, programs, and files, a count of the days each computer asset has actually been used since its original creation, when it was last changed, etc. This can then show that the core ERP package came with x,xxx,xxx programs of which we are really using x,xxx heavily, that our programmers added xx,xxx programs of which x,xxx are being used heavily & xxx on a daily basis, and that our end users have added x,xxx queries of which xx are used every day. License management of PC software can do something similar ... what have we purchased vs. what is actually being used? I suggest that this kind of picture is potentially much more valuable to a CIO than metrics associated with adding to the collection.

I have in fact recently added an *OUTFILE, viewed via Query, that lists all the queries we have, sorted by their description, so a user can easily answer the question "Do we have a program or query to do X?" ... provided the users have named their queries in a logically consistent manner (& some who have not are now adjusting names to make this new view more productive).

The most heavily used programs can then be put on a list to be audited in the next computer audit: any problems with this stuff? I plan to use this kind of data to focus on the performance of what is most heavily used. The SCTN article was about not spending disk-space hardware dollars on data that no one is using. The VALUE we are obviously getting out of this resource investment is not just the responsibility of the programmers; it is end users & management, a collective responsibility.

A related question sometimes put to me is whether we are wasting disk space, and my answer is that we are pretty lean, but not as conservative as in the past ...
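The return-on-data picture boils down to "objects owned vs. objects ever used, by who supplied them." A sketch in Python; the field names mimic the kind of object-usage statistics described above (days used since creation), but the records and source categories here are invented:

```python
# Invented sample of object-usage records: name, object type,
# who supplied it (ERP vendor, in-house, end user), and how many
# days it has actually been used since creation.
objects = [
    {"name": "ORD500", "type": "PGM", "source": "ERP",     "days_used": 2200},
    {"name": "INV810", "type": "PGM", "source": "ERP",     "days_used": 0},
    {"name": "QRYOPN", "type": "QRY", "source": "USER",    "days_used": 365},
    {"name": "RPT042", "type": "PGM", "source": "INHOUSE", "days_used": 14},
]

def return_on_data(objs):
    """Per source: (objects owned, objects ever actually used)."""
    summary = {}
    for o in objs:
        owned, used = summary.get(o["source"], (0, 0))
        summary[o["source"]] = (owned + 1,
                                used + (1 if o["days_used"] > 0 else 0))
    return summary

print(return_on_data(objects))  # e.g. ERP: 2 owned, only 1 ever used
```

The same shape of summary answers the license-management question for PC software: what we bought vs. what anyone runs.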
I think that the time that could be spent trying to reduce disk-space consumption could be better spent trying to improve overall performance & eliminating garbage data.

Alister William Macintyre
Computer Data Janitor etc. of BPCS 405 CD Rel-02 on 400 model 170 OS4 V4R3
(forerunner to IBM e-Server i-Series 400) @ http://www.cen-elec.com
Central Industries of Indiana ---> Quality manufacturer of wire harnesses and electrical sub-assemblies

+---
| This is the Midrange System Mailing List!
| To submit a new message, send your mail to MIDRANGE-L@midrange.com.
| To subscribe to this list send email to MIDRANGE-L-SUB@midrange.com.
| To unsubscribe from this list send email to MIDRANGE-L-UNSUB@midrange.com.
| Questions should be directed to the list owner/operator: david@midrange.com
+---
This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].