Then how come it takes longer for my OS to load, my word processor to load, my spreadsheets to load, my games to load, etc., than it did 5 or 10 years ago? Could it be because programmers in the PC world decide what "fast enough" is, so that even though processors have skyrocketed in speed and power, they can't keep up with the horrendous bloat added by "improvements" in software?

While I certainly see the changes in the games I play, the truth is I don't do much more with my word processor than I did 10 years ago. The machine I am running is probably several thousand times as fast as the Apple II+ I used so many years ago (comparing a 700 MHz Pentium to a 1 MHz 6502), but it doesn't respond as quickly when I want to write a letter or run a spreadsheet. Since my software runs slower than it did before, I suggest that PC hardware is also losing the fight with bloated, inefficient code written by software engineers who count on the next generation of processors to carry the load of their poor code.

I'm really sorry you brought this up, because I am an advocate of Java. But compiler and code optimization need to be better than they have been. I am shocked that people think the hardware should just "carry the load" of whatever crummy code is heaped upon it. Don't people care about the quality of their work any more? PC coders don't worry about efficiency, and that "helps the economy" by forcing PC users to buy faster and bigger machines every year just to keep installing the newer versions of their software.

When you say optimizing something that is already sub-second is "counterproductive," you are being a little myopic. I'm sure you have some particular setting in mind, but if that routine (or method) you are talking about is used a billion times a day, then there can be a lot of value in optimizing it. The difference between 1/10000th of a second and 2/10000ths of a second, over a billion calls, comes to 100,000 seconds, which is about 28 hours of processing time.
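The arithmetic behind that claim is easy to check. Here is a quick sketch; the call count and per-call timings are the hypothetical figures from the paragraph above, not measurements of any real system:

```python
# Back-of-the-envelope check: if a method runs a billion times a day,
# shaving its cost from 2/10000ths of a second to 1/10000th of a second
# saves roughly a full day's worth of processing time.

calls_per_day = 1_000_000_000   # hypothetical hot method, a billion calls/day
slow_seconds = 2 / 10_000       # 2/10000ths of a second per call
fast_seconds = 1 / 10_000       # 1/10000th of a second per call

saved_seconds = calls_per_day * (slow_seconds - fast_seconds)
saved_hours = saved_seconds / 3600

# Spread across a thousand users, that is still a noticeable per-user wait.
per_user_seconds = saved_seconds / 1000

print(f"{saved_seconds:.0f} s saved total, about {saved_hours:.1f} hours")
print(f"{per_user_seconds:.0f} s per user across 1000 users")
```

The per-user figure (about 100 seconds) is where the "minute and a half" of added wait comes from when the cost is spread over a thousand users.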
Now, you probably think that is okay if you spread it out among a thousand users. But the problem is that the next programmer felt the same way, and his code added another minute and a half to their wait, and the next, and the next. Just because my word processor does a zillion things in a big hurry doesn't mean I don't mind waiting.

Steve Richter wrote:
>Hello Walden ( and Phillip and others interested in the subject as I am ),
>
>In this day of very cheap, practically free, cpu, I think the readability
>and logical organization of code is much more important than the nbr of
>instructions needed to run to perform a function. I think that even code
>bloat gets a bad rap. If the bloat is organized and it runs fast on a human
>scale ( sub second ), then it is not bloat.
>
>More times than not, optimizing code reduces its readability. If what you
>are optimizing is already sub second, you are being counterproductive. Only
>on the cpu challenged as400 does activity like this have a justification.
>
>Steve Richter

--
Chris Rehm
javadisciple@earthlink.net
Beloved, let us love one another: for love is of God; and every one that loveth is born of God, and knoweth God. 1 John 4:7
This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page.