Software Performance, the Web and Microsoft

We live in an era where processing power is plentiful and everyone is connected to the web. Developing software is easier than ever, despite the eternal struggle software professionals wage against bad development practices. It looks like we are living in a new era of software productivity, but it is interesting to compare what we are doing today with what companies of the past era, such as Microsoft, did.

The Reality of Microsoft in the 90s

Unlike us, Microsoft grew up in a time when processors were not plentiful, but they were getting exponentially faster. This had a profound impact on the way software engineers viewed the resources available for software deployment.

Since processors in the 80s were getting exponentially faster, and the typical project at MS took one to two years to complete, they rarely had any need to optimize code. They could simply write software for the most high-end machines they could find on the market, make sure the software ran on those systems, and do whatever was necessary to finish coding within the allocated time. By the time the software was finally released to consumers, the machine that two years earlier was affordable only inside the MS labs was now commonplace in the market.

No matter how badly the program was written in terms of performance, it was virtually guaranteed to work on consumers' machines. After all, at release time consumers had better hardware than MS itself had two years earlier. This means that users could run the programs even faster than MS programmers could just one or two years before.

The Web Reality

On the web, on the other hand, there is little time difference between programming and delivery of software. First, there is no physical barrier to software delivery. For example, we can use the latest production code for Gmail just by reloading the web browser. Ten years ago, MS had to carefully prepare a master disk, manufacture millions of CDs, and send them to distribution centers, then to stores, then to your home, and you still needed to spend a few hours installing the new program. This process could span anywhere from a few weeks to a few months.

Performance is much more critical today than it was in the past, because machines are not getting dramatically faster. However, we have two advantages:

  • We can now do most of the work on a server. Thanks to the web, open standards can be used to write truly distributed systems. If something is time consuming, we can put it on a fast machine, or distribute it among several slow machines.
  • Design decisions are easier to fix. No code needs to be delivered in binary form, so we are free to change the implementation as much as we want. If a solution is not scalable enough, just change your algorithms and/or infrastructure and keep the same interface, as the sketch after this list illustrates.
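
Here is a minimal sketch of that second point. The word-counting functions below are hypothetical, invented purely for illustration: callers depend only on a stable interface, so the implementation behind it can be swapped after deployment without anyone noticing.

    # A sketch of "keep the interface, swap the implementation".
    # All names here are hypothetical; nothing refers to a real service.
    from collections import Counter
    from concurrent.futures import ThreadPoolExecutor

    def count_words_v1(documents: list[str]) -> Counter:
        """First release: one loop on one machine. Correct but slow."""
        totals: Counter = Counter()
        for doc in documents:
            totals.update(doc.split())
        return totals

    def count_words_v2(documents: list[str]) -> Counter:
        """Same signature, new implementation: shard the work across
        worker threads and merge the partial results."""
        totals: Counter = Counter()
        with ThreadPoolExecutor() as pool:
            for partial in pool.map(lambda d: Counter(d.split()), documents):
                totals.update(partial)
        return totals

    # Callers bind to the interface, not the implementation, so v1 can be
    # replaced by v2 (or by a call to a remote cluster) on the next reload:
    count_words = count_words_v2
    print(count_words(["the web the web", "performance matters"]))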

I believe it is a great advantage to have the web as an intermediary, as a way of solving performance issues. A lot has been done in this respect, but there are still many lessons that software developers can learn when developing scalable applications for the web.

Should we Write Generic Code?

In programming, there are two extremes that developers consistently fall into. One is writing code that is not extensible: it implements the required functionality but provides no avenues for future change. This is the mistake made by most people starting out in the profession.

The other extreme of the spectrum is occupied mostly by people who are becoming more experienced, but it is also a big problem: writing code that is too generic and extensible, without an immediate need.
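
To make the two extremes concrete, here is a hypothetical sketch; neither function comes from a real codebase:

    # Extreme 1: inflexible. It works, but every assumption is hard-coded:
    # the file path, the format, the parsing strategy.
    def load_report():
        with open("/var/data/report.csv") as f:
            return [line.strip().split(",") for line in f]

    # Extreme 2: over-generic. Pluggable sources, parsers, and hooks for
    # needs nobody has yet -- and all of it must now be maintained, tested,
    # and understood by the next reader.
    from typing import Callable, IO

    def load_data(
        opener: Callable[[], IO[str]],
        parser: Callable[[str], list[str]],
        pre_hook: Callable[[str], str] = lambda line: line.strip(),
        post_hook: Callable[[list[str]], list[str]] = lambda row: row,
    ) -> list[list[str]]:
        with opener() as f:
            return [post_hook(parser(pre_hook(line))) for line in f]

The second version has four moving parts where the first had none, and the rest of this section is about what that extra surface area costs.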

The Lesser of Two Evils

It is easy to argue against inflexible code: most people will simply point out that if you ever need to reuse the code in another situation, you will have to change it first, and that makes the system harder to extend.

Despite this, as paradoxical as it seems, inflexible code is easier to fix than ultra-extensible code. The problem is that when we try to write extensible code, we are trying to guess the nature of changes that will come up in the future. As most of us can attest, predicting the future is not a strong skill of human beings. We constantly make the wrong bets when writing software to satisfy our future needs.

The big problem with code that is more-generic-than-it-should-be is that we now have more code to maintain over the long run, even if the future use we were predicting never materializes. As development evolves, every layer of code carries these embedded assumptions about functionality that will ultimately never be used. As a result, it becomes increasingly difficult to verify that our generic code works, because no real code exercises those features.

Unit Testing to the Rescue

If you are a fan of unit testing, then you would most probably say that the solution is to write unit tests for all the extra functionality as you write the application. That is, in principle, a good solution to the problem, but you may be forgetting that unit test code is also code. It may contain bugs, and the more code we write, the greater the chance of introducing new ones. So, while writing unit tests for the whole codebase is a strategy for avoiding bugs, it is not necessarily the best way to fix this issue, as it can also become a source of problems.
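
A small hypothetical example of how this goes wrong; both the function and the test were invented for this illustration. The implementation is broken, but so is the test, so the suite passes anyway:

    import unittest

    def apply_discount(price: float, percent: float) -> float:
        # Broken on purpose: treats the percentage as an absolute amount.
        return price - percent

    class TestDiscount(unittest.TestCase):
        def test_half_price(self):
            # The test is buggy too: assertTrue passes for any non-zero
            # number, so the broken implementation above (which returns
            # 150.0 instead of 100.0) sails through undetected.
            self.assertTrue(apply_discount(200.0, 50.0))
            # What was meant: assertEqual(apply_discount(200.0, 50.0), 100.0)

    if __name__ == "__main__":
        unittest.main()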

The holy grail of programming, in my view, is that good software needs as little code as possible to implement its features. More code means more problems, and that is the main issue with generic code: we simply need to spend more effort maintaining it. Shorter code that does only what it needs to do will almost certainly be easier to maintain and to understand. As a side effect, if it is easier to understand, it will probably require less intellectual effort to extend when necessary.

Conclusion

Writing good software is an act of balance. We certainly want to avoid situations where functionality is tied to a single environment and is therefore difficult to reuse. However, we also need to guard against over-extending code with features that will probably never be used.

Keeping Your Users in Control

One of the more interesting aspects of programming is the relationship software developers have with the products they create. Software engineers spend so many hours crafting a software product that they come to consider the software theirs. The relationship becomes very complicated, however, because there is another element in the equation: users.

By definition, if you have a business that provides software to a segment of users, then the software becomes theirs to use. It is more than fair for someone who spent money on a piece of digital creation to expect to use the software in the way they want. However, the computer industry has traditionally made it harder and harder for users to get what they need from software.

The example that motivated me to write this is, surprise, Windows. Right now, I am using Windows and it has decided that it needs to update the system. That is fine and good; all operating systems need to update critical parts from time to time. What makes the process annoying on Windows is that it removes any authority I could have over the process. Windows decides by itself that it needs to update, then when the update will be downloaded, and finally it makes the decision to install everything and tells me that I need to restart now.

[Screenshot: the Windows Update restart prompt]

Of course, I know that each of these decisions can be changed by going somewhere in the Control Panel and changing the required options. That is not the point. Good software must empower users, not put them in a situation where they must act immediately or lose their work to a system reboot.

Instead of making harsh decisions, a better model would be to provide a simple, easy-to-find way to install the latest changes to the system, along with a discreet notification system. Windows has a kind-of notification system in the tray area; it is not perfect, but it is something. Then, provide a dead easy way to act on the information. For example, the start menu could have a big “update and restart” option. With these two elements, which would change practically nothing in their update system, Microsoft could go from making users angry to empowering them, and make the system more predictable (and fun) to use.

Microsoft may get away with this, due to its dominant position in the OS market. But I don’t think small businesses should copy the decisions made in this case. Users like to have control. They don’t want to worry about the mindless details of software, but when something impacts them (such as restarting the system), you had better provide options that make them feel in power, not at the mercy of software developers.