Software Performance, the Web and Microsoft

We live in an era where processing power is plentiful and everyone is connected to the web. Developing software is easier than ever, despite the eternal struggle software professionals have against bad development practices. It looks like we are living in a new era of software productivity, but it is interesting to compare what we do today with what companies of the past era (such as Microsoft) did.

The Reality of Microsoft in the 90s

Unlike us, Microsoft grew up in a time when processors were not plentiful but were getting exponentially faster. This had a profound impact on the way software engineers viewed the resources available for software deployment.

Given that processors in the 80s were getting exponentially faster, and that the typical project at MS took one to two years to complete, you can see why they rarely needed to optimize code. They could simply write software for the most high-end machines they could find on the market, make sure the software ran on those systems, and do whatever was necessary to finish coding within the allocated time. By the time the software was finally released to consumers, the machine that two years earlier was affordable only inside the MS labs was commonplace in the market.

No matter how badly written the program was in terms of performance, it was virtually guaranteed to work on consumers' machines. After all, at release time consumers had better hardware than MS itself had two years earlier. This means that users could run the programs even faster than MS programmers could just one or two years before.

The Web Reality

On the web, by contrast, there is little time between writing software and delivering it. First, there is no physical barrier to delivery: we can use the latest production code for Gmail just by reloading the web browser. Ten years ago, MS had to carefully prepare a master disk, manufacture millions of CDs, and send them to distribution centers, then stores, then your home, and you still needed to spend a few hours installing the new program. This process could span from a few weeks to a few months.

Performance is much more critical today than it was in the past, because machines are no longer getting dramatically faster. However, we have two advantages:

  • We can now do most of the work on a server. Thanks to the web, open standards can be used to write truly distributed systems. If something is time-consuming, we can put it on a fast machine or distribute it among several slow machines.
  • Design decisions are easier to fix. No code needs to be delivered in binary form, so we are free to change the implementation as much as we want. If a solution is not scalable enough, just change your algorithms and/or infrastructure and keep the same interface.
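The second point can be sketched in a few lines of Python. This is a minimal illustration with hypothetical names (`SlowStore`, `FastStore`, `put`, `get` are not from any real library): callers depend only on the interface, so the server-side implementation can be swapped on the next deploy without anyone noticing.

```python
class SlowStore:
    """First version: linear scan over a list of (key, value) pairs."""
    def __init__(self):
        self._pairs = []

    def put(self, key, value):
        self._pairs.append((key, value))

    def get(self, key):
        # Scan newest-first so the latest value for a key wins.
        for k, v in reversed(self._pairs):
            if k == key:
                return v
        return None


class FastStore:
    """Drop-in replacement: same put/get interface, backed by a dict."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)


# Switching implementations is a one-line change on the server;
# client code only ever sees put() and get().
store = FastStore()  # was: SlowStore()
store.put("user:42", "Carlos")
print(store.get("user:42"))  # prints "Carlos"
```

On a shrink-wrapped product, fixing the `SlowStore` choice would have meant shipping new binaries; on the web, the next page load simply hits the faster implementation.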

I believe it is a great advantage to have the web as an intermediary, as a way of solving performance issues. A lot has been done in this respect, but there are still many lessons software developers can learn while building scalable applications for the web.

About the Author

Carlos Oliveira holds a PhD in Systems Engineering and Optimization from the University of Florida. He works as a software engineer with more than 10 years of experience developing high-performance commercial and scientific applications in C++, Java, and Objective-C. His most recent book is Practical C++ Financial Programming.
