Day 30: Understand Where Resources Are Wasted

Writing high-performance code is as much an art as a science. Much of the process involves thinking critically about what you are doing and being willing to change existing approaches. Unlike physical construction, for example, where rules of thumb have been refined over centuries, software engineering is still a young discipline.

What is more, the needs and goals of different projects vary so widely that any rule created in a particular context is difficult to generalize to other areas. This is especially true when it comes to measuring and improving performance.

Despite these general difficulties, it is still possible to achieve high performance in most programming environments. It is, however, a matter of experimentation and careful testing of the available options.

Finding the exact areas where resources are wasted is one of the main techniques available to engineers and software designers. Through careful examination of a software system, it is possible to determine which regions are responsible for most of the time spent in processing, as well as in input/output tasks.

Use a Profiler

The profiler is the first essential tool for performance optimization. Profilers are available for most languages and will tell you which methods and functions are called most often, as well as how much time is spent in them.

This kind of information is precious because it shows where to spend your time and effort. Without a good understanding of where resources are wasted, there is little point in attempting performance optimization.
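As a minimal sketch of the idea, you can instrument a suspect function with std::chrono to confirm where the time goes. This is hand-rolled timing, not a substitute for a real profiler such as gprof or perf, and process() here is a hypothetical stand-in for whatever function your profiler flags:

```cpp
#include <chrono>
#include <cstdio>
#include <numeric>
#include <vector>

// Hypothetical workload standing in for a function flagged by a profiler.
static double process(const std::vector<double>& data) {
    return std::accumulate(data.begin(), data.end(), 0.0);
}

int main() {
    std::vector<double> data(10000000, 1.0);

    auto start = std::chrono::steady_clock::now();
    double result = process(data);
    auto stop = std::chrono::steady_clock::now();

    // Report wall-clock time spent inside the candidate function.
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count();
    std::printf("process() took %lld ms (result = %.0f)\n",
                static_cast<long long>(ms), result);
    return 0;
}
```

A real profiler automates this measurement across the whole program, but even crude timing like this beats guessing.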

In fact, as Knuth famously observed, premature optimization may have bad consequences for the development of a system, since a priori we don’t know whether these efforts will lead to any performance advantage. Premature optimization may instead result in algorithms and/or data structures that are suboptimal for the problem at hand. Such “optimizations” can also remove flexibility from a design even when this is not necessary or desirable.

“Careful examination of a software system allows us to determine what regions are responsible for most of the time spent in processing.”

A typical example in C++ is the debate about the use of virtual functions. Many C++ programmers try to limit the number of virtual functions because they know there is an inherent performance penalty in calling them.

However, while it is true that virtual functions cause a slight decrease in performance, this turns out not to matter in the vast majority of use cases. In C++, as in every other high-performance language, most of the computational load is concentrated in the inner loops.

If we are concerned about the performance penalty of virtual functions, what needs to be done is to determine where the inner loops are and avoid virtual calls in those areas alone. For everything else in the program (often 99% of the code), there is no measurable difference between using virtual functions or not.
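A minimal sketch of this pattern, with hypothetical class names: keep the virtual interface for the bulk of the program, but let the hot loop work through a concrete type so the compiler can inline the call instead of dispatching through the vtable on every element.

```cpp
#include <cstddef>
#include <vector>

// Virtual interface used throughout most of the program, where the
// cost of a virtual call is irrelevant.
struct Filter {
    virtual ~Filter() = default;
    virtual double apply(double x) const = 0;
};

// Concrete implementation with a non-virtual "kernel" for hot paths.
struct GainFilter : Filter {
    double gain = 2.0;

    // Non-virtual, inlinable version used inside inner loops.
    double applyFast(double x) const { return gain * x; }

    double apply(double x) const override { return applyFast(x); }
};

// Inner loop: take the concrete type so the compiler can inline the
// call rather than dispatching through the vtable on every element.
void processBuffer(const GainFilter& f, std::vector<double>& buffer) {
    for (std::size_t i = 0; i < buffer.size(); ++i)
        buffer[i] = f.applyFast(buffer[i]);
}
```

Note that modern compilers can sometimes devirtualize such calls on their own, so profile before and after to confirm the change actually matters.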

Test Under High Load

Another important part of the process is to test the system under high load, preferably a load above any conceivable level of normal operation.

This is necessary because some algorithms behave differently under high load. For example, if you use algorithms that operate on collections (such as those from the C++ STL or the Java standard library), the time needed to perform some operations may be quadratic (or even worse) in the size of the input.
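As a concrete illustration (a small benchmark sketch; the exact numbers will vary by machine), repeatedly inserting at the front of a std::vector costs linear time per insertion, so n insertions cost quadratic time overall, while a std::deque handles the same workload in roughly linear time:

```cpp
#include <chrono>
#include <cstdio>
#include <deque>
#include <vector>

// Time n insertions at the front of a container, in milliseconds.
template <typename Container>
long long frontInsertMillis(int n) {
    Container c;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < n; ++i)
        c.insert(c.begin(), i);  // O(n) per call for vector, O(1) for deque
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count();
}

int main() {
    const int n = 100000;
    std::printf("vector: %lld ms\n", frontInsertMillis<std::vector<int>>(n));
    std::printf("deque:  %lld ms\n", frontInsertMillis<std::deque<int>>(n));
    return 0;
}
```

The gap between the two grows with n, which is exactly why a small test input can hide a problem that only appears under heavy load.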

Such a change in behavior means that high-load situations will see dramatic increases in the time taken by otherwise common operations. This is the kind of performance issue that can and should be considered during the design phase, but it should also be thoroughly tested to avoid unpleasant surprises.

Don’t Guess

The whole idea of measuring performance is to avoid guessing games. Every time you have to guess what is going on in a system, the result is less time for other important things, such as coding new features, fixing existing bugs, or improving the documentation.

Instead of trying to prematurely optimize a system and spending several hours on an “improvement” that may never gain you anything, it is better to spend a few minutes testing and profiling. As a result, you will know for sure where your efforts are best spent.

Another side effect of this kind of controlled measurement is that you gain a better idea of how other parts of the system behave. In the future, you will know whether a particular change to a class is likely to have a small or large impact, based on previous measurements. While this doesn’t replace the need for further profiling, it provides far better guidance than the guessing games that are so common in this industry.


About the Author

Carlos Oliveira holds a PhD in Systems Engineering and Optimization from the University of Florida. He works as a software engineer, with more than 10 years of experience developing high-performance commercial and scientific applications in C++, Java, and Objective-C. His most recent book is Practical C++ Financial Programming.
