Top Software for Programmers

As a software developer, I have come to realize that one of the most important things (after being paid) is creating software that we really enjoy.
Over the years, it has become easy to see the kind of software programmers value: editors, compilers, interpreters, and all kinds of little tools useful only to other software professionals.
And the reason these programs are written, I came to understand, is not just that we need them. I believe one of the main reasons this kind of software is so popular is precisely that, to write it, there is no need for external users!
When we create software for ourselves, the obvious advantage is that there is no formal need to elicit requirements.

The Easiest Software to Write?

Writing software for programmers is easy in an important sense: you just need to listen to yourself. Writing software for the needs of real users is not so easy: first you need to interview a user and uncover the hidden requirements, what really needs to be done.
Next, you need to worry about the user interface… Why bother with this when everything could be solved with a couple of command-line arguments?
Just continue thinking like this, and UNIX is reborn. That's right: classical UNIX is the ultimate dream of most programmers, a system that needs no real interface other than the command line.

Who is to Blame?

Programmers are notorious for writing this kind of software. And who can blame them? It is much easier to write programs when we don't need to worry about real requirements. Integration with existing environments is another pain for business software writers: just say no.
Our culture has found the perfect solution to such problems in writing programming tools. No wonder there are so many computer languages, compilers, editors, and web frameworks. They are fun to write, pose challenges that are mostly intellectual, and can be shared with other programmers as a trophy.
Here are some example areas if you are looking for something exciting to write:

  • Programming language implementations: C, Python, Lisp, TeX;
  • Experimental operating systems based on UNIX;
  • Games (most games are still just the implementation of an abstract concept);
  • Web frameworks.

So, if you want to improve your software development skills, maybe you should spend some of your time developing code that you like, after all.

Avoid dealing with user requirements during your creative breaks. At least stay away from things such as: interfacing with legacy code; interfacing with existing databases; using existing UI standards; and using mainstream business libraries…
Most of these factors don't improve the quality of a product after all (sometimes it is just the opposite), and they take time and energy from developers.
It may be a good exercise to practice programming without such worries. And if you do it really well, you could end up with something useful, which could even become your next “real” job.


Why you will never write the perfect software

While there are a lot of sloppy programmers out there, they are not the only ones who have trouble finishing something. Some programmers strive for perfection in the software they create, and in the process waste precious time.

Anything less than perfection feels like a waste to such programmers. It is as if they cannot move on from a task until every possible feature has been implemented…

Even if this seems unlikely, quite a few people operate with a similar mindset: trying to do everything the software possibly could do is just another way of striving for perfection.

A simple example of this attitude toward programming is designing classes with every method a client could ever need. The list of use cases keeps growing until it becomes unmanageable. Classes that take on every possible responsibility are another symptom of the same disease.
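As a hypothetical sketch (the class and method names here are invented for illustration), compare a class that tries to anticipate every client need with one that keeps a single responsibility:

```python
# An invented "do everything" class: every method a client might
# ever want, most of them speculative and never actually exercised.
class DoEverythingReport:
    def load_from_csv(self, path): ...
    def load_from_sql(self, query): ...
    def render_html(self): ...
    def render_pdf(self): ...
    def email_to(self, address): ...
    def schedule_weekly(self): ...

# A leaner alternative: only the responsibility needed today.
class Report:
    def __init__(self, rows):
        self.rows = rows

    def total(self, column):
        # Sum one column across all rows of the report.
        return sum(row[column] for row in self.rows)
```

The second class can grow a method when a real use case demands it; the first grows speculatively until no one can maintain it.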

Avoiding Perfection

It is not that perfection, or completeness, is bad. The problem is the cost of having “perfect” software.

Software is different from physical materials. While a machine can be designed to look perfect (such as the designs Apple strives for), demanding the same from a piece of software would add a lot of functionality that is not essential.

A theoretically perfect piece of software would have a virtually infinite number of features, capable of satisfying every need of its users, even if it targets a specific area. Think of it as combinatorial explosion, or fractal design: the more detail you try to put into an application, the more features it needs to have.

However, of all these features, one can only implement a few in finite time.

There is also a more insidious reason. As the number of features in your application grows, so grows the amount of code you need to implement them. This has two side effects:

  • First, you can make more mistakes just because of the enormous surface area for them to appear. This is the classical case of bloat, where more and more features make it difficult to properly maintain the application.
  • Second, as the code size increases, your program also becomes sluggish. Eventually, a program with a large number of features is slower and buggier than a more nimble version.

The next time you design an application, think twice about the essential features. Don’t try to make it perfect, make it functional. Your users will thank you.


The Most Common Debugging Styles

Debugging code is an activity that shows a lot of the developer’s personality. It is a process that looks a lot like an investigation, leading to the detection of a failure in an application.

During the investigation, a programmer employs techniques such as gathering information, testing hypotheses, and adding quick instrumentation.

When it comes to learning how a program works, though, there are two main “schools” of debugging: those who use print statements, and those who use a debugger.

Print Statements

Of these two techniques, printing is certainly the more “primitive,” in the sense that it is available in almost any programming environment. As long as there is an output device (which doesn’t need to be the screen or stdout), the technique works.

The advantage of print statements is that they are easy to write and use. They also don’t require the program to pause in order to be inspected. For example, when working with multithreaded code, printing may be more useful than a debugger, because it doesn’t force one or more of the threads to stop.
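As a minimal sketch (the worker function and its names are invented for illustration), print statements can trace two threads interleaving without pausing either of them:

```python
import threading

results = {}

def worker(name, n):
    # Lightweight trace: unlike a breakpoint, no thread has to stop.
    print(f"[{name}] starting with n={n}")
    total = sum(range(n))
    print(f"[{name}] done, total={total}")
    results[name] = total

threads = [threading.Thread(target=worker, args=(f"t{i}", 5))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The interleaved output shows how the threads progressed in real time, something a breakpoint-based session would distort by suspending execution.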

Interactive Debuggers

Using a debugger, on the other hand, is not a luxury everyone has. Many platforms these days ship very decent debuggers; however, there are still lots of environments that don’t have one.

For example, I remember reading somewhere that Linus Torvalds doesn’t want an interactive debugger for the Linux kernel because he doesn’t think it is important.

Rants aside, it is nice to have a good debugger available. Depending on its quality, a debugger can do things that would otherwise be possible only in dynamic languages. Things like changing the value of variables during a debugging session, or moving up the call stack and re-running a code path, can be very useful when tracking down where a bug is hidden.
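A minimal sketch with Python's standard pdb (the `average` function is invented for illustration): the runnable part just defines a function that fails on empty input, and the comments show how a debugger session would poke at it.

```python
import pdb

def average(values):
    total = sum(values)
    # Bug to investigate: raises ZeroDivisionError for an empty list.
    return total / len(values)

# In an interactive session you could start the debugger with:
#   pdb.run("average([])")
# and then, at the (Pdb) prompt, step through the function, print
# or even reassign `total` and `values`, or walk up the call stack
# with the `up` command after the exception fires.
```

Reassigning a variable mid-session to test a hypothesis is exactly the kind of thing print statements cannot do.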

I personally like debuggers, but agree that they should not be an excuse for sloppy thinking. Some programmers believe it is normal to use a debugger to poke at a program instead of stopping to think about what the application is doing.

I like to keep in mind that debugging is a research exercise, and it works much better when we use our heads more than the debugger.
