Is Code Reuse Always a Good Thing?

The idea of code reuse has been gospel for many years among advocates of modular technologies: first structured programming, then object-oriented programming, as well as other methodologies.

While it is a good thing that we are able to separate concerns and make our code more modular, there is also a downside to requiring that all our code be reusable.

If too much emphasis is put on reuse, a perfectly good solution can become needlessly complicated. It quickly degenerates into a generic piece of code that requires major effort to interoperate with the rest of the application.

Design Mistakes

An emphasis on reusable code can also lead to big design mistakes. By reusable code, people usually mean code that can be applied to another scenario with only small modifications. This is how some developers use inheritance.

And it becomes tricky because real programs don’t work this way: if you write code generic enough to be reused in many scenarios, it may not solve any particular problem well.

A Better Way to Go

The other way to get code reused is to design it to fit a particular problem well. When that happens, we start noticing a pattern that applies beyond the problem the code immediately solves. Such code can help us find a more generic interface that could (and should) be used in other situations.
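
As a small sketch of what I mean (the function names here are invented for the example), consider a function written for one concrete task, and the generic interface that only becomes visible after the concrete version works:

    #include <stddef.h>

    /* Illustrative example: a function written to solve one concrete
       problem well. */
    int sum_ints(const int *a, size_t n)
    {
        int total = 0;
        for (size_t i = 0; i < n; i++)
            total += a[i];
        return total;
    }

    /* Only afterwards does the underlying pattern become visible:
       folding an array with a binary operation. The generic interface
       is extracted from working code rather than designed up front. */
    int fold_ints(const int *a, size_t n, int init, int (*op)(int, int))
    {
        int acc = init;
        for (size_t i = 0; i < n; i++)
            acc = op(acc, a[i]);
        return acc;
    }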

From that perspective, it might be better to talk about sharing algorithmic strategies instead of just sharing code.

Reuse and Black Boxes

It is hard to reason about an algorithm when all you have to work with is a set of black boxes.

Reusable modules, as we have them these days, work pretty much as black boxes. Whether they are available in source or binary form, they cannot easily be changed when necessary. This is sometimes true even when we’re working with open source software, because of the difficulty of making changes to a library.

I feel that when we reuse code just as a way of creating more black boxes, we end up with a less flexible toolbox, one that is not easily amenable to algorithmic changes. Several layers of such libraries can make our code brittle and inefficient.

Maybe we should look for other ways to create software that is easily modifiable. I think that generic programming, using the meta constructs available in languages such as Lisp and Prolog, could provide a better solution. Maybe we will see more of this in the future.

Further Reading

Knuth talks about some of these issues in his interview in Coders at Work.

When Free Software is Not Free

The Free Software movement was created on the assumption that software should be freely modifiable by its users. The main premise is that the right to use a program also implies the right to modify it when necessary.

Free software exists today in many forms, and it is widely used; however, in many situations it hasn’t delivered what it promised.

Consider the case of the Linux kernel used on Android phones. You are free to download and study it if you have a suitable Internet connection. After all, the software is in many ways the same as that used on any other Linux machine.

However, if you actually plan to use this software to make any useful improvement, you are out of luck. First of all, changing the operating system on one of these devices voids the warranty. I wouldn’t even be surprised if it made the phone stop working.

Your only hope of getting your modifications into the OS is to make the changes in a simulated environment and then submit a patch to the Android developers. If they are interested in your patch, you might have a chance of seeing it in the next version of Android, whenever your vendor releases it.

The same thing would happen if you used the iPhone, which has a BSD-based kernel.

Free For Whom?

As you can see, the fact that the software is free did not take away companies’ ability to dictate what you can and cannot run. And this is the key to this new generation of open source deployments.

What the technology companies learned is that it doesn’t matter that anyone has the right to change the software. What really matters is that the companies still decide what you can and cannot run; with that, their problem is solved.

With this tactic, technology companies get the best of both worlds: they use free software, which makes it seem like they care about freedom, and it gives them access to billions of lines of free, tested code. At the same time, they don’t need to give away control of the resulting system, and any changes must be approved by them.

Free software is a fantastic bargain: all of that excellent free code can be combined into powerful frameworks at very little cost. In fact, a big company just needs to hire some of the same developers who created the free software project.

At the same time, companies such as Google and Apple can put themselves in direct competition with, for example, Microsoft, which chose to create its software from scratch.

Ultimately, the power of open source is demonstrated by the fact that it can be used successfully in such a scenario.

Whether this is what its creators intended is another story. I am pretty sure, however, that this was not the future envisioned by the Free Software Foundation, for example.

But only time will tell if free software will really mean freedom for developers, or free profits for big companies.

Object Interfaces versus Concrete Code

Although lots of people design object-oriented systems, few notice why they sometimes don’t work as planned. While such systems were created to reduce complexity, many object-oriented designs become extremely complicated, especially when we start adding design patterns to solve even simple problems.

Frequently, OO systems are the right solution for the wrong problem. The major problem we face when writing a system is not writing the components themselves. In fact, writing a single component can be done as easily in an object-oriented language as in a procedural one.

The big problem is writing interfaces. Interfaces generate the large-scale complexity in a system, which is exactly what we want to avoid.

The problem, though, is that most programmers think it is fine to use the interface-building paradigm even when writing code for a single component. And while this is possible, it is not the most efficient or maintainable approach.

What is an OO system?

OO incorporates the notion of interfaces but adds some syntactic sugar to implement dynamic dispatch. This simplifies the creation of interfaces, much the same way COM provides this feature for binary interfaces in C.
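
A rough sketch of that idea in plain C (the Shape and Circle types here are invented for illustration): an interface is just a table of function pointers attached to some instance data, and dynamic dispatch is a call through that table. OO languages merely generate this machinery for you.

    #include <stdio.h>

    /* A COM-like "interface": a table of function pointers.
       The names are illustrative, not from any real system. */
    typedef struct Shape {
        double (*area)(const struct Shape *self);
        void   (*draw)(const struct Shape *self);
    } Shape;

    typedef struct {
        Shape  base;    /* interface first, so a Circle* can be used as a Shape* */
        double radius;
    } Circle;

    static double circle_area(const Shape *self)
    {
        const Circle *c = (const Circle *)self;
        return 3.14159265358979 * c->radius * c->radius;
    }

    static void circle_draw(const Shape *self)
    {
        printf("circle, area = %f\n", self->area(self));  /* dynamic dispatch */
    }

    Circle make_circle(double r)
    {
        Circle c = { { circle_area, circle_draw }, r };
        return c;
    }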

Notice that interfaces are a nice feature, but they are not all we need to write. So, while it is worthwhile to build systems with good interface design, thinking that everything you write is an interface is very simplistic.

In my experience, there is a lot of code that benefits from being written in different styles, such as functional or declarative. These components don’t need to be written as classic object-oriented components, and they will never be part of a system’s outside interface.

For this type of code, it is perfectly fine to have free functions that operate on data stored in plain data structures or lists, for example.
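
For instance (a hypothetical example, not taken from any particular system), a small piece of internal logic can be just a struct and a couple of free functions, with no class hierarchy involved:

    #include <stddef.h>

    /* Plain data plus free functions: no classes, no interface machinery.
       The names are made up for this sketch. */
    typedef struct {
        double x;
        double y;
    } Point;

    double point_dot(Point a, Point b)
    {
        return a.x * b.x + a.y * b.y;
    }

    /* Return the index of the point closest to the origin. */
    size_t closest_to_origin(const Point *pts, size_t n)
    {
        size_t best = 0;
        for (size_t i = 1; i < n; i++)
            if (point_dot(pts[i], pts[i]) < point_dot(pts[best], pts[best]))
                best = i;
        return best;
    }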

What About Polymorphism?

The other feature, polymorphism, is not really new to OO; it has been done for decades in other languages, such as C. Just define a pointer to a function and call it instead of a statically defined function. This is how DLLs and shared objects work, by the way. There is no real need for a new language just to define what will be called when a function pointer is invoked.
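
A minimal sketch of that in C (with made-up function names): the caller decides at run time which function a pointer refers to, which is exactly the kind of late binding that shared libraries rely on.

    #include <stdio.h>

    /* Two interchangeable implementations with the same signature. */
    int add(int a, int b) { return a + b; }
    int mul(int a, int b) { return a * b; }

    int main(void)
    {
        /* "Polymorphism" in plain C: pick the function at run time
           and call it through a pointer instead of a static call. */
        int (*op)(int, int) = add;
        printf("%d\n", op(2, 3));   /* prints 5 */

        op = mul;
        printf("%d\n", op(2, 3));   /* prints 6 */
        return 0;
    }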

In my opinion, one of the main reasons why people like OO for commercial software is the way it constrains developers.

To create software with OO languages, one has to use classes for dynamic dispatch and interface hiding. And while it is fine to pretend for a while that these are the only important features you need, it quickly gets uncomfortable.

Very soon OO people have to jump through hoops just to simulate things they cannot do in their framework. Have you ever read about the singleton pattern, and wondered why you need a special class to represent something that is not really an object?

Have you studied flyweight objects only to rediscover that sometimes you need values that are not objects? Have you struggled with performance issues that exist only because of over-reliance on OO?

Well, all of this could be avoided by understanding that OO is nice as a way of representing interfaces to systems, but it is a poor metaphor for the internal implementation, unless you are prepared to jump through hoops just to make something simple work.