Disadvantages of Statically Typed Languages

It is widely understood that typed languages have advantages that make them suitable for the development of a wide range of systems. In fact, statically typed languages such as C++ and Java are currently the most successful in terms of commercial adoption for desktop and server-side applications. Most operating systems, including Linux, Windows, and Mac OS X, are written in C/C++ at their cores. Moreover, desktop user applications are frequently written in typed languages: traditionally C++ has been used for this purpose, but nowadays C#, Objective-C, and Java are also contenders for that position.

The increase in importance of web services, however, has made it possible to use dynamic languages (where types are not strictly enforced at compile time) for large scale development. Languages such as Python, Ruby, and PHP, which were previously treated as simple tools to create scripts, are now used to write software for some of the world’s biggest web sites. As a result, companies such as Facebook and Google rely every day on software written using dynamic languages.

The main reason for the rise of dynamic languages is the nature of web-based programming. Typical web applications are concerned mostly with storing and retrieving data, where the heavy lifting is done by database systems. The other piece of the puzzle is generating markup language: interpretation of the markup and traditional event handling are performed on the client by a web browser. For this reason, web applications have a very limited set of responsibilities compared to a typical desktop application, which is responsible for all tasks associated with displaying and manipulating data.

Given this change in programming needs, it is interesting to understand how dynamic languages compare to statically typed languages in the areas of usability and maintainability.

Strengths of Dynamic Languages

Although typed languages provide many useful services that can improve the performance and safety of the resulting applications, they are not the solution to every software engineering problem. Many programming problems can be expressed more easily in languages that have a relaxed notion of data types. For example, dynamic languages such as Lisp and JavaScript make it possible to express complicated requirements without introducing a new type for each concept. Instead, functions and macros define behavior that is checked at run time only, which also reduces the amount of work necessary to write a compiler.

At the heart of the problem with compile-time types in programming languages is the requirement to create a user-defined class for each new concept. For example, when a new concept is introduced in an object-oriented system, some languages (such as Java) require that this information be encoded as a new user-defined type: a statically defined element referred to as a “class”. Moreover, the new type is only allowed to use operations that have been explicitly white-listed for it; the inheritance mechanism is frequently used for this purpose. It is also possible to manually add methods to a new class definition as needed.

Forcing Programmers to Deal with Types

The idea of compile-time type checking is also important when safety is a concern, especially in a language with a low-overhead run-time. A prime example is C, which uses compile-time checking to provide some degree of type safety even in the absence of a substantial run-time.

However, when this idea is not properly managed, it ends up generating more work for programmers than it can possibly save. One example of this insidious problem happens when two libraries treat the same concept using similar, but different, types. When this occurs, programmers must deal with the differences themselves: they end up writing translation layers that make it possible for the two pieces of code to interoperate.

Working for the Compiler

One of the worst feelings I have when programming in C++ or Java is the sense of what I call “fighting the compiler”. Essentially, that is the part of the work in which one needs to make the compiler happy by fixing all type errors so that a program can compile without issues. It usually happens during some kind of manual refactoring, when several places have similar type-checking issues that result in a long list of errors.

Slow compilation is another problem that frequently arises during the development of programs in a typed language. The main reason is that the compiler has to be able not only to parse the language in question, but also check each expression for correctness with respect to the type system. Depending on the complexity of the expressions used, the computational time required to perform these operations can be comparable to the total time for code generation. This is especially true when the language has a complex syntax such as C++, which even includes templates that can perform recursion at compile time.

This means that the requirements of compile-time type systems may increase the overhead on programmers. Few things are more annoying than waiting through a long compilation for a seemingly small change in a code base; as a result, productivity suffers. This can be a negative force in a large project, even when the advantages provided by type checking are factored into the equation.

Conclusion

Compile-time checking provides several advantages, including the automatic elimination of a large class of errors. However, the rise of dynamic languages during the last few years has provided an opportunity to discuss the disadvantages of strict compile-time type checking.

Among the problems caused by type checking during compilation is the increased time that programmers need to spend on a full build. Sometimes it is much better to be able to test a program quickly and leave full-scale type checking for a later phase. Traditional typed languages do not allow this relaxed approach, which is one of the key advantages of dynamic languages.

In the future, we expect programming languages to provide better trade-offs in this arena. Newer languages such as C# and Go have already introduced new ideas in this respect, but we would like to see even more improvements in the next few years.

About the Author

Carlos Oliveira holds a PhD in Systems Engineering and Optimization from the University of Florida. He works as a software engineer, with more than 10 years of experience in developing high-performance commercial and scientific applications in C++, Java, and Objective-C. His most recent book is Practical C++ Financial Programming.

2 Responses to “Disadvantages of Statically Typed Languages”

  1. > Usually it happens when we’re doing some kind of manual
    > refactoring and several places have similar type
    > checking issues that result in a large list of errors.

    Refactoring is a major advantage of static types: any omission or typo while refactoring would cause a runtime error in a dynamic language.

    Is this a big issue in web programming?
    I guess not. Either testing will catch these errors, or users have resigned themselves to unreliable web sites…

    By dchambers on Dec 22, 2011

  2. One thing to remember is that the main purpose of strong typing is to enforce good coding habits. Code quality has gotten much better overall. As a result, where a project scores on the trade-off between quality gain and productivity loss has changed.

    By Leo on Dec 23, 2011
