Saturday 19 February 2011

A Theory of Software Development: Bugs

Software development is a difficult task. It requires in-depth knowledge of the technical problem being solved, an acute awareness of how that problem fits into the framework of the business it functions within, adherence to a set of established or evolving processes and, finally, the ability to regularly adapt to new requirements, new technologies, new people and many forms of technical shortcomings throughout the lifecycle of a project. Essentially, software development is the process of solving many technical problems while simultaneously managing complexity and adapting to change.

Throughout a project many challenges are overcome, and the ones that are not usually manifest themselves one way or another as bugs within the software application. One thing is clear in modern software development: there will be many software bugs that need to be managed. Managing these bugs is a routine task that every development team faces over and over again throughout the development lifecycle. So where do these bugs come from? Bugs find their way into software applications for two main reasons:
  1. Insufficient Requirements: Bugs that surface because the software is exposed to scenarios it was never designed to handle. For example, a method is designed to do X when given data Y, but once built it is given unexpected data Z, which causes it to return an unexpected result or throw an exception. This can also be thought of as violating a precondition.
  2. Insufficient Code: Bugs that surface because developers design or build the software incorrectly. For example, a method should do X (the requirement specifies that it should do X) but is built in such a way that it does Y instead. This can also be thought of as violating a postcondition (see the sketch after this list).
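As a minimal sketch of the two varieties in code (the class and method names here are hypothetical, purely for illustration):

public class PricingService
{
     // Insufficient Requirements (a violated precondition): this method was
     // only ever specified for a non-zero quantity. A caller passing zero
     // triggers a DivideByZeroException that no requirement anticipated.
     public int UnitPrice(int totalPrice, int quantity)
     {
          return totalPrice / quantity;   // quantity == 0 was never considered
     }

     // Insufficient Code (a violated postcondition): the requirement says
     // "apply a 10% discount", but the developer built it to add 10% instead.
     public decimal ApplyDiscount(decimal price)
     {
          return price * 1.10m;   // bug: should be price * 0.90m
     }
}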
I find it helps to think about software bugs in this manner because over time you can start to see which bugs continually crop up and therefore which ones your team should focus on avoiding. If they are of the Insufficient Requirements variety, something is going wrong in the requirements gathering and dissemination process (either the requirements being gathered are incorrect or incomplete, or the way they are understood and used by developers is incorrect). If they are of the Insufficient Code variety, there is generally not enough (or any) unit-testing and/or acceptance testing being done. Finding ways to identify these issues and correct them is paramount to continually improving the software your team builds. The rest of this post focuses on the Insufficient Code variety and how a specific type of testing can be used to anticipate and mitigate these bugs.

Once an understanding of where bugs come from has been established, the next step is to understand how bugs are classified within the Insufficient Code variety. Misko Hevery at Google has a unified theory of bugs which I find particularly compelling. He classifies bugs into three types: logical, wiring and rendering. He goes on to argue that logical bugs are on average the most common type of bug, are notoriously hard to find and are also the hardest to fix. Thus if developers only have so much time and energy to spend on testing, they should focus their testing efforts on uncovering logic bugs more so than wiring or rendering bugs. Of all the types of testing, unit-testing is the best mechanism for uncovering logic bugs. Misko Hevery goes on to say the following about unit-testing:
Unit-tests give you greatest bang for the buck. A unit-test focuses on the most common bugs, hardest to track down and hardest to fix. And a unit-test forces you to write testable code which indirectly helps with wiring bugs. As a result when writing automated tests for your application we want to overwhelmingly focus on unit test. Unit-tests are tests which focus on the logic and focus on one class/method at a time.
A key question after reading the above is: how do I write testable code that makes unit-testing easier and therefore uncovers logic bugs more readily? This can be accomplished by following these rules:
  1. Methods should have parameters that are interfaces, not concrete types (unless the types are primitives such as ints, strings, doubles, etc.)
  2. Constructors should not do any significant work besides checking and setting their parameters to internal member variables
  3. Avoid global state at all costs (this includes using singletons and static methods)
  4. Do not use the new operator within classes whose primary focus is logical in nature (i.e. classes whose methods contain ifs, loops and calculations)
    1. Note 1: Factory classes can and should have the new operator as these classes are specifically designed and used for wiring up other classes together
    2. Note 2: Certain data structures like lists and arrays are usually fine to "new" up inside logical classes
  5. Avoid Law of Demeter violations (a brief sketch follows this list)
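To make rule 5 concrete with a hypothetical example (Order, Customer and Address are made-up types): a Law of Demeter violation reaches through one object to get at another object's internals, and the fix is to depend directly on the thing the method actually uses:

// Violation: the method reaches through its parameter to grab a
// collaborator's collaborator.
public string ShippingCity(Order order)
{
     return order.GetCustomer().GetAddress().City;
}

// Better: ask directly for what the method needs.
public string ShippingCity(Address address)
{
     return address.City;
}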
So here is what the above rules actually look like in source code for a class called Process, which has Start and Stop methods and performs logic functions on some scheduler and data store objects:

public class Process
{
     private readonly IDataStore _dataStore;
     private readonly IScheduler _scheduler;

     public Process(IScheduler scheduler, IDataStore dataStore)
     {
          if(scheduler == null)
          {
               throw new ArgumentNullException("scheduler");
          }
          if(dataStore == null)
          {
               throw new ArgumentNullException("dataStore");
          }
      
          _scheduler = scheduler;
          _dataStore = dataStore;
     }

     public void Start(int delay)
     {
          _scheduler.JobComplete += _dataStore.Save;

          _scheduler.Start(delay);
     }

     public void Stop(bool forceShutdown)
     {
          _scheduler.JobComplete -= _dataStore.Save;

          _scheduler.ShutDown(forceShutdown);
     }
}

This class can now be effectively unit-tested by creating mock IDataStore and IScheduler objects that are passed into the constructor upon creation (see how I do this using RhinoMocks, a dynamic mock object framework, in the sketch below). With this approach, only the method under test is actually exercised, not any other classes. I have found this way of designing classes and methods to be of particular importance as a project grows larger and matures. When a healthy regression suite (set of unit-tests) has been created for an application without taking this approach into account (i.e. a single unit-test tests multiple classes at the same time) and a change request then occurs, I find that many disparate unit-tests begin to fail after the code changes are made. It is therefore essential to isolate each unit-test so that it tests only the logic contained within a single class's method and no other methods. Adhering to this principle from the beginning of development will save countless hours of refactoring seemingly unconnected unit-tests that fail after a single code change.
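Here is a rough sketch of such a unit-test, assuming NUnit and RhinoMocks' arrange-act-assert syntax; the interface shapes below are my assumption, since they are not shown above:

using System;
using NUnit.Framework;
using Rhino.Mocks;

// Assumed shapes for the collaborator interfaces:
public interface IScheduler
{
     event Action JobComplete;
     void Start(int delay);
     void ShutDown(bool forceShutdown);
}

public interface IDataStore
{
     void Save();
}

[TestFixture]
public class ProcessTests
{
     [Test]
     public void Start_StartsTheSchedulerWithTheGivenDelay()
     {
          // Mocks stand in for the real collaborators, so only the
          // logic inside Process itself is exercised.
          var scheduler = MockRepository.GenerateMock<IScheduler>();
          var dataStore = MockRepository.GenerateMock<IDataStore>();
          var process = new Process(scheduler, dataStore);

          process.Start(500);

          // Verify the single interaction this test cares about.
          scheduler.AssertWasCalled(s => s.Start(500));
     }
}

Because the mocks record every call made to them, a failing assertion points straight at the logic inside Process rather than at some collaborator further down the object graph.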

A final note about wiring logical classes together: by using the above approach, one or more application builder classes (factory classes) will need to be built that are responsible for wiring up the application. These builder classes create ("new" up) all logic classes and use inversion of control (IoC) to pass logic classes to each other through their constructors. This essentially wires up the entire object graph of the application. These builder classes can then be tested, but the resultant tests are more integration/system tests: they verify that the application's object graph has been set up correctly and are therefore an attempt at uncovering any wiring bugs that may exist.
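As a minimal sketch of such a builder class (TimerScheduler and SqlDataStore are hypothetical concrete types, not from this post):

// A builder/factory class: the one place where concrete types are
// "new"-ed up and the application's object graph is wired together.
public class ApplicationBuilder
{
     public Process BuildProcess()
     {
          IScheduler scheduler = new TimerScheduler();   // assumed concrete type
          IDataStore dataStore = new SqlDataStore();     // assumed concrete type

          // Inversion of control: Process receives its collaborators
          // rather than creating them itself.
          return new Process(scheduler, dataStore);
     }
}

An integration test over BuildProcess would then exercise the returned object graph end-to-end, which is where wiring bugs tend to show up.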
