Getting Back on the Blog Wagon (again)

I’m back on the blog wagon with an old/new blog. I had to take my blog down for a couple of months because it got hacked, a consequence of me not updating WordPress frequently enough. While restoring the blog and setting up WordPress again, I decided to change the name and location as well. I have several reasons for this, starting with the direction of the blog. That direction has changed over the years, and I now want to move it even further towards the craftsmanship and business of software. The general topics will be agile/lean, software development and entrepreneurship. I’ll leave all my old posts up as reference, fun or trivia. Some of them might still be relevant and some may not.

I also think that the old name, “Freedom Taking Over”, and the old domain upplopp.se don’t make much sense in this context. That’s why I’ve decided to move the blog and rename it to “The Zuul Cat Idea Brewery” or The Zuul Cat Blog for short.

W00t?! you say. That makes even less sense.

I would like to tell you the story of Zuul Cat, but it’s rather long for an introductory post; I think I’ll save it for later. The short version, however, is that I once worked on a project where we used to put fun pictures on our Scrum board, both for fun and as small motivators. One of those pictures was Zuul Cat. Zuul Cat watched over the process and reminded us to play tough: “Der is no Fluffy, Only Zuul”. He reminded us to stay on track, stay true to the process and keep pushing change.

Zuul Cat stuck with me and when moving the blog somehow Zuul Cat seemed a good fit. Besides, it sounds cool!


Admitting Defeat

As hard as it may be to admit: I have failed miserably.

About 3 months ago I set out on a journey into independence. I named it project Mindhex. The basic idea was that I would give myself free rein to work on whatever I desired.

The plan was that I would polish my Ruby skills, learn to develop iPhone applications and create a kick-ass Sinatra clone in .NET called Nancy. That is far from what actually happened.

To track my progress I kept a journal… for about 14 days. Truth be told, 14 days into my little experiment I started to slip. It rather quickly dawned on me that I didn’t actually have to do any of this; I could do whatever I wanted. So I did: I picked up scuba diving, I started walking, I slept, I refurnished the apartment and I spent time with my family. I went into what I would like to call vacation mode.

The result is that I didn’t accomplish any of the goals I had set for myself, so I’m calling the project a failure. However, I have read somewhere that failure is a byproduct of learning. So what did I learn from this?

After conducting my private little retrospective I found that there are actually two lessons to be learnt here. One could argue that these are very individual, but I really think that others may benefit from them as well.

Lesson 1: If there is nothing chasing you, you don’t really have to run. I had arranged things financially so that I wouldn’t have to work during my 3 months to make ends meet. This didn’t add any sense of urgency to the work I was doing. Since nothing was urgent, it became really easy to procrastinate or assign low priority to actual “work”, which resulted in a complete halt.

Lesson 2: There is no I in team. Working alone quickly becomes boring and painful. Some problems that could easily be solved with another set of eyes can snowball out of control quickly. A bad day, combined with Lesson 1, makes it really easy to give up. Without someone in the same boat to help you over a speed bump, the speed bump can quickly become a brick wall. Besides, doing good work is not really rewarding if you don’t have anyone to share it with.

To summarize: if I were to do this again I would most definitely con someone into doing it with me, and I would arrange things so that my survival depended on the project’s success or failure.

So what now?

Well, I guess it’s back to work! I feel somewhat maladjusted to everything after slacking off for 3 months, but I guess I will settle in eventually.


New Beginnings

As some of you know I have taken a leave of absence from work. I’ll be away
from “proper” work for 3 months, but this doesn’t mean I won’t work.

I will use the next 3 months to work on project Mindhex. I guess the
best word to describe project Mindhex without saying too much is “experiment”.
I don’t want to unveil exactly what it is just yet.

Some of you may think you have already figured it out and some of you probably
have. I’ll leave you guessing until the main site is ready for launch and I can
finally disappoint you with the answer.

For clues and ambiance, visit Mindhex.com.


Using Syntax Fairy Dust to BDD with NUnit

A project that I’m currently involved in is leaning more towards the BDD style of writing tests. We decided to use NUnit instead of any of the spec frameworks out there because NUnit has a large community and a lot of tools that work well with it.

To accommodate this we added a whole bunch of extension methods that wrap the standard NUnit assertions. Example:

public static object ShouldEqual(this object actual, object expected)
{
    Assert.AreEqual(expected, actual);
    return actual; // return the actual value so further assertions can be chained on it
}

To be honest, I never really liked this. It adds loads of extension methods to all objects, making the IntelliSense list contain a lot of garbage. Looking at how RSpec does this, and taking some inspiration from the NUnit constraint-based assert model, I figured out a way to sugar the NUnit syntax to make it more BDDish. My specifications now look like this:

[Describe]
public class MonkeyDescription
{
    private Monkey monkey;

    [Before]
    public void Before()
    {
        monkey = new Monkey();
    }

    [After]
    public void After()
    {
        monkey = null;
    }

    [Specification]
    public void It_should_be_possible_to_name_the_monkey()
    {
        monkey.Name = "Chunky";

        monkey.Name.Should(Be.EqualTo("Chunky"));
    }
}

To accomplish this I first had to add the extension method Should to all objects.

public static class SpecificationExtensions
{
    public static void Should(this object actual, ISpecificationConstraint constraint)
    {
        constraint.Matches(actual);
    }
}

Most constraints will be implemented using the GenericSpecificationConstraint. But since we are using an interface, we can easily add any constraints we can think of that can’t be implemented using the GenericSpecificationConstraint.

public interface ISpecificationConstraint
{
    void Matches(object actual);
}

public class GenericSpecificationConstraint : ISpecificationConstraint
{
    private readonly Action<object> _specification;

    public GenericSpecificationConstraint(Action<object> specification)
    {
        _specification = specification;
    }

    // No "override" here; we are implementing an interface member,
    // not overriding a virtual method.
    public void Matches(object actual)
    {
        _specification(actual);
    }
}
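
For illustration, a hand-rolled constraint might look something like this. This type-checking example is mine, not from the project (and this particular check could also be expressed through the generic constraint):

// A hypothetical hand-rolled constraint, purely for illustration.
public class TypeOfSpecificationConstraint : ISpecificationConstraint
{
    private readonly Type _expectedType;

    public TypeOfSpecificationConstraint(Type expectedType)
    {
        _expectedType = expectedType;
    }

    public void Matches(object actual)
    {
        Assert.That(actual, Is.InstanceOfType(_expectedType));
    }
}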

And then we need some syntax helpers:

public static class Be
{
    public static ISpecificationConstraint EqualTo(object expected)
    {
        return new GenericSpecificationConstraint(actual => Assert.That(actual, Is.EqualTo(expected)));
    }

    //...more more more
}

public static class Match
{
    public static ISpecificationConstraint Pattern(string expected)
    {
        return new GenericSpecificationConstraint(actual => Assert.That(actual, Text.Matches(expected)));
    }

    //...more more more
}

These are just two examples, but we could easily create other helpers besides Be and Match. Contain would be another useful one.
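
As a sketch (not something we have actually implemented), a Contain helper might use NUnit’s collection-membership constraint, Has.Member:

public static class Contain
{
    // Hypothetical helper, for illustration only.
    public static ISpecificationConstraint Item(object expected)
    {
        return new GenericSpecificationConstraint(actual => Assert.That(actual, Has.Member(expected)));
    }
}

A specification could then read monkeys.Should(Contain.Item(monkey)).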

The only thing remaining now is the attributes. These are easily “aliased” by inheriting from the NUnit attributes.

public class DescribeAttribute : TestFixtureAttribute {}
public class SpecificationAttribute : TestAttribute {}
public class BeforeAttribute : SetUpAttribute {}
public class AfterAttribute : TearDownAttribute {}

I don’t know if this is useful, but it is an alternative way of making NUnit a little more BDDish. We haven’t implemented this on our project yet and I don’t know if we will.

My biggest issue with this is getting used to all the parentheses that an assertion requires, monkey.Name.Should(Be.EqualTo("Chunky")). I guess this is what you get for having mandatory parentheses.


Why You Will Never Get Rich by Consulting

There is a known flaw in consulting as a business model. Charging someone a fixed sum of money for an hour of labor can only be profitable until you run out of time to sell.

If you run a company with 10 consultants, each consultant can only work about 40 hours a week. Once all your consultants are working with clients, you have hit your cap. The only way to make more money is to hire more people and find more clients, which leads to more administration, which in turn leads to overhead and expenses. This doesn’t scale.
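
To put some numbers on that ceiling, here is a quick back-of-the-envelope calculation; the hourly rate and the number of billable weeks are made up for the example:

class RevenueCap
{
    static void Main()
    {
        // All numbers are assumptions, purely for illustration.
        const int consultants = 10;
        const int hoursPerWeek = 40;      // the absolute best case
        const int weeksPerYear = 46;      // minus vacation and holidays
        const decimal hourlyRate = 100m;  // pick your own currency

        decimal yearlyCap = consultants * hoursPerWeek * weeksPerYear * hourlyRate;

        // Prints 1840000 - and no matter how hard those 10 people work,
        // it never goes higher.
        System.Console.WriteLine(yearlyCap);
    }
}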

Calculated as a percentage, you will probably be making less and less as your company grows. But this isn’t really news to most people. Still, people do consulting, because consulting is a quick win. With consulting you actually start earning revenue the first hour you work. It’s safe.

However, the fact is: you will never get rich doing consulting, for the same reason you will never get rich working for a monthly or hourly wage. To quote Ben Curtis on the Rails Podcast:

There are two secrets to wealth. One is to spend less
money than you make; that’s easy. And two is to get out
of trading time for money.

Wouldn’t it be nice to wake up in the morning, log on to your bank and see that you just earned 1k $/€/£ while sleeping? For this to become reality you need to separate the time you spend working from how much money you make. In the software industry this means you need to build products, not sell hours.

Products are nice because you spend a fixed amount of time building them and can then charge customers over and over again for using them. This means that once you have built a product, it will continue making money without you having to spend any more time on it. Once you have a couple of fairly profitable products you can retire to an island haven somewhere and spend the rest of your days as a corn farmer.

Just a provoking thought from a consultant.


Save Time with Test / Source Toggling Macro

For the past 2-3 years, all the development I have been doing has been test driven. I don’t really want to count how many hours I have spent looking for the corresponding test or source file.

Last week I remembered that I used to have the same problem doing C++ development, only then it was the .cpp and .h files. To solve this tedious problem a friend of mine gave me a Visual Studio macro to toggle between the .cpp and .h file. I figured the same should be possible for test and source files.

Most of the projects I work on still use the convention of one test suite per class; for example, the Person class would be tested in the PersonTests suite. If you follow a similar pattern, you can use this macro to toggle between source file and test file.

Imports System
Imports EnvDTE
Imports EnvDTE80
Imports EnvDTE90
Imports System.Diagnostics
Imports System.Windows

Public Module TestHelpers
    Const CODE_FILE_SUFFIX As String = ".cs"
    Const TEST_FILE_SUFFIX As String = "Tests.cs"

    Public Sub SwitchBetweenSourceAndTest()
        Dim currentDocument As String = ActiveDocument.Name
        Dim targetDocument As String = String.Empty

        ' Check for the test suffix first, since "Tests.cs" also ends with ".cs".
        If currentDocument.EndsWith(TEST_FILE_SUFFIX, _
        StringComparison.InvariantCultureIgnoreCase) Then
            targetDocument = SwapSuffix(currentDocument, TEST_FILE_SUFFIX, CODE_FILE_SUFFIX)
        ElseIf currentDocument.EndsWith(CODE_FILE_SUFFIX, _
        StringComparison.InvariantCultureIgnoreCase) Then
            targetDocument = SwapSuffix(currentDocument, CODE_FILE_SUFFIX, TEST_FILE_SUFFIX)
        End If

        ' Look up the target file anywhere in the solution and open it.
        OpenDocument(targetDocument)
    End Sub

    Private Sub OpenDocument(ByRef documentName As String)
        Dim item As EnvDTE.ProjectItem = DTE.Solution.FindProjectItem(documentName)

        If Not item Is Nothing Then
            item.Open()
            item.Document.Activate()
        Else
            Forms.MessageBox.Show(String.Format("Could not find file {0}.", documentName), _
            "File not found.")
        End If
    End Sub

    Private Function SwapSuffix(ByRef file As String, ByRef fromEnd As String, _
    ByRef toEnd As String) As String
        Return Left(file, Len(file) - Len(fromEnd)) & toEnd
    End Function
End Module

I have mine mapped to Alt+O.

If you haven’t used Visual Studio macros before, you will get an annoying balloon every time you trigger the macro. Check out “How to disable Visual Studio macro “tip” balloon?” on Stack Overflow to solve this.


Estimating in Trees, Probing the Unknown

One of the biggest problems for me ever since I started doing professional software development has been estimation.

Back in the dark ages we used to get a huge spec and spend a couple of weeks going through it, estimating every feature in hours. We did this because the business people SAID they wanted to know the hours.

This obviously didn’t work, since we always got it wrong. When I started thinking agile I learnt the trick of relative estimates; this revolutionized everything. The problem was that while we changed, the business stayed the same. In order for us to tell someone how long something would take, we still needed an initial backlog. It usually takes some time to spec a system, even in stories, and honestly, doesn’t this feel a lot like big spec up front revamped?

I figured that there has to be a better way to solve this. All we really want is a rough estimate between here and there, nothing too detailed. An educated guess.

At Öredev 2007 someone (I can’t remember who) told me about estimating in trees. This is basically the idea of relative estimates applied iteratively, over and over again. When estimating in trees we work with different levels of granularity and drill down at certain spots to probe how big something might be.

What we try to do is split the problem into smaller and smaller chunks and weigh them relative to each other until we find ourselves at a comfortable level of granularity.

Let’s demonstrate. Assume we are building an application, call it “Star Monkey DB”. Our first level might be the application itself.

For the next level I like to use epics, but you could also use wireframes or use cases; anything that is smaller than the previous level.

Now we estimate! Of course we do this using relative estimates, preferably with the whole team present. Pick a reference item (say, Epic 3), set it to two epic points, and estimate the rest in relation to that epic using your favorite estimating technique.

Now take your reference item and break it down into stories.

Estimate again! Pick a reference story, set it to two story points and estimate all the other stories relative to that one.

Now we know that 2 epic points equal 20 story points; this means that Star Monkey DB would take, roughly, 170 story points to build. If your team has an established velocity, you’re done. If not, you’ll have to estimate a velocity to find out how many iterations you think you need.
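
As a sanity check on the arithmetic, here is the scaling in code. The individual epic estimates are made up; I have only made sure they sum to 17 epic points so the total matches the 170 above:

class TreeEstimate
{
    static void Main()
    {
        // Epic points for Star Monkey DB; made-up numbers that sum to 17.
        int[] epicPoints = { 2, 3, 5, 3, 4 };

        // The reference epic was 2 epic points and broke down into
        // 20 story points, so 1 epic point is worth roughly 10 story points.
        const int storyPointsPerEpicPoint = 20 / 2;

        int totalEpicPoints = 0;
        foreach (int points in epicPoints)
            totalEpicPoints += points;

        // 17 * 10 = roughly 170 story points for the whole application.
        System.Console.WriteLine(totalEpicPoints * storyPointsPerEpicPoint);
    }
}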

This works well when estimating something that does not have a complete backlog. The results can be used to discuss the cost and time frame of a project without doing the much-dreaded “pre-study phase”.

This technique has worked well for me and given surprisingly good results. Keep in mind though, this is a rough estimate, and you might want to do some sort of worst-case/best-case analysis before handing the results over to any business people.

