
How we measure output and productivity

 

… getting more stuff done

 

At Trade Me we want to measure the overall health of Tech (that’s our team of 125 designers, developers, testers, BAs, and Squad Masters), to identify trends, and to know whether we are getting better (or worse!). We know that measuring something is a strong way of saying “this matters”, which is why we put a lot of effort into deciding which metrics to collect.

 


Splitting a User Story along Acceptance Criteria

When breaking down a large user story to ensure it is sized appropriately, the default is to use Richard Lawrence’s excellent 9 Patterns for splitting a user story.

I also use an additional approach: in the first instance I look to see whether the story can be split along its acceptance criteria. Every good user story should have acceptance criteria, and this approach ensures not only that the criteria exist, but also that they are reviewed before we attempt the split.
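
To make this concrete, here is a minimal sketch in Python of splitting along acceptance criteria (the function and field names are my own illustration, not from the original post):

    # Hypothetical illustration: split a story into one child story per
    # acceptance criterion, each child carrying a single criterion.
    def split_by_acceptance_criteria(story):
        return [
            {
                "title": f"{story['title']}: {criterion}",
                "acceptance_criteria": [criterion],
            }
            for criterion in story["acceptance_criteria"]
        ]

    story = {
        "title": "As a shopper I want to check out my cart",
        "acceptance_criteria": [
            "Pay by credit card",
            "Pay by gift voucher",
            "Apply a discount code",
        ],
    }

    for child in split_by_acceptance_criteria(story):
        print(child["title"])

Each child story inherits exactly one condition of satisfaction, so each is independently testable and usually smaller than the original.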


Evolving the Story map

I can’t say enough about how useful story maps are, and how essential they are on any Agile project. Jeff Patton is the undisputed (certainly in my mind) master of the story map, and it’s well worth looking at the materials on his site. Jeff summarises a story map as: “A prioritized user story backlog helps to understand what to do next, but is a difficult tool for understanding what your whole system is intended to do.”
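
As a rough sketch of that two-dimensional structure (my own illustration, not Patton’s notation), the backbone of user activities runs left to right, with each activity’s stories prioritised top to bottom:

    # Hypothetical sketch of a story map: a backbone of user activities,
    # each with its stories ordered by priority (most important first).
    story_map = {
        "Browse catalogue": ["Search by keyword", "Filter by genre"],
        "Choose album": ["View album details", "Listen to samples"],
        "Pay": ["Pay by VISA card", "Pay by voucher"],
    }

    # A release slice takes the top story from each activity, giving a thin
    # end-to-end walk through the whole system that a flat backlog cannot show.
    release_1 = {activity: stories[:1] for activity, stories in story_map.items()}
    print(release_1)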

 


The document dilemma

The Agile Manifesto states “Working software over comprehensive documentation”. This seems to be one of the biggest mindset shifts that organisations need to make when adopting any Agile framework. For some people that simple statement conjures up visions of chaos, lack of control, and, worst of all, developers doing whatever they want. At the other end of the spectrum, that’s exactly what some people hope it does mean.

If you ask people within most software development teams or companies what they do, you are very unlikely to get the response “we write documents about our products for our clients”. Most responses would be about the latest website or the latest Google hack they are working on. So, at an external, client-facing level, we inherently do value working software over comprehensive documentation. Where I believe this value changes is during the production of the working software, under the legal requirements imposed by the iron grip of the contract.


Acceptance Criteria and the Definition of Done

Recently, some of the teams I’m coaching have found it difficult to distinguish between acceptance criteria for user stories and the definition of done. Here’s my attempt to make the distinction clear:

  • For a user story or feature to be “potentially shippable” it needs to meet the expectations of the Product Owner and be of the agreed quality.
  • The Product Owner’s expectations are phrased as acceptance test criteria. Acceptance test criteria are conditions of satisfaction specific to each individual user story. (For more on acceptance criteria, read “On Acceptance Criteria”.)
  • The user story’s (internal) quality is defined in the “Done” statement. The “Done” statement is applicable to all user stories in the project.

 

Here’s an example:

User Story:
“As a music lover I want to be able to pay for my album by VISA card”
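
To sketch how the two lists might look for this story (the specific criteria and “Done” items below are my own assumptions, not taken from the original example):

    # Hypothetical illustration: acceptance criteria belong to ONE story,
    # while the Definition of Done applies to EVERY story in the project.
    acceptance_criteria = [
        "A valid VISA card is accepted and charged the album price",
        "An invalid or expired card is rejected with a clear error message",
        "A receipt is emailed to the buyer after a successful payment",
    ]

    definition_of_done = [
        "Code peer reviewed and merged",
        "Unit and acceptance tests passing",
        "Deployed to the staging environment",
    ]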

How story points work

One of my clients is a small software house that does custom development projects for their clients. I helped them successfully introduce Agile (Scrum with XP), and both the team and the business managers are really happy with it.

As they liked our methods of planning and estimating (story points and velocity), the account managers and sales team were discussing options for relating story points to dollar values.
 
To explain why I think this is very risky and inadvisable, I wanted to give them some background.
 
“How big” vs. “How long”
Story points are units used to size a piece of functionality or work; sizing in this case means that story points indicate “how big” a piece of work is.
 
This is often confused with “how long” it takes to implement, but in fact “how big” and “how long” are very different things (see the worked example after this list):

  • The “how long” is highly dependent on which developer is performing the work 
  • The “how big” bears no relationship to who is performing the work
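
A small worked example (the numbers are mine, purely illustrative) shows why the two diverge, and why pricing a story point at a fixed dollar rate is risky:

    # Hypothetical numbers, purely illustrative.
    story_size_points = 8  # "how big": the same no matter who does the work

    # "How long" depends on who performs it; velocity is points per sprint.
    velocity = {"experienced developer": 16, "new developer": 8}

    for developer, points_per_sprint in velocity.items():
        sprints = story_size_points / points_per_sprint
        print(f"{developer}: {sprints:.1f} sprint(s) for the same "
              f"{story_size_points}-point story")

    # At a fixed dollars-per-point rate both cases would be billed the same,
    # even though the elapsed effort, and therefore the real cost, differs.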