Thursday, October 1, 2015

How to measure productivity in Agile teams?

No, the answer is not story points. Mike Cohn argues that the answer is cycle time.

A lot of people have written similar articles, but I still keep finding senior leaders who want to measure productivity using story points. There are those who want to compare one team's story points against another's. And there are those who try to add story points up across all teams, without considering that the teams are different at all.

People are so tempted to use story points because

  • They are so readily available to measure that people don't feel like looking anywhere else.
  • A lot of Agile coaches end up not understanding cycle time well.
  • A lot of project management tools that claim to be "agile" do not support cycle time calculation at every level well enough.


I have seen "measuring productivity" conversations start when programs are behind schedule or when senior leadership does not trust the teams or vendors, i.e. in general when the going gets tough. This is exactly the tipping point at which a wrong measurement/KPI can push the program in a completely wrong direction.

The scenario when "story points" are chosen as a performance indicator

  • Will immediately lead to teams inflating their story estimates to look good on reports. 
  • This will lead to a false perception of progress overall. 
  • Teams could get completely demotivated as the focus moves to story points rather than actual working software as the measure of progress.
  • There could also be scenarios where quality is compromised for speed and a lot of defects creep in.


The scenario when "cycle time" is chosen as a performance indicator

  • It will lead teams to think about how to measure it in the first place.
  • It will lead teams to break the end-to-end cycle time into the time taken to move a story from Analysis to Development, from Development to QA, from QA to Done, and so on (see the sketch after this list).
  • Teams will end up thinking about a definition of done / acceptance policy for each stage. And if a story moves from Analysis to Development in violation of that policy, it is sent back to Analysis, giving the owners of the previous stage immediate feedback to produce better quality work.
  • Quality then gets built into the process implicitly, through faster feedback and the ability to fail fast, which are fundamentals of an Agile team.
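To make this concrete, here is a minimal sketch of how per-stage cycle time could be computed from a story's stage-transition history. The stage names and the shape of the data are assumptions for illustration; most project management tools expose a similar transition history through their APIs.

```python
from datetime import datetime

# Illustrative stage-transition history for one story (assumed data shape):
# each entry records a stage and the time the story entered it.
transitions = [
    ("Analysis",    datetime(2015, 9, 1, 9, 0)),
    ("Development", datetime(2015, 9, 3, 14, 0)),
    ("QA",          datetime(2015, 9, 8, 11, 0)),
    ("Done",        datetime(2015, 9, 10, 16, 0)),
]

def stage_cycle_times(transitions):
    """Time spent in each stage, from consecutive transition timestamps."""
    return {
        stage: left_at - entered_at
        for (stage, entered_at), (_, left_at) in zip(transitions, transitions[1:])
    }

for stage, elapsed in stage_cycle_times(transitions).items():
    print(f"{stage}: {elapsed}")

# End-to-end cycle time is simply the span from the first transition to Done.
print("End to end:", transitions[-1][1] - transitions[0][1])
```

Once each stage's time is visible, a spike in, say, the QA column points at a concrete bottleneck to investigate, which a single story point total never can.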
So choose your measures wisely :-) 


Sunday, February 1, 2015

Release trains are a good first step, not the last one.

In software development, the release train concept tries to bring some cadence to releasing software. Once it is agreed that the software will be released "every X weeks", product development teams focus on ensuring that a release happens at every X-week interval. For most teams the interval could be anywhere from 2 weeks to 6 months. The main idea, however, is that while the interval and schedule are fixed, the scope or content of the release is variable.

This concept is a boon to enterprises that struggle for years to release software, especially those running teams stuck in dependency hell. Aligning every team towards a shorter release cycle with negotiable scope, and actually releasing more often, is a great first step.

The problem occurs when the concept degenerates into a bureaucratic release management process instead of a focus on the core engineering problems behind releasing frequently.

Here is what happens in large enterprises

  • Organisations start creating new release committees (with roles like release managers, deployment coordinators, etc.) to manage releases every X weeks across the teams that deliver a common software platform. Since the release committee works across teams without enough bandwidth to understand the details, delivery teams are asked to produce a whole bunch of documentation to aid the committee's decision making, mainly around a Go/No-Go decision for every release.
  • Conservative strategies such as early branching and code freezes are applied to meet the timeline of the train. QA teams start demanding more time for testing, as they are constantly worried about every release.
  • Teams start planning more conservatively during their release planning exercises. No one wants to pack in more scope and be caught as the dependent team holding up a release.
The core issue in larger organisations is that even with release trains, releasing ends up being a process problem to be managed rather than an engineering problem to be solved. For example, even with release trains, a lot of teams incur a bigger overhead managing dependencies than they would by investing the same time in decoupling their systems so that each can release independently.

Organisations that embrace release trains as a first step and stop there lose their way towards continuous delivery. They get caught in the false sense of satisfaction that a release train gives them, even as they hate the amount of overhead still incurred with every release.

The real goal should be to engineer software in such a way that you can reliably release it on demand, whether it is daily or even hourly. So, please do take the next steps.



Saturday, January 31, 2015

Painful truth about "Agile QA" in the enterprise world.

When Agile brought the idea of cross-functional teams, with Devs and QAs working closely together every day, it created enough disruption and a feeling of insecurity within the traditional QA organisation.

However, most big enterprises have embraced the idea and called themselves "Agile" by paradropping QAs from their QA organisation into Scrum teams. These QAs still report to the old QA organisation but are also told to work on priorities set by their Scrum Master. The enterprises then declared victory in Agile adoption, even though on the ground, Devs and QAs in the Scrum teams struggled to collaborate. More often than not, these collaboration issues are brushed off as tactical by QA management.

The reasons for the lack of collaboration between developers and quality analysts are deeply rooted in our IT industry.


  • The relevance of so-called "manual QA" is slowly diminishing with the advent of Continuous Delivery. There is a huge community of technical folks working towards reducing the complexity of testing and releasing on demand with minimal effort. Gone are the days of large manual-regression QA teams running test cases documented on a wiki. If an IT organisation is still doing this, its business might cease to exist in the next 10 years. So, in that sense, developers who embrace CD become a threat to the traditional manual QA world.
  • In most companies, QAs are not hired for a keen eye and rigour in spotting where systems could break. Instead, a lot of the time, QAs are created out of developers who struggled to clear a developer interview. In a lot of places, such candidates are made to do a QA role for a while and are promised that, if they do well, they might move into a development role soon. This creates a psychological divide. QAs try to pick up automated testing so that they can hone their development skills along with testing, and in this context, whenever developers try to help QAs with automation, they are viewed as a threat. Innovation in automated testing is needed at every level of the Test Pyramid (see the sketch after this list), and the current culture hinders progress in that direction.
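To make the Test Pyramid point concrete, here is a minimal sketch of the kind of fast, unit-level automated tests that form its base; the `discount` function and its rules are hypothetical, invented purely for illustration.

```python
# A hypothetical pricing rule, used only to illustrate unit-level tests.
def discount(order_total: float, is_member: bool) -> float:
    """Members get 10% off orders of 100 or more; everyone else pays full price."""
    if is_member and order_total >= 100:
        return order_total * 0.9
    return order_total

# Base of the pyramid: many small, fast tests like these, runnable with
# pytest on every commit. The higher layers (service and UI tests) would
# be far fewer, slower, and kept in separate suites.
def test_member_gets_discount_at_threshold():
    assert discount(100, is_member=True) == 90

def test_non_member_pays_full_price():
    assert discount(100, is_member=False) == 100

def test_small_orders_are_never_discounted():
    assert discount(99.99, is_member=True) == 99.99
```

The point is not the code itself but the shape of the investment: when most breakage is caught by cheap tests like these, the expensive manual regression pass at the top of the pyramid shrinks, which is exactly what makes the traditional model feel threatened.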
As organisations mature in Continuous Delivery, there will be a lot more overlap between Developers and QAs. This will also include roles that perform non-functional testing such as performance, security, and load testing. In that context, the purpose of a QA organisation needs to shift towards cultivating people who learn to solve testing, integration, and deployment problems so that releases become uneventful. IT folks who actively invest in bridging the gap between Dev <=> QA <=> Ops will be the ones who succeed in providing true agility to their business.