
Friday, February 10, 2017

Agile planning & estimation mindset (as a manifesto)

In my time at Thoughtworks, almost every year there is a debate about how one should estimate and plan on the delivery engagements we work on.

Given the variety of engagements we do, there is no one-size-fits-all approach. But there is a mindset that is useful when approaching them, and I have made an attempt to make my mindset explicit (inspired by the Agile Manifesto).

I value:

  • Learnings from the estimation conversation over estimation numbers themselves
  • Shorter plans with lightweight estimates over longer plans with precise estimates
  • Using past trend/flow data over estimating every time
  • Optimising for delivery of business value over optimising for time and budget

While adhering to one of the core principles: learning by releasing working software frequently, anywhere from daily to every couple of weeks, with a preference for the shorter timescale.

Sunday, April 24, 2016

Agile won't scale with average talent and a prescriptive framework.

A common question amongst many large enterprise clients is, "How do we scale Agile?" More often than not, these enterprises don't have the time or patience to work through the values of agile methods and inculcate them within their organisation.

What they are really asking for is a shortcut for spreading some agile practices across an average talent pool in the company, with the hope that people might learn to deliver software better. This is the very reason process frameworks like SAFe seem so attractive to senior execs.

Ironically, the apparent shortcut of adopting a framework in the first place ends up burning more of the IT budget over 3-5 years, with minimal results.

With an average talent pool, the problem of scaling agility soon becomes a problem of enforcing agility (pardon the oxymoron). Being prescriptive at a large scale requires a ton of micro-processes and standards to be set up to manage the average worker.

SAFe talks about normalising estimates across all teams at the very beginning by forcing teams to adopt 1 point as 1 day. This defeats many of the benefits of point-based estimation and relative sizing. Again in SAFe, teams are forced to sign up for objectives at the end of every increment planning session of 2 months or so, which takes no account of the nature of the work each team is doing. A team that cannot release anything in 3 months ends up with weird-looking objectives written just to satisfy the organisation's standards.

Such standards are created to put checks around an average talent pool that might be prone to wrongdoing. One company even went to the length of running reports on story check-in commits to ensure that stories did not span multiple iterations. And when there were hangover stories, people were asked to split them to account for the partial, untestable work of the previous iteration.

Another company I worked with was trying SAFe and was forcing product owners to come up with quantified business value and time criticality so that cost of delay could be calculated. While such quantification is core to product development, the essence of the whole calculation was diluted by the middle management of product owners, who ended up making up numbers as a shortcut. Justification of the business value numbers became ever more subjective, and usually the loudest product owner in the room won.
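For context, the calculation being asked for here is SAFe's Weighted Shortest Job First (WSJF): cost of delay is the sum of relative scores for user/business value, time criticality, and risk reduction or opportunity enablement, divided by job size. A minimal sketch in Python follows; the feature names and scores are invented for illustration, which is of course exactly the failure mode described above.

    # SAFe-style WSJF prioritisation: cost of delay divided by job size.
    # Feature names and scores below are invented for illustration only.
    features = {
        "checkout-redesign": {"value": 8, "time_criticality": 13,
                              "risk_opportunity": 3, "job_size": 5},
        "audit-logging":     {"value": 3, "time_criticality": 2,
                              "risk_opportunity": 8, "job_size": 8},
    }

    def wsjf(f):
        # Cost of delay = business value + time criticality + risk/opportunity.
        cost_of_delay = f["value"] + f["time_criticality"] + f["risk_opportunity"]
        return cost_of_delay / f["job_size"]

    # Rank features by WSJF, highest first.
    for name in sorted(features, key=lambda n: wsjf(features[n]), reverse=True):
        print(name, round(wsjf(features[name]), 2))
    # checkout-redesign 4.8
    # audit-logging 1.62

The arithmetic is trivial; the hard part is the inputs. If the scores are made up to win an argument, the ranking that falls out is noise.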

These experiences clearly show that there is no real alternative to investing in hiring and nurturing good talent. While a scalable framework with checks and balances might look very attractive to execs at the very top, middle management and folks on the ground will find ways to work around those checks and never learn the real values.

While hiring talent is hard, it is equally important to nurture good talent, especially in IT where the market is highly competitive.

Design your enterprise towards continuous delivery of value. A lean enterprise should be able to build, measure and learn from the software it delivers and iterate rapidly towards maximising business value. This is not easy, especially in a large enterprise: reaching this goal requires hard work and changes that go beyond IT, such as financial budgeting and performance reviews. Invest in hiring smart talent, and then invest in your people to coach them on fundamental principles such as early feedback and failing fast, and on how core XP engineering practices and CD help achieve them.

Just scaling agile with a prescriptive process framework may be attractive in the short term, but it will not stand the test of time for your enterprise. Don't give in to the temptation. Do the hard work.

Thursday, October 1, 2015

How to measure productivity in Agile teams?

No, the answer is not story points. Mike Cohn describes the answer as cycle time.

A lot of people have written similar articles, but I keep finding people in senior leadership who want to measure productivity using story points. There are those who want to compare one team's story points against another's, and those who add story points across all teams without considering that the teams are different at all.

People are so tempted to use story points because:

  • They are so readily available to measure that people don't feel like looking anywhere else.
  • A lot of Agile coaches don't understand cycle time well.
  • A lot of project management tools that claim to be "agile" do not support cycle time calculation at every level well enough.


I have seen "measuring productivity" conversations start when programs are behind schedule or senior leadership does not trust the teams or vendors, i.e. in general when the going gets tough. This is exactly the tipping point at which a wrong measurement/KPI can push the program in a completely wrong direction.

Choosing "story points" as a performance indicator

  • Will immediately lead to teams inflating their story estimates to look good on reports.
  • Will create a false perception of progress overall.
  • Could completely demotivate teams as the focus moves to story points rather than working software as the measure of progress.
  • Could compromise quality for speed, letting a lot of defects creep in.


Choosing "cycle time" as a performance indicator

  • Will lead teams to think about how to measure it in the first place.
  • Will lead teams to break the end-to-end cycle time into the time taken to move a story from Analysis to Development, from Development to QA, and from QA to Done (a minimal sketch of this calculation follows the list).
  • Will make teams think about a definition of done / acceptance policy for each stage. If a story moves from Analysis to Development in violation of that policy, it is sent back to Analysis, giving the owners of the previous stage immediate feedback to produce better quality work.
  • Builds quality into the process implicitly, through the faster feedback and ability to fail fast that are fundamental to an Agile team.
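To make the second bullet concrete, here is a minimal sketch in Python of how per-stage and end-to-end cycle times could be derived from a story's transition log. The event format, stage names and dates are made up for illustration and do not correspond to any particular tool's API.

    from datetime import datetime

    # Each event records when a story entered a stage (illustrative data).
    transitions = [
        ("STORY-1", "Analysis",    datetime(2015, 9, 1)),
        ("STORY-1", "Development", datetime(2015, 9, 3)),
        ("STORY-1", "QA",          datetime(2015, 9, 8)),
        ("STORY-1", "Done",        datetime(2015, 9, 10)),
    ]

    def cycle_times(events):
        # Time spent in each stage is the gap between consecutive transitions;
        # end-to-end cycle time runs from the first transition to the last.
        events = sorted(events, key=lambda e: e[2])
        per_stage = {}
        for (_, stage, entered), (_, _, left) in zip(events, events[1:]):
            per_stage[stage] = (left - entered).days
        total = (events[-1][2] - events[0][2]).days
        return per_stage, total

    per_stage, total = cycle_times(transitions)
    print(per_stage)  # {'Analysis': 2, 'Development': 5, 'QA': 2}
    print(total)      # 9 (days from entering Analysis to Done)

Breaking the number down per stage is what surfaces the bottleneck; a single end-to-end figure hides it.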
So choose your measures wisely :-) 


Sunday, February 1, 2015

Release trains are a good first step, not the last one.

In software development, the release train concept tries to bring some cadence to releasing software. Once it is agreed that the software will be released "every X weeks", product development teams focus on ensuring that a release happens at every X-week interval. For most teams the interval is anywhere between 2 weeks and 6 months. The main idea is that while the interval and schedule are fixed, the scope or content of each release is variable.

This concept is a boon to enterprises that have struggled for years to release software, especially those running teams caught in dependency hell. Aligning every team towards a shorter release cycle with negotiable scope, and actually releasing more often, is a great first step.

The problem occurs when the concept breaks down into a bureaucratic release management process instead of a focus on the core engineering problems behind releasing frequently.

Here is what happens in large enterprises:

  • Organisations start creating new release committees (with roles like release managers, deployment coordinators etc...) to manage releases every X weeks across the teams that deliver a common software platform. Since the release committee works across teams without enough bandwidth to understand the details, delivery teams are asked to create a whole bunch of documentation to aid the committee's decision making, mainly around a Go/No-Go decision for every release.
  • Conservative strategies such as early branching, code freezes, etc... are applied to meet the release timeline of the train. QA teams start demanding more time for testing as they are constantly worried about every release.
  • Teams start planning more conservatively during their release planning exercises. No one wants to take on more scope and be caught as the dependent team holding up a release.
The core issue in larger organisations is that even with release trains, releasing ends up being a process problem to be managed rather than an engineering problem to be solved. For example, many teams incur a bigger overhead managing dependencies across the train than they would investing the same time in decoupling their systems to release independently.

Organisations embracing release trains as a first step often lose their way towards continuous delivery. They get caught up in the false sense of satisfaction that a release train gives them, even while hating the amount of overhead still incurred with every release.

The real goal should be to engineer software in such a way that you can reliably release it on demand, whether daily or even hourly. So please do take the next steps.



Saturday, January 31, 2015

Painful truth about "Agile QA" in the enterprise world.

When Agile brought in the idea of cross-functional teams, with Devs and QAs working closely together every day, it created plenty of disruption and a feeling of insecurity within the traditional QA organisation.

However, most big enterprises have embraced the idea and called themselves "Agile" by paradropping QAs from their QA organisation into Scrum teams. These QAs still report to the old QA organisation but are also told to work on priorities set by their Scrum Master. Victory in Agile adoption was then declared, even though Devs and QAs in the Scrum teams were struggling to collaborate on the ground. And more often than not, those collaboration issues are brushed off as tactical by QA management.

The reasons for the lack of collaboration between developers and quality analysts are fairly deep-rooted in our IT industry.


  • The relevance of so-called "manual QA" is slowly diminishing with the advent of Continuous Delivery. There is a huge community of technical folks working to reduce the complexity of testing and releasing on demand with minimal effort. Gone are the days of large manual regression QA teams running test cases documented on a wiki. If an IT organisation is still doing this, its business might cease to exist in the next 10 years. So, in that sense, CD-embracing developers become a threat to the traditional manual QA world.
  • In most companies QAs are not hired for a keen eye and rigour in finding where systems could break. Instead, a lot of the time, QAs are created out of developers who struggled to clear a developer interview. In many places such candidates are made to do a QA role for a while and promised that if they do well, they might move into a development role soon. This creates a psychological divide: QAs try to pick up automated testing so they can hone their development skills alongside testing, and whenever developers try to help QAs with automation, they are viewed as a threat. Innovation in automated testing is needed at every level of the Test Pyramid, and the current culture hinders progress in that direction.
As organisations mature in Continuous Delivery, there will be a lot more overlap between Developers and QAs, including in roles that perform non-functional testing such as performance, security and load testing. In that context, the purpose of a QA organisation needs to shift towards cultivating people who learn to solve testing, integration and deployment problems so that releases become uneventful. The IT folks who actively invest in bridging the gap between Dev <=> QA <=> Ops will be the ones who provide true agility to their business.