Agile Lab 2013 – External Impressions

Feedback on the short survey conducted on 2013-09-10 by Michael Mahlberg

Recommendations

General recommendations

  • Only do things that generate value – e.g. if you record a metric that you never use then there is no value generated. Either find a way to eliminate the need to record the metric or find a way to use it (e.g. make it visible)
  • Use the scientific method – formulate a theory, conduct an experiment, evolve the theory.
    E.g. if the theory is that the WIP-limits are optimal, experiment with changing them and then set the WIP-limits according to what you learned (which might well be the old values, but now you know that they are optimal for the current conditions)

Process recommendations

There are five areas of improvement with regard to the process that seem to have considerable leverage.

  • Introduction of “Kaizen events”
  • Introduction of “pirate metrics / pirate marks”
  • Strengthening of Andon-principle
  • Verification of WIP-Limits
  • Evolutionary change of the board, initiated by the team

Let's look at each of these in turn.

Introduction of "Kaizen events"

“Kaizen Event” is an informal term used in the Kanban Method's vocabulary to describe short events where a significant part of the team gathers to share information. Usually there are a number of reasons for kaizen events:

  • blockers [unscheduled] – when a [sub]team can't continue with their work
  • solutions [unscheduled] – when a blocker is removed (so that everyone knows how to avoid it in the future)
  • inventions [scheduled] – when a [sub]team has removed a blocker or introduced a new technology to the integrated product (e.g. Release branch)
  • need for process changes [scheduled] – when the team (usually during the standup) identifies a problem in the process (bottlenecks, too much slack, rework etc.) (can result in definition of a subteam to address the issue and propose solutions)
  • process changes [scheduled] – If a (sub)team wants to propose a change to the process (policies)

Those events should be extremely short (5 to 10 minutes) and – in the case of inventions – well prepared.

Introduction of "pirate metrics / pirate marks"

Since a company chose to capitalize on the name “Pirate metrics” it is hard to google for the term and find anything helpful. The basic idea – as presented for example by Benjamin Mitchell in his talk at the LKCE 2012, see slide 24 – is to simply mark each card with a “tag” for every day that it spends in a certain column. For extremely narrow cadences this timeframe might even be smaller (e.g. half days).
Basically it just means you

  • A) assign “signs” to the columns (e.g. R = Ready, A = Acceptance test definition, S = Story Preparation, I = Implementation, P = Post processing on the story level and R, I and P respectively on the task level) (remember to put the 'signs' on the board as well so that everybody can look up the meaning of the tags immediately) and
  • B) at defined intervals have somebody go over the board and 'tag' all cards according to their current column.
  • C) Use the information gathered by this in Kaizen Events, retrospectives and the standup meeting.

Sample pirate tags

Sample of pirate tags on a (hypothetical and hopefully unrealistic) story card that spent half a day in Ready, two half days in the definition of acceptance criteria, half a day in story preparation, half a day in implementation and two-and-a-half days in post-processing.
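
A minimal sketch (Python; names and data purely illustrative) of what the tagging pass boils down to: at every interval each card receives one tag for the column it currently sits in, so the tag string can be read straight off the card. The hypothetical card described above would end up with the tag string RAASIPPPPP.

    # Column “signs” as proposed above (story level)
    SIGNS = {"Ready": "R", "Acceptance": "A", "Story Preparation": "S",
             "Implementation": "I", "Post Processing": "P"}

    def tag_cards(cards):
        # One tagging pass: append the sign of the current column to every card
        for card in cards:
            card["tags"] += SIGNS[card["column"]]

    # The hypothetical story card from the sample above, tagged once per half day
    card = {"column": "Ready", "tags": ""}
    for column in (["Ready"] + ["Acceptance"] * 2 + ["Story Preparation"] +
                   ["Implementation"] + ["Post Processing"] * 5):
        card["column"] = column
        tag_cards([card])
    print(card["tags"])   # -> RAASIPPPPP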

Strengthening of andon principle

The traffic light for the build status and the acoustic signals for some events are examples of an andon system as it is often used in production environments. [Personal remark: I just love the traffic light!]
In production systems an andon system is often used in conjunction with a jidoka (stop-the-line) system. It might be helpful to consider a comparable approach – often the usage of the andon system can indicate an opportunity to have a Kaizen-Event.
Additional comment: In accordance with the broken-window theory it seems especially important to make sure that the andon system actually reflects the real situation – otherwise the team (unconsciously) loses its trust in the system and it gets easier and easier to ignore it.

Verification of WIP limits

The WIP limits seem to work perfectly for the team for now, but do they really add value? To make sure that the WIP limits are used to their full extent their values should be determined by experiments. There are many reasons for applying WIP limits but two of the most important are the identification of bottlenecks and the leveling of the workload. (Right after creating predictability via the inverse of Little's Law ⇒ the lower the Work In Progress, the shorter the overall cycle time).
To achieve these goals the WIP limits should be tightened until bottlenecks show or people start complaining that they are idle. This way bottlenecks will show up and process optimization can happen (via Kaizen Events or during the retrospective, based on factual data).
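
To make the parenthetical reference to Little's Law concrete, here is a minimal worked example (Python; the numbers are purely illustrative and not taken from the lab): the average cycle time equals the average work in progress divided by the average throughput, so lowering the WIP limit at constant throughput shortens the cycle time.

    # Little's Law: average cycle time = average WIP / average throughput
    def average_cycle_time(avg_wip, throughput_per_day):
        return avg_wip / throughput_per_day

    print(average_cycle_time(12, 4))  # 12 tasks in progress, 4 finished per day -> 3.0 days
    print(average_cycle_time(6, 4))   # halved WIP at the same throughput -> 1.5 days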

Evolutionary change of the board, initiated by the team

The interpretation of the board has undergone some changes since it first came into use. This is a very healthy sign and indicates evolutionary change of the process as such. Those changes should be reflected on the board (e.g. by removing columns). Furthermore the evolutionary change doesn't stop with the board but also includes other artifacts of the process such as the recorded information, the format of the task-cards etc.

Recommendations for the team

  • Future-centric standup meetings
  • More focus on concrete concepts
  • Challenge the stories

Future-centric standup meetings

The purpose of the stand-up meeting is – at least in some interpretations – to exchange information about

  • “what happened since the last standup” – so that everyone has all the information they need for their work today
  • “what do we want to achieve till the next standup” – so that there is a joint effort, no work is done by two teams in duplicate and no work is done that is not put to use
  • “what is hindering the team” – so that the team can remove those things

In general a stand-up meeting that puts emphasis on the past (e.g. the first question) indicates a team that is still relying on their coaches and managers for decision making, while an emphasis on the future (e.g. the second and third question) fosters self-organization and independence.

More focus on concrete concepts

Even though the whole team is involved in the estimation of the stories, and thus everyone should be familiar with the meaning of the story numbers, only a small percentage of people are “wired” for this kind of association and the details usually get lost very soon. Mentioning the concrete concepts instead of just the numbers triggers different memory areas; a comment during the standup meeting like “I intend to implement the collection of keystrokes per minute for the refactoring-efficiency gauge” tends to engage people far more than “I'll work on Task 5 of S-42”. The concrete mention of the task's contents might prompt someone who is working on something completely different to chime in with “Oh, we implemented a keystroke collector last week, let's meet after the standup”. The probability of that happening is not very high with abstract references.

Challenge the stories

Good stories are negotiable – providing input on how a story can be improved based on the current developments and in accordance with the product vision is a valuable contribution.

Recommendations for the customer

  • Stronger product-vision
  • Use the INVEST concept when designing stories, perhaps incorporate the Cohn format

Stronger product-vision

A strong product vision is like a commander's intent: a vision of how the situation will be after the action is over. This enables each and every member of the team to make informed decisions about local (implementation) questions. The hardest part is formulating the product vision in a way that is engaging and captures the essence instead of the details. A good way to achieve this in software projects is to concentrate on the capabilities the users will have once the project is completed.

Vision focusing on the “how”:

“Our mission is to become the international leader in the space industry through maximum team-centered innovation and strategically targeted aerospace initiatives.” – hypothetical aerospace CEO

Vision focusing on the “what”:

“We'll put a man on the moon and return him safely by the end of the decade.” – John F. Kennedy

[Examples from Made to Stick, by Chip and Dan Heath]
I would encourage anyone who has to formulate a vision to try to go the JFK-way.

Use the INVEST concept when designing stories

The INVEST list of properties of good stories (Independent, Negotiable, Valuable, Estimable, Small, Testable) could help to sharpen the formulation of stories so that the team has a better chance of getting involved on the content level of the product.

... perhaps incorporate the Cohn format

Formulating the stories along the lines of Mike Cohn's story format has some serious advantages (although it is not an answer to each and every problem with stories). With the format “As a <type of user> I want <system behavior> so that <benefit>” a lot of practices come into play automatically: you get personas (types of users), an outside view (I want) and even a first level of acceptance criteria and a value indication (benefit). A purely hypothetical example in the lab's context: “As a developer I want my keystrokes per minute collected for the refactoring-efficiency gauge so that I can see how a refactoring session affects my efficiency.”

Recommendations for the coach

Include SOLID (or the like) in the review-guidelines

Right now the guidelines for the review do not state specifics about design decisions – a reminder of the results of the pre-lab work (e.g. on object-oriented software construction) with some tangible guidelines (SOLID was mentioned on one of the posters in the hallway) would give reviewers some guidance on the criteria they should apply.

Consider encouraging (slightly) larger implementation tasks

From the feedback it seems that some of the tasks are extremely small – if that proves to be true it makes sense to encourage slightly larger tasks.

Consider a homegrown application for tracking in future labs

While I am a strong proponent of physical task-boards I also believe that it is beneficial for some of the measurements if it is possible to mine the data. Since COTS solutions tend to be too heavy – especially if the goal is to perform data mining and all you need is the raw data of the state changes – a really lightweight in-house solution could be considered. Contact me for details on this idea.
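
As a rough illustration of how lightweight such a tool could be, here is a sketch (Python; the file name and card IDs are purely illustrative, not a specification): a single function that appends every card movement to a CSV file already provides the raw state-change data needed to later mine cycle times or draw cumulative flow diagrams.

    import csv
    from datetime import datetime

    LOG_FILE = "state_changes.csv"   # illustrative file name

    def log_move(card_id, from_column, to_column):
        # One row per state change is all the later data mining needs
        with open(LOG_FILE, "a", newline="") as f:
            csv.writer(f).writerow(
                [datetime.now().isoformat(), card_id, from_column, to_column])

    # Example: task 5 of story S-42 is pulled from Implementation into Post Processing
    log_move("S-42/T5", "Implementation", "Post Processing")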

Overall setting

Overall the project seems to be in a very good shape from a process perspective.
The main opportunities for improvement seem to be in the area of quicker feedback on blocking issues and on inventions.

Strong points

  • Almost no rework (stories only move from left to right; a “done” task or story really is done)
  • Andon system in place
  • Coach and customer on-site
  • Well established process and team responsibilities
  • One board to show 'em all!

Weak points

  • The setting for the lab has grown rather big (compared to the XP-Lab in the beginning) – a trade-off analysis might be beneficial
  • The codebase under consideration might be too big – so that pre-existing code interferes with the learning objectives. Clearer boundaries might be helpful.

Kanban Board

The Kanban board fulfills its purpose completely. For new team-members some pointers as to the direction of flow in the first lanes would be helpful (horizontal/vertical) but in the given setting (team-members don't change) that does not seem to be necessary.

Depth of kanban

The Depth of Kanban evaluation follows an article from the Limited WIP Society's website, where you can also find some other samples of implementation depths.

Observation: external impression – Feedback: interpretation of the interviewees' answers

Visualization

Range: 1 to 10, Current value: 7 (7.5)

The proposed metrics from the LWS website suggest counting one point for each of:

  • [X] Work
    … ⇒ Via board
  • [/] Different Work Item Types
    … ⇒ Different areas on the board – perhaps only partially applicable in this context
  • [X] Workflow
    … ⇒ Via board
  • [/] Kanban Limits
    … ⇒ As numbers & by board-space for stories, numbers (and no other visualization) for tasks
  • [X] Ready for pull (“done”)
    … ⇒ Explicit lanes
  • [X] Blocking issues (special cause variations)
    … ⇒ Blocked Marker
  • [X] Capacity Allocation
    … ⇒ Via Avatars
  • [/] Metrics-related aspects such as – lead time, local cycle time, SLA target
    … ⇒ partially (cycle-time, lead-time), “only” numerical representation, no indicator function (e.g. close to limit)
  • [X] Inter-work item dependency (incl hierarchical, parent-child dependency)
    … ⇒ Swimlanes per Story
  • [ ] Inter-workflow dependency
    … ⇒ Not really applicable in this context since the lab does not encompass multiple organizational units. One option would be an explicit workflow for task post-processing, but with the time restriction of the lab this seems to incur more cost than it would provide in benefits
  • [ ] Other risk dimensions – cost of delay (function shape & order of magnitude), technical risk, market risk
    … ⇒ Not really applicable in the situation

The checklist alone would yield 7.5 points (six full and three half points); in addition to the original suggestions, the missing visualization of flow (e.g. the “age” of tasks in a lane) was counted as -0.5 points, resulting in 7.

WIP Limits

The original paper proposes a simple taxonomy of four discrete values (Visualisation only, Proto-kanban, Kanban and multi-kanban), and from my observations the team operates a fully working Kanban system.

Manage flow

Range: 1 to 7, Current value: 3 (3.5)

The proposed metrics from the LWS website suggest counting one point for each of:

  • [X] Daily meetings
    … ⇒ Daily Standup in place, retrospective/operations review also in place
  • [ ] Cumulative Flow Diagrams
    … ⇒ hard to do on the task-level without tooling
  • [/] Delivery rate (velocity/throughput) control chart
    … ⇒ no control chart but known by every person I asked – so probably no need for visualization
  • [ ] SLA or lead time target
    … ⇒ Not applicable in this situation
  • [X] Flexible staff allocation or swarming behavior
    … ⇒ Seems to work
  • [X] Deferred pull decisions, or dynamic prioritization
    … ⇒ happens all the time, thanks to the adoption of XP-style work agreements
  • [ ] Metrics for assessing flow such as number of days blocked, lead time efficiency
    … ⇒ hard to measure without tooling; for the timeframe of the lab this is not viable from a resource point of view. Suggestion for future labs: see the recommendations for the coach.

This would yield a value of 3.5 (three full and one half point), but 0.5 points have been deducted due to the missed opportunity for visual feedback with regard to the flow of task cards (writing on the back of the cards is a good idea in some instances, but only information vital to the person who handles the card should be put there – things that need to be visible should go on the front).

Explicit policies

Range: 1 to 5, Current value: 1.5 (feedback) / 2.5 (observation)

The proposed metrics from the LWS website suggest counting one point for each of:

  • [X] Pull criteria (definition of done, exit criteria)
    … ⇒ Observation yielded clear definition, feedback implied some less clear definitions
  • [ ] Capacity allocation
    … ⇒
  • [/] Queue replenishment
    … ⇒ no explicitly stated cadence
  • [ ] Classes of service
    … ⇒ not used
  • [X] Staff allocation / work assignments
    … ⇒ Via Avatars

Feedback loops

Range: 1 to 3, Current value: 2 (feedback) / 3 (observation)
[The original article refers to the Toyota Kata – I adjusted the scale with regard to the coaching model in place]
The proposed metrics from the LWS website suggest a taxonomy according to the Toyota Kata – the following is an abstraction from the goals of each step

  • [X] team-internal feedback loops
    … ⇒ Encouraged (pair programming) and explicitly installed in the form of code reviews
  • [X] feedback loops via the coaches
    … ⇒ Explicitly installed as part of the process
  • [X] feedback loops in operation reviews
    … ⇒ Installed on multiple levels (weekly ops review, external review, evaluation through coaches)

Improvements

Range: 1 to 5, Current value: 3
[The original article refers to the Toyota Kata – I adjusted the scale with regard to the coaching model in place]
The proposed metrics from the LWS website suggest a taxonomy according to the Toyota Kata – the following is an abstraction from the goals of each step

  • [/] Evolution of the system
    … ⇒ only partially visible, could be more
  • [X] deepening Kanban implementation
    … ⇒ yes, according to interviews
  • [/] model driven
    … ⇒ partially
  • [/] coached
    … ⇒ not in the sense of the Toyota Kata, but coaches are present all the time
  • [/] operations review
    … ⇒ inferred from the interview results: the retrospectives do not necessarily include all the opportunities that come up during the week