Software Engineering for Smart Data Analytics & Smart Data Analytics for Software Engineering
Feedback on the short survey conducted on 2013-09-10 by Michael Mahlberg
There are five areas of improvement with regard to the process that seem to have considerable leverage.
Let's look at each of these in turn.
“Kaizen Event” is an informal term used in the Kanban Method's vocabulary to describe short events where a significant part of the team gathers to share information. Usually there is a concrete trigger for a kaizen event – for example a blocking issue or an invention that affects the whole team.
Those events should be extremely short (5 to 10 minutes) and – in the case of inventions – well prepared.
Since a company chose to capitalize on the name “Pirate Metrics” it is hard to google for the term and find anything helpful. The basic idea – as presented, for example, by Benjamin Mitchell in his talk at the LKCE 2012 (see slide 24) – is to simply mark the cards with a “tag” for each day they spend in a certain column. For extremely narrow cadences this timeframe might be even smaller (e.g. half days).
Basically it just means you put a tally mark (“tag”) on the card for every day – or half day – it spends in a column.
Sample of pirate tags on a (hypothetical and hopefully unrealistic) story card that spent half a day in ready, two half days in the definition of acceptance criteria, half a day in story preparation, half a day in implementation and two-and-a-half days in post-processing.
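If the tags are later transcribed (e.g. when a card reaches the last column), turning them into time-in-column numbers is trivial. A minimal sketch, assuming one tag stands for half a day and using the hypothetical card described above – the column names and data format are illustrative assumptions, not part of the team's current process:

```python
# Hypothetical sketch: derive time-in-column from transcribed pirate tags.
# Assumption: each tag stands for half a day spent in the column it was made in.
from collections import Counter

HALF_DAY = 0.5  # one tag == half a day in this (assumed) cadence

def time_per_column(tags):
    """tags: iterable of column names, one entry per tag on the card."""
    counts = Counter(tags)
    return {column: n * HALF_DAY for column, n in counts.items()}

# The (hypothetical) card described above:
card_tags = (["ready"] * 1 + ["acceptance criteria"] * 2
             + ["story preparation"] * 1 + ["implementation"] * 1
             + ["post-processing"] * 5)
print(time_per_column(card_tags))
# {'ready': 0.5, 'acceptance criteria': 1.0, 'story preparation': 0.5,
#  'implementation': 0.5, 'post-processing': 2.5}
```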
The traffic light for the build status and the acoustic signals for some events are examples of an andon system as it is often used in production environments. [Personal remark: I just love the traffic light!]
In production systems an andon system is often used in conjunction with a jidoka (stop-the-line) system. It might be helpful to consider a comparable approach – often the usage of the andon system can indicate an opportunity to have a Kaizen-Event.
Additional comment: in accordance with the broken-window theory it seems especially important to make sure that the andon system actually reflects the real situation – otherwise the team (unconsciously) loses its trust in the system and it gets easier and easier to ignore it.
The WIP limits seem to work perfectly for the team for now, but do they really add value? To make sure that the WIP limits are used to their full extent, their values should be determined by experiments. There are many reasons for applying WIP limits, but two of the most important are the identification of bottlenecks and the leveling of the workload (right after creating predictability via Little's Law: for a given throughput, the lower the work in progress, the shorter the overall cycle time).
To achieve these goals the WIP limits should be tightened until bottlenecks show or people start complaining that they are idle. This way bottlenecks will show up and process optimization can happen (via Kaizen Events or during the retrospective, based on factual data).
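To make the Little's Law argument concrete, here is the relationship in its Kanban reading, with a worked example using made-up numbers (not measurements from this team):

```latex
% Little's Law, read for a Kanban system:
\text{average cycle time} \;=\; \frac{\text{average WIP}}{\text{average throughput}}
% e.g. 12 cards in progress at 3 finished cards per day -> 12/3 = 4 days,
%      whereas 6 cards at the same throughput -> 6/3 = 2 days.
```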
The interpretation of the board has undergone some changes since it first came into use. This is a very healthy sign and indicates evolutionary change of the process as such. Those changes should be reflected on the board (e.g. by removing columns). Furthermore, the evolutionary change doesn't stop with the board but also includes other artifacts of the process, such as the recorded information, the format of the task cards, etc.
The purpose of the stand-up meeting is – at least in some interpretations – to exchange information about three questions: what did I accomplish since the last stand-up, what do I intend to do until the next one, and what is blocking me.
In general a stand-up meeting that puts emphasis on the past (e.g. the first question) indicates a team that is still relying on their coaches and managers for decision making, while an emphasis on the future (e.g. the second and third question) fosters self-organization and independence.
Even though the whole team is involved in the estimation of the stories – and thus everyone should be familiar with the meaning of the story numbers – only a small percentage of people are “wired” for this kind of association, and usually the details get lost very soon.
Mentioning the concrete concepts instead of just the numbers triggers different memory areas, and thus a comment during the stand-up meeting like “I intend to implement the collection of keystrokes per minute for the refactoring-efficiency gauge” tends to engage people far more than “I'll work on Task 5 of S-42”. The concrete mention of the task's contents might trigger someone who is working on something completely different to chime in with “Oh, we implemented a keystroke collector last week, let's meet after the standup”. The probability for that to happen is not very high in the case of the abstract references.
Good stories are negotiable – providing input on how a story can be improved based on the current developments and in accordance with the product vision is a valuable contribution.
A strong product vision is like a commander's intent: a vision of how the situation will be after the action is over. This enables each and every member of the team to make informed decisions about local (implementation) questions. The hardest part is the formulation of the product vision in a way that is engaging and captures the essence instead of the details. A good way to achieve this in software projects is to concentrate on the capabilities the users will have once the project is completed.
Vision focusing on the “how”:
“Our mission is to become the international leader in the space industry through maximum team-centered innovation and strategically targeted aerospace initiatives.” – hypothetical aerospace CEO
Vision focusing on the “what”:
“We'll put a man on the moon and return him safely by the end of the decade.” – John F. Kennedy
[Examples from Made to Stick, by Chip and Dan Heath]
I would encourage anyone who has to formulate a vision to try to go the JFK way.
The INVEST list of properties of good stories could help to sharpen the formulation of stories so that the team has a better chance of involvement on the content level of the product.
Formulating the stories along the lines of Mike Cohn's story format has some serious advantages (although that is not an answer to each and every problem with stories). With the format “As a <type of user> I want <system behavior> so that <benefit>” a lot of practices come automatically into play: you get personas (types of users), an outside view (“I want”) and even the first level of acceptance criteria and a value indication (the benefit).
Right now the guidelines for the review do not state specifics about design decisions – a reminder of the results of the pre-lab work (e.g. on object oriented software construction) together with some tangible guidelines (SOLID was mentioned on one of the posters in the hallway) would give reviewers some guidance on the criteria they should apply.
From the feedback it seems that some of the tasks are extremely small – if that proves to be true it makes sense to encourage slightly larger tasks.
While I am a strong proponent of physical task boards, I also believe that it is beneficial for some of the measurements if it is possible to mine the data. Since COTS solutions tend to be too heavy – especially if the goal is to perform data mining and all you need is the raw data of the state changes – a really lightweight in-house solution could be considered. Contact me for details on this idea.
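One possible shape for such a lightweight in-house solution – purely a sketch; the file name, CSV layout and function names are assumptions, not an existing tool – could be to append one line per state change and mine the raw data later:

```python
# Sketch of a minimal in-house log of card state changes.
# Assumed CSV layout: timestamp, card id, from-column, to-column.
import csv
from datetime import datetime, timezone

LOG_FILE = "board-events.csv"  # hypothetical file name

def record_move(card_id, from_column, to_column, log_file=LOG_FILE):
    """Append a single state change; the raw data is enough for later mining."""
    with open(log_file, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), card_id, from_column, to_column]
        )

def cycle_times(log_file=LOG_FILE, done_column="done"):
    """Very rough mining example: first appearance to arrival in 'done' per card."""
    first_seen, finished = {}, {}
    with open(log_file, newline="") as f:
        for ts, card, _src, dst in csv.reader(f):
            t = datetime.fromisoformat(ts)
            first_seen.setdefault(card, t)
            if dst == done_column:
                finished[card] = t
    return {card: finished[card] - first_seen[card] for card in finished}
```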
Overall the project seems to be in a very good shape from a process perspective.
The main opportunities for improvement seem to be in the area of quicker feedback on blocking issues and on inventions.
The Kanban board fulfills its purpose completely. For new team-members some pointers as to the direction of flow in the first lanes would be helpful (horizontal/vertical) but in the given setting (team-members don't change) that does not seem to be necessary.
The Depth of Kanban evaluation follows an article from the Limited WIP Society's website, where you can also find some other samples of implementation depths.
Observation: external impression
Feedback: interpretation of the interviewees' answers
Range: 1 to 10, Current value: 7 (7.5)
The proposed metrics from the LWS website suggest counting 1 point for each of the listed practices. Task post-processing is one of them, but with the time restriction of the lab it seems to incur more cost than it would provide in benefits. This would be 7.5 points, but apart from the original suggestions the missing visualization of the flow (e.g. the “age” of tasks in a lane) was counted as -0.5 points.
The original paper proposes a simple taxonomy of four discrete values (Visualisation only, Proto-kanban, Kanban and multi-kanban), and from my observations the team operates a fully working Kanban system.
Range: 1 to 7, Current value: 3 (3.5)
The proposed metrics from the LWS website suggest counting 1 point for each of the listed practices.
This would yield a value of 3.5, but 0.5 points have been deducted due to the missed opportunity for visual feedback with regard to the flow of task cards (writing on the back of the cards is a good idea in some instances, but only information vital to the person who handles the card should be put there – things that need to be visible should go on the front).
Range: 1 to 5, Current value: 1.5 (feedback) / 2.5 (observation)
The proposed metrics from the LWS website suggest counting 1 point for each of the listed practices.
Range: 1 to 3, Current value: 2 (feedback) / 3 (observation)
[The original article refers to the Toyota Kata – I adjusted the scale with regard to the coaching model in place]
The proposed metrics from the LWS website suggest a taxonomy according to the Toyota Kata – the following is an abstraction from the goals of each step
Range: 1 to 5, Current value: 3
[The original article refers to the Toyota Kata – I adjusted the scale with regard to the coaching model in place]
The proposed metrics from the LWS website suggest a taxonomy according to the Toyota Kata – the following is an abstraction from the goals of each step