SDA SE Wiki

Software Engineering for Smart Data Analytics & Smart Data Analytics for Software Engineering

Agile-Lab 2008b, Some external impressions

[dsp: The following observations are based on direct observations on September 5th and on interviews with the four involved research associates and four participants. General remarks about agile processes are set in italics; all other text reports impressions of the lab.]

Overview

  • Rather coherent view of the situation
  • Rather confident
  • Stable
  • Need for improvements only in small areas

The whole team seems to share a common understanding of where the project stands. They are rather confident about their progress; the development effort as such is perceived as „stable“. This means that the velocity of the team is closer to „constant“ than to „bouncy“, and even after only one week of working together the team members give a rather coherent picture. [Some of the participants voiced a concern regarding too many discussions.]

Overall - compared to an industry project - the need for improvement seems to concern only small areas.

Overview details

After a couple of interviews concentrating on the main indicators, the picture was surprisingly concise and coherent. Usually an inconsistent perception in these areas indicates serious points in a project's structure that need to be addressed.

The points from the interviews are:

Satisfaction: High

The personal satisfaction with the work (related to the project) in terms of results as well as in terms of workplace experience.

The fact that this indicator is quite high (on the delivery side as well as on the customer side) implies a healthy project culture and good communication, as well as a young, energetic, growing project that has not yet run into larger pitfalls. It should be noted that measuring individual satisfaction via interviews is rather imprecise.

Speed: Known and good

The amount of work completed during a specified interval of the project.

In most settings this indicator is either not known and/or perceived as insufficient. The important part of this indicator is not necessarily the perception of the speed as “high” or “low” (since the speed is not directly deducible from the application of processes). Much more important is the fact that the whole project team is (or at least seems to be) aware of the speed which implies that they are capable of estimating the units of work and have a sense of “finishing” jobs.

Testing: Known, established and „sufficient“

This captures the different aspects of testing as applied in the project. Again, the personal perception of the sufficiency of the testing process is the least important aspect of the question. A high divergence of opinion on this topic, as well as a lack of established test tools, would both have called for action.

Observation: Contrary to what was stated in some of the interviews, the test coverage was perceived as something that could be improved. This is especially true for tests that need additional preparation that is not (yet) present, such as UI tests and tests that require special environments.

Customer-Feedback: Could be better, but sufficient

This topic summarizes the quality and density of (bidirectional!) feedback between team and customer.

Although the customer is not available on-site all of the time, the team seems to get good feedback because the customer is represented by two experts inside the team. This is one of the few points where the interviewer's impression differs from the interviewees' impressions. See below (Feedback / Customer Involvement) for a more thorough discussion.

Coherence: Good and getting better

Meant to measure how much the individual project members' perceptions of the other aspects deviate from each other. Good coherence is achieved when it does not matter much which project member gets asked a project-related question, because all project members would answer pretty much the same. Bad coherence is indicated by the fact that specific members of the project get asked more than others whenever an outsider (e.g. the customer or the sponsor) is interested in the “real” state of the project.

Observation: Having a high/good coherence after only a few days is rather rare! (personal remark: I have no idea if that's an effect of the selection process, the lab's reputation or the quality of the teachers but it's rather impressive)

Planning: Well understood and transparent

The “planning” aspect in agile projects differs greatly from the planning aspect in traditional projects. To efficiently plan and execute the planning game (aka planning poker), sound knowledge of the planning process has to be widespread throughout the whole team.
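The core of a planning-poker round can be sketched in a few lines of Ruby. This is purely illustrative - the deck, the helper name `consensus?`, and the convergence rule are our own assumptions, not the lab's actual process:

```ruby
# Illustrative planning-poker round (hypothetical helper, not the lab's code).
# Everybody plays a card from a fixed deck; if the estimates are close
# enough, the highest card wins, otherwise the outliers explain and replay.
DECK = [1, 2, 3, 5, 8, 13, 20].freeze

def consensus?(estimates)
  # A simple convergence rule: min and max are at most one deck step apart.
  DECK.index(estimates.max) - DECK.index(estimates.min) <= 1
end

round = [3, 5, 5, 5]
if consensus?(round)
  puts "consensus reached, estimate: #{round.max}"
else
  puts "estimates diverge, discuss and re-estimate"
end
```

The point of the exercise is not the arithmetic but the conversation that a failed `consensus?` check triggers.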

Transparency: High

One of the most alarming indicators in industrial projects is the (more often than not) low transparency. XP and other agile methods like Scrum try to enforce transparency, so mediocre or high transparency is to be expected in agile projects. Should it vanish (e.g. if people start asking third parties “What is <x> doing?” too often), that should be treated as a severe warning signal.

Acceptance Tests: Defined and sufficient

Here we try to capture the way the team handles the concepts of “done” and “error”. Having (automated) acceptance tests as the foundation of both concepts indicates a high maturity of team and customer, but has to be examined closely because of a high risk of self-deception (i.e. “So why is the customer still unhappy? All the acceptance tests are green!”).

Observation: Due to the adoption of RoR's nomenclature with respect to tests the notion of “acceptance test” differs slightly from XP's original understanding of the term. We'll go into this in “Acceptance Tests” below.
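To illustrate what “acceptance test first” can look like in Ruby, here is a minimal sketch in plain Minitest. The story, the `Route` model, and all method names are hypothetical stand-ins, not taken from the lab's codebase; in RoR's nomenclature such a test would typically live among the integration tests:

```ruby
require 'minitest/autorun'

# Hypothetical story: "A user can add a waypoint to the route."
# Route is a stand-in for the real model; the acceptance test is
# written before the feature and defines what "done" means.
class Route
  def initialize
    @waypoints = []
  end

  def add_waypoint(name)
    @waypoints << name
  end

  def waypoints
    @waypoints.dup
  end
end

class AddWaypointAcceptanceTest < Minitest::Test
  def test_user_can_add_a_waypoint
    route = Route.new
    route.add_waypoint("Bonn")
    assert_equal ["Bonn"], route.waypoints
  end
end
```

Running the file before the feature exists yields a red test; the story is “done” when it turns green and the customer agrees.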

Architecture

During the interviews the topic of architectural styles arose several times.

Emerging Architecture

In agile environments an emerging approach to architecture definition is not only feasible but also recommended. In the original definition of XP, Big Design Up Front (BDUF) was considered to be one of the main reasons for cost overruns and necessary rework (due to anticipations that did not quite come true). Although an emerging architecture refrains from a BDUF, it still needs to be managed. In the original XP team almost all members could have worked as full-time architects in other projects and were well familiar with the technology. In most “real life” projects the situation is different: the architectural experience is not spread evenly throughout the team, the technical expertise differs, and the line between “exploration” and discussion quickly blurs. In these scenarios the architecture should still be emergent and emerge from the input of the whole team, but designated people ought to actively care about the architecture (based upon the stories) and communicate their findings.

Spikes

Whenever the consequences of architectural decisions are somewhat unforeseeable, the concept of a “Spike” - the creation of some functionality away from the main branch of the project that definitely won't make it into production code and does not necessarily adhere to the project's standards - is at least an option, if not mandatory. Since spikes don't have to adhere to the project's standards they can be developed alone, which enables a two-person group to explore two different implementation ideas for the same requirement at the same time without losing additional development time. Furthermore - and that is the main reason to use spikes - after doing a spike the team can decide upon “hard facts” instead of a bunch of interdependent assumptions.

Technical Exploration

Even if it is desirable to value business value above technical questions, it should be taken into account that in an environment where the business case is situated in a highly technical setting, technical questions become business decisions. In these cases technical exploration has to be part of the customer negotiation and can easily be the foundation for user stories or tasks.

Build Management

The introduction of an automated build process is a very sensible thing to do in an agile project. Still, “Build Management” consists of more than automatically building the product after each check-in. Although probably not necessary at the typical lab size, the introduction of the idea seems worthwhile. This topic is too large to be discussed in full in this brief report, but some ideas should be considered.

Staging. The idea that all development code is in a certain stage and can only reach the next stage via some kind of (probably automated) promotion. This is something that has to find its representation in the source code repository - not only in the directory structure of the build server.

Baselining. Related to the idea of different stages is the concept of developing certain parts of a project relative to a defined baseline, which can be used as a fallback point in case of unachieved goals or severe errors. The difference between a tag in the repository called “baseline_x” and the idea of baselines is mainly the active reference to baselines during development. One could think of them as internal releases.

Deployment. In a full-blown build management environment the deployment includes the distribution of the product for each stage.
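The promotion idea behind staging can be sketched in a few lines of Ruby. The stage names and the one-step rule are illustrative assumptions; in a real setup the promotion step would tag the repository (e.g. “baseline_x”) and trigger the deployment for the next stage:

```ruby
# Illustrative staging model: code can only advance one stage at a time.
STAGES = [:development, :integration, :acceptance, :production].freeze

def promote(stage)
  i = STAGES.index(stage) or raise ArgumentError, "unknown stage #{stage}"
  raise "#{stage} is the final stage" if i == STAGES.length - 1
  # In a real pipeline this is where the repository would be tagged
  # and the artifact deployed to the next stage.
  STAGES[i + 1]
end

stage = :development
stage = promote(stage)   # :integration
stage = promote(stage)   # :acceptance
```

The useful property is that there is no way to jump straight from `:development` to `:production` - every promotion is an explicit, auditable step.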

Acceptance Tests

The fact that the first task in each story is the definition of its acceptance tests leads to a sound foundation for “finishing”, but could be carried a little further.

Carried out manually due to technical restrictions (e.g. no robots)

Since some of the acceptance tests require the movement of the device and “real” user input, it's not feasible to automate all acceptance tests. This should not lead to the omission of all automated acceptance tests!

Mediocre visibility

Although the acceptance tests are defined early on, their visibility - especially for the customer - is not very high. One of the few points where the consistency of the team seemed to be low was when the question “when, how and how often are acceptance tests carried out?” was under discussion.

High satisfaction nonetheless

Nonetheless, up to now the results are good. But to ensure “early crashing”, another acceptance test scheme should be adopted.

Feedback / Customer Involvement

In agile processes customer feedback is meant to be a two-way road, with strong involvement from the customer and quick feedback about the impact of the requested features. Although methods like XP and Scrum stress this fact through roles like “On-Site Customer” and “Product Owner”, most industry projects have to cope with a mediocre amount of customer involvement.

Very good internally

Since two members of the team are also part of the customer - even more so due to hard requirements from the outside - the customer involvement is very good internally.

Hard to keep in Sync externally

The “real” representative of the client, on the other hand, is in a rather common situation and has to split his attention between the project at hand and other important tasks. Since a customer's tasks are hard to keep in sync with the project's “heartbeat”, it is difficult to make all the interaction that happens internally visible to the customer in an unambiguous way. Even if this does not yet pose a problem, it should be closely monitored since it could become the root of misunderstandings and “unmanaged expectations”.

Multimedia to the rescue?

There are many ways to improve the communication between team and customer even if the latter can't be on-site all the time. From a Scrum point of view the scrum (standup) meeting is a fundamental synchronization point and should not be missed as an opportunity to involve the customer (but keep in mind the chicken/pig analogy). If he can't be present on-site for the standup meeting, you can even let him join via iChat AV (since Macs are used in the project, video-conferencing comes for free) or other multimedia solutions (AIM and Skype come to mind).

Use more of the Wiki?

Alternatively (as is common when crossing time zones is an issue) it could be worthwhile to record the standups (a Mac with an iSight and iMovie is all that's necessary) and publish them e.g. on YouTube (be sure to mark them as private!) or in the project wiki. The project wiki could also be used to publish intermediate results of the acceptance tests (even before the final run). Another idea would be to use a blog for transcripts of the standup meetings or to keep track of the development status.

Finishing

Another important difference between “conventional” and agile projects is the strong emphasis on finishing in the latter. There are several key points in the agile toolkit to ensure that things are really finished when they are considered to be finished.

Some of these could be revisited to improve on the stability of the project.

End of task

The end of a task seems to be easily discernible, but in fact it isn't. It should be kept in mind that a task should be ticked off in accordance with the customer, not solely by the developers in charge.

This practice seems to be in place, but should be watched carefully.

End of story

As a task should be ticked off in accordance with the customer, a story should only be ticked off by the customer - not merely in accordance with him. Not only does this symbolism give the ticked-off stories a different weight, it also forces the customer to look more consciously into the story than the other way round. Given the availability of the customer's representative, it might be an additional safety measure to install this practice if it isn't already in place.

End of iteration

Depending on the method in question, the concept of the end of an iteration is interpreted with some subtle differences. According to Scrum, at the end of an iteration a story is either finished or not - there is no partially done story. Without too deep a discussion of this topic, it nonetheless is considered a good practice to base the estimates for the upcoming iteration only on finished stories. Even if the velocity is reduced tremendously, it still is more realistic than awarding an “almost completed” story 80 percent of its weight. Taking into account the number of software development projects that burned more than 80 percent of their budget after being announced as “80 percent done”, this topic becomes even more important. (Warning sign: should the uncompleted story from iteration 1 still be incomplete at the end of iteration 2, we would call the project “in trouble”.)
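The “count only finished stories” rule is simple enough to illustrate with a few lines of Ruby (the story data is made up):

```ruby
# Velocity for the next iteration's estimates: sum the points of
# completed stories only. An "80% done" story contributes nothing.
stories = [
  { points: 5, done: true },
  { points: 3, done: true },
  { points: 8, done: false },  # "almost finished" - still counts as zero
]

velocity = stories.select { |s| s[:done] }.sum { |s| s[:points] }
puts velocity  # => 8
```

Note that the naive alternative - crediting the unfinished story with 80 percent of its points - would report a velocity of 14.4 and systematically inflate the next iteration's plan.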

Internal Feedback

Agile projects are not only recognizable by the fact that they follow a so-called agile process, but also by the fact that they adapt their behavior to new requirements. Therefore the quality of the internal feedback is rather important to the sustainability of any agile venture.

Very well established

The feedback processes in this project seem to be very well established.

Most of the possible improvements already „in the works“

Most of the improvements that were discussed during the interviews were either already planned or their introduction had even already begun.

Self-Regulation in effect

For this project the necessary self regulation processes seem to work.

Additional Ideas

Only a few additional hints resulted from this visit. The only remarkable one is the introduction of a 0-iteration.

0-Iteration

The introduction of a 0-iteration - an iteration in which no business value is generated but the process is lived through the whole cycle - could help to establish a better baseline for estimates and architectural challenges.

Standup Meetings

Around the assessment of the project the question of the value of standup meetings and their “written in stone” rules arose several times. Here we give a short discussion of our view on the topic.

Signup of Tasks. Even if all people worked together the whole previous day (which is hard to believe in the first place), the standup should be the place to “re-sync” everybody about *their personal perceptions and their personal tasks between this standup and the next*.

Conciseness. Standup meetings really should be kept short. Each discussion that starts in the standup should be noted, assigned a special place in the day's schedule, and cut off. There needs to be a clear distinction between the “syncing” that happens in the standup and the “work” that happens outside the meeting. In a tutoring situation this might sometimes be hard to achieve, but it still should be the goal.

Are they necessary? Even if it was just to keep the customer “in the loop” the standup meetings still would pull their weight.

teaching/labs/xp/2008b/assessment.txt · Last modified: 2018/05/09 01:59 (external edit)

SEWiki, © 2019