Most of the observations were obtained from the standup meeting at 14:00 on September 22nd 2008. Additional information was gathered in follow-up interviews with the customer and the project lead/coach. Some further input came from informal discussions with some of the experts. Although some of the suggestions from this report have already been applied, they are still mentioned here to give a complete picture.
The current situation seems to be perceived much less optimistically than after the first iteration. The main concerns are:
While not all of these problems are “real” from my point of view, some of them are surprising, whereas others are “normal” and would surely be remedied after a few more iterations.
While most of the positive points from the first report still hold true, this second report tries to focus on possible optimizations and therefore paints a seemingly darker picture. It should be kept in mind that the lab itself is still well above industry standard and that some measures (e.g. in the realm of architectural decisions) were taken deliberately due to the learning situation.
Compared to the first visit, the speed (or velocity) of the project was less clear and seemed less important to the team. I consider this a trouble indicator that should lead to direct action.
The progress was hard to recognize for both the customer and other observers (i.e. me). This results from a number of factors. Most prominent among these is that the team mostly relied on their internal vocabulary and referred to a lot of co-related tasks in relative terms, without relating these results to the story board.
We suggest trying some modifications both to the story board and to the procedures inside the standup meeting.
In theory - to reduce the dependence on knowledge of co-related tasks - tasks should be flagged “completed” (and signed up for) during the standup. In reality this would impose an avoidable strain on all project members: they would either have to estimate tasks to fit exactly into the time between two standups (and hold true to that estimate) or accept a lot of idle time between the end of a task and the next standup.
To remedy this (and in keeping with the original, first-edition XP principles) each task can be marked when the owner(s) consider it done, whereupon they should either sign up for a new task or build new pairs with others (some slack allowed). The reporting of the progress, however, should still be done in the standup, e.g. by moving the task cards accordingly. The resulting process looks like this:
- a pair signs up for a task and works on it;
- the owners mark the task as soon as they consider it done, without waiting for a standup, and sign up for the next task or re-pair with others;
- at the next standup the progress is reported and the task cards are moved accordingly.
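Stated as a compact lifecycle (state names are purely illustrative and not part of the lab's tooling), the same process might be sketched like this:

```python
from enum import Enum, auto

class TaskState(Enum):
    OPEN = auto()         # on the story board, nobody has signed up yet
    IN_PROGRESS = auto()  # a pair has signed up and is working on it
    DONE = auto()         # owners consider it done; may happen between standups
    REPORTED = auto()     # made visible at the next standup by moving the card

# The only standup-bound transition is DONE -> REPORTED; all other
# transitions may happen whenever the owners see fit (some slack allowed).
```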
To avoid frustration it is important to give a clear indication of how much has been accomplished already. This is especially true for projects (like the one in this lab) where a huge amount of research is involved and stories are likely to be done multiple times. One way to keep the accomplished stories visible - besides the visible burndown charts - is the use of a “done” section on (or alongside) the story board (see the three-tier story board below).
While tasks can be ticked off by developers (as noted before) and are generally not much of a point of discussion, stories are the objects of customer negotiation, and their boundaries have to be as clear as possible (with sensible effort). Otherwise the use of “yesterday's weather” to determine tomorrow's results gets obfuscated by unclear boundaries and scope creep.
A story should be closed only after *all* the tasks are completed. If someone thinks a story is completed even though there are uncompleted tasks, the remaining (suddenly unnecessary) tasks should be removed only *after* a clear communication of the reason why they have become unnecessary. One should keep in mind that tasks are a means to an end and that the real measure of story completion lies in the user acceptance tests.
Nonetheless, even after it is finished from a development point of view, the story should be kept visible and clearly marked as “not yet considered implemented by the customer”. To avoid visual clutter and help the team focus on the task at hand, a “three-tier story board” approach is advocated by some members of the agile community.
It uses one section for stories under consideration (related to the product backlog entries if a product backlog is used), a second section for “active” stories (including displayed tasks) and a third section for completed stories. In sections one and three the tasks can be “hidden” (e.g. behind the story cards; in section one they might not even exist yet), but the story cards themselves should never be “space reduced” by either stacking or removing them. The visual feedback is much stronger this way and fewer translations are involved. This also calls for a stable layout of the sections, although some width adjustments might be useful as the iteration progresses…
Even after the team considers a story “closed”, that status still needs to be verified. This should be done by the customer's representative and visibly documented. With the introduction of a “completed” section, this could be achieved by flagging these stories with colors: yellow (in the completed section) means considered completed by the team, green means accepted by the customer, and red indicates a story that the customer considers “not completely implemented”; such a story either goes back to section one immediately (if the errors seem easily fixable) or is included in the next planning session.
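To make the proposed board mechanics concrete, here is a minimal sketch; the class and state names are invented for illustration, and any physical or electronic board would do equally well:

```python
from enum import Enum

class Section(Enum):
    UNDER_CONSIDERATION = 1  # tasks hidden or not yet written
    ACTIVE = 2               # tasks displayed alongside the story card
    COMPLETED = 3            # tasks hidden behind the story card

class Flag(Enum):
    YELLOW = "considered completed by the team"
    GREEN = "accepted by the customer"
    RED = "considered not completely implemented by the customer"

class Story:
    def __init__(self, title):
        self.title = title
        self.section = Section.UNDER_CONSIDERATION
        self.flag = None  # only meaningful in the COMPLETED section

    def team_considers_done(self):
        self.section = Section.COMPLETED
        self.flag = Flag.YELLOW

    def customer_accepts(self):
        self.flag = Flag.GREEN

    def customer_rejects(self):
        # goes back to section one immediately if the errors seem easily
        # fixable, otherwise it is taken into the next planning session
        self.flag = Flag.RED
        self.section = Section.UNDER_CONSIDERATION
```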
The “story” is on the same abstraction level as the customer communication. In theory. In most real-life projects the stories tend to be somewhere between real user stories and technically influenced system stories. Whether this is because developers and business people have a different understanding of the boundaries and abstraction levels or simply because the system becomes part of the business is beside the point for this discussion. The fact remains that the “type” of each story differs more or less from the theoretical “ideal story”, and this deviation can be used to identify certain opportunities for improvement.
Additional tests - as they were defined in the current iteration to enhance the stability of the product - are an indicator that the idea of “finishing” is not quite the same as in the original XP ruleset. That does not imply that these stories should be ignored - right now they are definitely a correct choice to enhance the stability - but the goal should be to have such tests included in the original user stories (e.g. as tasks), even at the cost of some velocity.
The chances are high that - without the test stories - some of the original stories would have become dragging stories, carried from one iteration to the next. Apart from the fact that the aspect of finishing is not well addressed by such stories, the velocity takes an immense punishment from dragging stories if only *completed* stories are counted in the velocity tally. This implies an in-depth discussion of the concept of splitting stories and the “when and why” of the splitting. Although this discussion would be too much for this paper, my personal choice would be to split as early as possible - if the customer is available!
Some features (not in the FDD sense but simple things like “it is possible to read the error message somehow”) tend to become stories. It is immensely important to keep a clear distinction between those features that are dictated by common sense or technical necessity and those that are based upon customer requirements. Especially if the customer is present at the standup meeting it is paramount to have a clear distinction between his different roles. Whereas the Scrum process has a clear concept of who is allowed to call which shots (chickens and pigs), the XP rules are not that clear. My personal advice at this point would be: never discuss the boundaries of a story or a task within the standup meeting. If there is any confusion about any given point that can't be clarified immediately - even one as small as the question of how to disable debug messages - schedule a follow-up discussion and communicate the results later on.
The management of errors (whether they are discovered by the customer or inside the development team) should be uniform. Since the team employs continuous integration practices, there should never be an error from development in the integration stage - thus all errors discussed here have to be errors discovered outside the implementation of the erroneous functionality. (Errors that occur during the development of a task are a natural byproduct of software development and should be gone by the time the tasks are checked back into the SCM.) For each error discovered by a third party there should be some kind of artifact that can be estimated, e.g. a task card. How that task card is prioritized and how it is mixed in among the other existing tasks is a subtle point that must be clarified for the project/lab but goes beyond this discussion.
Only after the error-task has officially been scheduled should it be worked on. Regardless of whether TDD is applied in other areas of the development effort, each error-task should first be verified by writing a test that exposes the erroneous behavior. Only then should the error be fixed and checked into the SCM (together with the test case, of course).
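As a sketch of that flow (module and function names are hypothetical, not taken from the lab's codebase), the first artifact produced for an error-task would be a test that reproduces the reported misbehavior and fails against the current code:

```python
import unittest

# hypothetical module under test; in the lab this would be whatever
# unit the customer's error report points to
from errorhandling import format_error_message

class ErrorTaskRegressionTest(unittest.TestCase):
    """Written before the fix: it must fail first, exposing the erroneous
    behavior, and is later checked in together with the fix."""

    def test_error_message_is_shown_to_the_user(self):
        message = format_error_message("E42", "connection lost")
        self.assertIn("connection lost", message)

if __name__ == "__main__":
    unittest.main()
```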
In agile projects there are some very visible architectural concepts like YAGNI, no BDUF and DRY. Unfortunately one architectural concept - which is incorporated in the whole test-first approach - whose absence has let many a project falter is often not treated as important (perhaps because there is no fancy acronym): loose coupling.
The current architecture of the lab has a high end-to-end dependence. This is an impediment to the development of loosely coupled, separately testable units and should be addressed by an architecture with more levels of indirection. This will surely lead to other problems, but these (like a bigger codebase and more tedious work) are better understood and more easily addressed (e.g. by smart code browsers and metaprogramming).
Currently the architectural concepts exist only in the minds of the team members. Although this is good in a way (no outdated documentation), it is hard to discuss architectural decisions on this basis.
Despite the fact that the Agile Manifesto advocates “working software over comprehensive documentation”, it has never been said that there should be no documentation at all. Even Kent Beck (among others) sometimes advocates documenting the architecture *if it is necessary* - and judging by the effort that went into architectural discussions during the course of the lab, it is necessary in this case. Still - according to Kent Beck, IIRC - the documentation should not be longer than one page, and I don't think he was thinking of A3 or larger, but more of “US Letter, handwritten!”.
The basic idea is to use a metaphor (see below) that explains the system's architecture in a comprehensible way - and since we're in a technical domain, metaphors such as “like a web browser without JavaScript” are OK. But while one metaphor might be sufficient to describe the 40,000-foot view of the application(s), it won't suffice on the implementation-detail level, especially not in a client-server environment where there are at least three “applications” involved - the client, the server and the middleware. Therefore that one-page after-the-fact architecture description consists mostly of a long list of components and “like” descriptions that clarify the concepts.
(An example of such a short architecture description, not from this project: The server is like a blackboard where everyone can create and remove entries. The clients can read the whole blackboard at once or register as listeners as in the observer pattern. Some clients reside physically on the server and contain the game logic. End-user clients contain the UI and some related logic. They communicate with the server in the same manner feed readers do via the Atom protocol, but use a binary protocol.)
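For the observer-like part of such a description, a few lines of code can serve as an equally short clarification (again purely illustrative and not taken from this project):

```python
class Blackboard:
    """Server side: everyone can create and remove entries; registered
    clients are notified of changes, as in the observer pattern."""

    def __init__(self):
        self.entries = {}
        self.listeners = []

    def register(self, listener):
        self.listeners.append(listener)

    def post(self, key, entry):
        self.entries[key] = entry
        for listener in self.listeners:
            listener.entry_posted(key, entry)

    def remove(self, key):
        entry = self.entries.pop(key)
        for listener in self.listeners:
            listener.entry_removed(key, entry)
```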
Even if the architecture is not created in a BDUF effort, it still needs to be maintained by a specific set of people. Depending on team structure and size these could be designated people or the whole team. The key is to actively collect the different architectural decisions and unite them continuously. The observations mentioned under “Complexity & coupling” are considerable obstacles to a real emergent architecture. An emergent architecture is possible with features that are never used (although that somewhat defeats the purpose) and with duplicated concepts (although that increases the workload), but hardly ever without loose coupling.
Although this part is applicable to a number of projects, the issues are real and relate to the project in the lab.
It might be a good idea to consider inverting the dependency graph between client, server and middleware. Independent of the physical implementation of the communication protocol, the functionality of client and server should not depend on the logical protocol; instead, the logical protocol should depend on the client and server implementations. Even more important, the client should be able to work completely without the server in a mock environment, and the same is true for the server. Of course, moving functionality that originates either in the server or in the client into the protocol might lead to changes in the originator, but these should be minor, and since there would be a working reference implementation the changes could be made safely (using refactoring and test-driven approaches).
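A minimal sketch of what that inversion could look like (interface and class names are invented for illustration): client code depends only on an abstract connection, the concrete protocol implements that abstraction, and a mock stands in wherever the client is exercised without a server.

```python
from abc import ABC, abstractmethod

class ServerConnection(ABC):
    """What the client needs from 'the other side' - the logical protocol
    depends on this interface, not the other way around."""
    @abstractmethod
    def send(self, message: dict) -> dict: ...

class BinaryProtocolConnection(ServerConnection):
    """Real implementation: talks to the server over the wire protocol."""
    def send(self, message: dict) -> dict:
        raise NotImplementedError("wire format goes here")

class MockConnection(ServerConnection):
    """Test double: lets the client run completely without a server."""
    def __init__(self, canned_reply: dict):
        self.canned_reply = canned_reply
        self.sent = []

    def send(self, message: dict) -> dict:
        self.sent.append(message)
        return self.canned_reply

class Client:
    def __init__(self, connection: ServerConnection):
        self.connection = connection  # injected, never created internally

    def request_state(self) -> dict:
        return self.connection.send({"type": "get_state"})
```

The same pattern, applied in the other direction, lets the server be exercised against a mock client.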
One of the underestimated practices from XP is the metaphor. Even if it seems hard to find one, most of the time the concepts of software systems are already described by metaphors: desktop, queue, pipe, filter, window, etc. So instead of trying to find some arcane metaphor that can only be stated in sentences spanning a couple of lines, use a couple of metaphors instead. Architectural descriptions are full of them anyway, so why not use them for the 40,000-foot view as well?
The automated build process is a cornerstone of agile development. Only by combining a source code management system (SCM) with a completely automated way to convert source artifacts into shippable artifacts can a team responsibly refactor ruthlessly and cut away dead wood. An automated build process does not necessarily mean a build server like CruiseControl, but CruiseControl helps a lot, of course. The visible feedback provided by the traffic light attached to the build server is a great way to keep the whole team on track. Still, some points could be optimized.
An effort should be made to enable traceability between tasks and builds. Given the atomic checkins of svn, it is possible to check in all the modifications that were done to solve one task together and even to record that task in the checkin comment. Even if “traceability is a myth”, as some people (e.g. Kirk Knoernschild) suggest, this still gives a clearer view of what is to be expected in a given build than file-specific comments alone.
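In practice this amounts to committing all modifications belonging to one task in a single atomic checkin and naming the task in the commit message; the task numbering below is made up for illustration:

```
svn commit -m "Task 17 (error reporting): make error messages readable in the client"
```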
The concept of different stages that are fed from the build system, based on rules related to the maturity of the respective builds, should be considered. Although some of these ideas are already incorporated in the Rails build, the deployment should work accordingly. One model that could be adopted comparatively easily works with three different stages: integration, test and release. The integration stage is equivalent to having only one target system on which each build is installed. The test stage is where the customers run their tests, and release denotes the “final” released stage (for the iteration…). Although this approach requires more (logical) servers and client “hubs”, it enables a clearer distinction and faster delivery of tasks. On average a task should be in the test stage less than half an hour after the developers checked it in.
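A sketch of such a stage model (the stage names follow the text above; everything else, including the promotion rule, is illustrative only):

```python
# Illustrative three-stage model; in the lab the actual deployment
# would be wired into the existing Rails build.
STAGES = ["integration", "test", "release"]

RULES = {
    "integration": "every successful build is deployed here automatically",
    "test":        "builds promoted for the customer's acceptance tests",
    "release":     "the build declared final for the iteration",
}

def promote(build_id, from_stage, to_stage):
    """Hypothetical promotion step: builds move up one stage at a time."""
    assert STAGES.index(to_stage) == STAGES.index(from_stage) + 1, "no stage skipping"
    print(f"deploying build {build_id} to {to_stage}")
```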
In general, agile processes try to avoid a separation of team members to allow for osmotic knowledge transfer. A 100% co-located workspace is hardly ever achievable, but it should nonetheless be the goal.
Some of the observed separation seems to stem from different hardware requirements. This is probably the only way to handle the situation right now, but for future projects/labs it might be worthwhile to try to break this dependency, especially if the current setup leads to a two-group arrangement where students are in one room while university staff are in the other. (This advice is to be taken with more than a grain of salt since it relies purely on my personal perception - none of the interviews revealed any perceived problems with the room situation.)
This is generally considered harmful!
Whenever groups get separated by function (leaders, architects, developers, testers, UI designers), this is an unhealthy sign. The fact that the separation of workplaces in the lab indicates such a split is softened by the fact that the “customer and team lead” people spend a good deal of their time with the rest of the team, but the “facing the wall” positions make osmotic communication a lot harder.
Because the customer was on site most of the time during the last two iterations, the interaction between team and customer clearly improved.
The customer's situation in the lab was rather close to that of real-life customer representatives. Although the project is vital, there are still many things to do that don't relate directly to the inner workings of the development effort and that force the customer to be - at least partially - off-site. Even though most agile processes call for something like an on-site customer, this is hardly ever achievable.
Although this time the customer was much more involved than in the first iteration, there was still room for improvement. It is strongly suggested to implement some kind of “push” mechanism (or a push-pull protocol) to keep the customer informed and up to date. Establishing a project blog and creating end-of-business tasks (EOB tasks) to keep a kind of project diary could be one option.