The Web Service Group web team began using agile development methodologies in August of 2011. Here are some of the methods and tools we’re using to facilitate that:
Tasks and issues
We use a task management tool that allows us to record entries for all support and development tasks. To help us with the agile aspect, we plan iterations based on known priorities or tasks that will return the highest value to the community. All known issues, as well as support and development tasks, go into our global backlog, where they are estimated, prioritized, and planned for inclusion in our road map, then pulled into our regular iteration backlogs at the appropriate time.
We find user stories handy. What is a user story?
A user story describes functionality that will be valuable to a user of a system. User stories are composed of three parts: a user type, a function, and a benefit.
As a <user type>, I want to <function> so that <benefit>.
As an executive, I want to generate a report to understand which departments need to improve their productivity.
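Stories like these can even be captured as structured data. A minimal sketch in Python (the class, fields, and example story are our own illustration, not part of any tool we use):

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    """One user story; field names are illustrative only."""
    user_type: str
    function: str
    benefit: str

    def render(self) -> str:
        # Fill in the standard "As a ..., I want to ..., so that ..." template.
        return f"As a {self.user_type}, I want to {self.function} so that {self.benefit}."

story = UserStory(
    user_type="developer",
    function="pull a task from the iteration backlog",
    benefit="work is never assigned top-down",
)
print(story.render())
# → As a developer, I want to pull a task from the iteration backlog so that work is never assigned top-down.
```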
When tasks are first created, they are pushed into the global backlog. During iteration planning we pull tickets into the iteration backlog, which becomes our tasklist for that iteration. Currently, our iterations span a two to three week period. This allows us to focus on specific work for a defined period, and to switch to the next priority in succession.
The iteration backlog is basically a list of unassigned tasks to be completed in the current iteration. Any developer looking for work will go to the iteration backlog, find a task that suits their skills, and grab it. Of course, high priority tasks are picked first. Once a developer is done with the ticket, she/he can either push it back to the iteration backlog for someone else to work on, or close it if it is complete. All work is thoroughly tested by other developers on the team, so every task is handled by more than one person.
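The pull model described above behaves like a priority queue: unassigned tasks wait in the iteration backlog, and whoever is free takes the most urgent one. A minimal sketch in Python, with invented ticket names and a numeric priority where a lower number means more urgent:

```python
import heapq

# The iteration backlog as a min-heap of (priority, ticket) pairs.
backlog = []

def add_ticket(priority, ticket):
    heapq.heappush(backlog, (priority, ticket))

def grab_next():
    """A developer looking for work takes the highest-priority open ticket."""
    return heapq.heappop(backlog)[1] if backlog else None

add_ticket(2, "Fix broken search on news page")
add_ticket(1, "Patch login security issue")
add_ticket(3, "Update footer links")

print(grab_next())  # → Patch login security issue
```

Pushing a ticket back to the backlog for another developer, as described above, is just another `add_ticket` call.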
Our tasks are generally recorded with 'user stories' that describe in human-understandable language what components are needed to fulfill the requirements. Before planning a new iteration, we go through all tickets that haven't yet been estimated and assign 'story points' to them. Story points are an estimate of effort, not time, and are based on team experience and collective understanding of the type of work being handled.
At the iteration planning meeting, we look at all tickets in the backlog and decide which ones will go into the next iteration. We base the selection on high value work for any new functionality we're building, high priority work or anything associated with a specific timeline, and important or urgent support issues and bug fixes.
We group requirements into four categories:

MUST
- Describes a requirement that must be satisfied in the final solution for the solution to fulfill its business objectives (otherwise the site cannot launch)
- Any requirement that has legal implications

SHOULD
- Requirements that are important to project success but not necessary for delivery in the current time frame; they should be included if possible
- Requirements as important as MUST items, although SHOULD requirements are often not as time-critical or have workarounds allowing another way of satisfying the requirement, so they can be held back until a future delivery time frame

COULD
- Less critical requirements that are considered desirable but not necessary
- Requirements that could be included if time and resources permit

WOULD
- Least critical requirements that do not prevent the business from proceeding (lowest-payback items)
- Requirements that we would do if time is available
- Requirements that may be considered for the future

As a result, WOULD requirements are not planned into the schedule for delivery. This, however, does not make them any less important.
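Putting the pieces together, iteration planning amounts to ranking candidate tickets by category and filling the iteration up to the team's story-point capacity. A minimal sketch of how that selection might look, assuming numeric story-point estimates (the tickets, categories, and capacity figure are invented):

```python
# Lower rank = scheduled first; WOULD items are not planned into the schedule.
RANK = {"MUST": 0, "SHOULD": 1, "COULD": 2}

candidates = [
    {"task": "Launch-blocking accessibility fix", "category": "MUST", "points": 5},
    {"task": "Nicer 404 page", "category": "COULD", "points": 2},
    {"task": "RSS feed for news", "category": "SHOULD", "points": 3},
]

capacity = 8  # story points the team expects to complete this iteration

iteration, spent = [], 0
for ticket in sorted(candidates, key=lambda t: RANK[t["category"]]):
    # Pull the ticket in only if it still fits within capacity.
    if spent + ticket["points"] <= capacity:
        iteration.append(ticket["task"])
        spent += ticket["points"]

print(iteration)
# → ['Launch-blocking accessibility fix', 'RSS feed for news']
```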
During each iteration, we hold daily 15-minute stand-up scrum meetings. In those meetings, each team member is expected to answer three questions:
- What did I do yesterday?
- What am I doing today?
- What is stopping me (impediments) from doing my work?
We also get an update from the scrum master on how many tickets have been closed, using a burndown chart (on a whiteboard).
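The burndown chart is just remaining work plotted over time. A minimal sketch of the arithmetic behind it, with invented figures:

```python
# Story points committed to at the start of the iteration (invented numbers).
total_points = 30
closed_per_day = [0, 4, 3, 6, 2, 5]  # points closed by each stand-up

# Remaining points after each day -- the series the whiteboard chart plots.
remaining = total_points
burndown = []
for closed in closed_per_day:
    remaining -= closed
    burndown.append(remaining)

print(burndown)
# → [30, 26, 23, 17, 15, 10]
```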
Any code developed during an iteration goes through a code review process. We use an automated code review tool as a first pass, then a manual code review is performed. This facilitates knowledge transfer as well as cleaner code.
Continuous integration and improvement
We practice continuous integration and improvement. When something is ready to be deployed, it gets deployed as soon as possible instead of waiting until the end of an iteration to do all releases. This reduces the risk of problems caused by multiple changes being deployed at once, and means we can get new features out to our users as fast as possible. The code review process described above, alongside developer and QA testing, means our deployments generally go very smoothly. Also, we're constantly adding automated tests to our toolset to reduce the possibility of bugs being introduced.
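The automated tests we keep adding are ordinary unit tests that run before a deploy. A trivial, illustrative example in Python (the `slugify` helper is invented for the sketch, not code from our site):

```python
import re

def slugify(title: str) -> str:
    """Turn a page title into a URL slug (invented example helper)."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slugify():
    # The kind of automated check that catches regressions before a release.
    assert slugify("Web Services: Agile!") == "web-services-agile"
    assert slugify("  Hello,  World  ") == "hello-world"

test_slugify()
print("all tests passed")
```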
Towards the end of an iteration, we invite stakeholders to attend a review of the work that has been done during the iteration. Normally, the stakeholders are the requesters of the work being shown. The goal is to show work that has recently been deployed, is about to be deployed, or represents progress on tasks that might span two or more iterations. Ultimately, the point of the review meeting is two-fold: increase awareness of the work being released, and to start conversations about what improvements can be applied to future releases.
At the end of an iteration, we hold a team retrospective meeting to discuss how the iteration went. This is an opportunity for all team members to say what they thought went well, and what they think can be improved. A record of these discussions is kept so that they can be referred to and worked on to improve future iterations.