The best way the teams I have been on have found to estimate as a group is Planning Poker.
There are many benefits to planning as a group (see my previous post on this topic for more). One thing that was difficult for me to grasp at first is the idea of group estimating. Have everyone estimate each task? Even tasks they know they aren't going to be responsible for? Yes! The collaborative nature of group estimating helps dig up hidden features/assumptions, as well as providing other benefits.
Teams I have been part of have tried a number of different ways to group estimate, such as:
- The group informally determines who they think will complete the task and defers to that person's estimate.
- Group members each write down an hour estimate on a piece of paper, share what they wrote, and then negotiate until some consensus is reached.
- Play planning poker for estimation.
All were used with some level of success, but the first two generally took much longer to complete, and they also reduced group ownership of all tasks to individual ownership of certain tasks. In one instance, the negotiation over a single estimate (not the details behind it) took over 30 minutes, and even then we just took the highest estimate so we could move on.
Looking for a better way, the team discovered Planning Poker. Here are the high-level details:
- Each team member has a set of 'cards', each with a single number. We use a variant of the Fibonacci sequence for card values (1, 2, 3, 5, 8, 13, 20, 40, 100).
- An item to be estimated is read to the group. Any team member who has questions about functionality/etc. is encouraged to ask. This continues until all questions are answered as best they can be.
- The group facilitator asks each team member to pick a single card. (Says: 'Estimate!') The card is not shown to other team members at this point, and verbal estimates are avoided to reduce the chance of influencing others.
- All team members estimate (minus the Scrum Master and Product Owner, if they are in the room).
- Once all team members have an estimate card, the cards are all flipped over so everyone can see.
- When there is a significant variance between estimates, the people with the highest and lowest estimates are asked to briefly explain why they picked their numbers. This usually exposes differing assumptions and allows for some quick discussion on which assumptions are valid.
- The team is then asked to re-estimate.
- This process is repeated until there is group consensus.
- Side note: This Planning Poker in detail page has a detailed outline of the process if it’s totally new to you.
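For the curious, the round-and-reveal loop above can be sketched in a few lines of Python. This is a toy illustration only; the card values are from this post, and the helper names are mine, not from any real Planning Poker tooling.

```python
# Toy sketch of one Planning Poker round. Card values are from the post;
# function names are illustrative, not from any real library.

CARDS = [1, 2, 3, 5, 8, 13, 20, 40, 100]

def has_consensus(estimates):
    """Consensus here means everyone showed the same card."""
    return len(set(estimates)) == 1

def discussion_targets(estimates):
    """On a split vote, the lowest and highest estimators explain first."""
    return min(estimates), max(estimates)

# Example round: four team members flip their cards at once.
round_one = [3, 5, 5, 13]
if not has_consensus(round_one):
    low, high = discussion_targets(round_one)
    print(f"Discuss: why {low}? why {high}? Then re-estimate.")
```

In practice the 'loop' is of course a conversation, not code, but the structure (hidden picks, simultaneous reveal, outliers explain, repeat) is exactly this simple.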
A few tips/takeaways from my experience to help with playing Planning Poker:
- It rarely takes more than two or three rounds of estimating to have total group consensus. Yes… Really!
- Invariably, when starting group estimating, team members will ask if they can 'abstain' from estimating some tasks. We have chosen as a team not to allow this: all team members need to participate and do the best they can. This has helped everyone get better at estimating tasks they usually wouldn't be asked about, instead of relying on specific experts and disengaging.
- If there is a small variance in estimates (3/4 of the team picked 5 and 1/4 picked 2, for example), the team will discuss what they think is best and usually just pick a number instead of doing another round of estimating.
- As a team, we have chosen to stick with estimates that correspond to numbers on cards. This eliminates the tendency to say: "you have 5, I have 2, let's just average them to 3.5 and use that."
- We eliminated the 1 and 3 cards, as we found the differences between 1, 2, 3, and 5 too small to be worth arguing about as a team. This further encourages 'bucket picking' even at the low end, reinforcing that we are really just sizing activities, not committing to specific time frames. (This decision has been debated a few times since in sprint retrospectives, with requests to put the 1 and 3 back, but the team has decided to keep them out for now.)
- Have a way to limit question/discussion length prior to estimation. Sometimes team members get caught up in the details and ramble on for quite some time. I have seen some groups use a 2-minute timer that any team member can start; when the timer runs out, a round of estimation is required, keeping the process moving. The point here: the group is not trying to precisely estimate the tasks, but to size them, and that just needs to be 'in the ballpark', since the main value is gained in that short amount of time.
- Consider an 'I need a break' card. Some Planning Poker decks have a card with a picture of a pie on it, meaning 'I need pie!' When this card is shown, the group takes a mini-break. Group estimation can be quite taxing, so breaks are important to keep people fresh and avoid the pitfall of 'let's just get this done' estimation.
- Like most things, the first few times your group uses Planning Poker, it will take longer to get consensus, but over a number of sprints, estimation goes much faster as the group gets comfortable with the process in general.
Next up in this series, my take on how to best manage Sprint backlog task allocation to team members.
Decomposing a User Story involves taking the result your user is looking for (stated as a User Story) and breaking it down into a number of tasks that the team can work on individually. Here are five tips I have found to be very useful:
1) Decompose User Stories into tasks as a team
Group planning is a cornerstone of Agile development. Though it may feel inefficient at times, the benefits are well worth it. See my previous post: Agile Planning: Plan/Estimate As A Group, Really? for more information/details.
2) Attempt to size your tasks so one team member can complete each in between half a day and 3-4 days
The motto here: allow a "Race to Done" situation.
Tasks smaller than a handful of hours end up taking too much time administratively to create/track/update/etc. Tasks larger than 3-4 days (some would say that is already too big, but teams I have been on have found it workable) really should be broken into a couple of tasks if possible; they just take too much time to be able to race to done effectively.
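The sizing rule of thumb can be expressed as a quick sanity check. The thresholds below come from this post; the helper name and example tasks are mine.

```python
# Flag tasks outside the half-day .. 4-day 'race to done' window.
# Thresholds are from the post; assumes estimates are given in days.

def sizing_feedback(task, days):
    if days < 0.5:
        return f"'{task}': too small, tracking overhead outweighs the work"
    if days > 4:
        return f"'{task}': too big, try to split it into a couple of tasks"
    return f"'{task}': sized for a race to done"

# Hypothetical tasks run through the check.
for task, days in [("Tweak label", 0.2),
                   ("Implement Add User", 2),
                   ("Rewrite data tier", 7)]:
    print(sizing_feedback(task, days))
```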
3) Create tasks that result in a deliverable unit of work when completed
When decomposing a user story, be sure to break the story down into tasks that can be completed in a small amount of time (Point 2), but don't focus on the time so much as on ensuring you are creating tasks that result in a deliverable unit of work.
Don’t break down a ‘Maintain User’ feature into (like you might have in the past):
- Build the UI
- Build the biz logic
- Build the data tier
Instead create vertical slices of functionality when possible:
- Implement Add User
- Implement Edit User
- Implement Delete User
- Implement Add/Edit/Delete User Automated UI tests
- When a team member takes on the 'Implement Add User' task, it's a contained unit of work, it's straightforward to know when it's completed, and it isn't dependent on other tasks being finished before it can be tested (whereas Build UI, Build biz logic, and Build data tier all depend on each other to deliver functionality to the user).
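To make the contrast concrete, here is a minimal Python sketch (all names hypothetical) of what a vertical 'Implement Add User' slice might look like: validation, business rules, and persistence shipped together, so the task can be verified end to end on its own.

```python
# Hypothetical vertical slice: the 'Add User' task touches every tier
# (validation, business rule, storage) but is independently testable.

users = {}  # stand-in for the data tier

def add_user(username, email):
    if not username or "@" not in email:
        raise ValueError("invalid user data")    # input validation
    if username in users:
        raise ValueError("user already exists")  # business rule
    users[username] = {"email": email}           # data tier write
    return users[username]

# The slice is 'done' when this end-to-end check passes -- no waiting
# on a separate 'Build UI' or 'Build data tier' task to finish first.
assert add_user("ada", "ada@example.com")["email"] == "ada@example.com"
```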
4) Don’t get caught deep diving into the details of each task
This is more difficult in practice than in theory. Knowing that task estimation is just around the corner, it's natural for an experienced developer to want to define every detail possible, down to 'how many stored procedures are we going to create'. This, in theory at least, helps make the estimation process more precise. I don't buy into this theory, at least not given the time the team has to invest to get that extra level of precision. Certainly attempt to ask the functional questions when decomposing a User Story, so hidden functional 'gotchas' are uncovered, but also realize the team is just defining/sizing effort at this point, not writing a 'development specification'.
5) Ensure testing/automation tasks are included
On the teams I have worked with over the past years, we have always had a group of professional Quality Assurance Analysts. Our Scrum teams are no different, so these types of tasks don't usually get forgotten; but for the many teams that don't have QA pros integrated into their Agile teams, I can imagine this being missed. A motto of 'get the functionality to the user as quickly as possible' would seem to lead to that. Just because the team is Agile doesn't mean there isn't any testing that should go on! Automating tests where appropriate is also very important, given the high amount of regression testing needed when sprinting in 2-4 week timeframes.
Next up in this series, group estimate via Planning Poker.
I don't like Story Point estimating. There, I said it. I know many have had success with Story Point estimating, and the Scrum guru Mike Cohn advocates it in his books and elsewhere. I have just found it too abstract and difficult for developers (and myself) to grasp when starting out with Agile techniques.
In my experience, when developers/engineers are asked to estimate in hours (very much the norm in software), they aren't really thinking in hours. Truth be told, I don't think many actually think in hour blocks when estimating; instead they think in terms of days of work, or partial days. Here's what a developer is thinking when giving an estimate: "Hmm... I think this task should take me about a day, maybe a day and a half, so let's make it 8 × 1.5 = 12 hours." Tell me you don't do that. There wasn't any self-talk about part one of the task taking 2 hours, part two 6 hours, and so on; they 'chunked' their time into days.
So in comes Story Point estimating. We don't want to estimate User Stories from a calendar-time perspective, but relatively against each other. This allows for quick estimates that give size without the 'commitment' to time that most developers feel an estimate is. Hour estimates come later, during User Story decomposition as part of Sprint planning. Unfortunately, how do you define one Story Point? What is your logical point of reference?
This is where the 'Ideal Day' metric works better for me. This metric was shared with me by Pete Carroll and is really an abstraction of the number of hours you would normally expect a developer to be productive during a typical day, subtracting time for meetings, bathroom breaks, etc. This will vary from organization to organization, but it has a large benefit over Story Point estimating, IMHO: it is the default metric developers are already thinking in, as I alluded to above. There isn't any translation in their heads, no trying to define an ambiguous metric. Instead it's the gut feel that is natural to any developer with some experience, while still allowing for relative estimating of User Stories. All Ideal Day estimates should be round numbers (i.e., 1, 2, 3, not 1.5 or 2.34).
The trick here is to realize the Ideal Day metric is still an abstraction of time estimates. We don't plan off Ideal Days on a calendar directly; instead we use team velocity combined with Ideal Days to lay out User Stories on a timeline at a high level. The velocity metric helps even out the group estimation variance, just as with Story Points, but the unit feels much more natural to the team.
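As a back-of-the-envelope sketch, here is how velocity plus Ideal Day estimates can produce a high-level timeline. The numbers are purely illustrative, not from any real backlog.

```python
import math

# Backlog of User Stories sized in Ideal Days (numbers are illustrative).
backlog = [3, 5, 2, 8, 1, 5]

# Velocity: how many Ideal Days of work the team actually finishes per
# sprint, measured over past sprints -- not a calendar commitment.
velocity = 8

sprints_needed = math.ceil(sum(backlog) / velocity)
print(f"{sum(backlog)} Ideal Days at velocity {velocity} is roughly {sprints_needed} sprints")
```

The point is that the conversion to calendar time happens once, through measured velocity, rather than inside every individual estimate.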
Next up in this series, decomposing user story tips.
So you decided to go ‘Agile’ with your team? Maybe you have read a book or two on Scrum, XP, or something similar. Many of the base ideals of Agile development make intuitive sense, but this idea of “Group Planning/Estimating”… Really? Surely that isn’t going to work with ‘my’ team.
I know, because this is where I was several months ago. The team I work with has been using a number of Agile concepts for years to get projects done, and quite successfully I might add. I was reading up on Scrum, intrigued by its simplicity, yet I didn't immediately get how important group collaboration is to its success. I had originally balked at the idea of having Daily Standups (after having them for months now, I realize how wrong I was), so the idea of having the whole team get into a room for several hours to "plan" (which includes some estimating) every two weeks (our chosen sprint length) just sounded so inefficient. I mean... isn't planning/estimating the PM's job? Why include all those people and take them away from "development time" to plan?
Well, I am here to say that a transition to group planning can be difficult, but once you work through the growing pains, it's well worth the effort. Don't short-change yourself by going halfway, either. If you are planning for a two-week sprint and it's only taking half an hour, you probably aren't planning as a group; you're planning individually and meeting briefly to pick tasks for the sprint. Most people I have spoken with or read online say planning should take around 2 hours per week of sprint (4 hours for a two-week sprint). This depends on the size of the team, the complexity of the project, etc., but it's a good rule of thumb, and one I have found to be about right over the sprints I have been involved with.
There certainly is work that needs to be done prior to the planning meeting, especially by the ScrumMaster and Product Owner, to groom the project backlog and ensure the User Stories are ready to be handed to the team. But the process of taking a feature and decomposing it into development tasks (unless most of the features you are building are trivial) should be done by the team; who knows the work better than the ones who will be doing it? Picking the tasks as a team: golden. Estimating the work as a team: what better way to ensure all team members have at least a semi-good understanding of the work being committed to? Having a say in this process also breeds ownership by the team and its members. They have some 'skin in the game' from the planning stages, and that helps set them up to meet the goals of the sprint, which is great for all involved.
My next post will cover the ‘Ideal Day’ metric, and how I have found it helps the team size User Stories (features).
As I wait in the airport for my flight to board, I figured I would put together a quick Mobile Connections 2011 'Top 5 Takeaways' post from my perspective. There is lots more detail in my previous posts for each session; this is just my mind dump without looking back at my individual session notes.
My original "big goals" coming into the conference were to get a feel from the experts on where cross-platform mobile development is headed, whether there are any tools to build for the four major platforms with one code base, and if so, which tools are leading the charge now and expected to lead going forward.
#1 Take Away: Cross-platform development via one code base (including HTML5) is tough at best, crazy to try at worst. A number of the experts flat out said: if you want an average-at-best application, go ahead and try a cross-platform tool. Average meaning it won't feel like the other apps on each platform; compromises have to be made because of the lack of support for some features on each platform, as well as the different UI styles. If you want a decent application, the UI needs to be built with native code. Plain and simple. Furthermore, this isn't changing anytime soon, so get used to it.
#2 Take Away: The term "cloud" means different things to different people (no surprise there), but the idea of the 'private cloud' vs. the 'public cloud' really hit home for me. Being able to leverage cloud technologies without actually putting our data on some public server is attractive, especially as a short-term transition while the technologies around public-cloud security mature. The ability to do hybrid cloud offerings (web servers hosted by a public cloud provider, data hosted on company-owned cloud technology) sounds great for SaaS providers that have sensitive data to protect while still allowing for maximum scalability. Cloud and mobile really go hand in hand now, if you expect to support a significant number of mobile users anyway.
#3 Take Away: NoSQL solutions are super fast and scale thousands of times better than disk-based SQL solutions. If you are going to support mobile clients in the tens of thousands or more, you need to be utilizing this type of technology. Redis seems to be mentioned in every conversation about this space (at its most basic, it is a fully in-memory key-value data store). Excellent tips were shared in my notes write-up of the Architecting Back End Systems for Mobile session.
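For anyone new to the key-value idea, here is a toy Python sketch of the access pattern. A plain dict stands in for a store like Redis; real stores add networking, persistence options, and key expiry, which this deliberately omits.

```python
# Toy in-memory key-value store, standing in for something like Redis.
# Only illustrates the constant-time get/set access pattern that makes
# these stores fast; everything here is illustrative.

class KeyValueStore:
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

store = KeyValueStore()
store.set("session:42", {"user": "ada", "device": "mobile"})
print(store.get("session:42"))
```

There are no joins and no schema; you trade the relational model for raw lookup speed, which is the whole appeal for high-volume mobile back ends.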
#4 Take Away: SaaS companies (and others using web technologies) need to look at BiTKOO's product offerings. Their Keystone app is an amazing abstraction of authentication/authorization, and from a coding perspective it is really plug and play. Also, their SecureWithin application gateway opens up many possibilities for securely accessing corporately stored data on the web. More info can be found in my notes write-up on the session the BiTKOO CEO gave.
#5 Take Away: The speakers at these conventions are top notch from a know-how perspective. The value they provide to attendees in answering questions after the sessions alone is worth the cost and time invested to attend.
I do feel it's important to throw in this 'bonus' takeaway. I will call it 1a, as it's a continuation of the first takeaway:
#1a Take Away: I attended a workshop yesterday on the RhoMobile toolset for cross-platform development. Though I wasn't crazy about how the session was conducted, their products look very promising. Using web developer skills (Ruby), the tool supports all the major mobile OSs (WP7 and WinCE support too, in about a month) and generates native code for each platform. It has a number of excellent features; the one I liked most was its support for per-OS style sheets. You build your UI using web programming skills, and the product styles the UI to look like a 'normal' app for that OS. It comes with stock style sheets for each OS, and it really does work well. It has support for the camera, Bluetooth, etc., as well as a mapping control that uses each OS's preferred mapping API very nicely. The toolset also has a local data storage tier that takes advantage of SQLite. If the platform you deploy to doesn't have SQLite embedded, the tool will deploy a binary of it, so you can plan on a single local data source across all platforms. This tool has great promise from what I can tell.
Uses the Awesomium control to embed web content into a WPF application for kiosks, etc.
Natural User Interface (NUI) = touching the screen (or manipulating the screen without touching it)
- The content should define the experience
- The “Grandma Huckaby Test”: the ability to effectively use the kiosk without training
- No one should have to touch the machine to update content (remote deployment while running)
- Updating content should happen centrally and should have automated delivery
- Can't go too deep screens-wise (maybe 2-3 levels deep at most)
If something is moving (even a simple animation), a human's attention is caught. You are going to look at it.
Touch Capable Hardware Implementations:
- Capacitive – Think electric impulse (iPhone and others)
- Infrared – Expensive ones. Think laser pointer(s) (best fidelity of touch… costs 10’s of thousands of $$)
- Resistive – Think push down and drag (old; no 'cool' devices use this anymore)
Tip: 98% of the time, two simultaneous users is all a device like this needs to support, though people assume it will need more. The use cases just don't call for it.
Given a typical user experience of under 5 minutes on a kiosk-type device, you need to keep the navigation shallow and intuitive.
.NET 4 has decent support for touch; before .NET 4, support was very minimal.
.NET 4 turned touch into a first-class citizen for developers.
WPF does support true distributed computing (with the .NET 4 version).
Convinced we have to do mobile apps natively. The user experience in particular requires it.
“Azure is easy for .NET Dev’s”
Important aspects of storage:
- Space consumption & Transactional cost
- Some storage is designed for unlimited storage but you pay per transaction
- Other storage mechanisms are designed for limited storage but unlimited transactions
The biggest problem with Azure now: there is no way to really know how much it is going to cost.
Kinect can authenticate (differentiate between faces, and when voice is present, voices too).
Hooking Kinect into the Windows OS sure looks like a step toward a 'Minority Report'-style user interface for computers.
This session was an excellent way to end the conference for me. Tim is an excellent speaker and showed some really interesting technologies. Piqued my interest in looking into some possible UI design changes we might be able to make.
Speaker: Wei-Meng Lee
Wei's talk was very good, but unfortunately for me it covered many points I had already been introduced to in the two other location-based sessions I attended over the past two days.
Here are the main takeaways I got from the session that were not already covered in the other sessions:
TIP: (most common problem) The INTERNET permission is required in the Android manifest (AndroidManifest.xml) for the mapping control to work correctly.
Troubleshooting tip: if you don't have internet access working in the emulator, or the Maps API key is not entered in your code, you won't be able to see the map. (95% of the issues people have are these two.)
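For reference, the permission from the tip above is declared like this in AndroidManifest.xml (the package name here is a placeholder):

```xml
<!-- AndroidManifest.xml; package name is a placeholder -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.mapdemo">
    <!-- Required for map tiles to download -->
    <uses-permission android:name="android.permission.INTERNET" />
</manifest>
```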
Confirmed: native support for geocoding and reverse geocoding by Google Maps, too.
Don't use both the "GPS" and "Network" location providers at the same time; write code to switch between them, turning one off as the other comes on, otherwise your coordinates will jump around frequently.