One of the most difficult tasks in software development is estimating, also known as guessing. Here are a couple of thoughts on the topic.
- Estimates aren't Commitments. At least they shouldn't be. Let's assume you estimate you need 1 hour to get from your home to the airport, through all the formalities, and onto the plane. Do you leave your house 1 hour before the plane departs? Probably not. You'll add all kinds of buffers and leave a lot earlier. The same should apply to any kind of project. If you think it will take 6 months, it is a bad idea to promise a delivery date in 6 months. Promise 7, 9, or 12 months, depending on how bad it is when you show up late.
- Estimates aren't measurements. If you want to build a shelf, you can measure the available space, and if you measure properly you can cut the boards to that measurement and they will fit. If they don't, it is probably your fault for not measuring properly. With the tools in your house you can measure lengths of up to a couple of meters with an error of about 1/1000, and if you need more precision you just invest in better tools and measure more carefully. Not so with software. If you estimate a task to take a day, you might be done after a day. Or maybe after 6 hours. Or you give up after 3 days. There is no way to be sure. You can't put more effort into estimating to make it more precise, and tools won't help you much. The only thing that reduces the margin of error is actually implementing the task.
- Some basics in statistics. If you roll a die once, you will score 3.5 points on average, with a margin of error of about 100%. If you roll the die a million times, the average value will be 3.5 with pretty good precision. This is no accident but solid math: when you add up independent random events, the relative uncertainty of the sum is smaller than that of each single event. This sounds like good news, because when you estimate a project you typically break it down into tasks, estimate each task, and add everything up. The result should therefore have a smaller margin of error (see the simulation after this list).
- The rule above does not apply, though. The cancellation of errors only works when the events (the guesses/estimates) are independent of each other and their errors are reasonably symmetric, without a long tail. Neither holds for software estimates: tasks overrun for shared reasons, and the errors are heavily skewed toward "took much longer than expected". Go figure.
- We learn through feedback
It's almost impossible to learn anything without getting feedback on how you are doing. Imagine learning chess without anybody ever telling you when you make a bad or even illegal move. You just have a book with the rules and play games against yourself. How likely is it that you'd ever become a respectable chess player? Nil? I think so too. But with estimating we are often in exactly that situation. In waterfall projects you often estimate hundreds of tasks in one go, and only a year later do you get the feedback: the sum of your estimates was 20% off. It's really hard to learn this way. The situation improves a lot when you follow agile practices, where you typically estimate smaller batches, closer to the time when the work is actually performed. This enables you to actually learn how long tasks take. A small sketch of closing that feedback loop follows the simulation below.
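To make the two statistics points above concrete, here is a minimal simulation sketch in Python. Every number in it (the 8-hour tasks, the noise widths, the shared "project factor") is a made-up assumption purely for illustration. In the first scenario the errors are independent and symmetric, and the summed estimate does get relatively more accurate as tasks are added. The second scenario uses skewed, correlated errors, which is much closer to real software estimates, and the improvement largely disappears.

```python
import random
import statistics

def relative_error(estimates, actuals):
    """Relative error of the summed estimate against the summed actual."""
    return abs(sum(estimates) - sum(actuals)) / sum(actuals)

random.seed(42)

# Scenario 1: independent, symmetric errors (the dice intuition).
# Every task is estimated at 8 hours; the actual effort is uniform
# between 2 and 14 hours, independent per task. Errors cancel out.
for n in (1, 10, 100, 1000):
    errors = []
    for _ in range(200):  # 200 simulated projects per project size
        estimates = [8.0] * n
        actuals = [random.uniform(2.0, 14.0) for _ in range(n)]
        errors.append(relative_error(estimates, actuals))
    print(f"independent, symmetric: {n:4d} tasks, "
          f"mean relative error {statistics.mean(errors):5.1%}")

# Scenario 2: skewed, correlated errors (closer to software estimates).
# Actuals are lognormal (a long tail toward "took much longer") and
# share a per-project factor: a common cause of delay, such as an
# unfamiliar framework, hits every task. Adding up no longer helps much.
for n in (1, 10, 100, 1000):
    errors = []
    for _ in range(200):
        project_factor = random.lognormvariate(0.0, 0.5)  # shared delay
        estimates = [8.0] * n
        actuals = [8.0 * project_factor * random.lognormvariate(0.0, 0.5)
                   for _ in range(n)]
        errors.append(relative_error(estimates, actuals))
    print(f"skewed, correlated:     {n:4d} tasks, "
          f"mean relative error {statistics.mean(errors):5.1%}")
```

With independent, symmetric errors the relative error shrinks roughly with the square root of the number of tasks; with a shared delay factor it flattens out instead, because the common cause never averages away.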
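On the feedback point: one simple way to close the loop, once you estimate in small batches, is to record estimate/actual pairs and derive a personal correction factor from them. This is only a hypothetical sketch, not a prescription; the task names and numbers are invented.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    estimated_hours: float
    actual_hours: float

def correction_factor(history: list[Task]) -> float:
    """Ratio of total actual to total estimated effort.

    Multiplying a new raw estimate by this factor turns the feedback
    ("I'm usually 40% off") into a calibrated forecast.
    """
    total_estimated = sum(t.estimated_hours for t in history)
    total_actual = sum(t.actual_hours for t in history)
    return total_actual / total_estimated

# Invented history: estimates recorded when the work was planned,
# actuals recorded when it was done. That is the feedback loop.
history = [
    Task("login form", 8, 13),
    Task("password reset", 4, 6),
    Task("audit logging", 16, 22),
]

factor = correction_factor(history)  # 41 / 28, about 1.46 here
raw_estimate = 10.0                  # hours, for the next task
print(f"calibrated estimate: {raw_estimate * factor:.1f} hours")
```

The point is not the arithmetic but the habit: without recorded actuals there is nothing to learn from.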