
I often work in an environment where the employees performing the work are constantly rolling over to new employees, and each employee estimates on their own how long tasks will take. I have found that the error in their estimates tends to be consistent: if Bob overestimates on task 1, he is very likely to overestimate on task 2 as well. Likewise, if Jim underestimates on task 1, he will probably underestimate on task 2.

So my question: is there any way to perform a quick test of sorts to identify these tendencies before starting into actual work? This post touched on some of my questions, and I like the PERT method as described by Bill the Lizard. Ideally I would like to be able to very quickly determine whether I need to change the weights of the optimistic, expected, and pessimistic values for a given person in order to better estimate how long a task will take.

Since turnover is so fast, it seems that by about the time an employee has done enough estimates and work for me to know how to treat their estimates, they end up moving on. So, any suggestions on how to handle this? Would something other than PERT work better?

P.S. The fast turnover is due to it being a student-driven group, so students are expected to leave once they graduate.

Kellenjb

5 Answers


You should never change estimates provided by estimators. All you can do is add corrections, for example:

               ML     BC     WC      PERT
               ------------------------------
Bob             8      6     12       8.3
John            7      8     20       9.3
Alex           16     12     60      22.7 
                               --------------
      Average                        13.4
   Correction                        -4.2
                               --------------
       Result                         9.2

Every correction has to be documented individually:

"I think that Alex over-estimated the complexity of the task"

This way you always keep a record of the estimating history and can go back when the tasks are completed to validate both your estimates and your corrections (!).
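The arithmetic in the table above can be sketched in a few lines, assuming the columns are the most likely (ML), best case (BC), and worst case (WC) inputs to the standard PERT formula (BC + 4·ML + WC) / 6:

```python
# Sketch of the table above: a single documented correction is applied
# to the average, never to anyone's individual estimate.

def pert(most_likely, best_case, worst_case):
    """Classic three-point PERT estimate: (BC + 4*ML + WC) / 6."""
    return (best_case + 4 * most_likely + worst_case) / 6

estimates = {            # name: (ML, BC, WC)
    "Bob":  (8, 6, 12),
    "John": (7, 8, 20),
    "Alex": (16, 12, 60),
}

pert_values = {name: pert(*e) for name, e in estimates.items()}
average = sum(pert_values.values()) / len(pert_values)

# Documented correction: "Alex over-estimated the complexity of the task"
correction = -4.2
result = average + correction

print(round(average, 1), round(result, 1))   # 13.4 9.2
```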

yegor256

Actually, there is a very good and simple, though not so quick, method: compare people's estimates with the real time they spent working on those tasks.

The observation that people usually miss their estimates by a similar percentage isn't new. Much the same point was made by Joel Spolsky in his article on Evidence Based Scheduling. And Evidence Based Scheduling is a great tool for dealing with the problem you describe: it takes exactly this situation into account and uses it as data to analyze.

Once you learn by how much an estimator typically misses their estimates, you can apply this knowledge to their current estimates to calculate more realistic ones. Joel explains it very well in the article.

Generally, if you're looking for a method to judge how good someone's estimates are, there's no better approach than using historical data to check the person's track record so far.
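A minimal sketch of the Evidence Based Scheduling idea, with made-up history (Joel's article is the authoritative description): each past "velocity" is estimate divided by actual, and a fresh estimate is turned into a distribution of likely durations by dividing it by randomly sampled past velocities:

```python
import random

# Made-up history for one person: (estimated, actual) hours per task.
history = [(4, 5), (8, 12), (2, 2), (6, 9)]
velocities = [est / act for est, act in history]   # estimate / actual

def simulate(new_estimate, velocities, trials=10_000, rng=random.Random(42)):
    """Monte-Carlo likely real durations for a fresh estimate."""
    return sorted(new_estimate / rng.choice(velocities) for _ in range(trials))

outcomes = simulate(10, velocities)      # a new 10-hour estimate
median = outcomes[len(outcomes) // 2]    # 50th-percentile duration
print(round(median, 1))
```

Instead of a single corrected number, you get a distribution, so you can also read off, say, the 90th-percentile duration as a low-risk commitment.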

You may also find more ideas in this thread, which discusses a similar issue.

Pawel Brodzinski

I don't know of any quick test. Since you have no prior knowledge about their estimating accuracy, I would always let them estimate as a group. You can use planning poker for this (I'm sure students will love it), but even just letting them discuss their individual estimates as a group will quickly give you a much better estimate. And it really doesn't take that much time. As Yegor already suggested, take good notes of the risks connected with the worst case and of any assumptions made, not only as a historical record, but as something to manage!

If you have good records of past performance, you could establish a table like

  • Class A: Been done before - Low = -5%, High = +8%
  • Class B: Some new development - Low = -10%, High = +12%
  • Class C: New development - Low = -10%, High = +15%
  • Class D: New technology - Low = -10%, High = +17%

where the percentages are calculated based upon past performance.

But even then I would still let them make the single-point estimate as a group.
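A minimal sketch of applying such a class table. The bounds mirror the list (assuming Class D's low, shown as "10%", was meant to be -10%), and the 40-hour Class C task is a made-up example:

```python
# Per-class low/high adjustments derived from past performance,
# applied to the group's single-point estimate to get a range.
RANGES = {
    "A": (-0.05, 0.08),   # been done before
    "B": (-0.10, 0.12),   # some new development
    "C": (-0.10, 0.15),   # new development
    "D": (-0.10, 0.17),   # new technology (low bound assumed negative)
}

def range_estimate(point, task_class):
    """Turn a single-point estimate into a (low, high) range."""
    low_pct, high_pct = RANGES[task_class]
    return point * (1 + low_pct), point * (1 + high_pct)

low, high = range_estimate(40, "C")   # hypothetical 40-hour Class C task
print(round(low, 1), round(high, 1))
```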

Stephan

Some research suggests that software project duration estimation accuracy follows a lognormal distribution. So if you have enough estimation data for an employee, you can statistically test whether their pattern conforms to this norm or whether something else is going on.

Here is a link to one such study:

http://web.ecs.baylor.edu/faculty/grabow/Fall2010/csi3374/secure/Estimating/Little06.pdf

-Ralph Winters
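A minimal sketch of working under the lognormal model, with made-up ratios: take logs of each actual-to-estimate ratio, then summarize them (with enough data points you could also run a normality test on the logs, e.g. scipy.stats.shapiro, to check conformance):

```python
import math
from statistics import mean, stdev

# Made-up actual/estimated ratios for one employee (1.0 = spot on).
ratios = [1.3, 0.9, 1.6, 1.1, 2.0, 0.8, 1.4]
logs = [math.log(r) for r in ratios]

mu, sigma = mean(logs), stdev(logs)
geometric_mean = math.exp(mu)    # typical multiplicative miss factor
spread = math.exp(sigma)         # multiplicative "standard deviation"
print(round(geometric_mean, 2), round(spread, 2))
```

Here the geometric mean says this person's tasks typically run about 1.24x their estimate, which is the kind of per-person factor the question is after.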


The process by which a team arrives at an estimate is what drives how precise that estimate is. You need to facilitate an iterative approach: use several competent people, challenge extremes, discuss, iterate, use historical values where you can, use parametrics where you can, iterate, iterate again, finalize. Further, an estimate is NEVER a single point. It is ALWAYS a range from minimum, to most likely, to maximum. In fact, the maximum never truly ends, i.e., the tail on the right goes out to infinity. PERT is a simple formula that helps identify a potential target; however, it is based on a beta distribution, and not all tasks fit that distribution, so it has limitations.

Where you place your time and cost target is risk management. It becomes an exercise in the degree of risk you want to assume. As long as the target falls within the estimate distribution, it is a technically correct target. Is it too risky? Too optimistic? Those are the questions to ask. It is then not about "correcting" the target, which implies an error; it is about "adjusting" the target to match your degree of risk aversion.

Example: after several iterations your team determines, at the project level, that the work will take between 45 and 65 days, most likely 52 days. Your team comes back with a target baseline of 48 days. Forty-eight is greater than 45; thus, it is a correct target. However, you determine that 48 days falls at the 20th percentile, too risky for your taste. So you move it to 55 days, the 60th percentile. You now have a 60% chance of hitting that target. Neither is more correct than the other; one is simply less risky.
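This example can be sketched numerically. The triangular distribution below is a stand-in (the answer doesn't prescribe a distribution, and its percentile figures are illustrative), so the computed percentiles won't exactly match the 20th/60th quoted above:

```python
import random

# Model the 45-65 day estimate with most likely value 52 days.
rng = random.Random(7)
samples = sorted(rng.triangular(45, 65, 52) for _ in range(100_000))

def percentile_of(target, samples):
    """Fraction of simulated outcomes finishing at or below the target."""
    return sum(s <= target for s in samples) / len(samples)

for target in (48, 55):
    print(target, round(percentile_of(target, samples), 2))
```

The mechanics are the point: pick a candidate target, read off its percentile, and decide whether that level of risk is acceptable.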

David Espina