From: Tomasz on
I understand you favour the XP approach and estimation based on
experience, or rather on actual historical performance.

Unfortunately, the point at which we normally have to make the
estimate is the RFP response, or at best after getting the requirements.
In such a case you do not have the comfort of hard data and
have to use some other kind of estimation.

Regards,

Tomasz

From: H. S. Lahman on
Responding to Tomasz...

> I'm new to both Use Cases and Use Case Point Estimation.
> I've been reading some articles about it and found the following
> (recommended as a reference on this site as well):
> http://www.geocities.com/shiv_koirala/fp/usecasepoints.html
>
> As a person who learns best from practical examples, I liked the
> approach taken in this article.
>
> However:
> 1. The final estimate seems to me a bit excessive for such a simple
> application.

Remember that the constants used in the AUCP computation should be based
on empirical data for the particular development environment; for
example, the sort of software being developed or even the style of
writing use cases can affect them. [One can start with published
"average" constants for lack of better data, but they need to be
updated with local experience.]
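
FWIW, the arithmetic behind the AUCP computation looks roughly like the
sketch below. The weights and the 20 hours/point figure are the
published defaults (Karner's), not numbers from this example; they are
exactly the constants that need recalibrating from your own track
record.

    # Sketch of the adjusted use case point (AUCP) computation.
    # All weights/constants are the published defaults and should be
    # replaced with locally calibrated values.
    ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}
    USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}

    def adjusted_ucp(actors, use_cases, tfactor, efactor):
        uaw = sum(ACTOR_WEIGHTS[a] for a in actors)          # actor points
        uucw = sum(USE_CASE_WEIGHTS[u] for u in use_cases)   # use case points
        tcf = 0.6 + 0.01 * tfactor     # technical complexity factor
        ef = 1.4 - 0.03 * efactor      # environmental factor
        return (uaw + uucw) * tcf * ef

    # Karner's default productivity constant: 20 staff-hours per point.
    effort_hours = adjusted_ucp(["simple", "complex"],
                                ["average", "average", "complex"],
                                tfactor=30, efactor=20) * 20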

> 2. As I understand from other articles, this estimate includes
> Analysis, Design, Implementation and Unit Testing; it does not include
> Testing effort, which would add another 90 hours.

FWIW, I would expect /all/ developer effort to be included in the
estimate. [I could see an argument for estimating testing effort
separately because notions like 'complexity' might be different for
software and test construction. In that case, though, I would expect
unit testing to be separated out as well.]

> 3. The use case presented there seems a little too detailed (I mean it
> goes down to almost pseudocode) and the resulting estimate is very
> large.

I agree that it is too detailed in some areas. For example, the
criteria for validating credit card information should just be
referenced in the use case step, not explicitly defined. (Of course the
point estimation needs to evaluate those criteria in order to assign
complexity points to the use case step.)

>
> Can anyone give me their practical comments on this example?
> I mean is this use case correct? If not, what would a correct use
> case look like?

In the methodology I use we only employ use cases very informally, so I
am loath to respond here explicitly.

However, I would point out that, in general, all use cases do is recast
/existing/ requirements into a form that makes it easier for the
developer to implement against them. As a result, the ultimate
criterion for the utility of use cases is that they /do/ make things
easier for the developer. That leaves a lot of room for variation,
especially where the developers already have substantial knowledge of
the problem space (i.e., the developers can "read between the lines").

That presents a problem for employing use case points. One way that
problem is manifested is in tailoring the constants for AUCP (or those
used for estimating complexity). Thus use cases written with a lot of
assumptions about developer problem space knowledge could yield few
points but more effort per extracted point than use cases that are more
explicit.

One can use an empirical track record to tinker with the constants, but
there is a more serious problem: the more effort per point there is,
the more /absolute/ uncertainty there will be in the effort estimate.
(A similar logic applies to things like complexity ratings.)
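
To make that arithmetic concrete (illustrative numbers only): with the
same relative uncertainty in the point count, a higher hours-per-point
constant produces a proportionally wider absolute spread in hours.

    # Same +/-20% uncertainty in the point count, two different
    # hours-per-point calibrations; the absolute spread in hours
    # scales with the calibration constant. Numbers are invented.
    points, spread = 100, 0.20
    for hours_per_point in (20, 36):
        low = points * (1 - spread) * hours_per_point
        high = points * (1 + spread) * hours_per_point
        print(hours_per_point, high - low)  # 20 -> 800 h, 36 -> 1440 h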

Bottom line: since only use case formatting is standardized and not use
case content, there is potentially a lot of variability in the way use
cases are written and that will indirectly affect the way points are
counted and converted to effort estimates. IOW, I think that before use
case points can be used effectively we will need a lot more practical
guidance on how to write use cases than Cockburn's book provides.


*************
There is nothing wrong with me that could
not be cured by a capful of Drano.

H. S. Lahman
hsl(a)pathfindermda.com
Pathfinder Solutions -- Put MDA to Work
http://www.pathfindermda.com
blog: http://pathfinderpeople.blogs.com/hslahman
(888)OOA-PATH



From: AndyW on
On 20 Sep 2005 06:37:27 -0700, tomasz(a)01systems.com wrote:

>Thanks for the reply.
>
>So far I've been using functional decomposition and estimation at a
>fine-grained level, based on previous experience. I've combined it
>with an estimate of the minimum and maximum effort to get an idea of
>the possible risk. We usually do the estimates in a group and then
>discuss the differences to find the holes in our assumptions as well.

This is called an 'informed best guess'. It works so long as everyone
is fully aware of all the details and gotchas in the project. As soon
as unknowns get factored in, the estimate becomes inaccurate.
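
FWIW, one standard way to collapse that min/max spread into a single
figure is the three-point (PERT) formula - a general technique, not
something specific to this thread; the numbers below are invented.

    # Three-point (PERT) estimate: weighted mean of min/likely/max.
    def pert(minimum, likely, maximum):
        return (minimum + 4 * likely + maximum) / 6.0

    print(pert(24, 40, 80))  # hours -> 44.0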

>However,
>ultimately, at a low level it is based on judgement and experience and
>does not seem to be as accurate as we would like. Based on this
>approach I had been thinking that the expected effort would be around
>1 week, with a maximum of 2 weeks (with fully perfect documentation,
>etc.), and possibly 2-3 days if we were under real pressure. Thus I
>was really surprised when I saw the 200-300 hour estimate.

There is no mathematical model that gives an accurate estimate. A good
estimate needs to factor in the capability and maturity of the software
organisation. Since this cannot be calculated in advance, historical
data is always used; the more recent the historical data, the more
accurate the estimate. The estimates are then refined taking into
account the results of risk analysis - initially a subset of the
requirements analysis phase, but an ongoing process throughout the
project - which is why estimates are always refined.

Microsoft are a good example of refining estimates. They start off by
giving a year; then, as that year gets close, they'll say the 2nd half
of the year; then, as that gets closer, the 3rd quarter; then October;
then they'll give an actual date. This is how real estimates work. An
RFP usually starts out as a broad estimate, and project planning is the
process that refines it.
>
>I would like to ask you a couple of questions:
>1. It seems to me that use case estimation is based on processing
>(transactions) and ignores the complexity of the data. I mean, let's
>assume you have the same use case, same transactions, but instead of
>capturing 4-5 fields you need to capture 100 (no special processing,
>just simple validation). How will this affect the use case estimate?

Here is a problem. To get the Use Case you must have done some
analysis, and in order to do some analysis you've probably already
given the customer an estimate - which can't actually be done
accurately until you have the Use Case.

What that means is that normally 2 estimates are given. The first one
is normally based on experience. As a programmer I gather data on
every task I do, and I keep that data in a spreadsheet. When the boss
comes up and says 'how long to do X?', I can usually find a couple of
very similar tasks in the spreadsheet that allow me to give a fairly
accurate reply.
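
A toy sketch of that lookup (the schema is invented; the real thing is
just a spreadsheet filtered by task keywords and averaged):

    # Each record: (keywords describing the task, actual hours taken).
    history = [
        ({"report", "export"}, 14),
        ({"report", "print"}, 11),
        ({"import", "csv"}, 22),
    ]

    def estimate(task_keywords):
        # Mean of actuals for past tasks sharing at least one keyword.
        similar = [hours for kw, hours in history if kw & task_keywords]
        return sum(similar) / len(similar) if similar else None

    print(estimate({"report"}))  # -> 12.5 hours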

Sometimes the estimate I need to give is a compound one, that is, it's
a complex task that needs to be broken down into sub-tasks (pretty
much what a project is). In this case I gather a series of estimates
and join them together to make a bigger one. In this case you need to
send your boss away and tell them when they can expect the estimate.

In answer to the question: when you have 5 fields and want to make it
100 fields, just multiply the 5-field estimate by 20. This results in
what is called a 'rough ballpark' estimate.

It's called a ballpark estimate because it's accurate enough to work
from, but not precise, because it doesn't take into account all the
environmental factors and production overheads that occur in s/w
development. The art in creating an accurate estimate is factoring in
everything that may affect the project (the risk).
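
As a quick back-of-the-envelope version of that (every figure below is
invented for illustration):

    # Linear scale-up gives the 'rough ballpark'; a risk multiplier
    # from historical actuals accounts for what the linear scale misses.
    hours_for_5_fields = 6.0          # assumed historical figure
    ballpark = hours_for_5_fields * (100 / 5)   # -> 120 h
    risk_factor = 1.3                 # assumed environmental overhead
    refined = ballpark * risk_factor  # -> 156 h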

>2. Can you recommend a good book or, even better, good online training
>(best with an assessment or exam) for Function Points or Use Case
>Points estimation?

Not really; just read every article you can get your hands on.

The important field you need to study is called 'software metrics'.
Basically, you need to gather data on the process, and you can't gather
accurate data unless you have the right questions. Your questions
will be on size, cost and duration.

On top of software metrics, the other important field is project
metrics (a subset of business metrics). Once you have your questions,
liaise with your PM - most of the data for that is in the project plan
(look at historical actuals). You'll need to know things like work
rates, costings etc.

There are formal techniques for estimation; most of them are
mathematical claptrap, but they do serve a useful purpose in indicating
what kind of questions you should be asking in order to gather your
data.

The Use Case example posted earlier gave some examples of software
metrics (such as McCabe cyclomatic complexity etc.).

Another really good subject area to learn about is BPE (business
process engineering - or BPR, as it's often called). Inside this area
is a technique called Business Process Analysis. The end result is a
set of business rules. Business rules often get turned into a set
of functional requirements and non-functional requirements that form
the RFP. Functional requirements can be broken down into function
points (bits of the system to be developed) and non-functional
requirements can be broken down into other areas of work that need to
be done (this gives you an idea of your environmental overhead).

You calculate the estimate based on both the functional and
non-functional requirements. The result is then multiplied by your
company's work rate and hourly cost to get a rough idea of size,
duration and cost.
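
The arithmetic is straightforward; here is a sketch with invented
figures (in practice each one comes from your project plan's
historical actuals):

    # Size -> effort -> duration -> cost, as described above.
    function_points = 120             # from the functional requirements
    nonfunctional_overhead = 0.25     # extra work from non-functional reqs
    hours_per_fp = 8.0                # company work rate (historical)
    hourly_cost = 95.0                # loaded hourly rate

    effort_hours = function_points * (1 + nonfunctional_overhead) * hours_per_fp
    duration_weeks = effort_hours / (4 * 38)  # assuming 4 people, 38 h/week
    cost = effort_hours * hourly_cost         # -> 1200 h, ~7.9 weeks, $114,000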


>3. How do you estimate effort for the RFP responses - when you do not
>have enough details to perform Use Case or Function Points analysis?

RFP responses are not formal estimations; they are just ballparks at
best ('ballpark' is a technical term for a roughly accurate guess). Note
that in large projects there is a formal methodology for performing
RFPs - usually it involves a bucket load of analysis. You'll often
hear about an RFP costing several million dollars to produce. But for
most s/w development, the formal process isn't used.

In normal s/w development, you give a ballpark estimate (otherwise
called a guess), then as the project goes on and you get better data,
you continuously refine the estimate.

Service contracts are really good for highlighting this process. In a
service contract the customer will often tell you several months in
advance what work is coming up. You'll then give a best guess
estimate for each chunk of work, so that the customer can fix the
budget for it (often they will choose to drop work based on your
estimate).

When the actual task comes along, you'll then do some analysis and
probably give a more accurate estimate (use of function points often
works). On this one, they'll probably give the yay or nay to start
the work. (Note: a good services company will have a clause in its
contract defining the difference between the initial estimate and the
one given after the initial analysis is done. Performance KSIs are
always measured against the latter.)

Once you have completed that [project] work, you have the actual time
taken (you use this data in your next estimate for any similar work).

Once you have gone through this cycle with a customer you now have a
set of data, and herein lies the industry problem.

Most companies won't gather the data, or they will discard the data
when the project or task finishes. There are many stupid people in
the industry who believe all projects are different and you should
start again for each one (only the methodology should be re-used).

Intelligent people realise that, within an organisation, software
development is simply a production-line technique. They will take
data from one project and use it on another where the techniques are
similar. Do enough projects and eventually you'll have enough data
to provide complete coverage. Do enough jobs of a similar nature and
your estimates become refined to your actual historical averages.

Once you start doing this, the initial ballpark estimate described
above moves to being an estimate based on similar work actually
completed, and is no longer a guess. In the trade, this is called
process maturity (a subset of software maturity).

An organisation reaches a certain level of maturity when all of its
estimates (both at the RFP level and the programmer level) are based
on historical averages (tweaked for specific differences).

Once you hit this level of maturity, you should notice a huge drop in
project costs. So all the extra effort does pay off in the long run.

Andy

From: Phlip on
Tomasz wrote:

> I understand you favour XP approach and estimation based on experience
> or rather actual historical performance.
>
> Unfortuanatelly the point at which we normally have to make the
> estimate is the RFP response or at best after getting requirements.
> In such case you do not have the comfort of having the hard data and
> have to use some kind of estimation.

Religion does not apply. You must work with a different meaning of
"estimate" and a different value of it.

--
Phlip
http://www.greencheese.org/ZeekLand <-- NOT a blog!!!


From: tomasz on
You are right; sometimes it feels like it. Especially since I've seen
many people create - and have myself been forced to create - estimates
and schedules that give the desired result.

I once had a situation where the customer's RFP gave very incomplete
information - basically some description of the business flows, but
nothing about the associated data and its processing (amounting to
more than 60% of the application). The customer refused to provide
more details. The only choice I had was to compare it with a similar
application we had done previously, which, due to differences between
the customers, was quite inadequate.