From: Tomasz Zajaczkowski on

>> 1. The final estimate seems to me a bit excessive for such a simple
>> application.
> Remember that the constants used in the AUCP computation should be
> based on empirical data for the particular development environment.
> For example, the sort of software being developed or even the style of
> writing use cases can affect them. One can use "average" constants for
> lack of better data, but they need to be updated with local experience.

Sure, but there should always be some sanity check on the result, especially
since the same author has an article about function points in which he arrives
at an estimate of 7 days from the same requirements.
As I understand it, AUCP covers the variation in the sort of software, experience,
etc. by applying technical and environmental factors. These affect the estimate
considerably, but they cannot compensate for wrong assumptions or input
(which I believe is the case here). As usual: garbage in, garbage out.
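(For reference, here is my understanding of the arithmetic - a minimal
sketch of Karner's Use Case Points method, which the AUCP variants adjust.
The actor/use-case counts, factor ratings, and the classic 20 hours-per-point
constant below are purely illustrative assumptions, not the article's values.)

# Sketch of Karner's Use Case Points computation (the basis of AUCP
# variants). All counts and ratings are illustrative, not the article's.

# Unadjusted Actor Weight: simple=1, average=2, complex=3 per actor.
uaw = 2 * 3                      # e.g. two complex (GUI/human) actors

# Unadjusted Use Case Weight: simple=5, average=10, complex=15.
uucw = 1 * 10                    # e.g. one average use case (4-7 transactions)

uucp = uaw + uucw                # Unadjusted Use Case Points

# Technical Complexity Factor: 13 weighted factors rated 0-5;
# assume a weighted sum (TFactor) of 30 for illustration.
tcf = 0.6 + 0.01 * 30            # -> 0.90

# Environmental Factor: 8 weighted factors rated 0-5;
# assume a weighted sum (EFactor) of 15 for illustration.
ecf = 1.4 - 0.03 * 15            # -> 0.95

ucp = uucp * tcf * ecf           # adjusted Use Case Points

# Karner suggested 20 person-hours per point; this is exactly the
# constant that should be replaced with local historical data.
effort_hours = ucp * 20
print(f"UCP = {ucp:.2f}, effort = {effort_hours:.0f} hours")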

>> 2. As I understand from other articles, this estimate includes
>> Analysis, Design, Implementation and Unit Testing; it does not
>> include Testing effort, which would add another 90 hours.
>>
> FWIW, I would expect /all/ developer effort to be included in the
> estimate. [I could see an argument for estimating testing effort
> separately because notions like 'complexity' might be different for
> software and test construction. In that case, though, I would expect
> unit testing to be separated out as well.]

I was referring to QA effort. I've read a couple of articles about
estimating testing effort based on use cases (with formulas similar to those
for development), so I assumed that UCP covers development effort (including
developer testing) only. However, I might have jumped to conclusions too fast.

>> Can anyone give me their practical comments on this example?
>> I mean, is this use case correct? If not, what would a correct use
>> case look like?
> In the methodology I use we only employ use cases very informally, so
> I am loath to respond here explicitly.
>
> However, I would point out that, in general, all use cases do is
> recast /existing/ requirements into a form that makes it easier for
> the developer to implement against them. As a result the ultimate
> criterion for the utility of use cases is that they /do/ make it easier
> for the developer. That leaves a lot of room for variation,
> especially where the developers already have substantial knowledge of
> the problem space (i.e., the developers can "read between the lines").

This seems to be quite a big advantage, especially since it appears to provide
easy traceability from business use cases, through system use cases, to
implementation and tests.


From: Tomasz Zajaczkowski on
>> So far I've been using functional decomposition and estimation on a
>> fine grained level based on previous experience. I've combined it
>> with an estimation for a minimum and maximum effort to get an idea of
>> possible risk. We usually do the estimates in a group and then
>> discuss the differences to find the holes in our assumptions as well.
>>
> This is called an 'informed best guess'. It works, so long as everyone is
> fully aware of all the details and gotchas in the project. As soon as
> unknowns get factored in, the estimate will become inaccurate.

Correct me if I'm wrong, but in any case an estimate is just an "estimate",
isn't it? If it is based on historical performance data it will probably
be more accurate, but I wonder how much better it can be. It is quite rare
that the next project you work on is identical to the previous one; you can
usually find some relation, but there are always unknowns. Each estimation
method requires a lot of assumptions which might not hold in the end. Even
historical data might prove to be wrong for the next project (e.g. if you
base it on the performance of a team that undergoes significant personnel
changes before the next project starts).

Even the results of "proper" estimates are said to have a potential 400% error
in the Inception phase.
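(That 400% figure is the usual "cone of uncertainty" rule of thumb. A toy
sketch of what it implies - the phase multipliers are the commonly cited
textbook values from Boehm, used here only for illustration:)

# Toy "cone of uncertainty": the same point estimate implies very
# different ranges depending on how far the project has progressed.
# Multipliers are the commonly cited textbook values (Boehm).
CONE = {
    "inception":           (0.25, 4.00),
    "approved definition": (0.50, 2.00),
    "requirements done":   (0.67, 1.50),
    "design done":         (0.80, 1.25),
}

def estimate_range(point_estimate_hours, phase):
    low, high = CONE[phase]
    return point_estimate_hours * low, point_estimate_hours * high

low, high = estimate_range(250, "inception")
print(f"250h quoted at inception really means {low:.0f}h to {high:.0f}h")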

>> However, finally, on a low level it is based on judgement and experience
>> and does not seem to be as accurate as we would like. Based on this
>> approach I had been thinking that the expected effort would be around 1
>> week, with a maximum of 2 weeks (with fully perfect documentation, etc.)
>> and possibly 2-3 days if we were under real pressure. Thus I was really
>> surprised when I saw the 200-300 hours estimate.
> There is no mathematical model that gives an accurate estimate. A good
> estimate needs to factor in the capability and maturity of the software
> organisation. Since this cannot be calculated in advance, historical
> data is always used. The more recent the historical data, the more
> accurate the estimate. The estimates are then refined taking into
> account the results of risk analysis - initially a subset of the
> requirements analysis phase, but an ongoing process throughout the
> project - which is why estimates are always refined.

Sure, we revise our estimate after each significant phase, as by then we have
a much better idea of what we are trying to achieve.

> Microsoft are a good example of refining estimates. They start off
> by giving a year, then as that year gets close they'll say - 2nd
> half of the year, then as that gets closer they'll say 3rd quarter,
> then October, then they'll give an actual date. This is how real
> estimates work. An RFP usually starts out as a broad estimate,
> and project planning is the process that refines it.

All of this is fine; however, in many cases you need to create an estimate
which is the basis for a contractual agreement. In such cases you need to
provide one number (at least in our case) and stick to it.

>> I would like to ask you a couple of questions:
>> 1. It seems to me that use case estimation is based on processing
>> (transactions) and ignores the complexity of data. I mean, let's assume
>> you have the same use case, same transactions, but instead of capturing
>> 4-5 fields you need to capture 100 (no special processing, just simple
>> validation). How will this affect the use case estimate?
> Here is a problem. To get the Use Case you must have done some
> analysis, and in order to do some analysis you've probably already
> given the customer an estimate - which can't actually be done
> accurately until you have the Use Case.
>
> What that means is that normally 2 estimates are given. The first one
> is normally based on experience. As a programmer I gather data on
> every task I do, and I keep that data in a spreadsheet. When the boss
> comes up and says "how long to do X?", I can usually find a couple of
> very similar tasks in the s/sheet that allow me to give a fairly
> accurate reply.
>
> Sometimes the estimate I need to give is a compound one; that is, it's
> a complex task that needs to be broken down into subtasks (pretty
> much what a project is). In this case I gather a series of estimates
> and join them together to make a bigger one. In this case you need to
> send your boss away, and tell them when they can expect the estimate.

Sure, this is what we do as well, based on historical data wherever possible.
However, in some cases it will still be a "guesstimate", especially when the
new project uses new technology or covers a different problem domain. Even
performing some quick experiments (on the new technology) to get a feel
for it will not change the fact that it is a "guesstimate".
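(For what it's worth, the spreadsheet approach described above is essentially
estimation by analogy, and it is easy to mechanize. A minimal sketch - the
task log and the similarity features are invented for illustration:)

# Minimal estimation-by-analogy sketch: log finished tasks, find the
# most similar past tasks, and average their actual effort.
# The records and features below are invented examples.

# Each record: ((screens, entities, new-tech flag), actual hours).
history = [
    ((3, 2, 0),  40.0),
    ((5, 4, 0),  75.0),
    ((2, 1, 1),  35.0),
    ((6, 5, 1), 120.0),
]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def estimate(new_task, k=2):
    """Average the actual effort of the k most similar historical tasks."""
    nearest = sorted(history, key=lambda rec: distance(rec[0], new_task))[:k]
    return sum(actual for _, actual in nearest) / k

print(f"Estimated effort: {estimate((4, 3, 0)):.0f} hours")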

> In answer to the question, when you have 5 fields and want to make it
> 100 fields, just multiply out the 5 field estimate 20 times. This
> results in what is called a 'rough ballpark' estimate.

Did I sound so lame? I was actually trying to point out that I cannot see
(so far) how this case is covered by AUCP, as I think the use case description
would not change even if I added many more data fields.

> It's called a ballpark estimate because it's accurate enough to work
> from, but not precise, because it doesn't take into account all the
> environmental factors and production overheads that occur in s/w
> development. The art of creating an accurate estimate lies in factoring
> in everything that may affect the project (the risk).

Sure; however, you will usually get quite a wide range (especially in the
initial phases). And although giving a range is much better than giving
one value, many people assume that the lower bound is the actual estimate
and the higher one is "just your buffer".
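(One way to make the range honest instead of a "buffer" is classic
three-point, or PERT, estimation. The formulas are the standard beta
approximation; the input hours below are made up:)

# Three-point (PERT) estimation: combine optimistic (O), most likely (M)
# and pessimistic (P) values into an expected effort and a spread.
# Standard PERT formulas; the sample inputs are made up.

def pert(optimistic, most_likely, pessimistic):
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

expected, sd = pert(40, 80, 240)         # hours
print(f"expected = {expected:.0f}h, std dev = {sd:.0f}h")
# Quoting expected +/- 2 standard deviations makes the risk explicit,
# instead of letting the customer read the lower bound as the commitment.
print(f"range to quote: {expected - 2*sd:.0f}h to {expected + 2*sd:.0f}h")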

>> 2. Can you recommend a good book or, even better, good online training
>> (best with assessment or exam) for Function Points or Use Case Points
>> estimation?
>>
> Not really, just read every article you can get your hands on.
>
> The important field you need to study is called 'software metrics'.
> Basically, you need to gather data on the process, but you can't gather
> accurate data unless you have the right questions. Your questions
> will be on size, cost and duration.

All of those are important, and you cannot really improve much unless you
start measuring. However, what and how you measure seems to make a big
difference. In any case it seems to me that even if you measure, estimation
is more of an art than a science (maybe I'm just a non-believer), and whether
you deliver on the estimate or not will depend more on project management and
handling customer expectations (cutting features, revising cost, agreeing on
project extensions) than on the accuracy of the estimate. The results of
published surveys (e.g. on the accuracy of estimates for companies at
different CMM levels) do not mention anything about the variation between
initial and final scope. It might as well be that more mature organizations
are better at PR.

Anyway, what I'm trying to say is that going through the whole process yourself
and learning from your own mistakes takes a long time. All the estimation and
measurement techniques, books, and articles are highly theoretical and in most
cases not easily transferable to practical use. On the other hand, practical
examples and advice seem to be scarce; thus, unless you work in a company that
has already matured, your improvement options are quite limited. Especially
since you can see a lot of testimonies, on the Internet and among your
customers and friends, of companies that have failed to achieve the next CMM
level (for example) or have lots of documented procedures which just take up
shelf space. Usually the reasons are superficial adoption, lack of
understanding, and all sorts of mistakes and wrong assumptions.
I would prefer to avoid this as much as possible. Is there any shortcut that
can get you there faster? And I do not mean avoiding the work.

Tomasz


From: Nick Malik [Microsoft] on
"Tomasz" <tzajacz(a)softhome.net> wrote in message
news:1127306247.977249.214700(a)g44g2000cwa.googlegroups.com...
>
> However, all the
> methodologies, estimation techniques, even measures (as it makes a big
> difference what and how you measure, not to mention how you interpret
> the results) described in books and articles are highly theoretical.

I disagree. Function point counting is not theoretical. It has been
practiced in public, with records kept, for over 25 years.

> Translation to practical use is not that straightforward and examples
> or practical advice seem to be scarce (does it mean that people do not
> want to share it, or that it works just in theory?).

It means you are looking in the wrong place.

> Thus unless you
> work in a mature company, the path to improvement is difficult.

It always is. Trust me on that.

> Especially if you look at others that have failed to achieve
> improvement (ISO, CMM, etc.) or produced large procedure manuals that
> just take space on the shelf, and you do not want to repeat their
> mistakes or achieve superficial improvement.
> I do not want to waste years on finding the proper path. Learning on
> your own mistakes is all fine, but takes really long time.
> Isn't there any shortcut? And I do not mean avoiding the work.

First, get trained in Function Point Counting. The place to start is
www.ifpug.org
Then get a copy of one of Capers Jones' books. He is a researcher who
collected FP counts for projects throughout the industry and published them.
I recommend "Estimating Software Costs", but his more recent "Software
Assessments, Benchmarks, and Best Practices" is an excellent book as well.

Honestly, I hadn't heard of use case points before this post. I am
skeptical. What I do know: FP counting is rigorous. You have to take a
very rigorous exam in order to get the certification (CFPS). Two certified
counters, given the same requirements, should reach the same count (within a
margin of error of 10%), without ever speaking to each other. Try THAT with
other methods!

Then, using the metrics of Capers Jones, you can get to a great initial
effort estimate that includes each of the different categories of software
effort, including analysis, design, coding, unit test, functional test,
system test, integration test, deployment, IV&V, project management, and
others. Depending on the type of project you do, you have a chart to decide
which measurement to include in the count.
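To make the counting side concrete, here is a rough sketch of an unadjusted
IFPUG count plus one of Capers Jones' published rules of thumb. The weights
are the standard IFPUG values; the sample counts are invented, and a real
count follows the full IFPUG rules, not this toy:

# Sketch of an unadjusted IFPUG function point count. Weights are the
# standard IFPUG values; the sample counts are invented. A real count
# classifies each function as low/average/high per the IFPUG rules.
WEIGHTS = {  # function type: (low, average, high)
    "EI":  (3, 4, 6),    # external inputs
    "EO":  (4, 5, 7),    # external outputs
    "EQ":  (3, 4, 6),    # external inquiries
    "ILF": (7, 10, 15),  # internal logical files
    "EIF": (5, 7, 10),   # external interface files
}
LEVEL = {"low": 0, "average": 1, "high": 2}

# Hypothetical small application: (type, complexity, count).
counts = [("EI", "average", 4), ("EO", "low", 2),
          ("EQ", "low", 3), ("ILF", "low", 2)]

ufp = sum(WEIGHTS[t][LEVEL[c]] * n for t, c, n in counts)
print(f"Unadjusted FP = {ufp}")

# One of Jones' rules of thumb: schedule in calendar months is roughly
# FP raised to the 0.4 power (the exponent varies by project type).
print(f"Rough schedule: {ufp ** 0.4:.1f} calendar months")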

There are software tools that you can use to make this easier, and there is
a web site that collects and distributes count information (for a fee, of
course... a hefty fee). I find it best to use the tools, but start with the
industry averages from Capers Jones and then fine-tune for your particular
business by measuring and then tracking a few projects.

I am an agilist. I believe in Scrum and adaptive methods. I also do not
believe that you can tell your customer "I don't know" when he asks you for
an estimate of cost.

Note that using RUP or using Agile rarely changes the cost estimate when you
compare the scope performed with the cost to perform it. In other words, if
Team A creates software with features 1,2,3,4,5 using RUP and Team B creates
software with the same features using Agile, the costs will be about the
same. On the other hand, I have seen some interesting effects of using
Agile. In my experience, scope creep tends to be less in an Agile project.
Also, Agile projects are more likely than RUP projects to deliver fewer
features than originally planned (there are many schools of thought that
try to explain why this is). Also, the defect count found after
deployment seems to be lower if the project performed rigorous code
inspection, either TSP/PSP style or through pair programming. Caveat
Emptor: I have no scientific study to prove it. This is anecdotal evidence
only.

So, in conclusion, start with Function Point counting and go from there.

--
--- Nick Malik [Microsoft]
MCSD, CFPS, Certified Scrummaster
http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not
representative of my employer.
I do not answer questions on behalf of my employer. I'm just a
programmer helping programmers.
--


From: AndyW on
On Wed, 21 Sep 2005 03:23:49 -0700, Tomasz Zajaczkowski
<tzajacz(a)softhome.net> wrote:

<snip>
>
>Correct me if I'm wrong, but in any case an estimate is just an "estimate",
>isn't it? If it is based on historical performance data it will probably
>be more accurate, but I wonder how much better it can be. It is quite rare
>that the next project you work on is identical to the previous one; you can
>usually find some relation, but there are always unknowns. Each estimation
>method requires a lot of assumptions which might not hold in the end. Even
>historical data might prove to be wrong for the next project (e.g. if you
>base it on the performance of a team that undergoes significant personnel
>changes before the next project starts).
>
Not really; most estimates form the basis of a contract somewhere
along the line, either between the developer and the PM or between the
company and the customer. It pays to get them correct.

The problem is, 'estimate' has a contextual meaning as well as a
technical one. There isn't really another industry technical term that
can be used in its place and holds the same meaning.

>Even the results of "proper" estimates are said to have a potential 400% error
>in the Inception phase.
I tend to be 95% to 100% accurate with mine - it depends on the scenario.
When people deliberately ask me for a guess, I've been known to tell
them to take the proverbial soak in the lake.

<snip>
>
>> There is no mathematical model that gives an accurate estimate. A good
>> estimate needs to factor in the capability and maturity of the software
>> organisation. Since this cannot be calculated in advance, historical
>> data is always used. The more recent the historical data, the more
>> accurate the estimate. The estimates are then refined taking into
>> account the results of risk analysis - initially a subset of the
>> requirements analysis phase, but an ongoing process throughout the
>> project - which is why estimates are always refined.
>
>Sure, we revise our estimate after each significant phase, as by then we
>have a much better idea of what we are trying to achieve.

It then becomes a matter of process improvement: change the
environment so that you have the better idea before you do the phase
rather than after.
>
>> Microsoft are a good example of refining estimates. They start off
>> by giving a year, then as that year gets close they'll say - 2nd
>> half of the year, then as that gets closer they'll say 3rd quarter,
>> then October, then they'll give an actual date. This is how real
>> estimates work. An RFP usually starts out as a broad estimate,
>> and project planning is the process that refines it.
>
>All of this is fine; however, in many cases you need to create an estimate
>which is the basis for a contractual agreement. In such cases you need to
>provide one number (at least in our case) and stick to it.

In my earlier post I said that you normally provide two: the first
being the guess, the second - after some form of analysis - the one
that goes into the contract.

I don't think anyone in any industry would volunteer to do work without
knowing what it is first.

It would also be feasible in most contracts to have a clause that
allows the estimate to be re-negotiated. This usually links to
the risk management plan and the customer change control system.

>
>>> I would like to ask you a couple of questions:
>>> 1. It seems to me that use case estimation is based on processing
>>> (transactions) and ignores the complexity of data. I mean, let's assume
>>> you have the same use case, same transactions, but instead of capturing
>>> 4-5 fields you need to capture 100 (no special processing, just simple
>>> validation). How will this affect the use case estimate?
>> Here is a problem. To get the Use Case you must have done some
>> analysis, and in order to do some analysis you've probably already
>> given the customer an estimate - which can't actually be done
>> accurately until you have the Use Case.
>>
>> What that means is that normally 2 estimates are given. The first one
>> is normally based on experience. As a programmer I gather data on
>> every task I do, and I keep that data in a spreadsheet. When the boss
>> comes up and says "how long to do X?", I can usually find a couple of
>> very similar tasks in the s/sheet that allow me to give a fairly
>> accurate reply.
>>
>> Sometimes the estimate I need to give is a compound one; that is, it's
>> a complex task that needs to be broken down into subtasks (pretty
>> much what a project is). In this case I gather a series of estimates
>> and join them together to make a bigger one. In this case you need to
>> send your boss away, and tell them when they can expect the estimate.
>
>Sure, this is what we do as well, based on historical data wherever possible.
>However, in some cases it will still be a "guesstimate", especially when the
>new project uses new technology or covers a different problem domain. Even
>performing some quick experiments (on the new technology) to get a feel
>for it will not change the fact that it is a "guesstimate".

I doubt you'd very often come across a software development project
that is actually 'new' - that is, one that hasn't been experienced by
someone else who can supply metric data.

The other thing is that when something is 'new' you can often relate it
to something else that is similar. This removes the need for guesstimates
and puts things closer to certainties.

Finally, as with any contractual estimate, there is always the game of
'overs and unders'. That is, estimates average out.



>
>> It's called a ballpark estimate because it's accurate enough to work
>> from, but not precise, because it doesn't take into account all the
>> environmental factors and production overheads that occur in s/w
>> development. The art of creating an accurate estimate lies in factoring
>> in everything that may affect the project (the risk).
>
>Sure; however, you will usually get quite a wide range (especially in the
>initial phases). And although giving a range is much better than giving
>one value, many people assume that the lower bound is the actual estimate
>and the higher one is "just your buffer".

The range supplied is an indicator of risk. Narrow indicates minimal
risk, because you're more certain of what is required; wide indicates
large risk, because you don't have accurate enough information to be
precise with the estimate. In that case, go back to the customer.

Again, this is an issue of process maturity, not estimation.

>
>>> 2. Can you recommend a good book or, even better, good online training
>>> (best with assessment or exam) for Function Points or Use Case Points
>>> estimation?
>>>
>> Not really, just read every article you can get your hands on.
>>
>> The important field you need to study is called 'software metrics'.
>> Basically, you need to gather data on the process, but you can't gather
>> accurate data unless you have the right questions. Your questions
>> will be on size, cost and duration.
>
>All of those are important, and you cannot really improve much unless you
>start measuring. However, what and how you measure seems to make a big
>difference. In any case it seems to me that even if you measure, estimation
>is more of an art than a science (maybe I'm just a non-believer), and whether
>you deliver on the estimate or not will depend more on project management
>and handling customer expectations (cutting features, revising cost,
>agreeing on project extensions) than on the accuracy of the estimate. The
>results of published surveys (e.g. on the accuracy of estimates for
>companies at different CMM levels) do not mention anything about the
>variation between initial and final scope. It might as well be that more
>mature organizations are better at PR.
lol. It's a refined art, based on experience (which is historical
metrics for people, if you think of it that way).

Software development is pretty much like management - it's both an art
and a science. Managing the project and managing the customer is
where it's all at.

The CMM is pretty good; I've been using it for years. The biggest problem
is that the initial effort for a level 2 organisation to move to a level 3
capability is quite difficult for short-sighted managers to cope with,
especially in an industry where the average employment cycle for staff
is only 1.5 years. The thing is, mature organisations tend to keep their
staff longer, hence their processes and estimates become more refined.

>
>Anyway, what I'm trying to say is that going through the whole process
>yourself and learning from your own mistakes takes a long time.
I've been at it for 20+ years and I'm still learning :)

>All the estimation and measurement
>techniques, books, and articles are highly theoretical and in most cases
>not easily transferable to practical use. On the other hand, practical
>examples and advice seem to be scarce; thus, unless you work in a company
>that has already matured, your improvement options are quite limited.
>Especially since you can see a lot of testimonies, on the Internet and
>among your customers and friends, of companies that have failed to achieve
>the next CMM level (for example) or have lots of documented procedures
>which just take up shelf space. Usually the reasons are superficial
>adoption, lack of understanding, and all sorts of mistakes and wrong
>assumptions.
>I would prefer to avoid this as much as possible. Is there any shortcut
>that can get you there faster? And I do not mean avoiding the work.

The best thing to do is what I tend to do when I start at a new
organisation. Fire up your spreadsheet and start gathering data
about your own performance. Apply the metrics techniques to that;
after a while you'll start to understand what your own performance is
and see ways to improve it. Once you can do that, then you have
something to demonstrate to others.

Better still, if you keep a developer log book on your desk, you can read
through it and gather some initial data to test your model with. Then
the rest really is down to time.

At one company I used to really annoy the test team by telling them
the probability of a defect occurring in a certain area of a piece of
code they were about to test. I also used to deliver zero-defect
code (the probabilities used to annoy them because they knew I wouldn't
give them a defect anyway). And my estimates were accurate down to the
minute (people ended up checking their watches when I delivered code -
the big laugh was when I came walking past them with a cup of coffee 5
minutes before). It got to the stage that one of the senior managers had
to ask me to put defects in my code so the test team had something to
do. (It raises the point that you have to be very, very careful
demonstrating advanced capability to people/managers that are not
close to that level - you can end up with bad reviews because of the
politics.)

After demonstrating the capability, I would give a talk in one of the
team meetings on the techniques I was using and how they worked. It
was always good for a bout of enthusiasm for a few weeks or so
(usually until the PM messed up and the project hit a crunch phase).

But at the end of the day, the techniques that I use (and this applies
to all aspects of software development, as well as metrics and
estimates) are the ones I've created myself over the last few decades.
I use the written material only as guidelines and things to think
about.