From: Tomasz on
>> 1. The final estimate seems to me a bit excessive for such a simple application.

> Remember that the constants used in the AUCP computation should be based on empirical data for the particular development environment. For example, the sort of software being developed or even the style of writing use cases can affect them. [One can use "average" constants for lack of better data; they need to be updated with local experience.]

Sure, but there should always be some sanity check on the result.
Especially since the same author has an article about function points
where he arrives at an estimate of 7 days for the same requirements.
As I understand it, AUCP covers the variation in the sort of software,
experience, etc. by applying technical and environmental factors -
which affect the estimate considerably, but in any case they cannot
compensate for wrong assumptions/input (which I believe is the case
here). As usual: garbage in, garbage out.
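
[For reference, a minimal sketch of the standard Karner-style UCP
arithmetic as I understand it. The actor/use-case weights, the
0.6/0.01 and 1.4/-0.03 factor constants, and the 20 hours per UCP are
just the commonly quoted defaults, not values taken from the article
in question:]

# Rough sketch of the usual Karner-style UCP computation.
# Weights and hours-per-point are the commonly cited defaults,
# not calibrated values.

def ucp_estimate(actors, use_cases, tcf_score, ecf_score,
                 hours_per_ucp=20):
    # actors: list of 'simple' | 'average' | 'complex'
    # use_cases: list of transaction counts, one per use case
    actor_weight = {'simple': 1, 'average': 2, 'complex': 3}
    uaw = sum(actor_weight[a] for a in actors)

    def uc_weight(transactions):
        if transactions <= 3:
            return 5           # simple use case
        elif transactions <= 7:
            return 10          # average use case
        return 15              # complex use case

    uucw = sum(uc_weight(t) for t in use_cases)
    uucp = uaw + uucw                 # unadjusted use case points

    tcf = 0.6 + 0.01 * tcf_score      # technical complexity factor
    ecf = 1.4 - 0.03 * ecf_score      # environmental factor
    ucp = uucp * tcf * ecf            # adjusted use case points
    return ucp * hours_per_ucp        # effort in person-hours

# e.g. one average actor, one 4-transaction use case, factor scores
# of 50 and 15: ucp_estimate(['average'], [4], 50, 15) -> ~251 hours

Everything upstream of hours_per_ucp is just counting and weighting,
which is why bad inputs go straight through to the result.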
>> 2. As I understand from other articles this estimate includes Analysis, Design, Implementation and Unit Testing; it does not include Testing effort, which would add another 90 hours of effort.

> FWIW, I would expect /all/ developer effort to be included in the estimate. [I could see an argument for estimating testing effort separately because notions like 'complexity' might be different for software and test construction. In that case, though, I would expect unit testing to be separated out as well.]


I was referring to QA effort. I mean I've read a couple of articles
about estimating Testing effort based on Use Cases (similar formulas to
those for Development), thus I assumed that UCP covers Development
(with developer testing) effort only. However, I might have jumped to
conclusions too fast.

>> Can anyone give me their practical comments on this example? I mean, is this use case correct? If not, what would a correct use case look like?
> In the methodology I use we only employ use cases very informally, so I am loathe to respond here explicitly.
>
> However, I would point out that, in general, all use cases do is recast /existing/ requirements into a form that makes it easier for the developer to implement against them. As a result the ultimate criterion for the utility of use cases is that they /do/ make it easier for the developer. That leaves a lot of room for variation, especially where the developers already have substantial knowledge of the problem space (i.e., the developers can "read between the lines").


This seems to be quite a big advantage, especially since it seems to
provide fairly easy traceability from business use cases, through
system use cases, to implementation and tests.

From: Tomasz on
I had written a long answer to your post. Unfortunately it got deleted
by mistake.

Anyway here is the gist of it:

I have always thought that an estimate is just an "estimate" - correct
me if I'm wrong - and is affected by so many factors that it is just a
guideline and not a measure. It is also affected by the stage you are
in (according to some sources the estimation error at the Inception
stage can be up to 400%). I've seen some graphs showing the relation of
estimation accuracy to the CMM level of the company. However, as far
as I've seen, in many cases delivering on time has more to do with your
project management and handling of customer expectations than with
your maturity (or maybe the examples I've seen were not mature). As
those graphs do not mention the relation between initial and final
scope, it might as well be that companies at higher maturity levels are
simply better at PR - in which case they would be driving the actual
effort to meet the estimate rather than the estimate to forecast the
effort.

Anyway, all of what you are saying is fine. However, all the
methodologies, estimation techniques, even measures (as it makes a big
difference what and how you measure, not to mention how you interpret
the results) described in books and articles are highly theoretical.
Translating them into practical use is not that straightforward, and
examples or practical advice seem to be scarce (does that mean people
do not want to share them, or that they only work in theory?). Thus,
unless you work in a mature company, the path to improvement is
difficult - especially if you look at others that have failed to
achieve improvement (ISO, CMM, etc.) or produced large procedure
manuals that just take up space on the shelf, and you do not want to
repeat their mistakes or achieve only superficial improvement.
I do not want to waste years on finding the proper path. Learning from
your own mistakes is all fine, but it takes a really long time.
Isn't there any shortcut? And I do not mean avoiding the work.

One more comment:
>In answer to the question, when you have 5 fields and want to make it 100 fields, just multiply out the 5 field estimate 20 times. This results in what is called a 'rough ballpark' estimate.

Did I sound so lame? I was trying to ask about something else. So far
I do not see how such a change affects the estimate when using AUCP
(unless I'm missing something). As the additional fields do not involve
any processing (except validation), there are no additional
transactions and thus no effect on the AUCP; it does not seem to
increase complexity either, so the complexity factors do not change.
Thus it seems to me that AUCP will show no difference in the resulting
estimate whether I have 5 or 100 data fields.
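
[A toy illustration of the point, assuming the usual transaction-count
weighting rather than anything specific to the article:]

# A data-entry use case is weighted by its transaction count,
# not by how many fields each transaction carries.

def use_case_weight(transactions):
    # usual thresholds: <=3 simple, 4-7 average, >7 complex
    if transactions <= 3:
        return 5
    if transactions <= 7:
        return 10
    return 15

# One "submit form" transaction, whether the form has 5 fields...
print(use_case_weight(1))    # -> 5
# ...or 100 fields: the field count never enters the formula.
print(use_case_weight(1))    # -> 5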

Tomasz

From: H. S. Lahman on
Responding to Tomasz...

>>>1. The final estimate seems to me a bit excessive for such a simple application.
>
>
>>Remember that the constants used in the AUCP computation should be based on empirical data for the particular development environment. For example, the sort of software being developed or even the style of writing use cases can affect them. [One can use "average" constants for lack of better data; they need to be updated with local experience.]
>
>
> Sure, but there should always be some sanity check on the result.
> Especially since the same author has an article about function points
> where he arrives at an estimate of 7 days for the same requirements.
> As I understand it, AUCP covers the variation in the sort of software,
> experience, etc. by applying technical and environmental factors -
> which affect the estimate considerably, but in any case they cannot
> compensate for wrong assumptions/input (which I believe is the case
> here). As usual: garbage in, garbage out.

I can't account for the inconsistency. FWIW, I agree with you that the
same author estimating the same software both ways should have addressed
that.

>>>2. As I understand from other articles this estimate includes Analysis, Design, Implementation and Unit Testing; it does not include Testing effort, which would add another 90 hours of effort.
>
>
>>FWIW, I would expect /all/ developer effort to be included in the estimate. [I could see an argument for estimating testing effort separately because notions like 'complexity' might be different for software and test construction. In that case, though, I would expect unit testing to be separated out as well.]
>
>
>
> I was referring to QA effort. I mean I've read a couple of articles
> about estimating Testing effort based on Use Cases (similar formulas to
> those for Development), thus I assumed that UCP covers Development
> (with developer testing) effort only. However, I might have jumped to
> conclusions too fast.

In my world Engineering performs its own Unit testing, Integration
testing, and Systems testing before the application is passed off to QA.
However, we do estimate that effort separately from the development
effort.

>>>Can anyone give me their practical comments on this example? I mean, is this use case correct? If not, what would a correct use case look like?
>>
>>In the methodology I use we only employ use cases very informally, so I am loathe to respond here explicitly.
>>
>>However, I would point out that, in general, all use cases do is recast /existing/ requirements into a form that makes it easier for the developer to implement against them. As a result the ultimate criterion for the utility of use cases is that they /do/ make it easier for the developer. That leaves a lot of room for variation, especially where the developers already have substantial knowledge of the problem space (i.e., the developers can "read between the lines").
>
>
>
> This seems to be quite a big advantage, especially since it seems to
> provide fairly easy traceability from business use cases, through
> system use cases, to implementation and tests.

That's probably true for CRUD/USER processing. However, IME for larger
applications that solve complex problems use cases don't provide very
rigorous traceability. The problem is that they are valid only at the
system inputs and treat the software as a black box. For larger
applications one needs requirements at the subsystem level and those
requirements often already include design decisions or incorporate
computing space constraints. That is, in a well-formed application the
subsystems will be organized in a DAG of client/service relationships
where the client subsystems define detailed requirements for the service
subsystems. Those detailed requirements will tend to reflect design
decisions already made for the client subsystems.

Typically the developers (wearing a Systems Engineering hat) create use
cases for individual subsystems that have greater detail and are more
focused than the user-based use cases that capture customer
requirements. They will also tend to be expressed in computing space
terms (e.g., network protocols, DBMS transactions, etc.). While those
use cases are ultimately derived from the system use cases and that is
traceable in the large, the mapping tends to be a bit murky in detail.


*************
There is nothing wrong with me that could
not be cured by a capful of Drano.

H. S. Lahman
hsl(a)pathfindermda.com
Pathfinder Solutions -- Put MDA to Work
http://www.pathfindermda.com
blog: http://pathfinderpeople.blogs.com/hslahman
(888)OOA-PATH



From: Tomasz on
> In my world Engineering performs its own Unit testing, Integration
> testing, and Systems testing before the application is passed off to
> QA.
> However, we do estimate that effort separately from the development
> effort.

In any case, if you are using historical data the division here should
not be that important, because your hours per use case should already
reflect full testing, shouldn't they?

> That's probably true for CRUD/USER processing. However, IME for
> larger applications that solve complex problems use cases don't
> provide very rigorous traceability. The problem is that they are
> valid only at the system inputs and treat the software as a black box.
> For larger applications one needs requirements at the subsystem level
> and those requirements often already include design decisions or
> incorporate computing space constraints. That is, in a well-formed
> application the subsystems will be organized in a DAG of
> client/service relationships where the client subsystems define
> detailed requirements for the service subsystems. Those detailed
> requirements will tend to reflect design decisions already made for
> the client subsystems.

For me this is actually an advantage, as I've been guilty a couple of
times of concentrating too much on the technicalities and forgetting a
bit about the user. It is also useful for UAT.

> Typically the developers (wearing a Systems Engineering hat) create
> use cases for individual subsystems that have greater detail and are
> more focused than the user-based use cases that capture customer
> requirements. They will also tend to be expressed in computing space
> terms (e.g., network protocols, DBMS transactions, etc.). While those
> use cases are ultimately derived from the system use cases and that is
> traceable in the large, the mapping tends to be a bit murky in detail.

It seems you are not very keen on use cases. What do you use instead
then?

Tomasz

From: H. S. Lahman on
Responding to Tomasz...

A clarification. I am actually retired now, so all my examples, etc.
are based on where I worked before retiring. Since I am not directly
involved in Pathfinder's tool development I can't speak authoritatively
as to how that is done. [As it happens, integrating use cases into the
development is an important part of Pathfinder's OO training.
Pathfinder just doesn't get into how the use cases themselves are
constructed and the training is limited to construction so things like
estimation are not directly addressed.]

>>In my world Engineering performs its own Unit testing, Integration
>>testing, and Systems testing before the application is passed off to
>>QA.
>>However, we do estimate that effort separately from the development
>>effort.
>
>
> In any case, if you are using historical data the division here should
> not be that important, because your hours per use case should already
> reflect full testing, shouldn't they?

Yes and no. We used empirical data for estimating. However, we
estimated testing, diagnosis, and rework separately from the actual
software construction. Also, we did not estimate using use case points
(see below). But if we had estimated from use case points, we would
have used a different set of empirically adjusted constants for things
like complexity and effort when we estimated testing vs. construction.

>>That's probably true for CRUD/USER processing. However, IME for
>>larger applications that solve complex problems use cases don't
>>provide very rigorous traceability. The problem is that they are
>>valid only at the system inputs and treat the software as a black box.
>>For larger applications one needs requirements at the subsystem level
>>and those requirements often already include design decisions or
>>incorporate computing space constraints. That is, in a well-formed
>>application the subsystems will be organized in a DAG of
>>client/service relationships where the client subsystems define
>>detailed requirements for the service subsystems. Those detailed
>>requirements will tend to reflect design decisions already made for
>>the client subsystems.
>
>
> For me this is actually an advantage, as I've been guilty a couple of
> times of concentrating too much on the technicalities and forgetting a
> bit about the user. It is also useful for UAT.
>
>
>>Typically the developers (wearing a Systems Engineering hat) create
>>use cases for individual subsystems that have greater detail and are
>>more focused than the user-based use cases that capture customer
>>requirements. They will also tend to be expressed in computing space
>>terms (e.g., network protocols, DBMS transactions, etc.). While those
>>use cases are ultimately derived from the system use cases and that is
>>traceable in the large, the mapping tends to be a bit murky in detail.
>
>
> It seems you are not very keen on use cases. What do you use instead
> then?

Au contraire. I think use cases are quite useful, especially for
keeping focus when doing IID. They also keep everyone focused on the
Big Picture of what the software is really trying to accomplish. That
allows better problem space abstraction in an OO context.

However, we didn't use them directly for estimating and we tended to be
very informal about them because the nature of our software was such
that the developers were the world's leading domain experts. That is,
we effectively defined the requirements ourselves. So our use cases
tended to be more like problem space mnemonics than rigorous
specifications and we weren't at all fussy about format.

[We built megabuck testers for the electronics industry, where
technological change is extremely rapid. So we were developing machines
for tomorrow's problems, not today's. IOW, our market share depended on
anticipating testing problems before the customers perceived them as
serious problems.]

<FYI estimation aside>
We estimated based on feature or code size using a relative sizing
technique. (Developers are very good at knowing task A will be bigger
than task B even when their absolute estimates for either are pretty
awful.) Basically we did a rough breakdown of modules for the increment
and then ordered them by size. We would "seed" the comparisons with a
large one and a small one we already had completed. That provided
absolute size tie points to normalize the relative comparisons. Once we
had the size estimated we used historical data to estimate the effort.
</FYI estimation aside>
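
[A minimal sketch of how that seeded normalization might be coded up.
The module names, seed sizes, and hours/KLOC figure below are purely
illustrative, not our actual data:]

# Sketch of relative-sizing normalization: modules are ranked by
# relative size, two already-completed modules provide absolute
# "tie points", and historical productivity converts size to effort.
# All names and numbers below are illustrative.

# relative size scores from pairwise comparison (bigger = larger)
relative = {'parser': 2, 'report_gen': 5, 'scheduler': 8}

# seeded tie points: completed modules with known sizes (score, KLOC)
seeds = {'small_done': (1, 0.8), 'large_done': (10, 7.5)}

# linear map from relative score to absolute size using the two seeds
(s1, k1), (s2, k2) = seeds['small_done'], seeds['large_done']
scale = (k2 - k1) / (s2 - s1)
sizes = {m: k1 + (score - s1) * scale for m, score in relative.items()}

# historical productivity, e.g. hours per KLOC from past increments
HOURS_PER_KLOC = 160
effort = {m: kloc * HOURS_PER_KLOC for m, kloc in sizes.items()}

for m in relative:
    print(f"{m}: ~{sizes[m]:.1f} KLOC, ~{effort[m]:.0f} hours")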


*************
There is nothing wrong with me that could
not be cured by a capful of Drano.

H. S. Lahman
hsl(a)pathfindermda.com
Pathfinder Solutions -- Put MDA to Work
http://www.pathfindermda.com
blog: http://pathfinderpeople.blogs.com/hslahman
(888)OOA-PATH