From: D Yuniskis on
Hi Peter,

Peter Dickerson wrote:
> "D Yuniskis" <not.going.to.be(a)seen.com> wrote in message
> news:hvqdli$f0l$1(a)speranza.aioe.org...
>> Hi Peter,
>>
>> Peter Dickerson wrote:
>>> "D Yuniskis" <not.going.to.be(a)seen.com> wrote in message
>>> news:hvkecf$843$1(a)speranza.aioe.org...
>>> [snip]
>>>> De Casteljau's algorithm.
>>> OK, so it is trivial for cubics. So, you can start with the first
>>> estimate as the average of the control point path and the start to end
>>> path. Then subdivide (binary, say) and get a new estimate. Then combine
>>> these estimates using the Romberg method. Assuming you know the
>>> convergence power you can
>> Sorry, you've lost me (easy to do, this early in the morning :< )
>
> Oh! I thought I was reiterating what you were proposing to see if I had it
> right.

Sorry, my confusion revolves around your Romberg statement. I.e., how
would that apply since I am no longer operating with "t"?

>>> also estimate the error. If the error is OK keep the result and pass it
>>> up,
>> Well, I know that the error can't be more than half of the difference
>> between the "polygon length" and chord length. I.e., once they
>> are within 2*error of each other, I *know* the length fits to
>> within "one error" (?)
>
> Yes, but that bound is quite loose.

See above.

>>> else recurse again. Somebody somewhere has to decide what good enough is,
>>> perhaps in fractional terms. The benefit of your scheme is that it
>>> doesn't need to recurse so deeply for some parts of the curve as others.
>> Well, it *could* not-need to recurse as deeply. It depends on how
>> tight the curve is. E.g., a (pseudo) circle is probably the worst
>> "overall" curve to follow as there are *no* "straight runs".
>
> Well, the curve can't be tight unless the poly is high order. The order of
> the poly, or number of control points hasn't been specified except that
> examples have been cubic. Anyway, if you specify that you want the final

Yes, cubic. (sorry, I guess I had assumed as much)

> error be some small fraction of the initial upper bound then you can use

Initial upper bound is essentially meaningless. I will probably
express error (tolerance?) as some finite value. Or, proportional
to the actual length (since very small curves may need tighter
absolute errors)

> that fraction on each division of the curve. Or you can assign half the
> limit to each half of the division. Not perfect because either way you'll
> probably do more work than strictly necessary (but you wouldn't know that
> until you'd done all the work...).

Exactly.

>> The problem with the algorithm is that the cost of recursion can
>> quickly escalate (i.e., the cost of the computations for each
>> level of recursion is high). And, there is no way of bounding
>> this a priori. :<
>
> Actually, I don't see it that way at all. In terms of costs I think the
> biggest cost is square roots, then perhaps multiplies, and then recursion.
> The problem that I see is that computing one estimate requires as many square
> roots as control points. So for a cubic that's four at the top level and 8 at
> the first binary subdivision. The test for whether the upper bound is
> sufficiently close to the lower bound can be done on the squared lengths to
> save one square root, but that's about it.

Sorry, by "the cost of recursion can quickly escalate", I meant
the cost of each "iteration" of the algorithm grows quickly.
(i.e., "the cost of the computations for each level of recursion
is high").

In other words, if the "current" estimate isn't good enough,
you (I) have to subdivide to get the *next* better estimate.
I can't get "just a little bit better"... I have to take
>> the full step of a subdivision and compute all of the
"lengths" that this entails. (note that some of the computations
are trivial based on the lengths computed for the "current"
subdivision/estimate)
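
Roughly what I have in mind, as a sketch only (2-D cubic, binary splits,
half the tolerance handed to each half; it recomputes the child lengths
from scratch rather than reusing the midpoint work, and the names are mine):

#include <math.h>

typedef struct { double x, y; } pt;

static double dist(pt a, pt b)
{
    return hypot(b.x - a.x, b.y - a.y);
}

static pt mid(pt a, pt b)
{
    pt m = { (a.x + b.x) / 2, (a.y + b.y) / 2 };
    return m;
}

/* One de Casteljau split at t = 1/2: p[0..3] -> left l[0..3], right r[0..3]. */
static void split(const pt p[4], pt l[4], pt r[4])
{
    pt m01 = mid(p[0], p[1]), m12 = mid(p[1], p[2]), m23 = mid(p[2], p[3]);
    pt a = mid(m01, m12), b = mid(m12, m23), c = mid(a, b);
    l[0] = p[0]; l[1] = m01; l[2] = a;   l[3] = c;
    r[0] = c;    r[1] = b;   r[2] = m23; r[3] = p[3];
}

/* chord <= arc length <= control polygon; once the two bounds are within
 * 2*tol of each other, their average is within tol of the true length.   */
double bezier_length(const pt p[4], double tol)
{
    double chord = dist(p[0], p[3]);
    double poly  = dist(p[0], p[1]) + dist(p[1], p[2]) + dist(p[2], p[3]);

    if (poly - chord <= 2 * tol)
        return (poly + chord) / 2;

    pt l[4], r[4];
    split(p, l, r);
    return bezier_length(l, tol / 2) + bezier_length(r, tol / 2);
}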

I need to look at the (bezier) math and try to identify criteria
that make "easy" curves vs. "hard" curves. Perhaps there is
something that can be gleaned from that to condition the
algorithm employed at each step. (e.g., look at the tangents
at the various control/end -points wrt the chords themselves)
--don
From: Peter Dickerson on
"D Yuniskis" <not.going.to.be(a)seen.com> wrote in message
news:i00085$tnf$1(a)speranza.aioe.org...
> Hi Peter,
>
> Peter Dickerson wrote:
>> "D Yuniskis" <not.going.to.be(a)seen.com> wrote in message
>> news:hvqdli$f0l$1(a)speranza.aioe.org...
>>> Hi Peter,
>>>
>>> Peter Dickerson wrote:
>>>> "D Yuniskis" <not.going.to.be(a)seen.com> wrote in message
>>>> news:hvkecf$843$1(a)speranza.aioe.org...
>>>> [snip]
>>>>> De Casteljau's algorithm.
>>>> OK, so it is trivial for cubics. So, you can start with the first
>>>> estimate as the average of the control point path and the start to end
>>>> path. Then subdivide (binary, say) and get a new estimate. Then combine
>>>> these estimates using the Romberg method. Assuming you know the
>>>> convergence power you can
>>> Sorry, you've lost me (easy to do, this early in the morning :< )
>>
>> Oh! I thought I was reiterating what you were proposing to see if I had
>> it right.
>
> Sorry, my confusion revolves around your Romberg statement. I.e., how
> would that apply since I am no longer operating with "t"?

It's got little to do with t per se. The Romberg approach, in a nutshell, goes
like this: given a method of estimating a value (the path length) that is
parameterised by a small value (the 'stepsize' h), such that the error in the
estimate decreases with decreasing h and the estimate converges to the exact
value as h -> 0, one can use estimates from two different values of h (say h
and h/2) to produce a new estimate that converges more quickly. This assumes
that the dominant error in the estimate can be expressed as a power of h.
Implicitly we presume that the amount of work in computing the estimate goes
like 1/h or less, but that's not strictly necessary. The method doesn't say
anything about how the estimates are computed.

In your case of splitting the curve in half, the first estimate can be
thought of as having stepsize 1; then splitting the curve, estimating the
path length of each half and adding them together has stepsize 1/2. Repeating
with another round of splitting, the stepsize is 1/4. Now the Romberg
estimate would be (4*estimate(1/2) - estimate(1))/3, assuming the error term
is O(h^2).
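
In code, using your (chord + polygon)/2 value as the raw estimate at each
level (a sketch only; the pt/dist/split helpers are repeated here so the
fragment stands alone, and the O(h^2) assumption above is taken at face value):

#include <math.h>

typedef struct { double x, y; } pt;

static double dist(pt a, pt b) { return hypot(b.x - a.x, b.y - a.y); }

static pt mid(pt a, pt b)
{
    pt m = { (a.x + b.x) / 2, (a.y + b.y) / 2 };
    return m;
}

/* de Casteljau split at t = 1/2. */
static void split(const pt p[4], pt l[4], pt r[4])
{
    pt m01 = mid(p[0], p[1]), m12 = mid(p[1], p[2]), m23 = mid(p[2], p[3]);
    pt a = mid(m01, m12), b = mid(m12, m23), c = mid(a, b);
    l[0] = p[0]; l[1] = m01; l[2] = a;   l[3] = c;
    r[0] = c;    r[1] = b;   r[2] = m23; r[3] = p[3];
}

/* Raw estimate for one segment: average of chord and control-polygon length. */
static double estimate(const pt p[4])
{
    double chord = dist(p[0], p[3]);
    double poly  = dist(p[0], p[1]) + dist(p[1], p[2]) + dist(p[2], p[3]);
    return (chord + poly) / 2;
}

/* Combine the h = 1 and h = 1/2 estimates a la Romberg. */
double romberg_length(const pt p[4])
{
    pt l[4], r[4];
    split(p, l, r);
    double e1 = estimate(p);                /* stepsize 1   */
    double e2 = estimate(l) + estimate(r);  /* stepsize 1/2 */
    return (4 * e2 - e1) / 3;
}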

>>>> also estimate the error. If the error is OK keep the result and pass it
>>>> up,
>>> Well, I know that the error can't be more than half of the difference
>>> between the "polygon length" and chord length. I.e., once they
>>> are within 2*error of each other, I *know* the length fits to
>>> within "one error" (?)
>>
>> Yes, but that bound is quite loose.
>
> See above.
>
>>>> else recurse again. Somebody somewhere has to decide what good enough
>>>> is, perhaps in fractional terms. The benefit of your scheme is that it
>>>> doesn't need to recurse so deeply for some parts of the curve as others.
>>> Well, it *could* not-need to recurse as deeply. It depends on how
>>> tight the curve is. E.g., a (pseudo) circle is probably the worst
>>> "overall" curve to follow as there are *no* "straight runs".
>>
>> Well, the curve can't be tight unless the poly is high order. The order
>> of the poly, or number of control points hasn't been specified except
>> that examples have been cubic. Anyway, if you specify that you want the
>> final
>
> Yes, cubic. (sorry, I guess I had assumed as much)

Something concrete to assume...

>> error be some small fraction of the initial upper bound then you can use
>
> Initial upper bound is essentially meaningless. I will probably
> express error (tolerance?) as some finite value. Or, proportional
> to the actual length (since very small curves may need tighter
> absolute errors)

OK, like so many ppm of the true answer or so many ppm of the original upper
bound. I'm fairly sure that

1 >= (length of curve)/(distance along the control points) >= 0.5

>> that fraction on each division of the curve. Or you can assign half the
>> limit to each half of the division. Not perfect because either way you'll
>> probably do more work than strictly necessary (but you wouldn't know that
>> until you'd done all the work...).
>
> Exactly.
>
>>> The problem with the algorithm is that the cost of recursion can
>>> quickly escalate (i.e., the cost of the computations for each
>>> level of recursion is high). And, there is no way of bounding
>>> this a priori. :<
>>
>> Actually, I don't see it that way at all. In terms of costs I think the
>> biggest cost is square roots, then perhaps multiplies, and then recursion.
>> The problem that I see is that computing one estimate requires as many
>> square roots as control points. So for a cubic that's four at the top
>> level and 8 at the first binary subdivision. The test for whether the
>> upper bound is sufficiently close to the lower bound can be done on the
>> squared lengths to save one square root, but that's about it.
>
> Sorry, by "the cost of recursion can quickly escalate", I meant
> the cost of each "iteration" of the algorithm grows quickly.
> (i.e., "the cost of the computations for each level of recursion
> is high").

I see this as the cost of the calculation, not the recursion, since the main
difference between this method and stepping along the curve is the order in
which things are done, not how many.

> In other words, if the "current" estimate isn't good enough,
> you (I) have to subdivide to get the *next* better estimate.
> I can't get "just a little bit better"... I have to take
> the full step of a subdivision and compute all of the
> "lengths" that this entails. (note that some of the computations
> are trivial based on the lengths computed for the "current"
> subdivision/estimate)

Equivalent to restricting yourself to binary splits.

> I need to look at the (bezier) math and try to identify criteria
> that make "easy" curves vs. "hard" curves. Perhaps there is
> something that can be gleaned from that to condition the
> algorithm employed at each step. (e.g., look at the tangents
> at the various control/end -points wrt the chords themselves)

There aren't any hard curves. It's a cubic.

Peter


From: Albert van der Horst on
In article <i01bmh$lgv$1(a)news.eternal-september.org>,
Peter Dickerson <first.last(a)tiscali.invalid> wrote:
>"D Yuniskis" <not.going.to.be(a)seen.com> wrote in message
<SNIP>
>
>> I need to look at the (bezier) math and try to identify criteria
>> that make "easy" curves vs. "hard" curves. Perhaps there is
>> something that can be gleaned from that to condition the
>> algorithm employed at each step. (e.g., look at the tangents
>> at the various control/end -points wrt the chords themselves)
>
>There aren't any hard curves. It's a cubic.

Exactly. They are very smooth. It gives me the impression that
somehow Gaussian quadrature would do a terrific job.
It foregoes the need for small steps, with all the round-off
errors that result from them.

It requires expressing the length as a function of one variable,
something I don't know how to do for Bezier curves.
See 4.5 Gaussian Quadrature in Numerical Recipes.

>
>Peter

Groetjes Albert

--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert(a)spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst

From: Peter Dickerson on
"Albert van der Horst" <albert(a)spenarnc.xs4all.nl> wrote in message
news:l4qams.fv1(a)spenarnc.xs4all.nl...
> In article <i01bmh$lgv$1(a)news.eternal-september.org>,
> Peter Dickerson <first.last(a)tiscali.invalid> wrote:
>>"D Yuniskis" <not.going.to.be(a)seen.com> wrote in message
> <SNIP>
>>
>>> I need to look at the (bezier) math and try to identify criteria
>>> that make "easy" curves vs. "hard" curves. Perhaps there is
>>> something that can be gleaned from that to condition the
>>> algorithm employed at each step. (e.g., look at the tangents
>>> at the various control/end -points wrt the chords themselves)
>>
>>There aren't any hard curves. It's a cubic.
>
> Exactly. They are very smooth. It gives me the impression that
> somehow Gaussian quadrature would do a terrific job.
> It foregoes the need for small steps, with all the round-off
> errors that result from them.
>
> It requires expressing the length as a function of one variable,
> something I don't know how to do for Bezier curves.
> See 4.5 Gaussian Quadrature in Numerical Recipes.

I did try out Gaussian quadrature but was disappointed. The problem is that
the length element ds/dt that you need to integrate has a sqrt in it, while
Gauss does a good job for polynomials. I only tried a five-point rule, for
lack of coefficient values.
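
Roughly, the five-point version looks like this (a sketch; standard
Gauss-Legendre nodes and weights on [-1,1], mapped onto t in [0,1] -- the
sqrt sits right in the integrand):

#include <math.h>

typedef struct { double x, y; } pt;

/* |dB/dt| for a cubic: B'(t) = 3[(1-t)^2(P1-P0) + 2(1-t)t(P2-P1) + t^2(P3-P2)] */
static double speed(const pt p[4], double t)
{
    double u = 1 - t;
    double dx = 3 * (u*u*(p[1].x - p[0].x) + 2*u*t*(p[2].x - p[1].x)
                     + t*t*(p[3].x - p[2].x));
    double dy = 3 * (u*u*(p[1].y - p[0].y) + 2*u*t*(p[2].y - p[1].y)
                     + t*t*(p[3].y - p[2].y));
    return hypot(dx, dy);
}

/* Five-point Gauss-Legendre estimate of the integral of |dB/dt|, t = 0..1. */
double gauss5_length(const pt p[4])
{
    static const double x[5] = {  0.0,
                                 -0.5384693101056831,  0.5384693101056831,
                                 -0.9061798459386640,  0.9061798459386640 };
    static const double w[5] = {  0.5688888888888889,
                                  0.4786286704993665,  0.4786286704993665,
                                  0.2369268850561891,  0.2369268850561891 };
    double sum = 0;
    for (int i = 0; i < 5; i++)
        sum += w[i] * speed(p, 0.5 * x[i] + 0.5);  /* map [-1,1] -> [0,1] */
    return 0.5 * sum;                              /* dt = dx/2           */
}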

Peter


From: D Yuniskis on
Hi Peter,

Peter Dickerson wrote:
> "D Yuniskis" <not.going.to.be(a)seen.com> wrote in message
> news:i00085$tnf$1(a)speranza.aioe.org...
>>
>> Peter Dickerson wrote:
>>> "D Yuniskis" <not.going.to.be(a)seen.com> wrote in message
>>> news:hvqdli$f0l$1(a)speranza.aioe.org...
>>>> Hi Peter,
>>>>
>>>> Peter Dickerson wrote:
>>>>> "D Yuniskis" <not.going.to.be(a)seen.com> wrote in message
>>>>> news:hvkecf$843$1(a)speranza.aioe.org...
>>>>> [snip]
>>>>>> De Casteljau's algorithm.
>>>>> OK, so it is trivial for cubics. So, you can start with the first
>>>>> estimate as the average of the control point path and the start to end
>>>>> path. Then subdivide (binary, say) and get a new estimate. Then combine
>>>>> these estimates using the Romberg method. Assuming you know the
>>>>> convergence power you can
>>>> Sorry, you've lost me (easy to do, this early in the morning :< )
>>> Oh! I thought I was reiterating what you were proposing to see if I had
>>> it right.
>> Sorry, my confusion revolves around your Romberg statement. I.e., how
>> would that apply since I am no longer operating with "t"?
>
> It's got little to do with t per se. The Romberg approach, in a nutshell,
> goes like this: given a method of estimating a value (the path length) that
> is parameterised by a small value (the 'stepsize' h), such that the error in
> the estimate decreases with decreasing h and the estimate converges to the
> exact value as h -> 0, one can use estimates from two different values of h
> (say h and h/2) to produce a new estimate that converges more quickly. This
> assumes that the dominant error in the estimate can be expressed as a power
> of h. Implicitly we presume that the amount of work in computing the estimate
> goes like 1/h or less, but that's not strictly necessary. The method doesn't
> say anything about how the estimates are computed.

Ah, OK. So, it just tries to "look ahead" better based on
history (?). E.g., my estimate[i] values are averages of the control-polygon
length and the endpoint-to-endpoint (chord) length. Track successive
estimate[i]'s and, at each step, compute an ESTIMATE[i] from Romberg?
This would have no "resale value" in the next iteration of
my algorithm but would (hopefully) be a better representation
of the *real* "curve length" -- better than my plain "average of
polygon and chord" (?)

> In your case of splitting the curve in half, the first estimate can be
> thought of as having stepsize 1; then splitting the curve, estimating the
> path length of each half and adding them together has stepsize 1/2. Repeating
> with another round of splitting, the stepsize is 1/4. Now the Romberg
> estimate would be (4*estimate(1/2) - estimate(1))/3, assuming the error term
> is O(h^2).

>>> error be some small fraction of the initial upper bound then you can use
>> Initial upper bound is essentially meaningless. I will probably
>> express error (tolerance?) as some finite value. Or, proportional
>> to the actual length (since very small curves may need tighter
>> absolute errors)
>
> OK, like so many ppm of the true answer or so many ppm of the original upper
> bound. I'm fairly sure that
>
> 1 >= (length of curve)/(distance along the control points) >= 0.5

In some pathological cases, that's not true. (e.g., look at
your original example, I think -- sorry, early in the
morning to be doing this without pencil and paper...)
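
(A brute-force check along these lines would settle that ratio for any given
set of control points -- a fine-stepped polyline standing in for the "true"
length. The control points below are made-up values for illustration, not
your example:)

#include <math.h>
#include <stdio.h>

typedef struct { double x, y; } pt;

static double dist(pt a, pt b) { return hypot(b.x - a.x, b.y - a.y); }

/* Cubic Bezier point at parameter t (Bernstein form). */
static pt bezier(const pt p[4], double t)
{
    double u = 1 - t;
    pt r;
    r.x = u*u*u*p[0].x + 3*u*u*t*p[1].x + 3*u*t*t*p[2].x + t*t*t*p[3].x;
    r.y = u*u*u*p[0].y + 3*u*u*t*p[1].y + 3*u*t*t*p[2].y + t*t*t*p[3].y;
    return r;
}

int main(void)
{
    /* Illustrative control points only -- substitute the curve in question. */
    pt p[4] = { {0,0}, {0,400}, {10,0}, {0,0} };

    double poly = dist(p[0], p[1]) + dist(p[1], p[2]) + dist(p[2], p[3]);

    /* Brute-force "true" length: fine polyline along t = 0..1. */
    double len = 0;
    pt prev = p[0];
    for (int i = 1; i <= 10000; i++) {
        pt cur = bezier(p, i / 10000.0);
        len += dist(prev, cur);
        prev = cur;
    }

    printf("curve/polygon ratio = %g\n", len / poly);
    return 0;
}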

>>>> The problem with the algorithm is that the cost of recursion can
>>>> quickly escalate (i.e., the cost of the computations for each
>>>> level of recursion is high). And, there is no way of bounding
>>>> this a priori. :<
>>> Actually, I don't see it that way at all. In terms of costs I think the
>>> biggest cost is square roots, then perhaps multiplies, and then recursion.
>>> The problem that I see is that computing one estimate requires as many
>>> square roots as control points. So for a cubic that's four at the top
>>> level and 8 at the first binary subdivision. The test for whether the
>>> upper bound is sufficiently close to the lower bound can be done on the
>>> squared lengths to save one square root, but that's about it.
>> Sorry, by "the cost of recursion can quickly escalate", I meant
>> the cost of each "iteration" of the algorithm grows quickly.
>> (i.e., "the cost of the computations for each level of recursion
>> is high").
>
> I see this as the cost of the calculation, not the recursion, since the main
> difference between this method and stepping along the curve is the order in
> which things are done, not how many.

If you walk up the curve (t 0->1), you have to take a fixed
dt for each pass. Even in regions of low curvature.

I can spend those calculations elsewhere -- in places where
the curvature demands more work for a particular degree of
fitness.

The cost of the actual recursion is tiny (another stack frame
and function invocation) compared to all the number crunching
that it (hopefully) avoids (or, spends in better places).

>> In other words, if the "current" estimate isn't good enough,
>> you (I) have to subdivide to get the *next* better estimate.
>> I can't get "just a little bit better"... I have to take
> the full step of a subdivision and compute all of the
>> "lengths" that this entails. (note that some of the computations
>> are trivial based on the lengths computed for the "current"
>> subdivision/estimate)
>
> Equivalent to restricting yourself to binary splits.
>
>> I need to look at the (bezier) math and try to identify criteria
>> that make "easy" curves vs. "hard" curves. Perhaps there is
>> something that can be gleaned from that to condition the
>> algorithm employed at each step. (e.g., look at the tangents
>> at the various control/end -points wrt the chords themselves)
>
> There aren't any hard curves. It's a cubic.

Sorry, I was trying to describe the (total) work required
for particular "types" of curves.

E.g., (making up numbers entirely in my head :> )

(0,0) (0,0) (1000,1000) (1000,1000)

is an *easy* curve. The estimate converges instantly
(regardless of how you -- rationally -- create that
estimate).

OTOH,

(0,0) (0,400) (10,0) (0,0)

is a "hard" curve. The estimate requires multiple
iterations to get a given "goodness".

The point of my statement (above) was to try to
come up with some simple/algorithmic criteria
(short of actually *solving* the curve) that would
let you determine how hard you have to work on
a particular curve (i.e., set of control points)
to get a particular degree of fit.
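
E.g., something as crude as the ratio below might be a first stab at such
a criterion -- it is just the quantity the subdivision test already watches,
normalized to the chord (a sketch, using the two example curves above):

#include <math.h>
#include <stdio.h>

typedef struct { double x, y; } pt;

static double dist(pt a, pt b) { return hypot(b.x - a.x, b.y - a.y); }

/* Crude "difficulty" guess: how far the control polygon bulges past the
 * chord, relative to the chord.  Zero for a degenerate straight segment,
 * large when the handles fold back on the curve.                         */
static double difficulty(const pt p[4])
{
    double chord = dist(p[0], p[3]);
    double poly  = dist(p[0], p[1]) + dist(p[1], p[2]) + dist(p[2], p[3]);
    if (chord == 0)
        return poly;   /* closed/degenerate curve: fall back to the polygon */
    return (poly - chord) / chord;
}

int main(void)
{
    pt easy[4] = { {0,0}, {0,0},   {1000,1000}, {1000,1000} };
    pt hard[4] = { {0,0}, {0,400}, {10,0},      {0,0} };

    printf("easy: %g\n", difficulty(easy));   /* 0: converges immediately  */
    printf("hard: %g\n", difficulty(hard));   /* large: expect many splits */
    return 0;
}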