From: Knud Thomsen on
Any such approximation:

f(x,y) ~ sqrt(x^2 + y^2) x >= y >= 0

can, of course, be restated in terms of a function of one variable:

f(x,y) = x g(y/x)
g(t) ~ sqrt(1 + t^2) 1 >= t >= 0
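
In code, this reduction amounts to dividing the smaller magnitude by
the larger one. A minimal C sketch (hypot_via_g is an illustrative
name, not from any library):

#include <math.h>

/* Reduce sqrt(x^2 + y^2) to the one-variable problem:
   f(x,y) = max(|x|,|y|) * g(min(|x|,|y|) / max(|x|,|y|)) */
double hypot_via_g(double x, double y, double (*g)(double))
{
    double ax = fabs(x), ay = fabs(y);
    double mx = ax > ay ? ax : ay;   /* larger magnitude  */
    double mn = ax > ay ? ay : ax;   /* smaller magnitude */
    return mx > 0.0 ? mx * g(mn / mx) : 0.0;
}

Any of the piecewise-linear fits for g(t) below can be passed in for g.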

Let n be the number of linear intervals, so that we have n-1
boundaries between the intervals, and let E denote the maximum |rel.
error| for 1 >= t >= 0.

For n = 1, the minimal E solution is described in the Digital Signal
Processing literature (1):

g(t) = A0 + B0 t

A0 ~ 0.96043387 B0 ~ 0.39782473
E ~ 0.03956613
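
This n = 1 fit is the "alpha max plus beta min" estimate mentioned in
note 1. A self-contained C sketch (approx_mag1 is just an illustrative
name):

#include <math.h>
#include <stdio.h>

/* n = 1, "alpha max plus beta min" estimate of sqrt(x^2 + y^2),
   using the minimax coefficients quoted above */
double approx_mag1(double x, double y)
{
    double ax = fabs(x), ay = fabs(y);
    double mx = ax > ay ? ax : ay;
    double mn = ax > ay ? ay : ax;
    return 0.96043387 * mx + 0.39782473 * mn;
}

int main(void)
{
    printf("approx = %f  exact = %f\n", approx_mag1(3.0, 4.0), hypot(3.0, 4.0));
    return 0;
}

For (x, y) = (3, 4) it returns about 5.035 against the exact 5, i.e.
well within the ~4 % bound.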

For n=2:

if t < C1 then
g(t) = A0 + B0 t
else
g(t) = A1 + B1 t

I found numerically:

A0 ~ 0.99029944 B0 ~ 0.19698281
A1 ~ 0.83953533 B1 ~ 0.56095957 C1 ~ 0.41421356
E ~ 0.00970056
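
Dropped into the reduction sketched after the problem statement, the
n = 2 fit might look like this (g2 is an illustrative name; the
constants are the values just quoted, with C1 = tan(pi/8)):

/* n = 2 piecewise-linear fit to sqrt(1 + t^2), 0 <= t <= 1 */
double g2(double t)
{
    if (t < 0.41421356)                      /* C1 = tan(pi/8) */
        return 0.99029944 + 0.19698281 * t;  /* A0 + B0 t */
    else
        return 0.83953533 + 0.56095957 * t;  /* A1 + B1 t */
}

Passing g2 to hypot_via_g above yields sqrt(x^2 + y^2) with a relative
error below about 1 %.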

Correspondingly, the approximation for n = 3 takes the form:

if t < C1 then
g(t) = A0 + B0 t
else if t < C2 then
g(t) = A1 + B1 t
else
g(t) = A2 + B2 t

Having also found approximate values for n = 3 through optimization,
and after studying the error curve for n <= 3, I put forward the
following general hypothesis about n-interval approximations
minimizing E:

The positions of the boundaries between intervals are:

Ci = tan(i/n pi/4), i = 1 .. n-1

where the extreme error -E is observed, while the other extreme error
+E is found at:

Di = tan((i+1/2)/n pi/4), i = 0 .. n-1

If this hypothesis holds, we have, for each interval i, 3 linear
equations in the 3 unknowns Ai, Bi and E:

(Ai + Bi Ci) / g(Ci) - 1 = -E
(Ai + Bi Di) / g(Di) - 1 = +E
(Ai + Bi Cj) / g(Cj) - 1 = -E j = i+1

Note that this pattern of extremes means that the optimal
approximations are continuous: the two segments meeting at a boundary
Ci both take the value (1-E) g(Ci) there.
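
As a quick numerical sanity check (not a proof), the n = 2 values
listed further down can be plugged into these three conditions for
interval i = 0, where C0 = 0, D0 = tan(pi/16) and C1 = tan(pi/8); a
small C sketch:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double PI = 3.14159265358979323846;

    /* n = 2, interval i = 0: coefficients and boundaries from the tables below */
    double A0 = 0.990299443464736, B0 = 0.196982806714329;
    double C0 = 0.0;                  /* Ci   = tan( i      pi/(4n)) */
    double D0 = tan(PI / 16.0);       /* Di   = tan((i+1/2) pi/(4n)) */
    double C1 = tan(PI / 8.0);        /* Ci+1 = tan((i+1)   pi/(4n)) */

    /* relative error (A0 + B0 t) / sqrt(1 + t^2) - 1 at the three points */
    printf("%+.15f\n", (A0 + B0 * C0) / sqrt(1.0 + C0 * C0) - 1.0);
    printf("%+.15f\n", (A0 + B0 * D0) / sqrt(1.0 + D0 * D0) - 1.0);
    printf("%+.15f\n", (A0 + B0 * C1) / sqrt(1.0 + C1 * C1) - 1.0);
    return 0;
}

The three residuals print as -E, +E, -E with E ~ 0.0097006, as the
hypothesis requires.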

Making the substitutions:

1/g(tan(x)) = cos(x), |x| < pi/2
u = i/n pi/4
d = (1/2)/n pi/4

we get:

Ai cos(u)    + Bi sin(u)    - 1 = -E
Ai cos(u+d)  + Bi sin(u+d)  - 1 = +E
Ai cos(u+2d) + Bi sin(u+2d) - 1 = -E

and:

Ai = (sin(u+2d)-sin(u)) / (sin(d)(1+cos(d)))
Bi = (cos(u)-cos(u+2d)) / (sin(d)(1+cos(d)))
E = (1-cos(d)) / (1+cos(d)) = tan^2(d/2)

Finally, after simplifying and substituting back:

Ai = 2 cos(pi(2i+1)/(8n)) / (1+cos(pi/(8n)))
Bi = 2 sin(pi(2i+1)/(8n)) / (1+cos(pi/(8n)))
E = tan^2(pi/(16n)) = sqrt(Ai^2 + Bi^2) - 1
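
Assuming the hypothesis holds, the coefficient tables below can be
regenerated directly from these closed forms; a short C sketch (set n
as desired):

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double PI = 3.14159265358979323846;
    int n = 3;                                   /* number of linear intervals */
    double t = tan(PI / (16.0 * n));
    double E = t * t;                            /* E = tan^2(pi/(16n)) */

    printf("n = %d   E = %.15f\n", n, E);
    for (int i = 0; i < n; i++) {
        double Ai = 2.0 * cos(PI * (2 * i + 1) / (8.0 * n)) / (1.0 + cos(PI / (8.0 * n)));
        double Bi = 2.0 * sin(PI * (2 * i + 1) / (8.0 * n)) / (1.0 + cos(PI / (8.0 * n)));
        double Ci = tan(i * PI / (4.0 * n));     /* lower boundary of interval i (C0 = 0) */
        printf("A%d: %.15f  B%d: %.15f  C%d: %.15f\n", i, Ai, i, Bi, i, Ci);
    }
    return 0;
}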

So E is, as expected, the same for every interval.

Some example solutions are shown below to 15 decimal digits.
When the max. |rel. error| was estimated by sampling the error at
10^9 equidistant t-values, 1 >= t >= 0, the sampled maxima agreed with
E to the number of decimal places given.
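
A corresponding brute-force check, here with 10^7 samples instead of
10^9 to keep the run short, might look like this for the n = 2
coefficients:

#include <math.h>
#include <stdio.h>

/* n = 2 fit from above; substitute any of the other coefficient sets */
double g2(double t)
{
    if (t < 0.414213562373095)
        return 0.990299443464736 + 0.196982806714329 * t;
    else
        return 0.839535330284040 + 0.560959573474318 * t;
}

int main(void)
{
    long N = 10000000;                 /* 10^7 equidistant samples of t */
    double worst = 0.0;

    for (long k = 0; k <= N; k++) {
        double t = (double)k / (double)N;
        double err = fabs(g2(t) / sqrt(1.0 + t * t) - 1.0);
        if (err > worst)
            worst = err;
    }
    printf("max |rel. error| ~ %.15f\n", worst);  /* should approach E */
    return 0;
}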

n = 1
A0: 0.960433870103420 B0: 0.397824734759316
E : 0.039566129896580

n = 2
A0: 0.990299443464736 B0: 0.196982806714329
A1: 0.839535330284040 B1: 0.560959573474318 C1: 0.414213562373095
E : 0.009700556535264

n = 3
A0: 0.995704054482187 B0: 0.131086925630476
A1: 0.927848468647977 B1: 0.384327419541100 C1: 0.267949192431123
A2: 0.796761543017501 B2: 0.611376634941088 C2: 0.577350269189626
E : 0.004295945517813

n = 4
A0: 0.997586552631728 B0: 0.098253699538935
A1: 0.959249860867075 B1: 0.290985264044832 C1: 0.198912367379658
A2: 0.884049734902819 B2: 0.472534428039902 C2: 0.414213562373095
A3: 0.774876073407052 B3: 0.635924358965760 C3: 0.668178637919299
E : 0.002413447368272

n = 5
A0: 0.998456287491326 B0: 0.078580214015339
A1: 0.973870980006853 B1: 0.233805736384182 C1: 0.158384440324536
A2: 0.925305736902132 B2: 0.383274185566494 C2: 0.324919696232906
A3: 0.853956395641204 B3: 0.523305152286065 C3: 0.509525449494429
A4: 0.761579813800798 B4: 0.650450609406125 C4: 0.726542528005361
E : 0.001543712508674

I haven't been able to find references to this problem for n > 1, but
please let me know if you are aware of any.
Maybe someone can prove (or, less likely, disprove) the hypothesis on
which the solutions are based?

Notes:
1. Try googling "alpha max plus beta min algorithm"

Best regards,
Knud Thomsen
From: Knud Thomsen on
To prove the above hypothesis I guess one would need to show that:

1. The suggested approximation has n+1 extrema of equal magnitude
(=E) and with alternating signs
2. That such an approximation is the one minimizing the maximum
|rel. error|

Knud
From: Knud Thomsen on

Correction to point 1 above: please read ".. 2n+1 extrema of .."

Knud