From: Chad on
On May 8, 4:34 pm, p...(a)informatimago.com (Pascal J. Bourguignon)
wrote:
> Chad <cdal...(a)gmail.com> writes:
> > On May 8, 1:28 pm, Kai-Uwe Bux <jkherci...(a)gmx.net> wrote:
> >> Chad wrote:
> >> > The question is about the harmonic() function in the following
> >> > code....
>
> >> > #include <stdio.h>
>
> >> > double harmonic(unsigned long n)
> >> > {
> >> >   double acc = 0;
> >> >   int i;
>
> >> >   for (i = 1; i <= n; i++) {
> >> >     acc += 1.0/i;
> >> >   }
>
> >> >   return acc;
> >> > }
>
> >> Hint: summing up the terms in reverse order (starting with the smallest)
> >> handles rounding errors much better and is numerically preferable.
>
> > Why would summing up the terms in reverse order handle rounding errors
> > much better?
>
> How much is 1.0e0 + 1.0e-8 ?
> How much is 1.0e0 + 1.0e-7 ?
>
> What is the result of:
>   {
>     float f=1.0e0;
>     int i;
>     for(i=0;i<10;i++){
>         f+=1.0e-8;
>     }
>     return(f);
>   }
> vs.
>   {
>     float f=0.0e0;
>     int i;
>     for(i=0;i<10;i++){
>         f+=1.0e-8;
>     }
>     return(1.0e0+f);
>   }
> ?
>
> (For double, s/8/16/ and s/7/15/).
>

I still don't get what you are talking about. Give me 12 hours to
think about this.

From: Chad on
On May 8, 8:51 pm, Chad <cdal...(a)gmail.com> wrote:
> On May 8, 6:03 pm, "bart.c" <ba...(a)freeuk.com> wrote:
>
>
>
> > Chad wrote:
> > > On May 8, 4:25 pm, Kai-Uwe Bux <jkherci...(a)gmx.net> wrote:
> > >> Chad wrote:
> > >>> On May 8, 1:28 pm, Kai-Uwe Bux <jkherci...(a)gmx.net> wrote:
> > >>>> Chad wrote:
> > >>>>> for (i = 1; i <= n; i++) {
> > >>>>> acc += 1.0/i;
> > >>>>> }
> > >>>> Hint: summing up the terms in reverse order (starting with the
> > >>>> smallest) handles rounding errors much better and is numerically
> > >>>> preferable.
>
> > >>> Why would summing up the terms in reverse order handle rounding
> > >>> errors much better?
> > > Maybe I'm missing the broader concept here, I tried this in python. I
> > > get the same results.
>
> > > Here is what I get when I go one way...
> > >>>> acc = 1.0
> > >>>> for x in range(1,50):
> > > ...   acc = acc + 1.0/x
> > > ...   print acc
> > > 5.47920533833
> > > And here is what I get when I reverse it...
> > >>>> acc = 1.0
> > >>>> for x in reversed(range(1,50)):
> > > ...   acc = acc + 1.0/x
> > > ...   print acc
> > > 5.47920533833
>
> > I think you need to print more decimal places. Where you have ...833...
> > above, I get differences after ...832942...
>
> > Also you're starting with acc=1.0, which itself will hide some accuracy (so
> > when adding 0.02, you're losing another couple of decimals). Perhaps start
> > with acc=0.0 and add the 1.0 a bit later.
>
> I saw the difference when I took the output to 40 decimal places..
>
> [cdalten(a)localhost oakland]$ python
> Python 2.6.2 (r262:71600, May  3 2009, 17:04:44)
> [GCC 4.1.1 20061011 (Red Hat 4.1.1-30)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> acc = 1.0
> >>> for x in range(1,50):
>
> ...   acc = acc + 1.0/x
> ...
> >>> print "(%.40f)" % acc
>
> (5.4792053383294225810118405206594616174698)
>
> [cdalten(a)localhost oakland]$ python
> Python 2.6.2 (r262:71600, May  3 2009, 17:04:44)
> [GCC 4.1.1 20061011 (Red Hat 4.1.1-30)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> acc = 1.0
> >>> for x in reversed(range(1,50)):
>
> ...   acc = acc + 1.0/x
> ...
> >>> print "(%.40f)" % acc
>
> (5.4792053383294252455470996210351586341858)
>
>
>
> I'm still not sure how I would start off with acc = 0.0 and then add
> 1.0 later on. Who knows, maybe I will have it figured out before
> someone tells me how to do it.

Okay, I believe that I figured out how to start off with acc = 0.0 and
then add 1.0 later on. Here is what I came up with..

[cdalten(a)localhost oakland]$ python
Python 2.6.2 (r262:71600, May 3 2009, 17:04:44)
[GCC 4.1.1 20061011 (Red Hat 4.1.1-30)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> acc = 0.0
>>> for x in range(1,50):
...   acc = acc + 1.0/x
...
>>> acc = acc + 1.0
>>> print "(%.40f)" % acc
(5.4792053383294234691902602207846939563751)
>>>
[cdalten(a)localhost oakland]$ python
Python 2.6.2 (r262:71600, May 3 2009, 17:04:44)
[GCC 4.1.1 20061011 (Red Hat 4.1.1-30)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> acc = 0.0
>>> for x in reversed(range(1,50)):
...   acc = acc + 1.0/x
...
>>> acc = acc + 1.0
>>> print "(%.40f)" % acc
(5.4792053383294252455470996210351586341858)
>>>

I see the difference when I sum from the largest to the smallest but
not vice versa when I have acc = 0.0 and then add 1.0 a bit later.
Maybe I need to carry this out a few more decimal places.
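
For reference, the gap between the two orders is easier to see with more
terms. Here is a quick Python 3 sketch (mine, not from the thread) that
compares both orders against math.fsum, which compensates for the rounding
as it goes:

```python
import math

N = 10**6

# Forward: the largest terms go in first, so the accumulator is
# already big when the tiny late terms arrive, and their low bits
# are rounded away.
fwd = 0.0
for i in range(1, N + 1):
    fwd += 1.0 / i

# Reverse: the tiny terms are accumulated first, so they can add
# up to something significant before meeting the large terms.
rev = 0.0
for i in range(N, 0, -1):
    rev += 1.0 / i

# math.fsum tracks the rounding error internally and is correct
# to within one final rounding; use it as the reference value.
ref = math.fsum(1.0 / i for i in range(1, N + 1))

print("forward error:", abs(fwd - ref))
print("reverse error:", abs(rev - ref))
```

On a typical IEEE double build the reverse error comes out smaller than
the forward one, which is the point of Kai-Uwe's hint.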
From: Pascal J. Bourguignon on
Chad <cdalten(a)gmail.com> writes:

> On May 8, 4:34 pm, p...(a)informatimago.com (Pascal J. Bourguignon)
> wrote:
>> Chad <cdal...(a)gmail.com> writes:
>> > On May 8, 1:28 pm, Kai-Uwe Bux <jkherci...(a)gmx.net> wrote:
>> >> Chad wrote:
>> >> > The question is about the harmonic() function in the following
>> >> > code....
>>
>> >> > #include <stdio.h>
>>
>> >> > double harmonic(unsigned long n)
>> >> > {
>> >> >   double acc = 0;
>> >> >   int i;
>>
>> >> >   for (i = 1; i <= n; i++) {
>> >> >     acc += 1.0/i;
>> >> >   }
>>
>> >> >   return acc;
>> >> > }
>>
>> >> Hint: summing up the terms in reverse order (starting with the smallest)
>> >> handles rounding errors much better and is numerically preferable.
>>
>> > Why would summing up the terms in reverse order handle rounding errors
>> > much better?
>>
>> How much is 1.0e0 + 1.0e-8 ?
>> How much is 1.0e0 + 1.0e-7 ?
>>
>> What is the result of:
>>   {
>>     float f=1.0e0;
>>     int i;
>>     for(i=0;i<10;i++){
>>         f+=1.0e-8;
>>     }
>>     return(f);
>>   }
>> vs.
>>   {
>>     float f=0.0e0;
>>     int i;
>>     for(i=0;i<10;i++){
>>         f+=1.0e-8;
>>     }
>>     return(1.0e0+f);
>>   }
>> ?
>>
>> (For double, s/8/16/ and s/7/15/).
>>
>
> I still don't get what you are talking about. Give me 12 hours to
> think about this.


I wasn't requiring you to think, only to use your C compiler.


-*- mode: compilation; default-directory: "/tmp/" -*-
Compilation started at Sun May 9 08:37:38

SRC="/tmp/f.c" ; EXE="f" ; gcc -I. -L. -g3 -ggdb3 -o ${EXE} ${SRC} && cat $SRC && ./${EXE} && echo status = $?
#include <stdio.h>

float biggest_first()
{
    float f=1.0e0;
    int i;
    for(i=0;i<10;i++){
        f+=1.0e-8;
    }
    return(f);
}

float smallest_first()
{
    float f=0.0e0;
    int i;
    for(i=0;i<10;i++){
        f+=1.0e-8;
    }
    return(1.0e0+f);
}

int main(){
    printf("1.0+1e-7 = %.7f\n",1.0e0+1.0e-7);
    printf("1.0+1e-8 = %.7f\n",1.0e0+1.0e-8);
    printf("biggest first = %.7f\n",biggest_first());
    printf("smallest first = %.7f\n",smallest_first());
    return(0);
}


1.0+1e-7 = 1.0000001
1.0+1e-8 = 1.0000000
biggest first = 1.0000000
smallest first = 1.0000001
status = 0

Compilation finished at Sun May 9 08:37:39


--
__Pascal Bourguignon__
From: Richard Harter on
On Sat, 08 May 2010 22:28:50 +0200, Kai-Uwe Bux
<jkherciueh(a)gmx.net> wrote:

>Chad wrote:
>
>> The question is about the harmonic() function in the following
>> code....
>>
>> #include <stdio.h>
>>
>> double harmonic(unsigned long n)
>> {
>> double acc = 0;
>> int i;
>>
>> for (i = 1; i <= n; i++) {
>> acc += 1.0/i;
>> }
>>
>> return acc;
>> }
>
>Hint: summing up the terms in reverse order (starting with the smallest)
>handles rounding errors much better and is numerically preferable.

This is true as far as it goes; however, the accumulated error
will still grow. Summation without accumulating round-off error
is non-trivial. A useful article is:

http://en.wikipedia.org/wiki/Kahan_summation_algorithm
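
For completeness, a minimal sketch of the compensated (Kahan) sum in
Python; the function name and structure here are mine, not from the
article:

```python
def kahan_sum(terms):
    """Sum floats while carrying the rounding error of each
    addition in a separate compensation variable."""
    total = 0.0
    comp = 0.0  # running compensation for lost low-order bits
    for x in terms:
        y = x - comp            # apply the stored correction
        t = total + y           # big + small: y's low bits are lost...
        comp = (t - total) - y  # ...and recovered algebraically here
        total = t
    return total

# Forward harmonic sum, but with compensation:
h = kahan_sum(1.0 / i for i in range(1, 50))
print(h)
```

Even summing largest-first, the compensation keeps the result correct
to within about one rounding of the final value.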

>
>> int main(void)
>> {
>> unsigned long s;
>>
>> printf("The value is: %g\n", harmonic(10));
>>
>> return 0;
>> }
>>
>>
>> Would this function run in linear time?

As a side point there is a relationship between summing the
harmonic series and natural logarithms, to wit:

sum(n terms) ~= ln(n) + 0.577... (the Euler-Mascheroni constant)
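
That relationship is easy to check numerically; a small Python sketch of
mine (0.5772156649... is a more precise value of the constant):

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

for n in (10, 1000, 100000):
    h = sum(1.0 / i for i in range(1, n + 1))
    approx = math.log(n) + GAMMA
    # The approximation error shrinks like 1/(2n).
    print(n, h, approx, h - approx)
```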
>
>Well, it's linear in n. However, that is not the usual measure of run-time
>complexity in computer science. The usual measure is to express the run-time
>of an algorithm as it depends on the _length_ (number of bits) of the input.

This is a rather odd formulation. The essential feature of the
algorithm is the summation of n items; the natural formulation is
to express the complexity in terms of n.

That said, there is a problem with saying that the function is
linear in n. There are O(n) fundamental arithmetic operations,
that is true. However if we are looking at asymptotic behaviour
(which is what the order function expresses) then we need to
consider the precision used and how it grows with n.

>
>> Just curious, because for
>> whatever reasons, I keep thinking that this runs in logarithmic time.
>
>That's not correct either. It's exponential in the bit-length of n: if n has k
>bits, the worst case is n = 2^k - 1, in which case, the for loop is
>traversed about 2^k times. Thus, the run-time is O(2^k).

This is wrong, or at least it is wrong if the function is doing
anything meaningful. The number of bits in the reciprocals must
be at least O(k) bits. The cost of the additions will be at
least O(n*log(n)) or, using k, O(k*2^k). Note that O(2^k) and
O(k*2^k) are not the same thing.
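
One way to see the precision requirement, sketched in Python with exact
rationals (the variable names are mine): the reduced denominator of the
exact n-th harmonic number needs a number of bits that grows roughly
linearly in n, so any fixed-precision float must discard more of the true
value as n grows.

```python
from fractions import Fraction

# Exact harmonic numbers: the bit length of the reduced
# denominator measures how much precision the exact value needs.
bits = {}
for n in (10, 100, 1000):
    h = sum(Fraction(1, i) for i in range(1, n + 1))
    bits[n] = h.denominator.bit_length()
    print(n, bits[n])
```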



Richard Harter, cri(a)tiac.net
http://home.tiac.net/~cri, http://www.varinoma.com
It's not much to ask of the universe that it be fair;
it's not much to ask but it just doesn't happen.