From: Javier on

Henry Bigelow wrote:

> > 120x larger?
>
>
> sorry for the confusion. by "larger" i meant memory use. here are the
> stats for Chameneos:
>
> SBCL: 62,656 KB
> GCC C: 524 KB

Let's look at the differences (and I hope others will correct me if
I'm wrong):

1) SBCL loads the entire CL image plus extra libraries, including the
documentation, the debugger, and the compiler. Just starting SBCL eats
nearly 30 MB on my system (OS X); on Linux it may be even larger
(depending on the memory model used; I think OS X handles this
better). A similar thing happens with the Java virtual machine.
2) The reported memory use may actually be misleading, because SBCL
reserves more memory than it really needs. For example, on my system
it reserves 3 GB of virtual memory just at startup. Of course, that
doesn't mean it is ever going to use all of it (see the sketch after
this list).
3) SBCL is not the only implementation. Try ECL, CLISP, and the
evaluation versions of Allegro and LispWorks. All of them will
probably consume much less memory.
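
A minimal way to see point 2 for yourself (a sketch only, assuming a
reasonably recent SBCL; the value is reported in bytes):

;; how much dynamic space this SBCL image has reserved, in bytes:
CL-USER> (sb-ext:dynamic-space-size)

;; the reservation can be capped when the image is started, e.g.
;;   sbcl --dynamic-space-size 256     (size in megabytes)
;; which brings the "virtual memory" number way down without
;; changing what the program actually allocates.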

From: Henry Bigelow on


> My advice to you is to use R (http://www.r-project.org).
> It is a pleasant programming language, and there is a lot of
> contributed code, which includes Bayesian stuff and bioinformatics
> (not sure if it includes the intersection of the two).
> In any event R has a large, active user community and chances
> are good you'll find people with similar interests.
> I say this after having written a lot of statistical code (including
> Bayesian inference) in a variety of languages.
>

thanks robert. i'll check it out. i've heard R mentioned many times,
and i'd tried the 'bayes net toolbox' written in matlab, but it didn't
look fast enough for my purposes. mostly, though, i just wanted to
write one myself, if only to get a better understanding of the
algorithms.


> It's not clear to me that Lisp's particular strength (code = data)
> is going to be much of a win for you. If you were writing a general
> purpose Bayesian inference package, probably so. But I'm guessing
> that what you are going to do is derive some equations by hand,
> code them, and then run them on enormous data sets. I don't
> see much scope for code = data there. YMMV.
>

this is encouraging, but i would still like to see the huge memory and
speed differences in some of those benchmarks fixed, if possible.

thanks again,

henry

> Lisp isn't a bad choice in this context; it is probably better than C
> or Perl.
>
> FWIW
> Robert Dodier

From: Nicolas Neuss on

"Henry Bigelow" <hrbigelow(a)gmail.com> writes:

> but i'm not sure why certain lisp programs in the shootout are so
> large. how do i interpret the results of the shootout? for example,
> lisp fannkuch is 26 times slower than the c version, and lisp chameneos
> is 120x larger.
>

Lisp functions should be started and used from within a Lisp
environment, not from inside a C environment (Unix). Measured that
way, compiled Lisp functions are usually as small as, or smaller than,
their C counterparts.
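
You can see this at the REPL (a minimal sketch with a made-up toy
function; DISASSEMBLE prints the handful of machine instructions SBCL
compiled for it):

CL-USER> (defun add2 (x y)
           (declare (type fixnum x y) (optimize speed))
           (the fixnum (+ x y)))
ADD2
CL-USER> (disassemble 'add2)   ; a few instructions, not megabytes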

Nicolas
From: Henry Bigelow on

pbunyk(a)gmail.com wrote:
> There is this page on Shootout site:
> http://shootout.alioth.debian.org/gp4/miscfile.php?file=benchmarking&title=Flawed%20Benchmarks
> and the first link off it is quite educational... ;-)
>
> > a more important question is:
> >
> > is this shootout really definitive? are all the programs up there
> > written such that one would consider them both elegant and efficient,
> > or are some very poorly written?
> > is it a popular shootout, or are there other benchmarks that people
> > like better?
>
> This is the first time I've heard of this particular competition (and
> set of benchmarks) -- it looks rather cool! I doubt it has the same
> clout as SPEC though... :-)
>
> Hey, while you are at learning lisp, why not debug that 120x memory
> program to see why this happens? profile and time are your friends...
> ;-)
>
> CL-USER> (time (main 5000000))
> 10000000
> Evaluation took:
> 34.466 seconds of real time
> 12.708794 seconds of user run time
> 21.369335 seconds of system run time
> [Run times include 0.044 seconds GC run time.]
> 0 page faults and
> 79,975,616 bytes consed.
> NIL
> CL-USER>
>
> -- yes, almost 80 MB consed -- but why the heck is the threading
> overhead so high? (most of the runtime is in system time, uh-huh...)

thanks paul! by the way, i read the paper analyzing a lisp fannkuch
benchmark for efficiency, and it mentioned the use of 'aref' and
several other things. i looked at the actual benchmark, hoping to find
some telltale sign that it didn't use any of these ideas, but i didn't
find one. so i have no idea whether the shootout fannkuch lisp
benchmark is written the way this paper describes.
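
for reference, this is roughly the style the paper recommends -- a
minimal sketch, not the shootout code, assuming sbcl:

;; a declared element type plus SIMPLE-ARRAY lets sbcl open-code AREF
;; instead of going through the generic accessor.
(declaim (optimize (speed 3) (safety 0)))
(defun sum-fixnums (v)
  (declare (type (simple-array fixnum (*)) v))
  (let ((s 0))
    (declare (type fixnum s))
    (dotimes (i (length v) s)
      (setf s (the fixnum (+ s (aref v i)))))))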

can i interest you or anyone else in optimizing one or more of these
benchmarks, so they aren't such negative publicity for lisp?
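
for anyone who takes that up, sbcl's built-in profiler gives a quick
per-function breakdown -- a minimal sketch, assuming sbcl's sb-profile
package and the shootout's MAIN entry point:

;; instrument the functions of interest, run once, then report:
(sb-profile:profile main)
(main 5000000)
(sb-profile:report)            ; per-function time and bytes consed
(sb-profile:unprofile main)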

BTW for those reading, the shootout is:

http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=sbcl&lang2=gcc

and the offending benchmarks are called fannkuch and chameneos.


henry


>
> Paul B.

From: Henry Bigelow on

Nicolas Neuss wrote:
> "Henry Bigelow" <hrbigelow(a)gmail.com> writes:
>
> > but i'm not sure why certain lisp programs in the shootout are so
> > large. how do i interpret the results of the shootout? for example,
> > lisp fannkuch is 26 times slower than the c version, and lisp chameneos
> > is 120x larger.
> >
>

> Lisp functions should be started and used from within a Lisp
> environment, not from inside a C environment (Unix).

what do you mean exactly? how can you avoid running your program in an
operating system?


> Measured that way, compiled Lisp functions are usually as small as,
> or smaller than, their C counterparts.

when i said 120x larger, i should have said 'uses 120x as much memory
during runtime'. the actual binary is probably not much larger than
the c binary for these benchmarks, and the source code sizes are all
within a factor of 2 or 3.
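
(a quick way to see what's actually live in the heap at runtime is the
standard ROOM report -- a minimal sketch:)

CL-USER> (room nil)   ; one-line summary of current heap usage
CL-USER> (room t)     ; detailed, per-object-type breakdown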

>
> Nicolas
