From: Rahul Jain on
Tamas K Papp <tkpapp(a)gmail.com> writes:

> But he wrote 2 books about CL, which is pretty weird if he hates it.
> One can of course argue that he started hating it later on, but that
> doesn't compute either: writing even a single book should be enough to
> realize that you hate something.

He never liked it. You can tell clearly from what he chooses to ignore.
He didn't write books about CL. He wrote books about Arc, and then
realized what he was writing about. :)

--
Rahul Jain
rjain(a)nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Barry Margolin on
In article
<404e1698-289b-441e-9d0b-635b432f7781(a)a21g2000yqc.googlegroups.com>,
w_a_x_man <w_a_x_man(a)yahoo.com> wrote:

> Guy L. Steele, Jr., July 1989:
>
> I think we may usefully compare the approximate number of pages
> in the defining standard or draft standard for several
> programming languages:
>
> Common Lisp  1000 or more
> COBOL         810
> ATLAS         790
> Fortran 77    430
> PL/I          420
> BASIC         360
> ADA           340
> Fortran 8x    300
> C             220
> Pascal        120
> DIBOL          90
> Scheme         50

This is a little unfair, because the Common Lisp base language includes
lots of things that would be in libraries in most other languages, e.g.
hash tables, sequences, association lists, and portable pathnames. In
fact, even types that are basic primitives in Lisp, such as linked
lists, imaginary numbers, and symbols, would generally be in libraries
in traditional languages.
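
For instance, all of these are available in the base language of any
conforming implementation, no library required (just an illustrative
sketch, not part of Steele's comparison):

(let ((table (make-hash-table :test #'equal)))  ; hash tables: built in
  (setf (gethash "root" table) (sqrt -2))       ; complex numbers: built in
  (list (gethash "root" table)
        'just-a-symbol                          ; symbols: built in
        (pathname-type #p"foo/bar.lisp")))      ; portable pathnames: built in
;; => (#C(0.0 1.4142135) JUST-A-SYMBOL "lisp"), give or take float printing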

--
Barry Margolin, barmar(a)alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
*** PLEASE don't copy me on replies, I'll read them in the group ***
From: Pascal Costanza on
On 18/12/2009 03:51, Barry Margolin wrote:
> In article
> <404e1698-289b-441e-9d0b-635b432f7781(a)a21g2000yqc.googlegroups.com>,
> w_a_x_man<w_a_x_man(a)yahoo.com> wrote:
>
>> Guy L. Steele, Jr., July 1989:
>>
>> I think we may usefully compare the approximate number of pages
>> in the defining standard or draft standard for several
>> programming languages:
>>
>> Common Lisp  1000 or more
>> COBOL         810
>> ATLAS         790
>> Fortran 77    430
>> PL/I          420
>> BASIC         360
>> ADA           340
>> Fortran 8x    300
>> C             220
>> Pascal        120
>> DIBOL          90
>> Scheme         50
>
> This is a little unfair, because the Common Lisp base language includes
> lots of things that would be in libraries in most other languages, e.g.
> hash tables, sequences, association lists, and portable pathnames. In
> fact, even types that are basic primitives in Lisp, such as linked
> lists, imaginary numbers, and symbols, would generally be in libraries
> in traditional languages.

I think there has been a change in perspective. Younger generations now
take it for granted that these things should be part of the language, and
consider Common Lisp actually too small: it should have even more.


Pascal

--
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: George Neuner on
On Fri, 18 Dec 2009 10:42:10 +0100, Pascal Costanza <pc(a)p-cos.net>
wrote:

>On 18/12/2009 03:51, Barry Margolin wrote:
>
>> This is a little unfair, because the Common Lisp base language includes
>> lots of things that would be in libraries in most other languages, e.g.
>> hash tables, sequences, association lists, and portable pathnames. In
>> fact, even types that are basic primitives in Lisp, such as linked
>> lists, imaginary numbers, and symbols, would generally be in libraries
>> in traditional languages.
>
>I think there is a change in perspective now. Younger generations
>nowadays consider that these things should be part of a language, and
>that Common Lisp is actually too small and should have even more.

I find many members of the "younger generations" to be unacceptably
ignorant in too many areas. IMO, much of what they "consider" is not
worth the concern of a reasonable person. Now, if politicians were
reasonable ...

I don't have any problem with prescribed standard libraries. I do
have a problem with people who should know better deliberately
pandering to ignorance by conflating the notions of language and
library. (I'm not saying you are doing so, btw, but only that the
practice is widespread and annoys me whenever I encounter it).

George
From: Thierry Pirot on
ccc31807 writes:
> On page 61 of Graham's ANSI Common Lisp, he defines bin-search and
> finder, which I have copied below.
>
> I have struggled with this for days, and still don't understand these
> definitions completely. This morning, I wrote bin-search-2, which took
> about ten minutes to write (and about thirty minutes to debug), but I
> understand it thoroughly -- and I am just the merest beginner, barely
> a toe in the water.

[...]

> ;;; ---------ch_04.lisp-----------------------
> (defparameter v (vector 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33
> 35 37 39 41))
>
> (defun bin-search (obj vec)
> (format t "bin-search ~a ~a~%" obj vec)
> (let ((len (length vec)))
> (and (not (zerop len)))
> (finder obj vec 0 (- len 1))))
>
The parentheses in that copy of bin-search are wrong.
Is this genuinely Graham's? I don't own the book to check.

> (defun finder (obj vec start end)
>   (format t "finder obj: ~a, vec: ~a, start: ~a, end: ~a~%"
>           obj vec start end)
>   (format t " ~a~%" (subseq vec start (+ end 1)))
>   (let ((range (- end start)))
>     (if (zerop range)                              ; outer if test
>         (if (eql obj (aref vec start))             ; inner if test
>             obj                                    ; the return obj
>             nil)                                   ; else return nil
>         (let ((mid (+ start (round (/ range 2))))) ; outer then
>           (let ((obj2 (aref vec mid)))
>             (if (< obj obj2)
>                 (finder obj vec start (- mid 1))
>                 (if (> obj obj2)
>                     (finder obj vec (+ mid 1) end)
>                     obj)))))))
>
I doubt that such code could be understandable or correct;
indeed it is not: just evaluate (bin-search 0 #(1 2)) and blow the stack.

What is the origin of this bug, and how can it be corrected?
The bug comes from fiddling with indices and splitting their range.
Other posts in this thread have emphasised the advantage of
separating functionality among functions.
finder is a step in that direction; however, it still dangerously mixes
the job of finding with the job of dichotomy.
The dichotomy can be handled by the following quick and dirty protocol
for ranges (represented as conses) and their splitting.

(defun range (start end)                ; end inclusive
  (if (<= start end)
      (cons start end)
      (error "Invalid range ~s ~s. " start end)))

(defmethod range_of ((v vector))
  (unless (equalp #() v) (range 0 (1- (length v)))))

(defun range_len (range)
  (if range (1+ (- (cdr range) (car range))) 0))

(defun range_mid (range)
  (when (and range (< (car range) (cdr range)))
    (+ (car range) (floor (range_len range) 2))))

(defmethod range_splits (range)
  "The cons of two complementary 'even' subranges of the range range. "
  (let ((mid (range_mid range)))
    (when mid                           ; is_splittable
      (cons (range (car range) (1- mid))
            (range mid (cdr range))))))
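
As a quick sanity check of the protocol (the expected values below are
worked out by hand, so treat them as a sketch):

(range_of #(1 3 5))          ; => (0 . 2)
(range_len (range 0 2))      ; => 3
(range_mid (range 0 2))      ; => 1
(range_splits (range 0 2))   ; => ((0 . 0) . (1 . 2)), printed ((0 . 0) 1 . 2)
(range_splits (range 4 4))   ; => NIL, a singleton cannot be split
(range_of #())               ; => NIL, the empty vector has no range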

Now, one should wonder about this protocol:
is it reliable (it is at least easy to unit-test, cf. the quick checks above),
and is it handier to use than start and end parameters?
Let's see:

(defun bin-search (obj vec)
  (format t "bin-search' ~a ~a~%" obj vec)
  (unless (equalp #() vec)
    (labels ((finder (range)
               (format t "range:~a~%" range)
               (format t " ~a~%" (subseq vec (car range) (1+ (cdr range))))
               (let ((splits (range_splits range)))
                 (if splits
                     (if (< obj (aref vec (cadr splits)))
                         (finder (car splits))
                         (finder (cdr splits)))
                     (when (= obj (aref vec (car range))) obj)))))
      (finder (range_of vec)))))

The original finder was rather self-contained and used a lot of variables;
the new one involves a lot of functions, although it essentially relies
only on range_splits.
It also performs a slightly different algorithm [1], using no pivot:
if the range is splittable,
then search in the appropriate split;
else (the range stands for a singleton)
check whether its single element is the searched obj.
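
In particular, the call that blew the stack before now simply reports a
failed search, and a successful search returns the object (sample calls;
the trace output from the format forms is elided):

(bin-search 0 #(1 2))        ; => NIL, no stack overflow any more
(bin-search 7 #(1 3 5 7 9))  ; => 7
(bin-search 8 #(1 3 5 7 9))  ; => NIL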

> (defun bin-search-2 (obj vec &optional (start 0))
>   (let* ((len (length vec))
>          (mid (floor (/ len 2))))
>     (format t "bin-search-2 obj: ~a, vec: ~a, start ~a, end: ~a, mid: ~a~%"
>             obj vec start len mid)
>     (cond ((zerop len) nil)
>           ((= obj (svref vec mid)) obj)
>           ((< obj (svref vec mid))
>            (bin-search-2 obj (subseq vec start mid)))
>           ((> obj (svref vec mid))
>            (bin-search-2 obj (subseq vec (+ mid 1) len))))))
>
> ;; I don't know how to set 'start' to 0 by default, other than by
> ;; making it optional.
> ;; I would appreciate help with this.
>
You simply don't need it; it is always 0.
So your function can be written like this:

(defun bin-search (obj vec &key (cmp #'cmp))
  (format t "bin-search''' obj: ~a vec: ~a ~%" obj vec)
  (let* ((len (length vec))
         (mid (floor len 2)))
    (when (/= 0 len)
      (case (funcall cmp obj (aref vec mid))
        (0  obj)
        ;; pass cmp along so a non-default comparator survives the recursion
        (-1 (bin-search obj (subseq vec 0 mid) :cmp cmp))
        (+1 (bin-search obj (subseq vec (1+ mid)) :cmp cmp))))))

where I added

(defgeneric cmp (x y)
  (:documentation "+1, 0, -1 according to x being after, with, before y. "))

(defmethod cmp ((x integer) (y integer)) (signum (- x y)))
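
For instance, nothing more than an extra cmp method is needed to search
a sorted vector of strings (my own quick sketch, not from the book):

(defmethod cmp ((x string) (y string))
  (cond ((string< x y) -1)
        ((string> x y) +1)
        (t 0)))

;; (bin-search "baz" #("bar" "baz" "foo" "quux"))  ; => "baz"
;; (bin-search "zzz" #("bar" "baz" "foo" "quux"))  ; => NIL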

To cope with the problem of the consing of subseq, maybe using

(defun subvec (a_vector start &optional (end (length a_vector)))
  (make-array (- end start)
              :displaced-to a_vector
              :displaced-index-offset start
              :element-type (array-element-type a_vector)))

instead of subseq may help, but I know little about displaced arrays,
and a quick profiling shows only slight differences in CLISP.
I guess it would matter more for bit vectors or for vectors of big elements.
I hope someone will clarify this.
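
At least the sharing itself is easy to observe (a small sketch; note that
a displaced vector is not a simple-vector, so svref would not apply to it):

(let* ((v (vector 1 3 5 7 9))
       (w (subvec v 1 4)))  ; w is #(3 5 7), displaced onto v
  (setf (aref w 0) 42)      ; writing through w ...
  (list w v))               ; => (#(42 5 7) #(1 42 5 7 9)), ... also changes v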



[1] This algorithm exhibits a nice pattern for processing by dichotomy,
which can be translated, without further explanation, as

(defun spliterator (rektor f_atom &optional (splitter #'split))
  (labels ((f (x)
             (let ((splits (funcall splitter x)))
               (if splits
                   (funcall rektor (f (car splits)) (f (cdr splits)) splits)
                   (funcall f_atom x)))))
    #'f))

(defun bin-search (obj vec)
  (unless (equalp #() vec)
    (funcall
     (spliterator
      (lambda (x y splits) (if (< obj (aref vec (cadr splits))) x y))
      (lambda (x) (when (= obj (aref vec (car x))) obj))
      'range_splits)
     (range_of vec))))
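
The same higher-order function also covers other divide-and-conquer
jobs; for instance, summing a vector by dichotomy (a throwaway sketch).
Note that spliterator calls f on both splits, so the bin-search just
above visits the whole vector; that is fine for showing the pattern,
less so for a real binary search.

(defun dicho-sum (vec)
  "Sum the elements of VEC by splitting its index range in two."
  (if (equalp #() vec)
      0
      (funcall
       (spliterator
        (lambda (x y splits) (declare (ignore splits)) (+ x y)) ; combine halves
        (lambda (range) (aref vec (car range)))                 ; singleton case
        'range_splits)
       (range_of vec))))

;; (dicho-sum #(1 2 3 4 5))  ; => 15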

--
Take it Easy Don't Hurry Be Happy

Thierry

((LAMBDA (LAMBDA) `(,LAMBDA ',LAMBDA)) '(LAMBDA (LAMBDA) `(,LAMBDA ',LAMBDA)))