From: Yannick Duchêne (Hibou57) on
On Tue, 11 May 2010 09:58:16 +0200, Maciej Sobczak
<> wrote:

> On 11 Maj, 00:24, Yannick Duchêne (Hibou57) <yannick_duch...(a)>
> wrote:
>> Only as long as you rely solely on fopen;
> This function is defined in the C standard.
>> there is also an "int open(char * filename, int flags)" which is widely
>> used.
> This one is not defined in the C standard and as such is not part of
> the C language.
So C programmers do not stick to the standard. This "open" is even more
widely used than "fopen"; I see it everywhere.

Note: I knew "open" is not part of ISO/ANSI C; I was teasing you about a
common trouble with C sources.
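As an aside for readers following along, the difference is easy to see in code. The sketch below (a hypothetical helper, not code from this thread) reads the first byte of a file through both interfaces; only the fopen/fread half is guaranteed by ISO C, while open/read come from POSIX.

```c
#include <stdio.h>    /* fopen, fread, fclose: ISO C */
#include <fcntl.h>    /* open, O_RDONLY: POSIX, not ISO C */
#include <unistd.h>   /* read, close: POSIX */

/* Reads the first byte of a file twice: once through the ISO C buffered
   FILE* interface, once through the POSIX file-descriptor interface.
   Returns 0 when both reads succeed and agree, -1 on any error. */
int first_byte_both_ways(const char *path)
{
    unsigned char a, b;

    FILE *f = fopen(path, "rb");      /* portable: defined by the C standard */
    if (f == NULL || fread(&a, 1, 1, f) != 1) {
        if (f) fclose(f);
        return -1;
    }
    fclose(f);

    int fd = open(path, O_RDONLY);    /* POSIX only: fd is an int, not a pointer */
    if (fd < 0 || read(fd, &b, 1) != 1) {
        if (fd >= 0) close(fd);
        return -1;
    }
    close(fd);

    return a == b ? 0 : -1;
}
```

A program using only the first half compiles on any hosted C implementation; the second half requires a POSIX environment.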

>> Whether or not this "int" actually stands for a pointer is another
>> story (as in C one will always be able to cast from int to void*, this
>> may be true indeed).
> No this is not true either - assuming that you mean the "open"
> function defined in the POSIX standard, the return value is the lowest
> *numbered* unused file descriptor. This is aligned with the semantics
> of other functions like select.
> Another reason for why this cannot be a pointer is that NULL is
> defined to be a pointer value that does *not* point to any object; at
> the same time STDIN_FILENO, denoting a file descriptor for standard
> input, has a value 0, which is equivalent to NULL.
True, a clever observation.
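Maciej's point about descriptor 0 can be checked directly. This small sketch (hypothetical, assuming a POSIX system) confirms that the standard descriptors are the small integers 0, 1, and 2, so a perfectly valid descriptor can equal 0, whereas a valid FILE* can never equal NULL.

```c
#include <unistd.h>   /* STDIN_FILENO, STDOUT_FILENO, STDERR_FILENO (POSIX) */

/* File descriptors are small non-negative ints handed out from 0 upward;
   0 is a legitimate, open descriptor.  A null FILE* is by definition
   invalid, which is one reason a descriptor cannot be a disguised pointer. */
int standard_fds_are_small_ints(void)
{
    return STDIN_FILENO == 0 && STDOUT_FILENO == 1 && STDERR_FILENO == 2;
}
```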

> Interestingly, Unix file descriptors are *much safer* than standard
> C's FILE pointers in that they are free from undefined behavior. Of
> course, it would be even better to have a distinct type for them.
> (OK, that's enough for off-topic confusions, I duck away to reinvent
> My_Better.Text_IO on top of Ada.Streams ;-) )
Please, what does “to duck away” mean?

pragma Asset ? Is that true ? Waaww... great
From: Yannick Duchêne (Hibou57) on
On Tue, 11 May 2010 10:35:24 +0200, Dmitry A. Kazakov
<mailbox(a)> wrote:

> On Mon, 10 May 2010 23:30:28 +0200, Ludovic Brenta wrote:
>> Maciej Sobczak <> writes:
>>> Coming back to I/O - what I miss in Ada is the equivalent of fread in
>>> C - that is, an operation that reads *up to* the given number of
>>> bytes. Or maybe there is something that I didn't notice? Such an
>>> operation is an important basis for custom buffered input.
> It is a basis for creating inefficient, time- and space-consuming programs.
> But we had this discussion before.
Can you tell me more, please?
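For readers who want to see what Maciej means by fread-based buffering, here is a minimal sketch (hypothetical names, not code from the thread) built on fread's contract of returning *up to* the requested number of bytes:

```c
#include <stdio.h>

/* A tiny buffered reader built on fread's "up to N bytes" contract:
   fread may return fewer bytes than requested near end of file, and
   returns 0 once the input is exhausted. */
struct reader {
    FILE *src;
    unsigned char buf[4096];
    size_t len;   /* bytes currently in buf */
    size_t pos;   /* index of the next unread byte */
};

/* Returns the next byte (0..255), or -1 at end of input. */
int reader_next(struct reader *r)
{
    if (r->pos == r->len) {
        /* Refill: ask for up to 4096 bytes; a short count is normal. */
        r->len = fread(r->buf, 1, sizeof r->buf, r->src);
        r->pos = 0;
        if (r->len == 0)
            return -1;  /* EOF or error: fread returned a zero count */
    }
    return r->buf[r->pos++];
}
```

Near end of file, fread simply returns a short count; the caller needs no separate end-of-file probe before reading.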

From: Warren on
Niklas Holsti expounded in news:84sfkjF3btU1(a)

> Warren wrote:
>> Niklas Holsti expounded in news:84r4k5Ftk8U1(a)
>>> ... if you read character by character, use the function
>>> Text_IO.End_Of_Line to detect the end of an input line. This works
>>> the same way in all systems, whatever line termination character
>>> (sequence), if any, is used. Follow with Skip_Line to go to the start
>>> of the next line.
>> I assume that is true before you read the next "char". If I
>> then Skip_Line as you say, will I also get End_Of_Line true
>> if the next line is empty (null)?
> Yes, with one problem, which is that it is hard (that is, I don't know
> how) to detect when the very last line in the input file is null. This
> is because End_Of_Line returns true at end of file, and also when the
> only data remaining in the file is an end-of-line (and end-of-page)
> before the true end of file. One consequence is that a truly empty file
> (like /dev/null) looks the same as a file with one empty line (like the
> output of echo "").

I see. In my current app, that's not a big deal, but it
is good to be aware of that.

One thing that comes up with C-based streams is that sometimes
the last line does not end in an LF (and/or CR). You then get
EOF but no indication of a line ending, so you usually code
around it to "fix the line" when you see that.
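That workaround can be sketched as follows (a hypothetical helper, assuming plain ISO C stdio); a final line that lacks a terminating newline is returned as if it had been properly ended, and an empty line is still distinguishable from end of input:

```c
#include <stdio.h>

/* Reads one line into buf (up to cap-1 bytes, NUL-terminated).
   Returns the line length, or -1 when the stream is exhausted.
   An unterminated last line is "fixed up": returned as a normal line. */
int read_line(FILE *in, char *buf, int cap)
{
    int c, n = 0;
    while ((c = fgetc(in)) != EOF && c != '\n')
        if (n < cap - 1)
            buf[n++] = (char)c;
    buf[n] = '\0';
    if (c == EOF && n == 0)
        return -1;   /* true end of input, no partial line pending */
    return n;        /* includes the EOF-without-LF case when n > 0 */
}
```

Note that an empty line yields 0 while exhausted input yields -1, so the two cases this thread worries about stay distinct.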

> Try this program on a file with some empty lines:

I see. With 2 null input lines, it reports:

$ ./linelen <t.t
Line 1 has 0 characters.

Well, it seems that just about everything has a
wart somewhere.

From: Warren on
Yannick Duchêne (Hibou57) expounded in

> On Tue, 11 May 2010 09:34:42 +0200, Niklas Holsti
> <niklas.holsti(a)tidorum.invalid> wrote:
>> Yes, with one problem, which is that it is hard (that is, I don't
> I was about to say this is a matter only with Standard_Input; in the
> context of the "/dev/null" example you gave, you have no need to open
> "/dev/null" as a file of lines if you have no reason to think it is
> indeed a file of lines.

But feed that same program with 2 empty lines,
and the program reports:

$ ./linelen <t.t
Line 1 has 0 characters.

This is clearly defective. Either it returns null lines
or it doesn't; here we have a bit of both (a "Microsoft
solution" ;-)).

> The only remaining case is the one of the standard input.

"Standard input" (emphasis on "standard") has nothing to
do with it. Whether you read it as standard input or
from a "non-standard" source, it behaves the same way.

The file I used above is a well-formed text file.

$ wc -l t.t
2 t.t

yet the program only sees 1 line. Not a huge problem,
just a little wart.

But the real question is: is this a design wart or an
implementation wart?


From: Dmitry A. Kazakov on
On Tue, 11 May 2010 17:23:44 +0200, Yannick Duchêne (Hibou57) wrote:

> On Tue, 11 May 2010 10:26:02 +0200, Dmitry A. Kazakov
> <mailbox(a)> wrote:
>>> But you have no way to know when you've read
>>> an empty line in a lexer routine that is reading
>>> character by character.
>> A lexer routine shall never do that. You either read lines and then parse
>> them, or else you do stream input and the line end is to be determined by
>> the lexer (i.e. by the language being parsed).
> With all due respect, I really don't agree with that. A lexer is not
> required to be line-oriented.

Most languages I know are line-oriented. Ada is, C++ is, even bash is. If
the language had no lines, no comments like //, and ignored LF, how would
you report errors? As offsets in Unicode code points from the beginning of
the source file?
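For what it's worth, even a purely stream-oriented lexer can still report line:column positions by counting LFs as it consumes characters. A minimal sketch (hypothetical, not from the thread):

```c
/* Position tracking for a character-at-a-time lexer: the lexer consumes
   a flat stream, but counts '\n' as it goes, so errors can still be
   reported as line:column even though input is never read line by line. */
struct cursor {
    int line;   /* 1-based line number */
    int col;    /* 1-based column of the NEXT character */
};

void advance(struct cursor *c, int ch)
{
    if (ch == '\n') {
        c->line += 1;
        c->col = 1;
    } else {
        c->col += 1;
    }
}
```

The lexer calls advance() once per consumed character and snapshots the cursor when it starts a token, giving line-based diagnostics without line-based input.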

BTW, a pure stream lacks not only EOL, but also EOF. So a truly
stream-oriented program must be infinite. (:-))

Dmitry A. Kazakov