From: Peter Olcott on

"Phil Carmody" <thefatphil_demunged(a)yahoo.co.uk> wrote in
message news:87tyrme5rg.fsf(a)kilospaz.fatphil.org...
> "Peter Olcott" <NoSpam(a)OCR4Screen.com> writes:
>> Actually I hope that you are right and I am wrong, it
>> would make my job much simpler.
>>
>> I also figured out how you could be mostly right, and
>> thus have the superior solution
>
> Finally you are showing your true genius!
>
>>, this would mostly depend upon the details of exactly
>> how time slicing is implemented: the size and frequency
>> of each time slice. I am estimating that both the size
>> and the frequency may vary over some range of values.
>>
>> My original solution might still provide slightly better
>> performance for the high priority jobs, at the expense of
>> a lot of trouble and potential for error, and substantial
>> reduction in the performance of the other jobs.
>
> If your other jobs might suffer from _errors_ depending on
> how they're scheduled, then _you're programming them
> wrong_.
>
>> The performance degradation that I might expect would be
>> that a small fraction of the high priority jobs might take
>> about twice as long. If it is as bad as this, it would be
>> acceptable. I doubt that it would be worse than this.
>
> You're "expecting" things, and yet you've shown a complete
> ignorance of how things actually are. Your expectations are
> therefore basically meaningless.
>
> Implement the simplest working solution, measure its
> performance, and identify the bottlenecks. Focus on those
> bottlenecks, rather than elsewhere.

This is the conventional wisdom that decades of development
experience have shown me to be erroneous. It may be fine for
applications programming with human-to-machine interfaces, but
for machine-to-machine interfaces where performance matters, I
have found that understanding all of the underlying details,
and coding almost everything to be as fast as possible,
generally produces substantially more efficient systems. I have
also found that the development (and ongoing maintenance) costs
of such systems are minimal.

About 95% of the time is spent on design, with coding treated
as the most detailed level of design. The remaining 5% is spent
on testing and debugging. One aspect of the design is making
the code easy to test, and it is best to automate these tests
so that regression testing stays simple.
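
For example, each test can be a tiny standalone program that
exercises one unit and exits nonzero on failure, so a script or
a Makefile "check" target can rerun the whole suite after every
change. A minimal sketch (parse_digit() is just a made-up unit
for illustration):

/* test_parse_digit.c - minimal automated regression test sketch. */
#include <assert.h>
#include <stdio.h>

/* Hypothetical unit under test: '0'-'9' -> 0-9, anything else -> -1. */
static int parse_digit(char c)
{
    return (c >= '0' && c <= '9') ? c - '0' : -1;
}

int main(void)
{
    assert(parse_digit('0') == 0);
    assert(parse_digit('9') == 9);
    assert(parse_digit('a') == -1);  /* invalid input is rejected */
    puts("test_parse_digit: PASS");
    return 0;  /* a failed assert aborts with a nonzero status */
}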

The reason for this is that it is cheap to throw away a
less-than-optimal design and start over with a new one,
compared to the price that must be paid to improve a design
that has already been implemented. At the early (investigative)
stage of this process, some of the initial designs may prove to
be very much off-the-wall.

>
> Of course, the best way of making the places where there
> are
> bottlenecks faster is by using the -funroll-loops switch.
>
> Phil
> --
> I find the easiest thing to do is to k/f myself and just
> troll away
> -- David Melville on r.a.s.f1


From: Peter Olcott on

"Ersek, Laszlo" <lacos(a)caesar.elte.hu> wrote in message
news:Pine.LNX.4.64.1004081113340.13156(a)login01.caesar.elte.hu...
> On Wed, 7 Apr 2010, Peter Olcott wrote:
>
>> "David Schwartz" <davids(a)webmaster.com> wrote in message
>> news:67a7c2a3-c7b1-4555-89c4-ae6dffe40fbc(a)r18g2000yqd.googlegroups.com...
>>
>>> It would be absolutely idiotic in the extreme for the
>>> kernel to delay a process that attempted to 'pread' one
>>> byte from the beginning of a file until another process
>>> finished a 100MB 'pwrite' to the end of that file, sourced
>>> from a file 'mmap'ed from a slow NFS server.
>>
>> Documentation indicates that they are atomic.
>
> (I hope my attempt at fixing the top-posting succeeded in
> pairing up matching paragraphs.)
>
> Fixing inaccuracies in the Linux manual pages is an
> ongoing activity, AFAICT. I suggest writing a test program
> for the situation described above.
>
> lacos

I have to top post whenever I am replying to someone who
somehow turns quoting off; otherwise there would be no way to
tell who is saying what.

Two different editions of Advanced Programming in the UNIX
Environment explain that the atomicity of these operations is a
fundamental part of their design.

I already cited this once before, and someone else confirmed
that it is correct.


From: David Schwartz on
On Apr 7, 6:42 pm, "Peter Olcott" <NoS...(a)OCR4Screen.com> wrote:

> Documentation indicates that they are atomic.

But, again, you are confusing two different notions of atomicity. They
are "atomic" in the sense that an intervening operation on the same
file descriptor cannot cause the read or write to take place at a
different offset. So you can't just implement 'pread' as a seek
followed by a read.
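
A quick sketch (hypothetical code, just to show the race) of why
seek-then-read breaks down when two threads share one descriptor,
while pread() does not:

/* Sketch: lseek()+read() on a shared descriptor is not equivalent
   to pread(), because the file offset is shared state. */
#include <fcntl.h>
#include <pthread.h>
#include <sys/types.h>
#include <unistd.h>

static int fd;  /* one descriptor shared by both threads */

static void *broken_reader(void *arg)
{
    off_t where = *(off_t *)arg;
    char byte;

    lseek(fd, where, SEEK_SET);  /* the other thread may seek here... */
    read(fd, &byte, 1);          /* ...so this can read the wrong offset */
    return NULL;
}

static void *safe_reader(void *arg)
{
    off_t where = *(off_t *)arg;
    char byte;

    pread(fd, &byte, 1, where);  /* offset travels with the read itself */
    return NULL;
}

int main(void)
{
    off_t a = 0, b = 4096;
    pthread_t t1, t2;

    fd = open("datafile", O_RDONLY);  /* any existing file will do */

    pthread_create(&t1, NULL, broken_reader, &a);  /* racy */
    pthread_create(&t2, NULL, broken_reader, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    pthread_create(&t1, NULL, safe_reader, &a);    /* no shared offset used */
    pthread_create(&t2, NULL, safe_reader, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    close(fd);
    return 0;
}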

But they are not atomic against operations on other descriptors that
reference the same file. Nor is the read or write part of the
operation itself atomic against the file.

They are only atomic in the sense that the seek is 'glued' to the
read/write operation. In fact, the standards never define these
operations as atomic in any sense and simply say that they don't
involve the file position.

"The pread() function performs the same action as read(), except that
it reads from a given position in the file without changing the file
pointer." "The pwrite() function performs the same action as write(),
except that it writes into a given position without changing the file
pointer." (And 'write' is explicitly described as not atomic.)

Manual pages are not vetted the way standards are. Though it's
always possible they're truly atomic on a particular platform
(horrible as that would be), most likely it's just someone trying
to explain the requirements in the standards in
easier-to-understand words. Also, easier-to-misunderstand words.

DS
From: David Schwartz on
On Apr 8, 4:40 am, Rainer Weikusat <rweiku...(a)mssgmbh.com> wrote:

> [T]he C library must not silently turn a single-threaded process into a
> multi-threaded one.

Why not? Or, to put it another way, why must a platform even have any
notion of a 'single-threaded process'?

DS
From: Ersek, Laszlo on
On Thu, 8 Apr 2010, David Schwartz wrote:

> On Apr 8, 4:40 am, Rainer Weikusat <rweiku...(a)mssgmbh.com> wrote:
>
>> [T]he C library must not silently turn a single-threaded process into a
>> multi-threaded one.
>
> Why not? Or, to put it another way, why must a platform even have any
> notion of a 'single-threaded process'?

If it intends to conform to the SUS...

http://www.opengroup.org/onlinepubs/9699919799/functions/sigprocmask.html

----v----
The pthread_sigmask() function shall examine or change (or both) the
calling thread's signal mask, regardless of the number of threads in the
process. The function shall be equivalent to sigprocmask(), without the
restriction that the call be made in a single-threaded process.

In a single-threaded process, the sigprocmask() function shall examine or
change (or both) the signal mask of the calling thread.
----^----
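
In practice that means a threaded program manipulates its mask
with pthread_sigmask(); a common pattern (only a sketch, assuming
SIGINT is the signal we care about) is to block the signal before
creating any threads, so every thread inherits the mask, and then
collect it synchronously in one dedicated thread with sigwait():

/* Sketch: per-thread signal masks in a multi-threaded program. */
#include <pthread.h>
#include <signal.h>
#include <stdio.h>

static void *signal_thread(void *arg)
{
    sigset_t *set = arg;
    int sig;

    sigwait(set, &sig);              /* blocks until SIGINT is delivered */
    printf("got signal %d\n", sig);
    return NULL;
}

int main(void)
{
    sigset_t set;
    pthread_t tid;

    sigemptyset(&set);
    sigaddset(&set, SIGINT);

    /* Every thread created after this point inherits the mask. */
    pthread_sigmask(SIG_BLOCK, &set, NULL);

    pthread_create(&tid, NULL, signal_thread, &set);
    pthread_join(tid, NULL);         /* press Ctrl-C to let it finish */
    return 0;
}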

See also the rationale of pthread_atfork():

http://www.opengroup.org/onlinepubs/9699919799/functions/pthread_atfork.html#tag_16_402_08

----v----
There are at least two serious problems with the semantics of fork() in a
multi-threaded program. [...]
----^----
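
One of those problems is that after fork() only the forking
thread exists in the child, so a mutex held by any other thread
stays locked forever; pthread_atfork() is the usual (imperfect)
band-aid. A sketch protecting a single, hypothetical mutex:

/* Sketch: pthread_atfork() handlers that keep one mutex usable
   across fork() in a multi-threaded process. */
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void prepare(void) { pthread_mutex_lock(&lock); }    /* before fork */
static void parent(void)  { pthread_mutex_unlock(&lock); }  /* in parent   */
static void child(void)   { pthread_mutex_unlock(&lock); }  /* in child    */

int main(void)
{
    pthread_atfork(prepare, parent, child);

    /* ... threads that lock and unlock 'lock' would run here ... */

    if (fork() == 0) {
        /* child: 'lock' is consistent because the handlers released it */
        _exit(0);
    }
    return 0;
}

Even so, the rationale's conclusion stands: in the child of a
multi-threaded fork(), only async-signal-safe functions are safe
to call until one of the exec functions is reached.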

My favorite nitpick is multi-threaded libraries, also treated in the
rationale. They may completely change (euphemism for "screw up") the
semantics of a previously perfectly fine, single-threaded program, if it
calls fork(), or handles signals in a non-completely-defensive way, or
both (-> system(), perhaps popen()).

(I'm quite sure I didn't tell you anything new, so I guess I didn't
understand what you meant or am plainly wrong.)

lacos