From: Peter Olcott on

"Ian Collins" <ian-news(a)hotmail.com> wrote in message
news:82789qFp8jU15(a)mid.individual.net...
> On 04/ 9/10 11:39 AM, Peter Olcott wrote:
>> "Ian Collins"<ian-news(a)hotmail.com> wrote:
>>>
>>> You'll soon find people don't have time to untangle your
>>> posts. There are plenty of Outlook users who have had the
>>> decency to apply the fix once it is pointed out to them.
>>>
>> If you want to provide the exact link that will solve the
>> problem, I will look into it.
>
> I've never had to suffer Outlook, and given this group's
> topic you probably won't find many users here.
> http://jump.to/outlook-quotefix looks promising.
>
> --
> Ian Collins

It didn't work. David's posts were the only ones with
quoting turned off, and they still have quoting turned off.


From: Casper H.S. Dik on
"Peter Olcott" <NoSpam(a)OCR4Screen.com> writes:

>(1) Exactly how reliable are Named Pipes?

Very reliable.
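
(To make that concrete, here is a minimal sketch of a named-pipe
reader; this is my illustration, not part of Casper's answer, and
/tmp/ocr_requests is a made-up path:)

    /* Minimal named-pipe (FIFO) reader; /tmp/ocr_requests is a
       made-up path for illustration. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[256];
        mkfifo("/tmp/ocr_requests", 0666);        /* EEXIST is harmless */
        int fd = open("/tmp/ocr_requests", O_RDONLY); /* blocks for a writer */
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);    /* echo each request */
        close(fd);
        return 0;
    }

The writer side is just open("/tmp/ocr_requests", O_WRONLY) plus
write(); writes of up to PIPE_BUF bytes are atomic, which is what
makes FIFOs dependable for small request records.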

>(2) Is there a completely certain way that a write to a file
>can be flushed to the disk that encompasses every possible
>memory buffer, including the hard drive's onboard cache? I
>want to be able to yank the power cord at any moment and not
>get corrupted data other than the most recent single
>transaction.

fsync(); if your fsync() implementation doesn't flush all the
way to disk, then your implementation is broken.

Unfortunately, many implementations are broken: they do not
flush the disk's write cache, and there are also cases where
the disk itself ignores attempts to flush it.

Originally, Sun-supplied hardware disabled the write cache
for SCSI and other disks. UFS never flushed the write cache
if it was enabled, which is why it was disabled.

fsync() on a file requires a sync to disk of:
- the data
- the inode (updating the length of the file)
- the indirect blocks
- the directory where the inode lives
  (the data of the directory, and the inode of the
  directory if its length changes)

Not all implementations of UFS perform all updates needed
when fsync() is called.

With ZFS, the write cache is enabled, but it *is* flushed
when you call fsync(). As writes are ordered, calling fsync()
on a file makes sure that the inode, the directory, etc. are
all consistent (ZFS considers metadata and data equally
important).
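
(To make that list concrete, a minimal sketch of what the
application side looks like; this is my illustration, not from
Casper's post, durable_append is a made-up name, and it assumes a
non-broken fsync() that really flushes the write cache:)

    /* Sketch: make one record durable the way the list above
       describes -- sync the file itself, then the directory
       that holds it.  durable_append is a made-up name. */
    #include <fcntl.h>
    #include <unistd.h>

    int durable_append(const char *path, const char *dirpath,
                       const void *buf, size_t len)
    {
        int fd = open(path, O_WRONLY | O_APPEND | O_CREAT, 0644);
        if (fd < 0)
            return -1;
        /* data + inode + indirect blocks */
        if (write(fd, buf, len) != (ssize_t)len || fsync(fd) != 0) {
            close(fd);
            return -1;
        }
        close(fd);

        /* and the directory entry, in case the file is new */
        int dfd = open(dirpath, O_RDONLY);
        if (dfd < 0)
            return -1;
        int rc = fsync(dfd);
        close(dfd);
        return rc;
    }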

Casper
--
Expressed in this posting are my opinions. They are in no way related
to opinions held by my employer, Sun Microsystems.
Statements on Sun products included here are not gospel and may
be fiction rather than truth.
From: Casper H.S. Dik on
"Peter Olcott" <NoSpam(a)OCR4Screen.com> writes:

>I just checked: the third-party provider provides UPS. Most
>experts seem to say not to worry about the disk drive's
>onboard cache if a UPS is available. It looks like the only
>alternative that can be counted on for this would be to
>disable write caching for the drive.


I disagree; in Holland we have a reliable power grid. My
experience is that UPSes fail more often than the power grid
does. I.e., even if you have a UPS, you may still lose the
data in the write cache.

This needs to be fixed in software; ZFS is able to handle
this properly with the write caches enabled.

(Note that most PC drives are designed for deployment on
Windows with the write cache enabled; it is not clear that
disabling the write cache will work for all models.)

Casper
--
Expressed in this posting are my opinions. They are in no way related
to opinions held by my employer, Sun Microsystems.
Statements on Sun products included here are not gospel and may
be fiction rather than truth.
From: Peter Olcott on

"Casper H.S. Dik" <Casper.Dik(a)Sun.COM> wrote in message
news:4bbef33e$0$22934$e4fe514c(a)news.xs4all.nl...
> "Peter Olcott" <NoSpam(a)OCR4Screen.com> writes:
>
>>(1) Exactly how reliable are Named Pipes?
>
> Very reliable.
>
>>(2) Is there a completely certain way that a write to a file
>>can be flushed to the disk that encompasses every possible
>>memory buffer, including the hard drive's onboard cache? I
>>want to be able to yank the power cord at any moment and not
>>get corrupted data other than the most recent single
>>transaction.
>
> fsync(); if your fsync() implementation doesn't flush all the
> way to disk, then your implementation is broken.
>
> Unfortunately, many implementations are broken: they do not
> flush the disk's write cache, and there are also cases where
> the disk itself ignores attempts to flush it.
>
> Originally, Sun-supplied hardware disabled the write cache
> for SCSI and other disks. UFS never flushed the write cache
> if it was enabled, which is why it was disabled.
>
> fsync() on a file requires a sync to disk of:
> - the data
> - the inode (updating the length of the file)
> - the indirect blocks
> - the directory where the inode lives
>   (the data of the directory, and the inode of the
>   directory if its length changes)
>
> Not all implementations of UFS perform all updates needed
> when fsync() is called.
>
> With ZFS, the write cache is enabled, but it *is* flushed
> when you call fsync(). As writes are ordered, calling fsync()
> on a file makes sure that the inode, the directory, etc. are
> all consistent (ZFS considers metadata and data equally
> important).

I have a choice of three different Linux distributions:
Ubuntu, Fedora, and CentOS. How can I tell which of these
has the most robust flushing to disk?

Is there a way that I can empirically test this?

Are there any SQL providers that do a better job than the OS
on flushing to disk?
SQLite and MySQL were recommended.
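
(Not an answer from the thread, but one way to test it empirically,
sketched under the assumption that you watch the program's output
from a second machine, e.g. over ssh, so the output survives the
crash: write and fsync() an increasing counter, pull the plug
mid-run, and after reboot compare the last counter in the file with
the last one printed. fsync_test.dat is a made-up name; run it on
the filesystem you want to test:)

    /* Durability torture test (sketch).  After the power pull, the
       last number in fsync_test.dat must be >= the last number this
       program printed; if it is smaller, fsync() did not really
       reach the platter. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("fsync_test.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return 1;
        for (unsigned long i = 0; ; i++) {
            char line[32];
            int len = snprintf(line, sizeof line, "%lu\n", i);
            if (write(fd, line, (size_t)len) != (ssize_t)len || fsync(fd) != 0)
                return 1;
            printf("%lu\n", i);   /* printed only after fsync() returned */
            fflush(stdout);
        }
    }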



From: Jasen Betts on
On 2010-04-07, Peter Olcott <NoSpam(a)OCR4Screen.com> wrote:
>
> A 3.5 minute long low-priority process could already be
> executing when a 50 ms high-priority job arrives. The 3.5
> minute long low-priority process must give up what it is
> doing (sleep) so that the 50 ms high-priority process has
> exclusive use of the CPU. If the 50 ms job does not have
> exclusive use of the CPU it may become a 500 ms job due to
> the lack of cache spatial locality of reference. I am trying
> to impose a 100 ms real-time limit on the high-priority
> jobs.

write your 210000 ms job to check for waiting 50 ms jobs every 25 ms
or so.

one way to do this would be to take and immediately relinquish
a mutex every N loops in the low-priority task, but in the
high-priority task take the mutex and keep it.
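
(a minimal sketch of that scheme; my illustration, with made-up
helper names work_done, do_one_chunk, and handle_request declared
extern as placeholders for the real work:)

    #include <pthread.h>

    /* placeholders for the real work; made-up names */
    extern int  work_done(void);
    extern void do_one_chunk(void);
    extern void handle_request(void);

    static pthread_mutex_t gate = PTHREAD_MUTEX_INITIALIZER;

    /* low priority: every N chunks, take and immediately release
       the gate.  nearly free while the gate is uncontended, but
       parks this thread as soon as the high-priority job holds it. */
    void low_priority_task(void)
    {
        enum { N = 1000 };
        for (unsigned long i = 0; !work_done(); i++) {
            do_one_chunk();
            if (i % N == 0) {
                pthread_mutex_lock(&gate);
                pthread_mutex_unlock(&gate);
            }
        }
    }

    /* high priority: take the gate and hold it for the whole job */
    void high_priority_task(void)
    {
        pthread_mutex_lock(&gate);
        handle_request();        /* the 50 ms job */
        pthread_mutex_unlock(&gate);
    }

tune N so the low-priority task touches the gate roughly every 25 ms,
matching the polling interval suggested above.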

--- news://freenews.netfront.net/ - complaints: news(a)netfront.net ---