From: Rod Speed on
Cronos wrote
> Ed Light wrote
>> Cronos wrote

>>> Not my words but wanted your input on the below: especially Rod Speed.

>> Lots of people have Rod filtered out. I wouldn't take him seriously, nor ever read his posts.

> Wasn't just Rod, but a few other people in this group claimed defrag is
> never needed too; he is the one person in this group who I have seen
> state it many times over the years, and I think he is FOS and is giving
> bad advice.

You're so stupid you don't even realise that extra file fragments
can't cause a failure; they just make some things take longer, and
only then with some particular ops like copying files around etc.


From: Cronos on
Ed Light wrote:

> It was interesting how at one point in
> time converting it to NTFS from FAT32 sped it up radically.

I did that once and it was an extremely slow process. Never again!

> This may not apply to NTFS at all, but in my first computer book, a DOS
> manual, it said to defrag regularly to keep the OS from losing some
> fragments. That always stayed with me, relevant or not.

Yes, I seriously think Rod is out to lunch on the subject of defrag. The
other people I have been conferring with have told me how to test it
myself with a 2.5 GB dummy file, so I think I am going to check it out
and do some testing so I can be certain on this topic once and for all.
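For anyone who wants to try the same thing, here is a minimal sketch of
one way such a test could look (Python; the file names are made up, it
assumes a volume without a lot of contiguous free space, and the cache
needs flushing, e.g. by a reboot, before the timed read means anything):

import os, time

CHUNK = 1024 * 1024       # write in 1 MB pieces
N_CHUNKS = 2500           # 2500 x 1 MB, roughly the 2.5 GB suggested above

# Interleave writes to two files so the allocator is pushed to scatter
# each file's blocks between the other's. Not guaranteed: on a mostly
# empty volume the allocator may still find contiguous runs for both.
piece = os.urandom(CHUNK)
with open("dummy_a.bin", "wb") as a, open("filler.bin", "wb") as b:
    for _ in range(N_CHUNKS):
        a.write(piece)
        b.write(piece)

# Flush the OS file cache before timing (simplest: reboot), or the
# figure below measures RAM, not the disk.
start = time.time()
with open("dummy_a.bin", "rb") as f:
    while f.read(8 * CHUNK):
        pass
print("sequential read: %.1f s" % (time.time() - start))

The numbers are only meaningful if you compare that time against a read
of a freshly copied (contiguous) version of the same file.
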
From: 3877 on
Ed Light wrote:
> On 1/4/2010 3:32 PM, Cronos wrote:
>> Ed Light wrote:
>>> On 1/3/2010 9:43 PM, Cronos wrote:
>>>> Not my words but wanted your input on the below: especially Rod
>>>> Speed.
>>> Lots of people have Rod filtered out. I wouldn't take him seriously,
>>> nor ever read his posts.
>>
>> Wasn't just Rod, but a few other people in this group claimed defrag
>> is never needed too; he is the one person in this group who I have
>> seen state it many times over the years, and I think he is FOS and is
>> giving bad advice.
>
> He is, or used to be, abusive, hence the communal filtering. Normally
> we're unaware of him.
>
> Yes, he gives some good and some very strange advice.
>
> If you have watched, on something like Hard Disk Sentinel's performance
> tab, a hard drive whipping through big contiguous files and crawling
> through large batches of small, non-contiguous files,

That has nothing to do with fragmentation. By definition you do not
see much fragmentation of small files, because they are hardly ever
bigger than a chunk of free space.
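If anyone wants to check that rather than eyeball it, a rough sketch
(Python on Linux, using the filefrag tool from e2fsprogs; the directory
name is invented, and on Windows a defragger's per-file report gives
the same numbers):

import os, subprocess

# Write 200 small (4 KB) files, then ask how many extents each one
# occupies. On a healthy filesystem nearly every one should come back
# as "1 extent found" -- a 4 KB file fits in almost any hole.
os.makedirs("smallfiles", exist_ok=True)
names = []
for i in range(200):
    name = "smallfiles/f%03d.bin" % i
    with open(name, "wb") as f:
        f.write(os.urandom(4096))
    names.append(name)

# filefrag is Linux-only (e2fsprogs).
print(subprocess.run(["filefrag"] + names,
                     capture_output=True, text=True).stdout)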

> then seen in a
> defragger report how many fragments big files can wind up in, you'd
> favor defragging.

> I service a friend's Win XP NTFS computer once a year, and she has
> really big Outlook Express e-mail files. It slows down and then the
> defragging speeds it up again.

Mine do not and I bet I have much bigger email files than she does.

Compacting the files does make a big difference, and I bet that is
what is happening at defrag time.

> It was interesting how at one point in
> time converting it to NTFS from FAT32 sped it up radically.
>
> This may not apply to NTFS at all, but in my first computer book, a
> DOS manual, it said to defrag regularly to keep the OS from losing
> some fragments.

FAT never ever did lose any fragments.

Presumably that claim was actually a mangling of the quite
different problem: recovery when you do not have proper backups.
That is certainly much easier with files that are not fragmented.

> That always stayed with me, relevant or not.

It was never true even with DOS.


From: Rod Speed on
Cronos wrote:
> Ed Light wrote:
>
>> It was interesting how at one point in
>> time converting it to NTFS from FAT32 sped it up radically.
>
> I did that once and it was an extremely slow process. Never again!
>
>> This may not apply to NTFS at all, but in my first computer book, a
>> DOS manual, it said to defrag regularly to keep the OS from losing
>> some fragments. That always stayed with me, relevant or not.
>
> Yes, I seriously think Rod is out to lunch on the subject of defrag.
> The other people I have been conferring with have told me how to test
> it myself with a 2.5 GB dummy file, so I think I am going to check it
> out and do some testing so I can be certain on this topic once and
> for all.

You're so stupid that you can't grasp that no one
but a fool uses real-world systems anything like that.


From: David Brown on
Cronos wrote:
> David Brown wrote:
>> Cronos wrote:
>>> Not my words but wanted your input on the below: especially Rod Speed.
>>>
>>
>> Would these be words taken from the adverts for commercial defrag
>> software?
>
> Nope. They are from a poster in another Usenet group when the subject
> came up. He said he didn't want to get involved in a flamewar here so I
> am posting what he said anonymously and will post the replies back to
> him. I like a good flamewar, myself.

I'd place as much faith in that anonymous poster's comments as I would
in someone claiming that their music system sounds much better with
their new gazillion-dollar speaker cables than it did with the old
bazillion-dollar cables. I am not claiming he is lying - he may truly
believe what he writes (though he may also be an astroturfer). But it
is still nonsense.

To take two specific points: first, to my knowledge, not even MS has
managed to make file system drivers so poor that they cause errors with
fragmented files. I can't rule out that there are corner cases that
cause failures, but they will be very rare.

Secondly, it may well be the case that a particular database server has
very low timeout settings. But these will /never/ be so low that
fragmentation will cause a timeout. There are all sorts of variable
delays in a system, especially Windows, and timeouts will be large
enough to take those into account.
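For a sense of scale, a back-of-envelope version of that argument (both
figures below are illustrative assumptions, not measurements):

# One fragment boundary costs roughly one extra disk seek.
seek_ms = 12           # assumed average seek time of a desktop drive
timeout_s = 30         # assumed common default command timeout

extra_seeks = timeout_s * 1000 / seek_ms
print("a query could absorb ~%d extra seeks before timing out"
      % extra_seeks)
# ~2500 -- orders of magnitude beyond what fragmentation adds to any
# single query's reads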

There was a time when fragmentation was a big issue in DOS and early
Windows, mainly because the disk head had to move back and forth so much
between the FAT blocks and the data blocks. DOS and Windows had pretty
much nothing in the way of disk caching at the time, so everything was
read directly from the disk and thus fragments were very costly. With
more "modern" systems, such as Windows NT (or any form of *nix), you
have caching of the block allocation data structures, better file
systems, read-ahead, on-drive caches, etc., which hugely reduce the
impact.
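One way to see the caching effect directly (a rough Python sketch; the
file name is a placeholder for any large file you have handy, and the
comparison only means something if the file was not already cached):

import time

def timed_read(path, bufsize=8 * 1024 * 1024):
    start = time.time()
    with open(path, "rb") as f:
        while f.read(bufsize):
            pass
    return time.time() - start

# The first pass has to touch the disk; the second is served mostly
# from the OS block cache, so any fragment-induced seeking vanishes.
path = "dummy_a.bin"    # hypothetical: any large file will do
print("cold: %.2f s" % timed_read(path))
print("warm: %.2f s" % timed_read(path))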