From: Dustin Cook on
ASCII <me2(a)privacy.net> wrote in news:4b4305bb.2908015(a)EDCBIC:

> Dustin Cook wrote:
>> If you need something that will make dummy 2.1
>>gig files by allocating free space, I'm willing to provide it.
>
> Maybe a copy of [42.zip]?
> That'll fill up your drive quite handily!
>

Sorry, I don't know the app. I was just going to write something up real
quick, and of course provide source code as well. Nothing fancy; hence the
2.1 gig limitation per run... shrug.
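
For what it's worth, something along these lines is all I meant; a quick,
untested sketch (the names filler.c and dummy.bin are just what I picked
for the example). The 2.1 gig per-run cap is nothing clever, it's just a
signed 32-bit byte count: 2^31 - 1 bytes is about 2.1 gig, which is as
far as a long goes on a 32-bit compiler.

/* filler.c -- sketch of a dummy-file maker. Writes zeroed blocks to
 * dummy.bin until the requested byte count, a write error, or a full
 * disk stops it. On a 32-bit compiler, long caps a single run at
 * 2^31 - 1 bytes, i.e. about 2.1 gig.
 */
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

int main(int argc, char *argv[])
{
    long want = (argc > 1) ? atol(argv[1]) : LONG_MAX; /* bytes to write */
    static char block[65536];                          /* 64K of zeros   */
    long done = 0;
    FILE *fp = fopen("dummy.bin", "wb");

    if (fp == NULL) {
        perror("dummy.bin");
        return 1;
    }
    while (done < want) {
        long n = want - done;
        if (n > (long)sizeof block)
            n = (long)sizeof block;
        if (fwrite(block, 1, (size_t)n, fp) != (size_t)n)
            break;                          /* disk full or write error */
        done += n;
    }
    fclose(fp);
    printf("wrote %ld bytes\n", done);
    return 0;
}

Run it once per chunk of free space you want eaten, and delete dummy.bin
afterward to get the space back.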

Anyways, it's pointless to do it. If an individual doesn't wish to defrag,
who am I to complain? heh.


--
.... Those are my thoughts anyways...

From: Dustin Cook on
ASCII <me2(a)privacy.net> wrote in news:4b440f7f.5408515(a)EDCBIC:

> Dustin Cook wrote:
>>I have diskeeper actually; I've been using it for years.
>
> How would you compare it to Raxco's PerfectDisk?


I've never used PerfectDisk.


--
.... Those are my thoughts anyways...

From: Cronos on
Dustin Cook wrote:

> Sorry, I don't know the app. I was just going to write something up real
> quick, and of course provide source code as well. Nothing fancy; hence the
> 2.1 gig limitation per run... shrug.
>
> Anyways, it's pointless to do it. If an individual doesn't wish to defrag,
> who am I to complain? heh.

I defrag, but not every week like Microsoft set it to. That scheduler
doesn't work properly anyway. It's supposed to defrag at the next idle
time if the task didn't run at the scheduled time, and as far as I can
see it never does.
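
If you want to check what the built-in job is actually set to, on Vista/7
it lives in Task Scheduler rather than in the defrag GUI. Something like
the following should dump its settings and history (that task path is from
memory, so verify it on your own box):

schtasks /Query /TN "\Microsoft\Windows\Defrag\ScheduledDefrag" /V /FO LIST

The "Last Run Time" and "Last Result" lines are the quickest way to tell
whether the idle-time catch-up ever actually fired.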
From: Cronos on
Dustin Cook wrote:

> I've never used PerfectDisk.
>
>
It's the one that gets the most praise. I do have a copy of Diskeeper
that a friend gave me, so I have used the full version too. PerfectDisk
had more options than Diskeeper, but that was some years back and I don't
know how they compare now. For example, PerfectDisk has what they call
Smart defrag, which just uses their own disk map for layout instead of
the Microsoft one: it moves the most often used files to the fastest
area of the HDD.
From: Cronos on
Dustin Cook wrote:

> It's all about that access time, my friend. How long is it going to take
> to seek out file A, and then how much longer will it take to pull record
> #34746 out of it? If it's fragmented, your computer's hard disk is going
> to be wasting time it otherwise wouldn't have, hunting for
> individual pieces.
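
For scale, some rough textbook numbers rather than measurements: a 7200 rpm
drive averages around 9 ms per seek plus about 4 ms of rotational latency
(half a revolution), so every extra fragment costs on the order of 13 ms
before a single byte is transferred. A file in 100 pieces can therefore add
over a second of pure head travel versus one sequential read. How much that
matters in practice is exactly what the text pasted below disputes.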

Below is not me:

I'd place as much faith in that anonymous poster's comments as I would
in someone claiming that their music system sounds much better with
their new gazillion dollar speaker cables than it did with the old
bazillion dollar cables. I am not claiming he is lying - he may truly
believe what he writes (though he may also be an astroturfer). But it
is still nonsense.

To take two specific points - to my knowledge, not even MS has managed
to make file system drivers so poor that they cause errors with
fragmented files. I can't rule out that there are corner cases that
cause failures, but they will be very rare.

Secondly, it may well be the case that a particular database server has
very low timeout settings. But these will /never/ be so low that
fragmentation will cause a timeout. There are all sorts of variable
delays in a system, especially Windows, and timeouts will be large
enough to take those into account.

There was a time when fragmentation was a big issue in DOS and early
Windows, mainly because the disk head had to move back and forth so much
between the FAT blocks and the data blocks. DOS and Windows had pretty
much nothing in the way of disk caching at the time, so everything was
read directly from the disk, and thus fragments were very costly. With
more "modern" systems, such as Windows NT (or any form of *nix), you
have caching of the block allocation data structures, better file
systems, read-ahead, on-drive caches, etc., which hugely reduce the impact.
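
If anyone wants to test the claims instead of arguing, a crude way is to
read the same amount of data once sequentially and once with a seek before
every block, which approximates a badly fragmented layout. A throwaway
sketch I knocked together for the thread (untested, adjust to taste):

/* seektest.c -- crude look at sequential vs. scattered reads.
 * Usage: seektest <some-file-of-256-meg-or-more>
 * Caveat: the OS cache and the on-drive cache soften the scattered
 * pass, which is exactly the point made above; run it against a file
 * bigger than RAM to see the raw mechanical difference.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define BLOCK 65536L              /* read in 64K chunks            */
#define COUNT 4096L               /* 4096 * 64K = 256 meg per pass */

static char buf[BLOCK];

static double pass(FILE *fp, long nblocks, int scattered)
{
    time_t t0 = time(NULL);
    long i;

    for (i = 0; i < COUNT; i++) {
        long pos = scattered
            ? (rand() % nblocks) * BLOCK   /* jump all over the file */
            : i * BLOCK;                   /* straight through       */
        fseek(fp, pos, SEEK_SET);
        fread(buf, 1, BLOCK, fp);
    }
    return difftime(time(NULL), t0);
}

int main(int argc, char *argv[])
{
    FILE *fp;
    long size;

    if (argc < 2 || (fp = fopen(argv[1], "rb")) == NULL) {
        fprintf(stderr, "usage: seektest <large file>\n");
        return 1;
    }
    fseek(fp, 0, SEEK_END);
    size = ftell(fp);
    if (size < COUNT * BLOCK) {
        fprintf(stderr, "need a file of at least %ld bytes\n",
                COUNT * BLOCK);
        fclose(fp);
        return 1;
    }
    printf("sequential: %.0f s\n", pass(fp, size / BLOCK, 0));
    printf("scattered:  %.0f s\n", pass(fp, size / BLOCK, 1));
    fclose(fp);
    return 0;
}

On a file larger than RAM the scattered pass should come out dramatically
slower; on a small, already-cached file the gap nearly vanishes, which is
the caching point made above.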