From: alfo on
On 19:22 6 Oct 2009, Ato_Zee wrote:

> On 6-Oct-2009, "Joep" <available(a)> wrote:
>> He only asked if defraggers checked a volume prior to moving
>> data. So, yes/no will do.
> OP generalised, there are many defraggers, I pointed out that
> there is no consistency, either within any one of them (it
> depends on the type of data integrity error) or between
> how different defraggers handle the problem.
> One I met compounded the problem, by seemingly
> duplicating, without resolving, the corruption first.
> There is no black and white, yes/no answer.
> No defragger can cope with loss of data integrity on
> a failing drive.
> Defragging is of questionable value and the MS$ utility
> does a fair job. Smart placement is just bells and
> whistles.
> Better to spend your money on backup than
> defragging.

Sorry not to have read your replies sooner. Thanks for the info. I
assumed a Checkdisk was always being performed whenever I used my
PerfectDisk, but I wasn't sure. From what you say I shouldn't rely on
a defragger checking file system integrity. Thanks.
From: Joep on
"Ato_Zee" <ato_zee(a)> wrote in message
> On 11-Oct-2009, "Joep" <available(a)> wrote:
>> >>>>> Not so, the drive can more than adequately cope with fragmentation.
>> >
>> >>>> Ah, so a drive copes with fragmentation itself?
> The drive has the MFT and its mirror. It knows where the
> requested data is, and will deliver files/data within the time
> given in its spec.

Yes, it is bloody obvious it knows where the data is, that's the whole idea
isn't it? FYI, the MFT mirror isn't used for this and the mirror isn't what
it suggests it is (a complete mirror). What spec? Even if 'it' knows where
the data is, it still has to read it from the disk.

> Drives are slow mechanical
> devices as compared with the purely electronic parts of a
> system.


> Many assert that it is not worth defragging as it produces
> no discernible improvement in system performance.

Well, they're wrong. One could argue that the improvement ain't that big of
a deal; that's open for discussion.

> Much like registry cleaners.

Utterly different beasts.

From: Joep on
"alfo" <alfo(a)> wrote in message
>>>> He didn't ask for defraggers to fix things.
>>> If OP is not interested in fixing things the query has no
>>> meaning. Concern about checking the volume implies
>>> concern about data integrity.
>> Of course it has. He's possibly afraid a defragger may corrupt
>> a file system in inconsistent state. That's something entirely
>> different than asking a defragger to fix corruption.
> Hello Joep. I'm the OP. I want to avoid making the data in my
> partitions inaccessible by defragging if the defragger did not
> ensure file system integrity before it started work.

That's what I figured.

> FWIW I use PerfectDisk (it's now at v.10 but I use v.7).
> Out of interest, does DiskTune do all the checking which Chkdsk
> does?

Nope. Originally I did check if the dirty bit was set (using chkdsk), but I
figured it was wiser to have chkdsk do the repair rather than suggest that
it was DiskTune doing the fixing of the file system. Currently I only
check the dirty bit (I think that's what the Vista defragger does as well).
However, defragging using the defrag API is harmless even if the file
system is in an inconsistent state.
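The policy Joep describes (check the dirty bit, hand repair off to chkdsk rather than defrag a possibly inconsistent volume) can be sketched as follows. This is a minimal illustration, not DiskTune's actual code; on Windows the dirty bit itself can be queried with the real command `fsutil dirty query C:`, which is abstracted here into a boolean so the decision logic can be shown on its own:

```python
# Hypothetical sketch of a cautious defragger's pre-flight check.
# The dirty-bit query (e.g. `fsutil dirty query C:` on Windows) is
# abstracted into the `volume_is_dirty` argument.

def should_defragment(volume_is_dirty: bool) -> str:
    """Return the action a cautious defragger would take."""
    if volume_is_dirty:
        # The file system may be inconsistent: let chkdsk do the
        # repair rather than suggest the defragger fixed anything.
        return "run chkdsk first"
    # Clean volume: file moves via the defrag API are safe, since
    # the OS validates every move itself.
    return "defragment"

print(should_defragment(True))   # → run chkdsk first
print(should_defragment(False))  # → defragment
```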

From: Ato_Zee on

On 11-Oct-2009, "Joep" <available(a)> wrote:

> What spec?

Drive manufacturers give access-time specs.

> Even if 'it' knows where
> the data is, it still has to read it from the disk.

Into its buffer. All that defragging does is ensure that it can
do this as a synchronous read.
If you want your data delivered faster buy a server drive
with high spin speed, a larger cache than consumer level
drives, and think SCSI or RAID striping.
Then get yourself an Intel Extreme Processor for
its larger on board cache.
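To put those access-time specs against fragmentation, here is a back-of-the-envelope model: reading a file costs roughly one seek per fragment plus the sequential transfer time. The seek time and transfer rate below are assumed typical consumer-drive figures, not numbers from this thread:

```python
# Rough model: total read time = (seeks per fragment) + transfer.
# 12 ms seek and 80 MB/s transfer are illustrative assumptions.

def read_time_ms(size_mb: float, fragments: int,
                 seek_ms: float = 12.0,
                 transfer_mb_s: float = 80.0) -> float:
    """Estimated time to read a file of `size_mb` split into
    `fragments` pieces: one seek per fragment plus transfer."""
    return fragments * seek_ms + (size_mb / transfer_mb_s) * 1000.0

contiguous = read_time_ms(100, fragments=1)    # ~1262 ms
shredded   = read_time_ms(100, fragments=500)  # ~7250 ms
print(round(contiguous), round(shredded))
```

Under these assumptions a badly shredded 100 MB file takes several times longer to read serially, which is the effect defragging removes; whether that matters in practice is exactly what the thread is arguing about.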
From: Rod Speed on
Joep wrote
> Rod Speed <> wrote
>> Joep wrote
>>> Rod Speed <> wrote
>>>> Joep wrote
>>>>> Ato_Zee <ato_zee(a)> wrote
>>>>>> Joep <available(a)> wrote

>>>>>>>> System performance is a hardware issue,
>>>>>>>> Drive cache size, spin speed, access time,
>>>>>>>> pagefile optimisation, and a few other variables.

>>>>>>> Like fragmentation and placement on disk

>>>>>> Not so, the drive can more than adequately cope with fragmentation.

>>>>> Ah, so a drive copes with fragmentation itself?

>>>> He didn't say that.

>>> It's more productive if you then try to explain to me what it is he's saying.

>> It makes a lot more sense for him to do that himself if he wants to.

> Well, why then say 'he didn't say that' in the first place?

Because he didn't say that.

>>>>>> With adequate RAM drive access is not an issue.

>>>>> At one point a file has to be read from disk /written to disk.

>>>> You quite sure you ain't one of those rocket scientist fellas?

>>>>> No matter the amount of memory, a fragmented file will take longer to read than an unfragmented file placed near the start
>>>>> of the disk.

>>>> Wrong when it's a media file and the access to the
>>>> file is entirely dependent on the media play speed.

>>> Yes, and so?

>> So you were just plain wrong.

> Well, if all you do is play your media files all day then maybe,
> assuming your statement is correct in the first place.

Course it's correct.

And there are plenty of other examples where fragmentation
has no effect on real-world file use too, most obviously
with non-serial access to files, as with databases etc.

In fact there aren't many situations left where the speed
of serial access to large files matters much anymore.

The most common situation now remaining is file copying,
and it makes a lot more sense not to copy large files around;
put them where they need to be in the first place.

Even with backup, the extra time that fragmentation can
produce is largely irrelevant, because anyone with even
half a clue does backups in the background anyway.
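Rod's media-file point rests on simple arithmetic: a player only needs the file delivered at its play rate, which is tiny next to what even a fragmented drive sustains. A minimal sketch with illustrative figures (the 8 Mbit/s bitrate is an assumption, roughly DVD-class MPEG-2, not a number from the thread):

```python
# The drive only has to keep up with the play rate, not its own
# maximum transfer rate, so fragmentation overhead rarely shows.

def playback_bound_mb_s(bitrate_mbit_s: float) -> float:
    """MB/s the player actually needs, regardless of drive speed."""
    return bitrate_mbit_s / 8.0

needed = playback_bound_mb_s(8.0)  # 1.0 MB/s for ~DVD-class video
print(needed)
```

Even a drive slowed to, say, 15 MB/s effective throughput by heavy fragmentation would deliver that 1 MB/s with order-of-magnitude headroom, which is why serial-read speedups from defragging go unnoticed during playback.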