From: David Brown on

Please try not to snip relevant quotations:

> Cronos wrote:
>> Ed Light wrote:
>>
>>>
>>> But a very fragmented large file is like a bunch of little files
>>> spread all over the place, isn't it?
>>
>> Yes.
>

Cronos wrote:
> David Brown wrote:
>
>> A very fragmented large file is like a single large file, it's just
>> that its contents are on different parts of the disk.
>
> Well, duh, that is what he meant and what I meant too. Of course it is
> not a file split into many smaller files but it may as well be because
> the result is the same. Now you are arguing *semantics* in a poor
> attempt to discredit me so take a hike.

A large fragmented file is /not/ like a bunch of little files spread all
over the place. It is almost entirely different, except for the minor
"spread all over the place" point (not that fragments of a file are "all
over the place" - typically they will be fairly close on the disk). I
have no need to "discredit you" - you do it fine yourself by agreeing to
such a silly statement. (Note that I am discrediting your arguments
here, not you personally.)

Try to figure out roughly what happens when a program opens a file and
reads it. Think about what the OS has to do, what parts of the disk it
needs to read, and how much of that information may be already available
in the different levels of cache. Draw some diagrams. You will find
that opening a file takes dozens of reads from different places on the
disk, some of which may be cached, and few of which can be done ahead of
time (i.e., the OS does not know which blocks to read until the earlier
reads are complete). Once a file is opened, each new fragment typically
only takes a single seek, and that target is known in advance - the OS
can pre-read the file before the application asks for the data. This is
why working with many small files is time-consuming, while fragmentation
of larger files has little noticeable effect.
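
If you want to convince yourself, here is a rough Python sketch (an
illustration only - the directory name, file count and sizes are
arbitrary, and with a warm cache the gap will be smaller than on a cold
disk) that compares reading many small files with reading one file of
the same total size:

# Illustration only: compare reading 1000 small files with reading one
# file of the same total size.  Directory and file names are made up.
import os
import time

DIR = "small_files_test"      # hypothetical scratch directory
COUNT = 1000                  # 1000 files of 64 KiB each
CHUNK = 64 * 1024

os.makedirs(DIR, exist_ok=True)
for i in range(COUNT):
    with open(os.path.join(DIR, f"f{i:04d}.bin"), "wb") as f:
        f.write(os.urandom(CHUNK))
with open("big_test.bin", "wb") as f:
    f.write(os.urandom(COUNT * CHUNK))

# On Linux, drop the page cache between runs (as root) so the timings
# reflect the disk rather than RAM:  sync; echo 3 > /proc/sys/vm/drop_caches

t0 = time.time()
for i in range(COUNT):
    with open(os.path.join(DIR, f"f{i:04d}.bin"), "rb") as f:
        f.read()
t_small = time.time() - t0

t0 = time.time()
with open("big_test.bin", "rb") as f:
    f.read()
t_big = time.time() - t0

print(f"{COUNT} small files: {t_small:.3f} s   one large file: {t_big:.3f} s")

On a cold cache the small-file run is dominated by the extra lookups and
seeks needed to open each file; the single large file costs at most one
extra seek per fragment once it is open.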
From: Bilky White on
"Ed Light" <nobody(a)nobody.there> wrote in message
news:0020780a$0$2145$c3e8da3(a)news.astraweb.com...
> On 1/3/2010 9:43 PM, Cronos wrote:
>> Not my words but wanted your input on the below: especially Rod Speed.
> Lots of people have Rod filtered out. I wouldn't take him seriously, nor
> ever read his posts.
>

But Roddy's posts are hilarious, I wouldn't miss them for all the sheep in
New Zealand.

From: Bilky White on
"3877" <3877(a)nospam.com> wrote in message
news:7qdisoF2pvU1(a)mid.individual.net...
> Ed Light wrote:
>> On 1/3/2010 9:43 PM, Cronos wrote:
>>> Not my words but wanted your input on the below: especially Rod
>>> Speed.
>
>> Lots of people have Rod filtered out. I wouldn't take him seriously, nor
>> ever read his posts.
>
> Everyone ignores yours.
>

Except you, evidently.

From: David Brown on
Cronos wrote:
> David Brown wrote:
>
>> Is this you (Chronos) talking, or is it a quotation (I don't want to
>> accuse you of making arrogantly ignorant comments if you didn't write
>> them).
>>
>> <snip, because it is impossible to tell who wrote what in this jumble>
>
> No, none of it is me.

Well, I'm happy to discuss defragging with you or anyone else here, but
it is very difficult to have a discussion via hearsay and third-party
quotations. At least say which parts you agree with or disagree with.
From: 3877 on
Ed Light wrote:
> 3877 wrote:
>
>>> If you have watched on something like Hard Disk Sentinel's
>>> performance tab, a hard drive whipping through big contiguous files
>>> and crawling through large batches of small, non-contiguous files,
>>
>> That has nothing to do with fragmentation. By definition you do not
>> see much fragmentation of small files because they hardly ever
>> are bigger than a chunk of free space.
>
> But a very fragmented large file is like a bunch of little files
> spread all over the place, isn't it?

Nope, the difference is that with a bunch of little files,
you need to access the directory information for each one.

That is only done once with a single large file even if it is fragmented.
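
To put a number on that per-file directory cost, here is a rough Python
sketch (an illustration only - the names and counts are made up, and on
a warm cache it understates what you would see on a cold disk) that
times just the open()/close() calls, without reading any data:

# Illustration only: time just the open()/close() calls.  Each open of a
# small file needs its own directory lookup; the large file needs one.
import os
import time

DIR = "lots_of_files_test"    # hypothetical scratch directory
COUNT = 1000

os.makedirs(DIR, exist_ok=True)
for i in range(COUNT):
    with open(os.path.join(DIR, f"f{i:04d}.bin"), "wb") as f:
        f.write(b"x")
with open("one_big_file.bin", "wb") as f:
    f.write(b"x" * COUNT)

t0 = time.time()
for i in range(COUNT):
    fd = os.open(os.path.join(DIR, f"f{i:04d}.bin"), os.O_RDONLY)  # directory + metadata lookup every time
    os.close(fd)
t_many = time.time() - t0

t0 = time.time()
fd = os.open("one_big_file.bin", os.O_RDONLY)  # one lookup, however fragmented the file may be
os.close(fd)
t_one = time.time() - t0

print(f"{COUNT} opens: {t_many:.4f} s   one open: {t_one:.6f} s")

On a warm cache the opens all hit the in-memory directory cache, so run
it after dropping the cache (or on a directory with far more files) to
see the real per-file cost.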