From: Bob I on


Leythos wrote:

> In article <OJs07Hc9KHA.5476(a)TK2MSFTNGP06.phx.gbl>, birelan(a)yahoo.com
> says...
>
>>Brian V wrote:
>>
>>
>>>What about defragmentation with a RAID system? Doesn't such a system eliminate
>>>file fragmentation? I am under the impression that it keeps two copies of
>>>everything (one on each drive), making it a faster (and ??more stable??)
>>>and more reliable system?
>>
>>RAID 0 is nothing more than Mirrored Drives; it won't be faster or more
>>stable, it only provides an identical copy in the event a hard drive fails.
>
>
> RAID-0 is not a mirror and does NOT provide a COPY at all.
>
> RAID-1 IS a MIRROR.
>

Thanks for correcting the typo.

From: HeyBub on
Leythos wrote:
> In article <OxXHsVS9KHA.5412(a)TK2MSFTNGP06.phx.gbl>, heybub(a)gmail.com
> says...
>>
>> Leythos wrote:
>>> In article <#1wndj28KHA.3176(a)TK2MSFTNGP05.phx.gbl>, heybub(a)gmail.com
>>> says...
>>>>
>>>> Lisa wrote:
>>>>> I was told by a computer repairman that it's not necessary to
>>>>> defrag my laptop. If the hard drive gets full, remove files and
>>>>> always make sure I'm using virus protection.
>>>>> What are your thoughts?
>>>>
>>>> I can envision a situation in a data center with hundreds of
>>>> thousands of transactions per minute where defragging may be of
>>>> some slight benefit (assuming an NTFS file system).
>>>>
>>>> I can also imagine a user devoted to daily defragging experiencing
>>>> a power interruption during a critical directory manipulation
>>>> process.
>>>
>>> On a small computer with many add/delete/grow/shrink operations,
>>> defrag can significantly impact file access times and can be very
>>> noticeable to users if their system was badly file fragmented before
>>> the defrag.
>>>
>>> White-space fragmentation is not normally an issue, but a file that is
>>> fragmented into 8000 parts will have an impact on system
>>> performance.
>>>
>>> This argument has gone on for decades, but it's the people that
>>> maintain systems across many areas that know the benefits of defrag.
>>
>> Ignorance can be fixed - hence the original question. It's knowing
>> something that is false that's the bigger problem.
>>
>> Consider your example of 8,000 segments: a minimum segment size of
>> 4096 bytes implies a file of at least 32 MB. A FAT-32 system requires
>> a minimum of 16,000 head movements to gather all the pieces. In this
>> case, with an average access time of 12 ms, you'll spend over three
>> minutes just moving the head around. Factor in rotational delay to
>> bring the track marker under the head, then rotational delay to find
>> the sector, and so on, and you're up to five minutes or so to read
>> the file.
>>
>> An NTFS system will suck up the file with ONE head movement. You
>> still have the rotational delays and so forth, but NTFS will cut
>> those three-plus minutes of seeking off the slurp-up time.
>>
>> De-fragging an NTFS system DOES have its uses: For those who dust
>> the inside covers of the books on their shelves and weekly scour the
>> inside of the toilet water tank, a sense of satisfaction infuses
>> their very being after a successful operation.
>>
>> I personally think Prozac is cheaper, but to each his own.
>
> Why do you even consider discussing FAT-32?
>
> You do know that the default cluster size for NTFS (anything modern)
> is 4K in most instances, right?

In a FAT-xx system, the head has to move back to the file allocation table
to discover the next segment. This is not the case with NTFS; pieces are read
as they are encountered and reassembled in the proper order in RAM.
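
A rough back-of-the-envelope in Python, using the figures above (the 8,000
fragments and 12 ms per head movement are illustrative assumptions, not
measurements; the per-fragment cost on NTFS is left as a parameter, since
that is exactly what is in dispute):

# Toy seek-time estimate for one file split into 8,000 fragments.
# All constants are illustrative assumptions, not measurements of a real drive.
FRAGMENTS = 8_000
SEEK_MS = 12                     # assumed average head movement time

# FAT-xx: roughly two movements per fragment (back to the FAT area,
# then out to the data), per the description above.
fat_moves = 2 * FRAGMENTS
print(f"FAT-xx (2 moves/fragment): {fat_moves:>6,} moves "
      f"~ {fat_moves * SEEK_MS / 60000:.1f} min of seeking")

# NTFS: one trip to the MFT for the extent list, plus some number of
# movements per fragment -- 0 if the fragments really can be read "in one
# go", 1 if the head still has to reposition for every fragment.
for per_fragment in (0, 1):
    ntfs_moves = 1 + per_fragment * FRAGMENTS
    print(f"NTFS ({per_fragment} move/fragment):   {ntfs_moves:>6,} moves "
          f"~ {ntfs_moves * SEEK_MS / 60000:.1f} min of seeking")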

>
> How does that impact your math now?

It doesn't.

>
> You might want to start learning about drives, formats, RAID,
> clusters, etc... before you post again.

Heh! I'll wager I know more about the things you mentioned than you can ever
imagine. I started my career designing test suites for 2311 disk drives on
IBM mainframes and have, mostly, kept up.


From: Bill in Co. on
HeyBub wrote:
> Leythos wrote:
>> In article <OxXHsVS9KHA.5412(a)TK2MSFTNGP06.phx.gbl>, heybub(a)gmail.com
>> says...
>>>
>>> Leythos wrote:
>>>> In article <#1wndj28KHA.3176(a)TK2MSFTNGP05.phx.gbl>, heybub(a)gmail.com
>>>> says...
>>>>>
>>>>> Lisa wrote:
>>>>>> I was told by a computer repairman that it's not necessary to
>>>>>> defrag my laptop. If the hard drive gets full, remove files and
>>>>>> always make sure I'm using virus protection.
>>>>>> What are your thoughts?
>>>>>
>>>>> I can envision a situation in a data center with hundreds of
>>>>> thousands of transactions per minute where defragging may be of
>>>>> some slight benefit (assuming an NTFS file system).
>>>>>
>>>>> I can also imagine a user devoted to daily defragging experiencing
>>>>> a power interruption during a critical directory manipulation
>>>>> process.
>>>>
>>>> On a small computer with many add/delete/grow/shrink operations,
>>>> defrag can significantly impact file access times and can be very
>>>> noticeable to users if their system was badly file fragmented before
>>>> the defrag.
>>>>
>>>> White-space fragmentation is not normally an issue, but a file that is
>>>> fragmented into 8000 parts will have an impact on system
>>>> performance.
>>>>
>>>> This argument has gone on for decades, but it's the people that
>>>> maintain systems across many areas that know the benefits of defrag.
>>>
>>> Ignorance can be fixed - hence the original question. It's knowing
>>> something that is false that's the bigger problem.
>>>
>>> Consider your example of 8,000 segments: a minimum segment size of
>>> 4096 bytes implies a file of at least 32 MB. A FAT-32 system requires
>>> a minimum of 16,000 head movements to gather all the pieces. In this
>>> case, with an average access time of 12 ms, you'll spend over three
>>> minutes just moving the head around. Factor in rotational delay to
>>> bring the track marker under the head, then rotational delay to find
>>> the sector, and so on, and you're up to five minutes or so to read
>>> the file.
>>>
>>> An NTFS system will suck up the file with ONE head movement. You
>>> still have the rotational delays and so forth, but NTFS will cut
>>> those three-plus minutes of seeking off the slurp-up time.
>>>
>>> De-fragging an NTFS system DOES have its uses: For those who dust
>>> the inside covers of the books on their shelves and weekly scour the
>>> inside of the toilet water tank, a sense of satisfaction infuses
>>> their very being after a successful operation.
>>>
>>> I personally think Prozac is cheaper, but to each his own.
>>
>> Why do you even consider discussing FAT-32?
>>
>> You do know that the default cluster size for NTFS (anything modern)
>> is 4K in most instances, right?
>
> In a FAT-xx system, the head has to move back to the file allocation table
> to discover the next segment. This is not the case with NTFS; pieces are
> read as they are encountered and reassembled in the proper order in RAM.

But that's not quite the whole story: the bottom line is that the file's
fragments are scattered all over the hard drive, no matter what file system
you are using, so multiple seeks and accesses are needed to collect them into
RAM. If you've defragged the drive, those fragments end up in far fewer, more
contiguous locations, so the total seek and access time is naturally reduced.
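
A toy Python sketch of that point, for the curious (every constant here is a
made-up assumption; the only thing that changes between the two runs is where
the 8,000 fragments happen to sit on the disk):

import random

# Toy model: the same file read twice -- once with its 8,000 fragments
# scattered at random, once laid out contiguously after a defrag.
# All constants are illustrative assumptions, not measurements.
random.seed(0)
FRAGMENTS = 8_000
DISK_BLOCKS = 10_000_000
AVG_SEEK_MS = 12        # assumed cost when the next fragment is somewhere else
TRACK_SKIP_MS = 0.1     # assumed near-free move when fragments are adjacent

def head_movement_ms(positions):
    """Charge a full seek whenever the next fragment is not adjacent to this one."""
    ms = 0.0
    for here, nxt in zip(positions, positions[1:]):
        ms += TRACK_SKIP_MS if nxt == here + 1 else AVG_SEEK_MS
    return ms

fragmented = random.sample(range(DISK_BLOCKS), FRAGMENTS)     # scattered all over
defragged = list(range(1_000_000, 1_000_000 + FRAGMENTS))     # one contiguous run

print(f"fragmented: ~{head_movement_ms(fragmented) / 1000:.0f} s of head movement")
print(f"defragged:  ~{head_movement_ms(defragged) / 1000:.1f} s of head movement")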


From: Erwin Moller on
HeyBub schreef:
> Leythos wrote:
>> In article <#1wndj28KHA.3176(a)TK2MSFTNGP05.phx.gbl>, heybub(a)gmail.com
>> says...
>>> Lisa wrote:
>>>> I was told by a computer repairman that it's not necessary to defrag
>>>> my laptop. If the hard drive gets full, remove files and always
>>>> make sure I'm using virus protection.
>>>> What are your thoughts?
>>> I can envision a situation in a data center with hundreds of
>>> thousands of transactions per minute where defragging may be of some
>>> slight benefit (assuming an NTFS file system).
>>>
>>> I can also imagine a user devoted to daily defragging experiencing a
>>> power interruption during a critical directory manipulation process.
>> On a small computer with many add/delete/grow/shrink operations,
>> defrag can significantly impact file access times and can be very
>> noticeable to users if their system was badly file fragmented before
>> the defrag.
>>
>> White-space fragmentation is not normally an issue, but a file that is
>> fragmented into 8000 parts will have an impact on system performance.
>>
>> This argument has gone on for decades, but it's the people that
>> maintain systems across many areas that know the benefits of defrag.
>
> Ignorance can be fixed - hence the original question. It's knowing something
> that is false that's the bigger problem.
>
> Consider your example of 8,000 segments: a minimum segment size of 4096 bytes
> implies a file of at least 32 MB. A FAT-32 system requires a minimum of 16,000
> head movements to gather all the pieces. In this case, with an average access
> time of 12 ms, you'll spend over three minutes just moving the head around.
> Factor in rotational delay to bring the track marker under the head, then
> rotational delay to find the sector, and so on, and you're up to five minutes
> or so to read the file.
>
> An NTFS system will suck up the file with ONE head movement. You still have
> the rotational delays and so forth, but NTFS will cut those three-plus minutes
> of seeking off the slurp-up time.

Hi Heybub,

This is the second time I've seen you claim this.
How do you 'envision' the head(s) reading all fragments in one go?

In your example there are 8,000 fragments. If these are scattered all over
the place, the head has to visit a lot of different spots before all the data
is in. Compare this to one continuous, sequential run of data, which the head
reads without any extra seeking or skipping.
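
To put rough numbers on that contrast, a quick Python sketch (the drive
figures are generic assumptions for a 7200 rpm disk, not measurements of any
particular hardware):

# Effective throughput: one contiguous sequential read vs. the same data
# split into 8,000 scattered fragments. All figures are assumptions.
FILE_MB = 32            # roughly the 8,000 x 4 KB example from this thread
FRAGMENTS = 8_000
SEQ_MB_PER_S = 60       # assumed sustained sequential transfer rate
SEEK_MS = 12            # assumed average seek + settle per repositioning
ROTATE_MS = 4.17        # average rotational latency at 7200 rpm (half a revolution)

transfer_s = FILE_MB / SEQ_MB_PER_S
sequential_s = SEEK_MS / 1000 + transfer_s                    # one seek, then stream
fragmented_s = transfer_s + FRAGMENTS * (SEEK_MS + ROTATE_MS) / 1000

print(f"contiguous read: ~{sequential_s:.1f} s (~{FILE_MB / sequential_s:.0f} MB/s effective)")
print(f"8,000 fragments: ~{fragmented_s:.0f} s (~{FILE_MB / fragmented_s:.2f} MB/s effective)")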

Also, especially on systems that need a huge swapfile, filling up your HD a
few times can leave you with a heavily fragmented swapfile. That carries a
performance penalty.

I have seen serious performance improvements (on both FAT32 and NTFS)
after defragging (including the system files, using
http://technet.microsoft.com/en-us/sysinternals/bb897426.aspx).

Others claim the same. How do you explain that?

Erwin Moller



>
> De-fragging an NTFS system DOES have its uses: For those who dust the inside
> covers of the books on their shelves and weekly scour the inside of the
> toilet water tank, a sense of satisfaction infuses their very being after a
> successful operation.
>
> I personally think Prozac is cheaper, but to each his own.
>
>


--
"There are two ways of constructing a software design: One way is to
make it so simple that there are obviously no deficiencies, and the
other way is to make it so complicated that there are no obvious
deficiencies. The first method is far more difficult."
-- C.A.R. Hoare
From: Leythos on
In article <eiaxBdg9KHA.4924(a)TK2MSFTNGP04.phx.gbl>, heybub(a)gmail.com
says...
> > You do know that the default cluster size for NTFS (anything modern)
> > is 4K in most instances, right?
>
> In a FAT-xx system, the head has to move back to the file allocation table
> to discover the next segment. This is not the case with NTFS; pieces are
> read as they are encountered and reassembled in the proper order in RAM.
>
> >
> > How does that impact your math now?
>
> It doesn't.
>
> >
> > You might want to start learning about drives, formats, RAID,
> > clusters, etc... before you post again.
>
> Heh! I'll wager I know more about the things you mentioned than you can ever
> imagine. I started my career designing test suites for 2311 disk drives on
> IBM mainframes and have, mostly, kept up.
>

And yet you don't seem to understand that on NTFS, file fragmentation
means that the heads still have to MOVE to reach the other fragments.

Try and keep up.

--
You can't trust your best friends, your five senses, only the little
voice inside you that most civilians don't even hear -- Listen to that.
Trust yourself.
spam999free(a)rrohio.com (remove 999 for proper email address)