From: Roedy Green on
On Thu, 5 Aug 2010 05:46:48 +0100, "Boris Punk" <khgfhf(a)hmjggg.com>
wrote, quoted or indirectly quoted someone who said :

>Why does the write speed slow down the bigger the file gets? For example
>creating a 10GB file is fairly fast but get to 20GB+ and the write speed
>drops and continues to drop.

On FAT partitions, the problem is the inefficient structure to track
where all the extents of the file are.
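
The structure in question is essentially a linked list of clusters: the FAT holds, for each cluster, the number of the next cluster in the file, so finding where byte N of a file lives means walking the chain one cluster at a time. A toy sketch of that lookup (illustrative only, not real FAT code; cluster numbers and sizes are made up):

```java
public class FatChain {
    // Toy FAT: fat[c] is the next cluster in the file's chain, -1 = end of file.
    public static int clusterForOffset(int[] fat, int firstCluster,
                                       long offset, int clusterSize) {
        int c = firstCluster;
        // One table lookup per cluster: cost grows linearly with the offset,
        // which is why deep writes into a huge file get progressively slower.
        for (long i = 0; i < offset / clusterSize; i++) {
            c = fat[c];
        }
        return c;
    }

    public static void main(String[] args) {
        // A 4-cluster file occupying clusters 2 -> 5 -> 3 -> 7.
        int[] fat = new int[8];
        fat[2] = 5; fat[5] = 3; fat[3] = 7; fat[7] = -1;
        System.out.println(clusterForOffset(fat, 2, 3 * 4096L, 4096)); // prints 7
    }
}
```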

When you write sequentially, the OS allocates chunks as you go. It is
more efficient to allocate the file all at once if you can.
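
Assuming the original poster is writing the file from Java, one way to ask for the space up front is RandomAccessFile.setLength(). A caveat: this only sets the file's logical length; whether blocks are physically reserved at that point is filesystem-dependent (the file may come back sparse), so treat this as a sketch of the idea rather than a guaranteed preallocation:

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class Preallocate {
    // Set the full file size before the sequential writes begin, so the
    // filesystem has a chance to reserve the space in one go instead of
    // growing the file chunk by chunk.
    public static void preallocate(String path, long bytes) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(path, "rw")) {
            raf.setLength(bytes); // extends (or truncates) the file to this size
        }
    }

    public static void main(String[] args) throws IOException {
        // 1 MB here for illustration; the thread is about 10-20 GB files.
        preallocate("big.dat", 1024L * 1024);
        System.out.println(new File("big.dat").length()); // prints 1048576
    }
}
```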

If the disk is very full, it may be that the OS has to work harder and
harder to find free space as the disk fills. You might like to try
defragging and see if the effect goes away.

see http://mindprod.com/jgloss/defragger.html
--
Roedy Green Canadian Mind Products
http://mindprod.com

You encapsulate not just to save typing, but more importantly, to make it easy and safe to change the code later, since you then need to change the logic in only one place. Without it, you might fail to change the logic in all the places it occurs.
From: BGB / cr88192 on

"Roedy Green" <see_website(a)mindprod.com.invalid> wrote in message
news:i2hm56h3hp8sk4jcknu876kcctchq5arv9(a)4ax.com...
> On Thu, 5 Aug 2010 05:46:48 +0100, "Boris Punk" <khgfhf(a)hmjggg.com>
> wrote, quoted or indirectly quoted someone who said :
>
>>Why does the write speed slow down the bigger the file gets? For example
>>creating a 10GB file is fairly fast but get to 20GB+ and the write speed
>>drops and continues to drop.
>
> On FAT partitions, the problem is the inefficient structure to track
> where all the extents of the file are.
>

except:
one can't get anywhere near a 10GB file on a FAT partition...
the maximum file size on FAT32 is 4GB (2^32 - 1 bytes, to be exact).

however, even then, FAT is IME not really any slower with large files
than with small files, and in fact typically goes much faster with larger
files (large numbers of small files is the real performance killer...).

also, counter-intuitively, FAT32 usually performs faster IME than NTFS,
even on very large drives with a small cluster size (although there is a
difference here between XP and Vista/Win7: XP will seem to stall and take
a long time to mount a large drive with a small cluster size, but Vista
and Win7 will mount it immediately with no long stall).

for example, a 500GB drive probably needs 16kB clusters to mount
acceptably, as 4kB clusters will make mounting the drive take a long time,
and 1kB clusters will not work at all (the cluster count would exceed the
FAT32 maximum).
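
The 1kB-clusters claim checks out with back-of-envelope arithmetic: FAT32 cluster numbers have 28 usable bits, so a volume can hold at most roughly 2^28 clusters (the exact ceiling is slightly lower, 268,435,445). A quick sketch of the math (drive size is the nominal decimal 500 GB):

```java
public class ClusterCount {
    // FAT32 cluster numbers are 28-bit, so roughly 2^28 clusters maximum
    // (the spec's exact ceiling is a little below this).
    public static final long MAX_FAT32_CLUSTERS = 1L << 28;

    public static long clusters(long volumeBytes, long clusterBytes) {
        return volumeBytes / clusterBytes;
    }

    public static void main(String[] args) {
        long drive = 500L * 1000 * 1000 * 1000; // nominal 500 GB
        System.out.println(clusters(drive, 1024));      // ~488 million: over the limit
        System.out.println(clusters(drive, 4 * 1024));  // ~122 million: fits
        System.out.println(clusters(drive, 16 * 1024)); // ~30 million: fits comfortably
    }
}
```

So 1kB clusters genuinely cannot work on a 500GB volume, while 4kB and 16kB both fit under the limit; the 16kB preference above is purely about mount time, not the hard cap.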

(note: one typically uses Linux or similar to format drives like this, as
MS thought it sensible for whatever reason to put a 32GB limit on creating
FAT32 volumes in their format utility...).


IME, both Vista and Win7 seem to have somewhat slower file copying than
XP. actually, IME it seems that on Win7 copying lots of files bogs down
the whole OS, for whatever reason... (and, even more mysteriously,
switching to the Windows Classic theme seemed both to speed up file
copying and to reduce its impact on overall system performance, which was
all around a bit odd...).



> When you write sequentially, the OS allocates chunks as you go. It is
> more efficient to allocate the file all at once if you can.
>
> If the disk is very full, it may be that the OS has to work harder and
> harder to find free space as the disk fills. You might like to try
> defragging and see if the effect goes away.
>

this could matter, and it also affects NTFS.
a possible issue with NTFS and fragmentation is that it tracks each file's
data as a list of extents (runs), and a very large fragmented file can
involve managing a potentially large number of extents.
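
The span/extent-list point can be sketched with a toy model (this is not NTFS's actual on-disk format, just an illustration of the bookkeeping): each append that lands in non-adjacent free space adds a new (start, length) run, while a contiguous append merely extends the last one. On a fragmented disk, a huge file accumulates many runs, all of which the filesystem must search and update:

```java
import java.util.ArrayList;
import java.util.List;

public class ExtentToy {
    // A run is one contiguous region of the file's data on disk.
    public record Run(long start, long length) {}

    public static final List<Run> runs = new ArrayList<>();

    public static void append(long start, long length) {
        if (!runs.isEmpty()) {
            Run last = runs.get(runs.size() - 1);
            if (last.start() + last.length() == start) {
                // Contiguous with the previous run: just extend it.
                runs.set(runs.size() - 1, new Run(last.start(), last.length() + length));
                return;
            }
        }
        // Fragmented allocation: one more run to manage forever after.
        runs.add(new Run(start, length));
    }

    public static void main(String[] args) {
        append(0, 100);    // first allocation: one run
        append(100, 100);  // contiguous: still one run
        append(500, 100);  // fragmented: a second run appears
        System.out.println(runs.size()); // prints 2
    }
}
```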




From: Paul Cager on
On Aug 5, 7:29 pm, "Boris Punk" <khg...(a)hmjggg.com> wrote:
> FS:
>
> Windows: NTFS
> Linux: EXT3

They are both journalling filesystems. Could it just be due to the
increased distance between the journal's append position and the
data's append position (in terms of cylinders)? I.e. there is more
seeking going on.
From: Paul Cager on
On Aug 10, 5:23 pm, Paul Cager <paul.ca...(a)googlemail.com> wrote:
> They are both journalling filesystems. Could it just be due to the
> increased distance between the journal's append position and the
> data's append position (in terms of cylinders)? I.e. there is more
> seeking going on.

Well, 5 minutes after posting that I realise I'm talking drivel. The
_extra_ seek time introduced by 10GB must be negligible.