From: robertwessel2@yahoo.com on

bj7lewis wrote:
> > a bit on NTFS, being that it is the file system I prefer, but thanks for
> > the help with all the internal workings :)
> By all means, but NTFS and FAT are like apples and oranges: they have
> differences but are still fruits... NTFS is designed for security and FAT
> has none... But at the "fruit" level they are basically the same. NTFS is
> newer than FAT32 and supports TB drives but that is just a matter of ##K
> cluster size... So if you start your design in FAT## and move your way to
> NTFS later, all you really need to learn is the Master Record Table (MRT)
> thingy and NTFS security, but the cluster chains and the way files are
> allocated should be the same...


The layout and structures on an NTFS volume are *nothing* like on a FAT
volume. For example, NTFS doesn't have chains of clusters to define a
file's data area. It has a tree structure indexing runs of clusters.
Directories are stored in B-trees. Free space is managed via a bitmap.
NTFS can store multiple datastreams for each file. About the only
commonality is that FAT and NTFS both store data in sectors on a disk...
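
To make the multiple-streams point concrete, here is a minimal sketch of
writing and then reading back an alternate data stream from user code,
assuming a Win32 C/C++ compiler on an NTFS volume; the file name
"demo.txt" and stream name "notes" are placeholders for the example:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* "demo.txt:notes" names an alternate data stream of demo.txt;
       this syntax only works on NTFS.  Both names are placeholders. */
    HANDLE h = CreateFileA("demo.txt:notes", GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        printf("CreateFile failed: %lu\n", GetLastError());
        return 1;
    }
    const char msg[] = "hidden in a second stream";
    DWORD n = 0;
    WriteFile(h, msg, sizeof msg - 1, &n, NULL);
    CloseHandle(h);

    /* dir reports demo.txt as 0 bytes: the data lives entirely in
       the "notes" stream and is read back through the same syntax. */
    h = CreateFileA("demo.txt:notes", GENERIC_READ, FILE_SHARE_READ, NULL,
                    OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h != INVALID_HANDLE_VALUE) {
        char buf[64] = {0};
        ReadFile(h, buf, sizeof buf - 1, &n, NULL);
        CloseHandle(h);
        printf("stream contents: %s\n", buf);
    }
    return 0;
}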

From: Phil Carmody on
"ragtag99" <spamtrap(a)crayne.org> writes:
> Phil Carmody wrote:
> > "ragtag99" <spamtrap(a)crayne.org> writes:
> > > I posted this on comp.lang.asm.x86, alt.os.development, comp.arch,
> > > comp.lang.c++
> > >
> > > I'm working with Windows XP Professional, NTFS, and programming with
> > > MASM, C++ (free compiler) or Visual Basic 6.0
> > >
> > > === question 1
> > >
> > > Primarily I'm trying to design a program that has full control over a
> > > hard disk. What it needs to do is find out which sectors haven't been
> > > written to and be able to write anything there, but it doesn't count
> > > towards disk space; IOW the data is user-defined garbage with no
> > > consequence if overwritten.
> >
> > So there's no consequence if it's not written at all.
> > Your solution is therefore to simply not do this.
>
> If that were the case I wouldn't have asked. Not to mention academic
> pursuits, in many cases, have no practical application. But thanks for
> taking the time to respond.

Perhaps you shouldn't have asked.

What you actually asked, that is.

Perhaps you should have asked how to implement the high level
functionality that you want rather than how to implement the
misguided choice of low level functionality that you think you
need.

My logic above is sound given the premises you provide.

Phil
--
"Home taping is killing big business profits. We left this side blank
so you can help." -- Dead Kennedys, written upon the B-side of tapes of
/In God We Trust, Inc./.

From: ragtag99 on

Phil Carmody wrote:

> > If that were the case I wouldn't have asked. Not to mention academic
> > pursuits, in many cases, have no practical application. But thanks for
> > taking the time to respond.
>
> Perhaps you shouldn't have asked.
>
> What you actually asked, that is.

Then I'd be asking a question I didn't want to know the answer to. That
makes no sense.

>
> Perhaps you should have asked how to implement the high level
> functionality that you want

One of the reasons I've posted on comp.lang.c++ was the hope that someone
there knew practical applications of the language.

> rather than how to implement the
> misguided choice of low level functionality that you think you
> need.

...academic pursuits...
But if there is a high-level way of doing it, I would be interested in
getting the libraries to get the program running; then I'll figure them
out after everything is all said and done.

> My logic above is sound given the premises you provide.

Your logic is saying I shouldn't have asked a computer-related question
on computer-related newsgroups. That's pretty solid logic.

But perhaps all this bandwidth could have been saved if, instead of
cryptic replies, "witty" one-liners and cyber tuffism, the replies were
like: "you're looking at the problem wrong, this site deals with
something similar..." or "C++ has a library called 'sector write'
that..." or
"someone wrote a similar program and posted the source here..." or even
"a better newsgroup for this question would be..."

But I suppose some people have got to fill their free time some way or
other. Thanks to all those that helped, though :)

From: bj7lewis on
> The layout and structures on an NTFS volume are *nothing* like on a FAT
> volume. For example, NTFS doesn't have chains of clusters to define a
> file's data area. It has a tree structure indexing runs of clusters.
> Directories are stored in B-trees. Free space is managed via a bitmap.
> NTFS can store multiple datastreams for each file. About the only
> commonality is that FAT and NTFS both store data in sectors on a disk...
I said I didn't know anything about NTFS internals, but I can still
theorize (that NTFS is enough like FAT in that there is a complex
list/tree you need to read/follow to find a file's sectors on the disk),
coming with only the NTFS knowledge from a basic Windows networking
class, while not giving code help...

From: Bill Todd on
bj7lewis wrote:
>> The layout and structures on an NTFS volume are *nothing* like on a FAT
>> volume. For example, NTFS doesn't have chains of clusters to define a
>> file's data area. It has a tree structure indexing runs of clusters.
>> Directories are stored in B-trees. Free space is managed via a bitmap.
>> NTFS can store multiple datastreams for each file. About the only
>> commonality is that FAT and NTFS both store data in sectors on a disk...
> I said I didn't know anything about NTFS internals but I can still
> theorize

Then you should explicitly label your uneducated guesses as 'theories'
rather than present them as fact.

You said, "NTFS is newer than FAT32 and supports TB Drives but that is
just a matter of ##K cluter size." That is incorrect:

1. NTFS is not, in fact, 'newer than FAT32': NTFS debuted with Windows
NT V3.1 in 1993, whereas FAT32 debuted with Windows 95 OSR2 in 1996.

2. NTFS as implemented (using 32-bit cluster IDs) supports drives up to
8 TB in size (possibly up to 16 TB - it's not clear whether it uses
signed or unsigned integers to count clusters) without deviating from
its 4 KB default cluster size at all, while its architecture (allowing
64-bit cluster IDs) supports drives up to 2^75 (or 2^76) bytes in size using
the same standard 4 KB clusters (as well as both larger and smaller
clusters for situations where a different effective block size is
desirable for reasons unrelated to the total drive or file size).

In other words, NTFS's ability to support 'TB drives' (and files) has
nothing whatsoever to do with the cluster sizes that it supports.

By contrast, FAT32 depends heavily upon increasing its cluster size to
support large drives (and for that matter large files up to its 2 or 4
GB maximum file size, because scanning through close to a million linked
FAT32 entries to find a piece of data with a high file address can get
expensive - that's one reason why FAT32 cluster defaults reach their
maximum size of 32 KB quickly, on any drive larger than 32 GB). Its
architectural drive-size limit is 8 TB, and it requires use of 32 KB
clusters to get there.
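
The arithmetic behind those limits is easy to check; a short sketch
(plain shift arithmetic, nothing file-system-specific - the only added
assumption is FAT32's 28-bit cluster numbers):

#include <stdio.h>

int main(void)
{
    /* NTFS: 32-bit cluster numbers as implemented, 4 KB (2^12 byte)
       default clusters: 2^32 * 2^12 = 2^44 bytes = 16 TB (8 TB if the
       cluster count is treated as signed).  Architecturally, 64-bit
       cluster numbers give 2^75..2^76 bytes with the same 4 KB clusters. */
    unsigned long long ntfs = (1ULL << 32) * (1ULL << 12);

    /* FAT32: cluster numbers are effectively 28 bits wide, and the
       largest cluster is 32 KB (2^15 bytes): 2^28 * 2^15 = 2^43 = 8 TB. */
    unsigned long long fat32 = (1ULL << 28) * (1ULL << 15);

    printf("NTFS  (32-bit LCNs, 4 KB clusters):  %u TB\n",
           (unsigned)(ntfs >> 40));
    printf("FAT32 (28-bit clusters, 32 KB each): %u TB\n",
           (unsigned)(fat32 >> 40));
    return 0;
}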


You said, "the cluster chains and the way files are allocated should be
the same." That is also flat-out wrong:

1. NTFS does not 'chain' clusters at all, let alone use a single such
chain for the entire file as FAT32 does.

2. Unlike FAT32, NTFS uses bitmaps to track space allocation and embeds
small files in their MFT entries rather than allocating an entire
cluster as soon as a single byte exists.
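
Both points can be observed from user mode. A rough sketch using
FSCTL_GET_NTFS_VOLUME_DATA, assuming an NTFS volume (the drive letter is
a placeholder, and depending on the system the volume open may require
administrator rights):

#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(void)
{
    /* Query-only handle on the volume; the drive letter is a placeholder. */
    HANDLE vol = CreateFileA("\\\\.\\C:", 0,
                             FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                             OPEN_EXISTING, 0, NULL);
    if (vol == INVALID_HANDLE_VALUE) {
        printf("CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    NTFS_VOLUME_DATA_BUFFER d;
    DWORD got = 0;
    if (DeviceIoControl(vol, FSCTL_GET_NTFS_VOLUME_DATA, NULL, 0,
                        &d, sizeof d, &got, NULL)) {
        /* Free clusters come from the same allocation bitmap mentioned
           above; BytesPerFileRecordSegment is the MFT record size, and
           files small enough to fit in their record are stored resident,
           consuming no clusters at all.  (QuadParts truncated for brevity.) */
        printf("bytes per cluster:    %lu\n", d.BytesPerCluster);
        printf("total clusters:       %lu\n",
               (unsigned long)d.TotalClusters.QuadPart);
        printf("free clusters:        %lu\n",
               (unsigned long)d.FreeClusters.QuadPart);
        printf("bytes per MFT record: %lu\n", d.BytesPerFileRecordSegment);
    }
    CloseHandle(vol);
    return 0;
}

On most volumes the MFT record size is 1 KB, which is why a file of a
few hundred bytes never shows up in the allocation bitmap at all.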


You asked, "why is it prefer FAT32 is not dead is it?" I can help you
out with that as well: the larger a FAT32 file system gets, the longer
it takes to check it after an unclean shutdown (and there's *still* no
guarantee that any structural corruption resulting from that unclean
shutdown can be fixed), whereas being a journaled file system NTFS can
restart in a few seconds at most, regardless of the size of its file
system, and can recover from any normal unclean-shutdown corruption save
the incomplete write of a single, critical disk sector (which drives are
designed to protect against).

- bill