From: Motonari Ito on
Hi Hector,

Thank you for pointing out the possibility of application level
buffering. I will take a look.

But the fact is that the value of the "Cached" entry in Task Manager
increases as the file is being read. My understanding is that it
shouldn't increase if the physical memory were being consumed at the
application level.

Thank you.

On Feb 27, 5:45 am, Hector Santos <sant9...(a)nospam.gmail.com> wrote:
> Motonari Ito wrote:
> > I have a 3rd party library which reads a huge media file (7GB+).
> > Obviously the library uses the ReadFile() Win32 API function without
> > the FILE_FLAG_NO_BUFFERING flag, and the size of the file system
> > cache increases while the file is being read.
>
> > This behavior invalidates any valuable data existing in the file
> > system cache. After the file reading completes, the whole system is
> > slowed down for a while. The problem doesn't happen when the media
> > file is located on a remote machine, because the file system cache is not
> > used for remote files.
>
> It might not be the file system cache (which, across the network, is a
> function of the network driver), but the QT API does do its own
> buffering; in fact, Apple has a frivolous patent-pending feature called
> Skip Protection.  So you might want to see if the QT API has an option
> to disable Skip Protection or Instant On on the local machine, where
> such requirements are less.
>
> --
> HLS

From: Liviu on
"Motonari Ito" <motonari.ito(a)gmail.com> wrote...
>
> Hi Pavel,
>
>> Or... why not just report this problem to Apple and let them
>> sort it out?
>
> It's an interesting idea. I somehow didn't think about that...

Just my 2c, but all you can ask Apple for is a special switch or setting
for their QuickTime to use FILE_FLAG_NO_BUFFERING when reading
the media file. Which, depending on the internals of their player, may
or may not be so easy a change to accommodate.
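
Just to show what that entails on their side - FILE_FLAG_NO_BUFFERING
brings sector-alignment rules for the buffer and the read sizes, so it's
a bit more than flipping a flag. A bare-bones sketch of such a read loop
(mine, untested, not anything from QuickTime's code; the helper name is
made up):

    /* Illustration only: unbuffered sequential read. With
       FILE_FLAG_NO_BUFFERING the buffer address and the read size must
       be multiples of the volume sector size; VirtualAlloc returns
       page-aligned memory, which satisfies that. */
    #include <windows.h>

    void ReadUnbuffered(const wchar_t *path)
    {
        HANDLE h = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                               OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, NULL);
        if (h == INVALID_HANDLE_VALUE) return;

        const DWORD chunk = 1 << 20;   /* 1 MB, a multiple of common sector sizes */
        void *buf = VirtualAlloc(NULL, chunk, MEM_COMMIT | MEM_RESERVE,
                                 PAGE_READWRITE);
        DWORD got = 0;
        while (buf && ReadFile(h, buf, chunk, &got, NULL) && got != 0) {
            /* process 'got' bytes; the last read may come back short at EOF */
        }
        if (buf) VirtualFree(buf, 0, MEM_RELEASE);
        CloseHandle(h);
    }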

My point being that I don't see it as a problem of theirs per se. The
same effect can be duplicated at the cmd line with "copy test.tmp nul"
where "test.tmp" is a multi-GB file - which causes previously cached
(but not in active use any longer) file chunks to be discarded in favor
of the new file data being read in. What you describe as "this behavior
invalidates any valuable data existing in the file system cache" is in
fact exactly what the system-level file caching is supposed to do.

One other note, about the Task Manager "System Cache"... Much
(or most) of that memory is in fact available for other processes to
claim and use. The actual in-use cache size is displayed for example in
CacheSet http://technet.microsoft.com/en-us/sysinternals/bb897561.aspx
as the Current Size. The (way) larger number in Task Manager includes
the "standby list", see description of "repurposing a page" in the .doc
at http://www.microsoft.com/whdc/system/hwdesign/MemSizingWin7.mspx.
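
If anyone prefers polling a number from code rather than eyeballing Task
Manager, GetPerformanceInfo() in psapi reports a SystemCache figure -
though, per the docs, that one counts the system working set plus the
standby list, so it tracks the larger Task Manager-style number rather
than CacheSet's Current Size. Rough, untested sketch (function name is
just mine):

    /* Rough sketch: dump a few PERFORMANCE_INFORMATION counters.
       Link with psapi.lib. */
    #include <windows.h>
    #include <psapi.h>
    #include <stdio.h>

    void DumpMemoryCounters(void)
    {
        PERFORMANCE_INFORMATION pi;
        pi.cb = sizeof(pi);
        if (!GetPerformanceInfo(&pi, sizeof(pi)))
            return;
        printf("SystemCache       : %llu MB\n",
               (unsigned long long)pi.SystemCache * pi.PageSize >> 20);
        printf("PhysicalAvailable : %llu MB\n",
               (unsigned long long)pi.PhysicalAvailable * pi.PageSize >> 20);
        printf("PhysicalTotal     : %llu MB\n",
               (unsigned long long)pi.PhysicalTotal * pi.PageSize >> 20);
    }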

Liviu

From: Liviu on
"Jonathan de Boyne Pollard" <J.deBoynePollard-newsgroups(a)NTLWorld.COM>
wrote...
>> My reading of that MS KB paragraph is more along the line of "the
>> look-ahead cache is filled-in whenever the file pointer moves, and
>> the cached data is discarded as soon as it is actually requested/read
>> by the client application". If you have some authoritative source to
>> support your quite different interpretation of the same sentence,
>> please provide a pointer to such documentation.
>
> Your reading is wrong; and it doesn't take much knowledge of how
> mechanisms such as auto-detecting sequential read behaviour work to
> figure this out from first principles [...]

Principles aside for a sec ;-) but empirically the Memory / Cache Bytes
counter in PerfMon shows about the same usage when a huge file is read
with or without FILE_FLAG_SEQUENTIAL_SCAN (corroborated
by Current Size in SysInternals' CacheSet). Given that the flag is
documented to read ahead at least twice as far as otherwise, one could
reasonably presume that the free-behind must somehow compensate for
that in order for the actual cache total to remain constant.
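
For the record, that's just the "\Memory\Cache Bytes" counter, which can
also be sampled programmatically through PDH if someone wants to script
the with/without comparison. Quick, untested sketch (my own, not from
any of the tools mentioned):

    /* Untested sketch: read the "\Memory\Cache Bytes" counter via PDH
       (the same counter PerfMon displays). Link with pdh.lib. */
    #include <windows.h>
    #include <pdh.h>
    #include <stdio.h>

    void SampleCacheBytes(void)
    {
        PDH_HQUERY query = NULL;
        PDH_HCOUNTER counter = NULL;
        PDH_FMT_COUNTERVALUE value;

        if (PdhOpenQueryW(NULL, 0, &query) != ERROR_SUCCESS)
            return;
        if (PdhAddCounterW(query, L"\\Memory\\Cache Bytes", 0, &counter) == ERROR_SUCCESS &&
            PdhCollectQueryData(query) == ERROR_SUCCESS &&
            PdhGetFormattedCounterValue(counter, PDH_FMT_LARGE, NULL, &value) == ERROR_SUCCESS)
        {
            printf("Cache Bytes: %llu MB\n",
                   (unsigned long long)value.largeValue >> 20);
        }
        PdhCloseQuery(query);
    }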

> And this documentation tells you, as does the MSKB article, that the
> FO_SEQUENTIAL_ONLY flag controls read-ahead, not free-behind.

Not a ring-0 person myself, but maybe that's not the only thing it
does... http://support.microsoft.com/kb/164260

|| The CreateFile API has a flag FILE_FLAG_SEQUENTIAL_SCAN
|| that is especially useful when working on files in a sequential
|| manner. It tells Cache Manager not to grow the file cache when
|| requests for this handle arrive. Therefore, Memory Manager does
|| not have to shrink the application's working set to accommodate
|| the bigger cache.
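
For reference, the flag is just another CreateFile() parameter, so it
only helps if whoever does the reading actually passes it. Minimal
sketch (mine, not QuickTime's code):

    /* Minimal sketch: buffered sequential reading, with the
       FILE_FLAG_SEQUENTIAL_SCAN hint to the Cache Manager. */
    #include <windows.h>

    HANDLE OpenForSequentialScan(const wchar_t *path)
    {
        return CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, NULL);
    }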

Liviu

From: Hector Santos on
I've never used the QT API myself, but reading the docs suggests that
the Apple library manages its own buffering, I guess to provide
transparent QT client applet reading with its "frivolous" patent-pending
look-ahead buffering error-correction logic it calls "Skip Protection."
(It's frivolous because there is already prior art in decades-old file
transfer protocols that use look-ahead buffer CRC error-correction
logic.)

In any case, it would be interesting to see, using a file I/O monitor
(FILEMON.SYS), what attributes the QT library uses internally to open
the file and whether it's done "generically," because I somewhat doubt
(a SWAG) the library checks or tunes the attributes depending on whether
the file is local or remote. I would tend to believe it simply lets the
OS file drivers (including the network driver) handle the OS-level
buffering. But then again, maybe Apple has already addressed this with
some setting.
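
Not that the check itself would be hard. Something along these lines
(off the top of my head, untested, name made up) would tell a library
whether a path sits on a remote volume, so it could pick different
CreateFile flags for local vs. remote files:

    /* Untested sketch: is this fully-qualified path on a network volume? */
    #include <windows.h>

    BOOL IsRemotePath(const wchar_t *path)
    {
        wchar_t root[MAX_PATH];
        if (!GetVolumePathNameW(path, root, MAX_PATH))
            return FALSE;                    /* assume local on failure */
        return GetDriveTypeW(root) == DRIVE_REMOTE;
    }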

Overall, what's the issue here?

According to the OP, it is a "file closure" or QT file shutdown issue
that is creating a local machine performance problem. I believe that if
it is presented to Apple this way, they might take the issue more
seriously, if they haven't already.

I see the same thing with other desktop applets, such as a browser doing
a file transfer: if the files are large (what's large?), there is
definitely an observable file closure, cleanup, completion and/or file
moving/copying delay. In fact, I see this so often with the Firefox
browser that when it appears deadlocked (and it's not, you just have to
wait a long time), I will regularly kill the process just to get it to
stop so I can open a new browser window. Otherwise, I have to wait a
long time.

--
HLS


Liviu wrote:

> "Motonari Ito" <motonari.ito(a)gmail.com> wrote...
>> Hi Pavel,
>>
>>> Or... why not just report this problem to Apple and let them
>>> sort it out?
>> It's an interesting idea. I somehow didn't think about that...
>
> Just my 2c, but all you can ask Apple for is a special switch or setting
> for their QuickTime to use FILE_FLAG_NO_BUFFERING when reading
> the media file. Which, depending on the internals of their player, may
> or may not be so easy a change to accommodate.
>
> My point being that I don't see it as a problem of theirs per se. The
> same effect can be duplicated at the cmd line with "copy test.tmp nul"
> where "test.tmp" is a multi-GB file - which causes previously cached
> (but not in active use any longer) file chunks to be discarded in favor
> of the new file data being read in. What you describe as "this behavior
> invalidates any valuable data existing in the file system cache" is in
> fact exactly what the system-level file caching is supposed to do.
>
> One other note, about the Task Manager "System Cache"... Much
> (or most) of that memory is in fact available for other processes to
> claim and use. The actual in-use cache size is displayed for example in
> CacheSet http://technet.microsoft.com/en-us/sysinternals/bb897561.aspx
> as the Current Size. The (way) larger number in Task Manager includes
> the "standby list", see description of "repurposing a page" in the .doc
> at http://www.microsoft.com/whdc/system/hwdesign/MemSizingWin7.mspx.
>
> Liviu

From: Pavel Lebedinsky [MSFT] on
> My quick test shows file I/O through a memory-mapped file still uses
> the system cache. I say so because the "Cached" physical memory value
> reported by task manager is increased by the file size after the
> entire file is read through file mapping.
>
> Am I missing something?


On Win7, the "Cached" counter in Task Manager doesn't include the size
of the system file cache. It only includes standby and modified pages
(see the Memory tab in resmon.exe for details). The system file cache
is considered part of "in-use" memory.

Regarding your original question, do you see a significant decrease
in Available memory while QuickTime is processing the file? If not,
it means the system cache is not growing and the sequential scan
flag is working as expected.
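
If it's more convenient than watching Task Manager, available physical
memory can also be sampled from code. Simplified example, not production
code (the helper name is just for illustration):

    /* Simplified example: sample available physical memory, e.g.
       before, during and after the file is processed. */
    #include <windows.h>
    #include <stdio.h>

    void PrintAvailableMemory(void)
    {
        MEMORYSTATUSEX ms;
        ms.dwLength = sizeof(ms);
        if (GlobalMemoryStatusEx(&ms))
            printf("Available physical memory: %llu MB\n",
                   (unsigned long long)(ms.ullAvailPhys >> 20));
    }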

The Cached value is expected to increase while the file is being
processed, because pages unmapped from the system cache are
placed onto the standby list.

--
Pavel Lebedinsky/Windows Fundamentals Test
This posting is provided "AS IS" with no warranties, and confers no rights.