From: Dan Stromberg on
On Mar 4, 1:15 pm, pk <p...(a)pk.invalid> wrote:
> Dan Stromberg wrote:
> > In case you're interested, I've put a fast GUI pipemeter (measures how
> > fast data is moving through a pipe or redirect and gives two estimates
> > of time-to-completion - one based on the entire transfer so far, and
> > one based on a user-specifiable number of blocks) up at:
>
> >http://stromberg.dnsalias.org/~dstromberg/gprog/
>
> > It uses a dual process design (to make things a bit faster on dual
> > core or better systems) with a cache oblivious algorithm (to self-tune
> > block sizes for good performance) - I've seen it sustain over 2
> > gigabits/second, and that despite Linux' /dev/zero insisting on a tiny
> > blocksize.  I wasn't able to construct a RAM disk large enough to get
> > anything like a sustained result with larger blocksizes than what
> > Linux' /dev/zero likes - that is, not without springing for a new
> > machine with a huge amount of RAM.  IOW, your disk or network will
> > very likely be the bottleneck, not the tool.
>
> > I hope it helps someone.
>
> This sounds similar to "pv", although pv does not have a GUI.

Um, yes, pv is similar, and it has a pretty nice character-cell
interface, as it were. I neglected to mention that I've put a list of
similar tools, including pv, at the top of the gprog page.

Thanks for making sure we were aware of pv.
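
Since pk quoted the bit about the dual-process design: the idea is
just that one process does the reading and a second process does the
writing and the measuring, so the two can overlap on a dual-core or
better box. Very roughly it's the shape of the sketch below - this is
not the actual gprog code, and the counter here only tallies bytes
where gprog also writes them out and estimates time to completion:

    import sys
    from multiprocessing import Process, Queue

    def reader(path, queue, blocksize=1 << 16):
        # One process does nothing but read blocks and hand them off.
        with open(path, 'rb') as infile:
            while True:
                block = infile.read(blocksize)
                queue.put(block)
                if not block:
                    break       # empty read = EOF; tells the counter to stop

    def counter(queue):
        # The other process tallies bytes (stand-in for gprog's
        # writing and time-to-completion estimates).
        total = 0
        while True:
            block = queue.get()
            if not block:
                break
            total += len(block)
        print('%d bytes seen' % total)

    if __name__ == '__main__':
        q = Queue()
        procs = [Process(target=reader, args=(sys.argv[1], q)),
                 Process(target=counter, args=(q,))]
        for p in procs:
            p.start()
        for p in procs:
            p.join()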

Interesting that pv seems to be getting 128K blocks out of /dev/zero.
For some reason, gprog always gets 16K blocks back from /dev/zero,
even when it requests substantially larger ones. gprog detects this
automatically and simply starts asking for 16K.
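
That back-off amounts to something like the following sketch - again
not the actual gprog code, just made-up names showing the shape of it:
ask for a big block, and if the source keeps handing back smaller
chunks, lower the request to match instead of over-asking forever.

    def read_with_backoff(infile, want=1 << 20, floor=1 << 14):
        # Ask for `want` bytes per read(); if the source returns less
        # (but at least `floor` bytes), ask for that amount from then on.
        request = want
        while True:
            block = infile.read(request)
            if not block:
                return                  # EOF
            yield block
            if floor <= len(block) < request:
                request = len(block)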

Python folk: Any guesses why a simple file.read(blocksize) would have
such an affinity for returning 16K when redirected from /dev/zero? If
I run the program against a file on disk, it gets larger blocksizes
fine.
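
In case anyone wants to poke at this themselves, here's a tiny probe
(just a sketch, Linux assumed) that compares what a single buffered
file.read() and a single low-level os.read() hand back from /dev/zero
when asked for 1 MiB. Keep in mind that os.read() is always allowed to
return fewer bytes than requested, so callers have to be prepared for
short reads anyway.

    import os

    BLOCKSIZE = 1 << 20   # ask for 1 MiB at a time

    # Buffered file object - the same kind of read() gprog does
    with open('/dev/zero', 'rb') as zero:
        print('file.read() returned %d bytes' % len(zero.read(BLOCKSIZE)))

    # Unbuffered read straight from the file descriptor, for comparison
    fd = os.open('/dev/zero', os.O_RDONLY)
    try:
        print('os.read() returned %d bytes' % len(os.read(fd, BLOCKSIZE)))
    finally:
        os.close(fd)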




From: Dan Stromberg on
On Mar 4, 4:25 pm, Dan Stromberg <strom...(a)gmail.com> wrote:

> Python folk: Any guesses why a simple file.read(blocksize) would have
> such an affinity for returning 16K when redirected from /dev/zero?  If
> I run the program against a file on disk, it gets larger blocksizes
> fine.

Never mind - it was a bug in my code.

With that fixed, it now runs about 2.5 times faster on a slower
machine than it previously did on the faster one.