From: Dee Earley on
On 18/06/2010 15:51, MM wrote:
> On Fri, 18 Jun 2010 12:12:18 +0100, Dee Earley
> <dee.earley(a)icode.co.uk> wrote:
>
>> Yes, there appears to be a bug in that grid that means it can't handle
>> more than 65Ki entries, but that most likely means it is VERY
>> infrequently used in that situation.
>
> No "appears" about it! It's a bug. Microsoft acknowledged it as such.

I said "appears" as I'm going on third-hand information.
I don't use the control, I haven't researched it, and I haven't hit the problem.

--
Dee Earley (dee.earley(a)icode.co.uk)
i-Catcher Development Team

iCode Systems

(Replies direct to my email address will be ignored.
Please reply to the group.)
From: dpb on
MM wrote:
....

> ... But those possibilities are hamstrung if
> the grid in question has a bug or if it is restricted by design to a
> pitifully low total number of cells.

Only if you insist that the grid is the container for the entire dataset
instead of merely a viewport into a (reasonable) subsection...
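
The viewport idea can be sketched in a few lines. This is a hypothetical illustration, not any real grid control's API: the names (`fetch_page`, `PAGE_SIZE`, the `readings` table) are invented for the example, and the point is simply that the display only ever holds one page of rows while the full dataset stays in the store.

```python
# Minimal sketch: the "grid" is a window onto a paged query, not a
# container for the whole dataset. All names here are illustrative.
import sqlite3

PAGE_SIZE = 500  # rows the display holds at any one time

def fetch_page(conn, page):
    """Return one page of rows; only PAGE_SIZE rows ever leave the store."""
    cur = conn.execute(
        "SELECT id, value FROM readings ORDER BY id LIMIT ? OFFSET ?",
        (PAGE_SIZE, page * PAGE_SIZE),
    )
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (id INTEGER PRIMARY KEY, value REAL)")
conn.executemany("INSERT INTO readings (value) VALUES (?)",
                 [(i * 0.5,) for i in range(100_000)])

rows = fetch_page(conn, 3)    # fourth page of a 100,000-row dataset
print(len(rows), rows[0][0])  # 500 rows, starting at id 1501
```

Scrolling then just maps to re-running the query with a different page offset; the 65K row limit of any particular control never comes into play.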

None of the selection criteria for subsetting you've mentioned are
affected in any way by there being far fewer than 65K actual data lines
presented; _NOBODY_ can keep more than a few dozen general patterns in
mind at one time, nor recognize a real pattern buried in such a large
morass of data that they couldn't see it just as well in a few screens.

The only exception to this would be, of course, structured data where
there's some order already present such that scrolling down brings up
alternative universes within the overall data set -- but having to do
that manually by scrolling through the entire database at a go is,
indeed, the same thing as leafing through the greenbar. (And frankly,
as I age and the eyesight and patience ebb, I'd prefer the greenbar for
such exploratory stuff if I absolutely had to do it, because at least
with it I can use a highlighter and page tabs and so on to have many
views directly available in a way a CRT simply can't match...and
that's a damhikt... :) )

OTOH, if it's a familiar universe of data, I could indeed build the
relevant queries and get the desired subsets essentially w/o ever seeing
the raw data, other than perhaps a subscreen that gives the ranges of
the screening variables for the particular database subject matter.
Those would be things like datestamps,
subject/product/whatever_id, etc., so that the user knows a priori
whether there's even any point in looking for certain classes of events.
But that's best done w/ another summary view, not by expecting the
user to sort thru and guess based on what he can see and remember out of
thousands of entries of who knows how many variables/record...
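
That "subscreen of ranges" amounts to one aggregate query over the screening variables. The sketch below assumes a made-up `events` table with `stamp`, `product_id`, and `value` columns; the schema and names are invented for illustration.

```python
# Sketch (assumed schema): hand the user the ranges of the screening
# variables up front with one aggregate query, instead of making them
# scroll raw rows to discover what's even in the data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (stamp TEXT, product_id INTEGER, value REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)",
                 [(f"2010-06-{d:02d}", d % 5, d * 1.5) for d in range(1, 29)])

lo_stamp, hi_stamp, n_products, lo_v, hi_v = conn.execute(
    "SELECT MIN(stamp), MAX(stamp), COUNT(DISTINCT product_id), "
    "MIN(value), MAX(value) FROM events").fetchone()

print(lo_stamp, hi_stamp, n_products, lo_v, hi_v)
```

One summary row tells the user a priori whether the date range or value range they're after even exists in the set, before any grid is populated.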

$0.02, etc., etc., etc., ...

--
From: dpb on
MM wrote:
> On Tue, 22 Jun 2010 08:20:14 -0500, dpb <none(a)non.net> wrote:
....

> This is nonsense. I can spot patterns in half a million rows
> easy-peasy just by scrolling through the grid

Only if the pattern is contained in a sufficiently adjacent set of data
that you can see it in close enough proximity to remember it...that's
the point. And, if it is, then you only need a very much smaller subset
of the data in the grid at any one time.

....

> There are many ways to address your apparent problems. You could apply
> a bookmark to mark certain rows or ranges of rows then do a sort to
> bring the marked rows to the top. You could select a bunch of rows and
> transfer them to another grid. You could filter out rows based on
> certain criteria. Finally, you can of course simply re-run the query
> underlying the recordset but with different criteria, having seen the
> results you got. But please don't tie me up in dogma so that I cannot
> obtain a complete overview of my data simply because this offends your
> particular design principles! That way is far too restrictive, as is
> your claim that "_NOBODY_" can do certain things. You cannot possibly
> know what everyone is capable of.

Well, it would defy the results of all studies that have been done on
human cognizance/recall if you could find _anybody_ who could retain
several hundred thousand data items in their recollection at any one time...

All your arguments above reduce to looking at smaller subsets of the
data; that's all anybody here is saying: since that subsetting has to
be done anyway, nothing is _really_ lost by the data not all being in a
single display control at one time.

The user, unless intimately familiar w/ the data set, certainly isn't
going to find the range of any particular value in an unordered dataset
by scrolling thru entries manually from top to bottom, trying to make
sure he spots the largest and doesn't miss a bigger one on the way down,
while doing the same for the smallest and for any other corollary
variables he's interested in. It just isn't
feasible. In the end, you have to build all these other access methods
you've enumerated to be practical anyway, so there really is no point in
having every single datapoint in a single view.

It's no different than the oft-heard complaint that a long time series
takes an excessive amount of time to plot/display -- well, if one has an
hour of data at 100 kHz, it doesn't matter what the resolution of the
display device is; there's no point in plotting 360M points, since no
device has that many pixels anyway. So, either decimate (wisely) to show
the overall waveform or subset the time intervals; don't insist on
drawing every stinkin' measured value every bloody time.
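
One way to decimate "wisely" is min/max bucketing: keep the extremes of each bucket so spikes survive the reduction. This is a generic sketch, not tied to any particular plotting package; the function name and bucket count are illustrative.

```python
# Sketch of min/max decimation: reduce a long waveform to roughly the
# display width while preserving the extremes within each bucket.
def minmax_decimate(samples, buckets):
    """Reduce samples to 2*buckets points, keeping per-bucket min and max."""
    n = len(samples)
    out = []
    for b in range(buckets):
        chunk = samples[b * n // buckets:(b + 1) * n // buckets]
        if chunk:
            out.append(min(chunk))
            out.append(max(chunk))
    return out

raw = [(-1) ** i * (i % 97) for i in range(100_000)]  # stand-in waveform
reduced = minmax_decimate(raw, 1000)
print(len(raw), len(reduced))  # 100000 -> 2000
```

A 1000-pixel-wide plot needs at most ~2000 of those 360M points, and because every bucket contributes its own min and max, no spike is silently dropped.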

I'm killing this thread; I'm bored talking to walls/posts...

--
From: dpb on
MM wrote:
....

> "640K ought to be enough for anybody." - Bill Gates, 1981.

The CDC 7600 had ... a 65 Kword primary memory using core and
variable-size (up to 512 Kword) secondary memory (depending on site).
.... in 1969, it carried a price tag around $5 million (more as options
and features were added). ...

Before the switchover to the CDC 6600, followed by the 7600, we used a
Philco S2000 with only 32K of memory and 27 (!!!count 'em) 7-track tape
drives...

We designed and built currently operating power reactors w/ it and its
predecessors (including the slide rule and desktop Monroe calculators
before HP's magic little box). It ain't how much memory you have, it's
what you do with what there is.

I venture 80% of memory and CPU cycles now are wasted entirely on
interface and fluff instead of doing actual useful work.

Same thing can be said for data. Out of the megabytes you may have,
only a tiny fraction can be of any visual use at any one time.

(And, sorry, guess I didn't hit the "K" button after all....ok, there it
is...done! :) )

--