From: Nick Maclaren on

In article <4v4upnF1afrdrU2(a)mid.individual.net>,
Andrew Reilly <andrew-newspost(a)areilly.bpc-users.org> writes:
|>
|> Sure it's a fair comparison. The dedicated desktop has nothing better to
|> do than flip contexts for your pleasure. The only time when the
|> overhead is an issue is if you're trying to serve a significant number
|> of people, xterminal+shared-server style. Hardly anyone does that any more,
|> so I guess everyone else has moved that GUI functionality out to the
|> terminal (i.e., dedicated desktop).

Some of us do a bit more than contemplate the glories of a GUI, or
at least would like to. I am not joking when I say that the GUI
can impact the performance of other applications, even when there
are more than enough CPUs to go round. Have you ever tried to tune
an HPC application on a desktop while doing GUI work on it?

And, as has been repeatedly pointed out, the performance of GUIs is a
major pain, and it is not improving.


Regards,
Nick Maclaren.
From: jacko on

BDH wrote:
> > > I have friends who are looking to hire graphics talent, but they are
> > > finding that the talent pool is drying up because the perception is that
> > > graphics is a "solved" problem. They are not even finding it in India
> > > and China. The normal computing channels have guys who think they don't
> > > know graphics and art, and they have more than enough graphic arts
> > > people, but fewer and fewer on the technical coding side.
> > >
> >
> > Well, perhaps the problem of graphics is solved in terms of
> > hardware-accelerated boards and libraries to drive them, available at
> > low cost to all, but one would hope that this doesn't mean the end of
> > basic research...
>
> Personally I see 3d wavelet graphics as a promising alternative to
> meshes.

My go at 3d http://indi.microfpga.com/Majiki/Majiki.zip

From: jacko on

jacko wrote:
> My go at 3d http://indi.microfpga.com/Majiki/Majiki.zip

Not yet complete and still needs some testing, but it basically uses
squares and would look similar to Pole Position-style graphics. BUT I
intend automatic majik-eye (autostereogram) generation for colour
stereo vision.

Not sure Java is up to the real-time rendering task.

From: Terje Mathisen on
Nick Maclaren wrote:
> Been there, seen that. It is one of the reasons that Cambridge (Phoenix),
> Gothenburg (GUTS) and Michigan (MTS) managed to put so many more users
> onto MVS-based systems than IBM could achieve. All of those alleviated
> that problem in various ways.

Almost 20 years ago, while we were developing the Oseberg North Sea oil
field, we had a couple of modules under construction in Holland
(Rotterdam/Den Haag), and the two possible alternatives for comms links
were leased SDLC or X.25.

X.25 had been considered because it seemed capable of cutting costs by
75-85%, but the 2.5 second ping time made it totally unfeasible, since
the core apps depended on tty-style remote echo for everything.

I suddenly realized that this could be fixed after-the-fact with a
better terminal emulator:

I spent 24 hours modifying a VT100/VT220 emulator I had previously
written, to maintain two complete contexts:

The screen as seen by the remote server, and a local screen.

I added a small macro language that let me specify, per remote
application, how the app worked: basically a set of rules for which
keystrokes could be handled locally and which had to trigger a flush
operation, i.e. sending all locally-handled keystrokes to the server.

I also had software timeouts, so that as soon as you stopped typing,
anything outstanding would be sent off as well.

When I received anything from the server, I would silently update the
server context only, while comparing the local and server contexts:

If the update made the difference smaller, I did nothing; if not, I
copied the server context on top of the client view, which gave a
momentary flicker of the screen.
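
To make the two-context trick concrete, here is a rough sketch in C of
the bookkeeping involved (illustrative only: the types, sizes and names
are mine, not the original emulator's, and the real thing of course had
to track full VT100/VT220 state, attributes and cursor movement):

#include <stdio.h>

#define ROWS 24
#define COLS 80

typedef struct {
    char cell[ROWS][COLS];   /* character cells only; attributes omitted */
} Screen;

static Screen server_ctx;    /* the screen as the remote host last painted it */
static Screen local_ctx;     /* the screen the user actually sees             */

static char pending[256];    /* locally-echoed keystrokes not yet sent        */
static int  npending;

/* Count the cells where the two contexts disagree. */
static int screen_diff(const Screen *a, const Screen *b)
{
    int n = 0, r, c;
    for (r = 0; r < ROWS; r++)
        for (c = 0; c < COLS; c++)
            if (a->cell[r][c] != b->cell[r][c])
                n++;
    return n;
}

/* Local echo: paint the keystroke into the local view and queue it;
   nothing goes on the wire until a flush rule or a timeout fires.   */
static void local_echo(int row, int col, char ch)
{
    local_ctx.cell[row][col] = ch;
    if (npending < (int)sizeof pending)
        pending[npending++] = ch;
}

/* A flush rule or software timeout fired: ship everything outstanding
   to the server in one packet (the actual send is elided here).      */
static void flush_pending(void)
{
    if (npending > 0) {
        /* send_to_server(pending, npending); */
        npending = 0;
    }
}

/* A server update arrives: apply it to the server context only, then
   decide whether the local view needs repainting.  If the update made
   the difference smaller, stay silent; otherwise copy the server
   context over the local view (the "momentary flicker" case).        */
static void on_server_update(int row, int col, char ch)
{
    int before = screen_diff(&server_ctx, &local_ctx);
    server_ctx.cell[row][col] = ch;
    if (screen_diff(&server_ctx, &local_ctx) >= before)
        local_ctx = server_ctx;
}

int main(void)
{
    local_echo(0, 0, 'a');       /* user types 'a', echoed locally          */
    flush_pending();             /* a rule or timeout ships it upstream     */
    on_server_update(0, 0, 'A'); /* server echoes the forced-uppercase 'A'  */
    printf("local view now shows '%c'\n", local_ctx.cell[0][0]);
    return 0;
}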

The result of all this was to increase the average packet size (X.25
payload) by nearly a factor of 10, reducing our communication costs
(which were measured in millions of NOK per year) by 90%.

The only real glitch the users ever noticed was that when they had an
input field which the server forced to be always uppercase, my
terminal emulator would give a local lowercase echo, and then everything
would switch to uppercase a second or two later.

Using a block-mode terminal, like a web HTML form or an IBM 3270, would
have solved the same problem much more cleanly, but neither was a
possible option at the time.

Terje
--
- <Terje.Mathisen(a)hda.hydro.com>
"almost all programming can be viewed as an exercise in caching"
From: David Ball on
On 23 Dec 2006 10:05:25 GMT, nmm1(a)cus.cam.ac.uk (Nick Maclaren) wrote:

>
>In article <jqYih.28100$kM2.27587(a)newsfe7-win.ntli.net>,
>ChrisQuayle <nospam(a)devnul.co.uk> writes:
>|>
>|> That's a very good summary, though it doesn't seem particularly arduous
>|> or inefficient. It may have been in the early days of X, when CPU
>|> throughput and memory were limited and a lot of the graphics processing
>|> was done in software (see the DEC WRL reports on dumb colour frame
>|> buffers and how they optimised the system, for example), but it isn't so
>|> serious now. You get flexibility at a systems programming level at the
>|> expense of efficiency.
>
>Unfortunately, that would not be so even if the components were
>implemented efficiently, which they aren't. The problem is in cache
>and TLB draining, and the impact on OTHER processes running on the
>same system. I have seen the use of X degrade the throughput by a
>factor of two, even though there were enough CPUs available at all
>times!
>
>|> Must look up the NeWs system to see how it works - can we have a similar
>|> summary for that as well ?...
>
>I never looked at it in depth; all I noted was that it had some features
>to alleviate this problem. Sorry.

I usually just lurk since I'm not a chip designer and haven't worked
with debugging boards and writing firmware since the 8080/8085/Z-80
days. I just wanted to point out that on the Linux kernel mailing
list, getting X to be responsive without messing up the rest of the
system seemed to cause an incredible number of problems in the
scheduler. I think they ended up with a bunch of code to try to decide
whether a process was interactive and give it priority for short bursts
of CPU time, then degrade it to non-interactive if it used too much
CPU. IIRC, they spent months tuning it so that it would neither starve
important processes nor degrade the display or mess up things like
playing mp3 files. I don't follow the list as much as I used to, so
I'm not really sure if they ever found something they were reasonably
satisfied with.
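
As far as I remember, the heuristic worked along these lines (a toy
sketch, not the actual kernel code; the names and constants here are
made up): a task earns "interactivity" credit while it sleeps waiting
for events and spends it while it runs, so an X server handling short
bursts stays interactive while a CPU hog gets demoted.

#include <stdio.h>

#define MAX_SLEEP_AVG   1000  /* ms of credit a task may accumulate          */
#define INTERACTIVE_MIN  600  /* credit above this => treated as interactive */

struct task {
    const char *name;
    int sleep_avg;            /* grows while sleeping, shrinks while running */
};

/* The task slept (e.g. waiting for keyboard or X events): earn credit. */
static void account_sleep(struct task *t, int slept_ms)
{
    t->sleep_avg += slept_ms;
    if (t->sleep_avg > MAX_SLEEP_AVG)
        t->sleep_avg = MAX_SLEEP_AVG;
}

/* The task ran on the CPU: spend credit, so a CPU hog is demoted to
   non-interactive no matter how it started out.                      */
static void account_run(struct task *t, int ran_ms)
{
    t->sleep_avg -= ran_ms;
    if (t->sleep_avg < 0)
        t->sleep_avg = 0;
}

/* Interactive tasks would get a priority bonus and short-burst
   preference from the scheduler; the rest get normal treatment. */
static int is_interactive(const struct task *t)
{
    return t->sleep_avg >= INTERACTIVE_MIN;
}

int main(void)
{
    struct task x = { "X server", 0 };

    account_sleep(&x, 800);   /* mostly idle, waiting for input       */
    account_run(&x, 50);      /* short burst handling an event        */
    printf("%s interactive? %d\n", x.name, is_interactive(&x));

    account_run(&x, 900);     /* a long redraw storm eats the credit  */
    printf("%s interactive? %d\n", x.name, is_interactive(&x));
    return 0;
}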

-- David (8080/Z-80/x86 asm/c/c++ programmer who used to do firmware)