From: George Neuner on
On Tue, 27 Apr 2010 12:56:14 -0400, Robert Myers
<rbmyersusa(a)gmail.com> wrote:

>George Neuner wrote:
>
>>
>> I'm particularly interested in any previous work on compiler driven
>> speculation. AFAICT the research along that line died with Algol.
>>
>
>A search of portal.acm.org turns up over 1000 hits for "compiler
>speculation," and much of the work is relatively recent.

I've looked at a sampling of it. Much of the current research is
focused on transactional processing in various forms. While
interesting in its own right, SpecTP is not the type of speculation
I'm after.


>I have 346 pdf documents in my own files that match "speculative," many
>of them related to compiler driven speculation--so many that digging
>through them to find the most relevant would be a big undertaking. Most
>of the work is relatively recent.

What I'm interested in is generalized auto-parallelization and using
speculative execution to boost performance by computing results (where
possible) before they're needed rather than on-demand. What I'm
pursuing right now is a constraint-logic variant of data-flow based on
memoized call-by-need in conjunction with speculative execution. The
original lines of research date to the late 70's and were targeted at
programming the expected many-processor micros that never
materialized.

Note that I'm not talking about the kind of source level rendezvous
processing as in Lucid and Oz ... what I'm referring to is closer to
the function parallelism in pHaskell and to futures in the parallel
Lisps, but with functions/futures determined by the compiler and
(potentially) executed as soon as their input is available. SpecTP
has a place in this but is not the whole of it.
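To give a rough flavor of what I mean, here's a minimal sketch (hypothetical work units f and g standing in for whatever the compiler would identify) of futures started speculatively as soon as their input is available, with memoized call-by-need so repeated demands are free:

```python
# Sketch only: the dependency graph here is written out by hand, but the
# idea is that the compiler would emit it -- each node is submitted for
# speculative evaluation as soon as its input is known, and memoized so
# a later demand of the same value costs nothing.
from concurrent.futures import ThreadPoolExecutor
from functools import lru_cache

@lru_cache(maxsize=None)          # memoized call-by-need
def f(x):
    return x * x

@lru_cache(maxsize=None)
def g(x):
    return x + 1

def demand(pool, x):
    # Speculatively start f and g the moment x is available,
    # rather than waiting until each result is demanded.
    ff = pool.submit(f, x)
    fg = pool.submit(g, x)
    # ... unrelated work could overlap here while f and g execute ...
    return ff.result() + fg.result()   # results forced on demand

with ThreadPoolExecutor() as pool:
    print(demand(pool, 6))   # 36 + 7 = 43
```

The real thing would, of course, have the compiler deciding which expressions are safe and profitable to launch early; this just shows the execution model.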


I'm not really seeking a discussion on all of this because it will
quickly become very technical (and redundant as some of the things
have been discussed in comp.compilers). I just wanted more
information on what Andy was doing because his description sounded
interesting.

George
From: Robert Myers on
On Apr 27, 5:08 pm, George Neuner <gneun...(a)comcast.net> wrote:
> On Tue, 27 Apr 2010 12:56:14 -0400, Robert Myers
>
> <rbmyers...(a)gmail.com> wrote:
> >George Neuner wrote:
>
> >> I'm particularly interested in any previous work on compiler driven
> >> speculation.  AFAICT the research along that line died with Algol.
>
> >A search of portal.acm.org turns up over 1000 hits for "compiler
> >speculation," and much of the work is relatively recent.
>
> I've looked at a sampling of it.  Much of the current research is
> focused on transactional processing in various forms.  While
> interesting in its own right, SpecTP is not the type of speculation
> I'm after.
>
> >I have 346 pdf documents in my own files that match "speculative," many
> >of them related to compiler driven speculation--so many that digging
> >through them to find the most relevant would be a big undertaking.  Most
> >of the work is relatively recent.
>
> What I'm interested in is generalized auto-parallelization and using
> speculative execution to boost performance by computing results (where
> possible) before they're needed rather than on-demand.  What I'm
> pursuing right now is a constraint-logic variant of data-flow based on
> memoized call-by-need in conjunction with speculative execution.  The
> original lines of research date to the late 70's and were targeted at
> programming the expected many-processor micros that never
> materialized.
>
> Note that I'm not talking about the kind of source level rendezvous
> processing as in Lucid and Oz ... what I'm referring to is closer to
> the function parallelism in pHaskell and to futures in the parallel
> Lisps, but with functions/futures determined by the compiler and
> (potentially) executed as soon as their input is available.  SpecTP
> has a place in this but is not the whole of it.

Thanks for that information. FWIW, I started this thread not because
I have any particular interest in transaction processing (I don't) but
because I had previously referred to the paper in question and
couldn't find a citation. I am also interested in processor trends
that are driven by considerations that otherwise would be of no
interest to me. To be perfectly honest, I wasn't even aware that
there had been work on speculative execution related to transaction
processing.

Most of the work I'm aware of is aimed at identifying those execution
paths that can be speculatively executed to speed up garden variety
computation with what were at the time standard test cases (gcc, bzip,
etc.). The speculative paths are set up by the compiler without
programmer intervention, other than making required profiling runs.
So far as I can tell, that kind of work has largely dried up for
reasons I already presented to Andy, to which he responded with
impatient hostility. Andy's sizzling reply and your patronizing
response notwithstanding, the world is a different place from what it
was in the seventies, or even ten years ago.

> I'm not really seeking a discussion on all of this because it will
> quickly become very technical (and redundant as some of the things
> have been discussed in comp.compilers).  I just wanted more
> information on what Andy was doing because his description sounded
> interesting.

I'll probably have a look at what might have been said on
comp.compilers, but, as to your tone, this list is *not*
comp.compilers. Comp.arch has had long dry spells. At least people
are talking. If you need a place to be pompous, I suggest you choose
a moderated list where you are a part of the moderator's club.

Robert.


From: Robert Myers on
On Apr 28, 3:36 pm, George Neuner <gneun...(a)comcast.net> wrote:

>
> What remains mostly is research into ways of recognizing repetitious
> patterns of data access in linked data structures (lists, trees,
> graphs, tries, etc.) and automatically prefetching data in advance of
> its use.  I haven't followed this research too closely, but my
> impression is that it remains a hard problem.
>

I suspect that explains a mysterious private email I got while
publicly discussing Itanium and profile-directed optimization. The
sender claimed that the well-known compiler developer he worked for
had found a means to predict irregular data access from static
analysis, so that the compiler could supply prefetch hints even for
an irregular memory stride.
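For the easy end of that problem, a toy sketch (hypothetical helper names, nothing from any real compiler) of how a pass might recognize a regular access stream and predict the next few addresses to prefetch:

```python
# Sketch of stride recognition over a short window of recent addresses.
# A regular stream yields a prediction; an irregular (pointer-chasing)
# stream yields nothing, which is exactly why linked structures are hard.

def detect_stride(addrs, window=4):
    """Return the constant stride over the last `window` accesses, or None."""
    if len(addrs) < window:
        return None
    tail = addrs[-window:]
    deltas = {b - a for a, b in zip(tail, tail[1:])}
    return deltas.pop() if len(deltas) == 1 else None

def prefetch_targets(addrs, distance=2):
    """Predict the next `distance` addresses if the stream looks regular."""
    stride = detect_stride(addrs)
    if stride is None:
        return []
    return [addrs[-1] + stride * i for i in range(1, distance + 1)]

# Stride-64 stream (one cache line apart) is predictable:
print(prefetch_targets([0, 64, 128, 192]))    # [256, 320]
# A pointer-chasing stream through a linked structure is not:
print(prefetch_targets([0, 712, 40, 3048]))   # []
```

Predicting the irregular case from static analysis alone, as that email claimed, is the part that would be genuinely impressive.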

Robert.

From: Del Cecchi on

"Andrew Reilly" <areilly---(a)bigpond.net.au> wrote in message
news:839omqF5fbU1(a)mid.individual.net...
> Hi all,
>
> On Wed, 21 Apr 2010 15:36:42 -0700, MitchAlsup wrote:
>
>> Since this paper was written slightly before the x86 crushed out RISCs
>> in their entirety, the modern reality is that technical, commercial, and
>> database applications are being held hostage to PC-based thinking. It
>> has become just too expensive to target (with more than lip service)
>> application domains other than PCs (for non-mobile applications). Thus
>> the high end PC chips do not have the memory systems nor interconnects
>> that would better serve other workloads and larger-footprint server
>> systems.
>
> I used to look down on the "PC" computers from the MIPS and SPARC
> machines that I was using, back in the early 90s, but it doesn't seem to
> me that the memory systems of well-specced PC systems of today leave
> anything behind that the good technical workstations of that era had.
> The current set of chips pretty much seem to be pin limited, which is
> the same situation that anyone trying to do a purpose-designed technical
> workstation would have to deal with anyway.
>
> So what is "PC-based thinking", and how is it holding us back? What
> could we do differently, in an alternate universe, or with an unlimited
> bank balance?
>
> Cheers,

Power7? Blue Waters?

PC-based thinking is designing chips for relatively mass-market
applications, since otherwise the development expense makes them not
cost-effective?
>
> --
> Andrew


From: Anne & Lynn Wheeler on

"Del Cecchi" <delcecchi(a)gmail.com> writes:
> Power7? Blue Waters?
>
> PC-based thinking is designing chips for relatively mass-market
> applications, since otherwise the development expense makes them not
> cost-effective?

although incremental cost for moving into niches may work.

IBM goes elephant with Nehalem-EX iron; Massive memory for racks and
blades
http://www.theregister.co.uk/2010/04/01/ibm_xeon_7500_servers/

from above:

With so much of its money and profits coming from big Power and
mainframe servers, you can bet that IBM is not exactly enthusiastic
about the advent of the eight-core "Nehalem-EX" Xeon 7500 processors
from Intel and their ability to link up to eight sockets together in a
single system image. But IBM can't let other server makers own this
space either, so it had to make some tough choices.

.... snip ...

from a thread in ibm-main mainframe mailing list
http://www.garlic.com/~lynn/2010g.html#25 Intel Nehalem-EX Aims for the Mainframe
http://www.garlic.com/~lynn/2010g.html#27 Intel Nehalem-EX Aims for the Mainframe
http://www.garlic.com/~lynn/2010g.html#28 Intel Nehalem-EX Aims for the Mainframe
http://www.garlic.com/~lynn/2010g.html#32 Intel Nehalem-EX Aims for the Mainframe
http://www.garlic.com/~lynn/2010g.html#35 Intel Nehalem-EX Aims for the Mainframe

and a reference that, w/o competition, one can charge $18m for a $3m computer

Financial Matters: Mainframe Processor Pricing History
http://www.zjournal.com/index.cfm?section=article&aid=346

from above (2006) article:

is that the price per MIPS today is approximately six times higher than
the $165 per MIPS that the traditional technology/price decline link
would have produced

.... snip ...

in this thread (from same mailing list):
http://www.garlic.com/~lynn/2010h.html#51 25 reasons why hardware is still hot at IBM
http://www.garlic.com/~lynn/2010h.html#56 25 reasons why hardware is still hot at IBM
http://www.garlic.com/~lynn/2010h.html#62 25 reasons why hardware is still hot at IBM
http://www.garlic.com/~lynn/2010h.html#63 25 reasons why hardware is still hot at IBM
http://www.garlic.com/~lynn/2010h.html#66 25 reasons why hardware is still hot at IBM
http://www.garlic.com/~lynn/2010h.html#70 25 reasons why hardware is still hot at IBM
http://www.garlic.com/~lynn/2010h.html#71 25 reasons why hardware is still hot at IBM
http://www.garlic.com/~lynn/2010h.html#79 25 reasons why hardware is still hot at IBM
http://www.garlic.com/~lynn/2010h.html#81 25 reasons why hardware is still hot at IBM

--
42yrs virtualization experience (since Jan68), online at home since Mar1970