From: "Andy "Krazy" Glew" on
Robert Myers wrote:
> On Oct 18, 7:17 am, jacko <jackokr...(a)gmail.com> wrote:
>
>> This does imply that languages which forbode the software bad days are
>> the best place to put research. I think method local write variables,
>> with only one write point, and many read points. Yes spagetti
>> languages are long in the tooth
>
> Much as I loathe c, I don't think it's the problem.
>
> I *think* the problem is that modern computers and OS's have to cope
> with so many different things happening asynchronously that writing
> good code is next to impossible

While I agree that asynchrony is hard, well, I *wish* that were where
the problems began.

E.g., the majority of security flaws are not due to asynchrony. They are
simple buffer overflows that would occur just as readily in a
single-threaded program.

It is only recently that race conditions have started creeping up the
charts as causes of security flaws.
From: "Andy "Krazy" Glew" on
Brett Davis wrote:
> Cool info though, TRIPS is the first modern data flow architecture I
> have looked at. Probably the last as well. ;(

No, no!

All of the modern OOO machines are dynamic dataflow machines in their
hearts. Albeit micro-dataflow: they take a sequential stream of
instructions, convert it into dataflow by register renaming and what
amounts to memory dependency prediction and verification (even if, in
the oldest machine, the prediction was "always depends on earlier stores
whose address is unknown"; now, of course, better predictors are available).

I look forward to slowly, incrementally, increasing the scope of the
dataflow in OOO machines.
* Probably the next step is to make the window bigger, by
multilevel techniques.
* After that, get multiple sequencers from the same single threaded
program feeding in.
* After that, or at the same time, reduce the stupid recomputation
of the dataflow graph that we are constantly redoing.

My vision is of static dataflow nodes being instantiated several times
as dynamic dataflow.

I suppose that you could call TRIPS static dataflow, compiler managed.
But why?
From: Robert Myers on
On Oct 21, 2:09 am, "Andy \"Krazy\" Glew" <ag-n...(a)patten-glew.net>
wrote:
> Robert Myers wrote:
> > On Oct 18, 7:17 am, jacko <jackokr...(a)gmail.com> wrote:
>
> >> This does imply that languages which forbode the software bad days are
> >> the best place to put research. I think method local write variables,
> >> with only one write point, and many read points. Yes spagetti
> >> languages are long in the tooth
>
> > Much as I loathe c, I don't think it's the problem.
>
> > I *think* the problem is that modern computers and OS's have to cope
> > with so many different things happening asynchronously that writing
> > good code is next to impossible
>
> While I agree that asynchrony is hard, well, I *wish* that was where the
> problems begin.
>
> E.g. the majority of security flaws are not due to asynchrony.  They are
> simple buffer overflows, that would occur in a single threaded program.
>
> It is only recently that race conditions have started creeping up the
> charts as causes of security flaws.

I think we are just blind men feeling different parts of the
elephant.

I was thinking about the problem: "Could you, given all the time,
peace and quiet, and support you wanted, write code to give a
reasonable user experience using modern hardware?" and the answer was,
"No, I couldn't."

By "reasonable user experience," I meant an experience that would
*appear* to be reasonable to the user; i.e., the computer would not
suddenly stop responding for unknowably long periods of time or simply
give me an error message in the middle of some important task. I
don't think the requirement is simply hard. I think it's impossible,
given current approaches to user interfaces and managing resources.

You were asking the question, "What single thing could you do to
eliminate the largest number of security vulnerabilities in a single
stroke?"

We don't necessarily disagree. We were asking different questions.

Robert.
From: Robert Myers on
On Oct 21, 2:21 am, "Andy \"Krazy\" Glew" <ag-n...(a)patten-glew.net>
wrote:
>
> All of the modern OOO machines are dynamic dataflow machines in their
> hearts.  Albeit micro-dataflow: they take a sequential stream of
> instructions, convert it into dataflow by register renaming and what
> amounts to memory dependency prediction and verification  (even if, in
> the oldest machine, the prediction was "always depends on earlier stores
> whose address is unknown"; now, of course, better predictors are available).
>
> I look forward to slowly, incrementally, increasing the scope of the
> dataflow in OOO machines.
>      * Probably the next step is to make the window bigger, by
> multilevel techniques.
>      * After that, get multiple sequencers from the same single threaded
> program feeding in.
>      * After that, or at the same time, reduce the stupid recomputation
> of the dataflow graph that we are constantly redoing.
>
> My vision is of static dataflow nodes being instantiated several times
> as dynamic dataflow.
>
I think I saw things headed the same way, until the ugly issue of
power/performance became paramount. Now, throwing more transistors at
a problem no longer helps. We're not out of ideas or transistors;
we're out of watts.

Robert.