From: Jean on
Can't we build a pipelined machine with no I-cache, using multiple
stream buffers (prefetch queues) and a D-cache instead? Won't the
sequential nature of instruction fetch make it work? Comments!
From: nmm1 on
In article <89e448f1-612e-49f7-90c3-15c5c414ccb2(a)j19g2000yqk.googlegroups.com>,
Jean <alertjean(a)rediffmail.com> wrote:
>Can't we build a pipelined machine with no I-cache, using multiple
>stream buffers (prefetch queues) and a D-cache instead? Won't the
>sequential nature of instruction fetch make it work? Comments!

Yes. It's been done. It worked. Look up 'unified cache'.


Regards,
Nick Maclaren.
From: Jean on
On Oct 15, 8:24 am, n...(a)cam.ac.uk wrote:
> In article <89e448f1-612e-49f7-90c3-15c5c414c...(a)j19g2000yqk.googlegroups..com>,
>
> Jean  <alertj...(a)rediffmail.com> wrote:
> >Can't we build a pipelined machine with no I-cache, using multiple
> >stream buffers (prefetch queues) and a D-cache instead? Won't the
> >sequential nature of instruction fetch make it work? Comments!
>
> Yes.  It's been done.  It worked.  Look up 'unified cache'.
>
> Regards,
> Nick Maclaren.

Isn't a unified cache just a single cache that holds both data and
instructions? Instructions would still be fetched from a cache,
right?

What I was describing is a design in which the cache exists only for
data. In the fetch stage of the pipeline, instructions are fetched
from stream buffers and never go through a cache: Fetch -> Stream
buffer -> Main memory. This should reduce the delay of the fetch
stage, because reading from the head of a stream buffer adds much
less delay than a cache lookup.
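As a rough sketch of that claim, here is a toy model of the fetch
stage (all latencies and buffer behaviour are hypothetical
assumptions, not measurements): sequential fetch hits the head of the
stream buffer in one cycle, and any discontinuity, such as a taken
branch, restarts the stream from main memory.

```python
# Toy fetch-stage model: instructions come only from a sequential
# stream buffer refilled from main memory -- no I-cache anywhere.
# Latencies and buffer behaviour are illustrative assumptions.

MEM_LATENCY = 50  # hypothetical cycles to reach main memory


def stream_buffer_cycles(trace):
    """Cycles to fetch a trace of instruction addresses, assuming the
    prefetcher keeps up with purely sequential fetch (1 cycle/inst)
    and any discontinuity (taken branch) restarts the stream."""
    cycles, expected = 0, None
    for addr in trace:
        if addr == expected:
            cycles += 1            # head of stream buffer: fast path
        else:
            cycles += MEM_LATENCY  # refill the stream from memory
        expected = addr + 1
    return cycles


print(stream_buffer_cycles(list(range(100))))   # straight-line code: 149
print(stream_buffer_cycles([0, 1, 2, 3] * 25))  # 4-inst loop x25: 1325
```

On the straight-line trace the stream buffer behaves almost like a
cache, but the tight loop pays the full memory latency on every back
edge; that is exactly the case an I-cache is built for.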

Jean

From: Joe Pfeiffer on
Jean <alertjean(a)rediffmail.com> writes:

> On Oct 15, 8:24 am, n...(a)cam.ac.uk wrote:
>> In article <89e448f1-612e-49f7-90c3-15c5c414c...(a)j19g2000yqk.googlegroups.com>,
>>
>> Jean  <alertj...(a)rediffmail.com> wrote:
>> >Can't we build a pipelined machine with no I-cache, using multiple
>> >stream buffers (prefetch queues) and a D-cache instead? Won't the
>> >sequential nature of instruction fetch make it work? Comments!
>>
>> Yes.  It's been done.  It worked.  Look up 'unified cache'.
>>
>> Regards,
>> Nick Maclaren.
>
> Isn't a unified cache just a single cache that holds both data and
> instructions? Instructions would still be fetched from a cache,
> right?
>
> What I was describing is a design in which the cache exists only for
> data. In the fetch stage of the pipeline, instructions are fetched
> from stream buffers and never go through a cache: Fetch -> Stream
> buffer -> Main memory. This should reduce the delay of the fetch
> stage, because reading from the head of a stream buffer adds much
> less delay than a cache lookup.
>
> Jean
>

Consider the number of cycles required to fetch from main memory to your
stream buffer.
--
As we enjoy great advantages from the inventions of others, we should
be glad of an opportunity to serve others by any invention of ours;
and this we should do freely and generously. (Benjamin Franklin)
From: Jean on
On Oct 15, 12:29 pm, Joe Pfeiffer <pfeif...(a)cs.nmsu.edu> wrote:
> Jean <alertj...(a)rediffmail.com> writes:
> > On Oct 15, 8:24 am, n...(a)cam.ac.uk wrote:
> >> In article <89e448f1-612e-49f7-90c3-15c5c414c...(a)j19g2000yqk.googlegroups.com>,
>
> >> Jean  <alertj...(a)rediffmail.com> wrote:
> >> >Can't we build a pipelined machine with no I-cache, using multiple
> >> >stream buffers (prefetch queues) and a D-cache instead? Won't the
> >> >sequential nature of instruction fetch make it work? Comments!
>
> >> Yes.  It's been done.  It worked.  Look up 'unified cache'.
>
> >> Regards,
> >> Nick Maclaren.
>
> > Isn't a unified cache just a single cache that holds both data and
> > instructions? Instructions would still be fetched from a cache,
> > right?
>
> > What I was describing is a design in which the cache exists only for
> > data. In the fetch stage of the pipeline, instructions are fetched
> > from stream buffers and never go through a cache: Fetch -> Stream
> > buffer -> Main memory. This should reduce the delay of the fetch
> > stage, because reading from the head of a stream buffer adds much
> > less delay than a cache lookup.
>
> > Jean
>
> Consider the number of cycles required to fetch from main memory to your
> stream buffer.

Those cycles will always be there, even with a cache: a cache miss
still has to go all the way to main memory.
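To put rough numbers on that (purely hypothetical hit rates and
latencies): a cache pays the memory latency only on misses, while a
stream-buffer-only fetch pays it on every taken branch. A quick
back-of-envelope comparison of average fetch latency:

```python
# Back-of-envelope average fetch latency; all numbers are hypothetical.
MEM = 50  # cycles to main memory

# I-cache: assume 98% of fetches hit (loops and hot code stay resident)
icache_avg = 0.98 * 1 + 0.02 * MEM  # 1.98 cycles per fetch

# Stream buffer only: assume a taken branch every 8 instructions,
# and each taken branch restarts the stream from main memory
stream_avg = (7 / 8) * 1 + (1 / 8) * MEM  # 7.125 cycles per fetch

print(icache_avg, stream_avg)
```

Under these assumed numbers the stream-buffer-only fetch stage
averages several times the latency of the cached one: the sequential
case is cheap, but the branches dominate.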