From: KJ on

"Martin Schoeberl" <mschoebe(a)mail.tuwien.ac.at> wrote in message
news:44de421a$0$12126$3b214f66(a)tunews.univie.ac.at...
>> Not really, it is just simpler to say that I'm not going to go anywhere
>> near code that can potentially change any of the outputs if wait request
>> is active. As an example, take a look at your code below where you've
>> had to sprinkle the 'if av_waitrequest = '0' throughout the code to make
>> sure you don't change states at the 'wrong' time (i.e. when
>> av_waitrequest is active). Where problems can come up is when you miss
>> one of those 'if av_waitrequest = '0'' statements. Depending on just
>> where exactly you missed putting it in, it can be a rather
>> subtle problem to debug.
>
> Agree on the safe side, but...
>
>>
>> Now consider if you had simply put the 'if av_waitrequest = '0' statement
>> around your entire case statement (with it understood that outside that
>
> I cannot do this. This case statement is combinational. It would introduce
> a latch for next_state. The reason to split the state machine into
> combinational next-state logic and a clocked part is to react
> 'one cycle earlier' with state machine output registers depending
> on next_state. You can code this also with a single case in a clocked
> process. However, then you have to code your output registers on the
> transitions (in the if part), which gets a little bit more confusing.

Well, there is always great debate between the one-process and two-process
folks about this, but whether one thinks that one process or two is
'more' confusing is usually a function of the designer themselves. The
code will synthesize to the same darn thing. The truth is that either
method can be written just as clearly and understandably if one tries. One
thing that can definitely be said, though, is that one-process code will be
shorter in terms of lines of code.

In any case, using my template pretty much anyone can guarantee by
inspection that Avalon won't be violated (in terms of not allowing address or
control to change while waitrequest is active). The way you've coded it
you're simply replicating the same check on waitrequest in a somewhat
confusing manner (i.e. when waitrequest is active and the master is being
told to 'wait', you're actively going about changing from one state to the
next). To me that is a bit counterintuitive: you've been told to wait. On
top of that there is also a state where transitions are made independent of
waitrequest. Hopefully you're never reaching this state while read or write
is active, or it will cause a failure for not heeding the wait request.
I'm not saying that it can't be made to work, just that the code can be
simplified quite a bit and made clearer by not writing it this way in the
first place. You might try re-writing the process as a synchronous clocked
process and see for yourself which is clearer.
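A minimal sketch of what such a synchronous single-process version could look like, with the entire case statement behind one waitrequest guard (the sc_*/av_* names follow the posted code; everything else is illustrative, not the actual template):

```vhdl
-- One-process master FSM: a single waitrequest guard around the whole
-- case statement means no Avalon output can change while the slave is
-- stalling the master.  Reduced to a read-only master for brevity.
process(clk)
begin
    if rising_edge(clk) then
        if reset = '1' then
            state   <= idl;
            av_read <= '0';
        elsif av_waitrequest = '0' then  -- nothing may change during a wait
            case state is
                when idl =>
                    if sc_rd = '1' then
                        av_read <= '1';  -- registered on the transition
                        state   <= rd;
                    end if;
                when rd =>
                    av_read <= '0';      -- request accepted, release it
                    state   <= idl;
                when others =>
                    state <= idl;
            end case;
        end if;
    end if;
end process;
```

The trade-off is exactly the one discussed above: outputs are coded on the transitions, in exchange for a single, obviously correct waitrequest check.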

On top of that, using my template you've got the assurance that it will be
fully Avalon compliant; with your approach it's not at all obvious whether
that is the case under all operational conditions.

>
>>>
>>> What about this version (sc_* signals are my internal master signals)
>>>
>>> that case is the next state logic and combinatorial:
>
> the process containing this case statement is:
>
> process(state, sc_rd, sc_wr, av_waitrequest)
>
> begin
>
> next_state <= state;
>
>>>
>>> case state is
>>>
>>> when idl =>
>>> if sc_rd='1' then
>>> if av_waitrequest='0' then
>>> next_state <= rd;
>>> else
>>> next_state <= rdw;
>>> end if;
>>> elsif sc_wr='1' then
>>> if av_waitrequest='0' then
>>> next_state <= wr;
>>> else
>>> next_state <= wrw;
>>> end if;
>>> end if;
>>>
>>> when rdw =>
>>> if av_waitrequest='0' then
>>> next_state <= rd;
>>> end if;

-- Oops! Since this is a combinatorial process, where is the 'else'
branch on the above 'if' statement? Looks like it will cause a latch. Score
one for the one-process folks ;)

>>>
>>
>>> when rd =>
>>> next_state <= idl;
>> --- Are you sure you always want to go to idl? This would probably cause
>> an error if the avalon outputs were active in this state.
>
> No problem as next_state goes to rd only when av_waitrequest is '0'.
> Perhaps 'rd' is a misleading state name. The input data is registered
> when next_state is 'rd'. So state is 'rd' when the input data is
> registered.
>
Might consider putting an assert that the Avalon read and write signals are
not set in the 'rd' state. From a code maintenance perspective, say that
somewhere down the road someone wanted to modify the code somewhat and use
state 'rd' without 'knowing' that read and write had better not be set when
entering this state. You could add this as a comment, but it is better to add
it as an assert, since an assert is active design code and a much stronger
flag of design intent.
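Something along these lines would capture that intent directly in the code (same state names as the posted FSM; a simulation-time check only, not tested against the actual design):

```vhdl
when rd =>
    -- Guard the assumption the state machine relies on: 'rd' must only
    -- be entered once the Avalon request lines have been released.
    assert av_read = '0' and av_write = '0'
        report "state 'rd' entered while an Avalon read/write is still active"
        severity error;
    next_state <= idl;
```

Unlike a comment, the assert fires in simulation the moment a later modification breaks the assumption.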

>>
>> Whether it works or not for you would take more analysis, I'll just say
>> that
>
> For a complete picture you can look at the whole thing at:
> http://www.opencores.org/cvsweb.cgi/~checkout~/jop/vhdl/scio/sc2avalon.vhd
>
>
>>>> You might try looking at incorporating the above mentioned template and
>>>> avoid the Avalon violation. What I've also found in debugging other's
>>>> code
>>>
>>> Then I get an additional cycle latency. That's what I want to avoid.
>>
>> Not on the Avalon bus, maybe for getting stuff into the template but even
>> that is a handshake. I've even used Avalon within components to transfer
>
> OK, then not on the Avalon bus directly but, as you said, 'getting stuff
> into the template'. That's the same for me (in my case).
>
> If my master has an (internal) read request and I have to forward it
> to Avalon in a clocked process (as you do with your template)
> I will lose one cycle. OK, in the interface and not on the bus.
> Still a lost cycle ;-)
>
Not always a lost cycle. It would be more properly considered a pipeline
stage. A pipelined design adds latency but does not by itself hurt
throughput. In fact, the reason one would use pipelining is to improve
clock cycle performance. If you're thinking only about interfacing to an


From: Martin Schoeberl on
> Well, there is always great debate between the one process and two process folks about this but whether one thinks that one
> process or two process is

It's just a matter of style. I change it from case to case.
We don't need to start a flame on it ;-)

> you're simply replicating the same check on waitrequest in a somewhat confusing manner (i.e. when waitrequest is active and the
> master is being

Again what is confusing depends on style.

> On top of that using my template you've got the assurance that it will be fully Avalon compliant, with your approach it's not at
> all obvious if that is the case or not under all operational conditions.
>
>>>> that case is the next state logic and combinatorial:
>>
>> the process containing this case statement is:
>>
>> process(state, sc_rd, sc_wr, av_waitrequest)
>>
>> begin
>>
>> next_state <= state;
.....
>>
>>>>
>>>> when rdw =>
>>>> if av_waitrequest='0' then
>>>> next_state <= rd;
>>>> end if;
>
> -- Oops! Since this is a combinatorial process, where is the 'else' branch on the above 'if' statement? Looks like it will
> cause a latch. Score one for the one-process folks ;)

No, you did not catch me on this one ;-) See above the default
assignment: next_state <= state

Rule 1 for combinatorial processes: Assign defaults at the beginning
to avoid latches.
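The rule can be seen in isolation (generic names, reduced to the two states in question):

```vhdl
-- Rule 1 in action: the default assignment at the top of the process
-- covers every path through the case, so an 'if' without an 'else'
-- cannot infer a latch.
process(state, av_waitrequest)
begin
    next_state <= state;              -- default: hold the current state
    case state is
        when rdw =>
            if av_waitrequest = '0' then
                next_state <= rd;     -- override only when the wait ends
            end if;                   -- no 'else' needed: default applies
        when others =>
            null;
    end case;
end process;
```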

> Might cosider putting an assert that the Avalon read and write signals are not set in the 'rd' state. From a code maintenance
> perspective, say that somewhere down the road someone wanted to modify the code somewhat and use state 'rd' and didn't 'know' that
> read and write had better not be set when entering this state. You could add this as a comment, but better to add it as an assert
> since that is active design code and is a much better flag for design intent.

Good idea. Actually I'm using too few asserts in my code.
One can improve one's VHDL coding every day ;-) BTW: I now
like records. Synthesizers can handle them and they save
a lot of coding.

>> If my master has an (internal) read request and I have to forward it
>> to Avalon in a clocked process (as you do with your template)
>> I will lose one cycle. OK, in the interface and not on the bus.
>> Still a lost cycle ;-)
>>
> Not always a lost cycle. It would be more properly considered a pipeline stage. A pipelined design adds latency but does not by
> itself impact performance. In fact, the reason one would use pipelining is to improve clock cycle performance. If you're
> thinking only about interfacing to an

OK, OK. Most of the time I'm talking about the read request and a
master that needs the result. Therefore I try to avoid latency as much
as possible (without losing fmax, of course).

>> You can do it when your template 'controls' the master logic but not
>> the other way round.
>>
> Not sure what you mean by 'not the other way around'. This template is only for the master side control logic.

Yes, but you trigger the transaction 'within' your Avalon master
template. However, for me the Avalon interface is just an
interface. It has to react to the request from the CPU. And
the CPU requests the transaction from 'outside' of the
template/interface.

>> I meant when you assume n wait states in your VHDL code, but
>> made a mistake in the PTF file and specified fewer wait states.
>> This error cannot happen when you generate the waitrequest within
>> your VHDL code.
>>
> That's why I never use the PTF file to define the number of wait states. In addition to a fixed number (or time) you can say that
> it is user controlled which means that the slave device has a 'waitrequest' output. Then it is up to the VHDL to set waitrequest
> appropriately and if I need to insert or remove a clock cycle of wait states I only change the VHDL for the slave, nothing in the
> PTF file changes.

....

> You almost never want to have a fixed number of wait states but want to simply have the Avalon slave provide a wait request output
> and tell Avalon that by specifying that in the PTF file.

Completely agree. When I'm not writing and reading too many posts
I'm working on that version of the SRAM interface. It was just
a quick start as shown in the Quartus manual.

>> Or a bus specification that counts down the number of
>> wait states ;-)
>>
> See above and tell me how long to count down. You'd have to figure out a worst case number of wait states for the component, but
> then you'd be stuck with that even if the component got the data early. Talk about wasting clock cycles.

No, 'counter' was a little bit of a misleading name. From the spec
it's allowed to jump down more than one increment, e.g.
from 2 to 0. Or a very simple slave without pipelining
can just say 33330. Then it's like a simple ready signal.
That's also the way I translate Avalon to SimpCon :-(

>>>> Again, one more cycle latency ;-)
>>> Again, nope not if done correctly.
>>
>> I think we finally agreed, did we?
>>
> Well if you consider you making a statement and me saying 'nope' to be agreeing then yes we agree ;)

OK, then the discussion is on-going ;-)

Martin


From: Martin Schoeberl on
>
>> You almost never want to have a fixed number of wait states but want to simply have the Avalon slave provide a wait request
>> output and tell Avalon that by specifying that in the PTF file.
>
> Completely agree. When not writing and reading too many posts
> I'm working on that version of the SRAM interface. It was just
> a quick start as shown in the Quartus manual.

BTW (to KJ): Do you have this type of Avalon slave
for an SRAM? Would save some time and errors for me ;-)

Martin


From: KJ on

"Martin Schoeberl" <mschoebe(a)mail.tuwien.ac.at> wrote in message
news:44df8e1b$0$12642$3b214f66(a)tunews.univie.ac.at...
>> The above is again making me think that we're talking about interfacing
>> to an async SRAM. If that's the case, then from your description it
>> sounds like Avalon address/read/write basically become the corresponding
>> signals on the SRAM. If that's the case, then how are you guaranteeing
>> that the address is stable at the SRAM prior to write being asserted and
>> after it has been de-asserted. The way the Avalon address and write
>> signals work they will both be transitioning some Tco after the rising
>> edge of the clock. There is absolutely no guarantee of any timing
>> relationship between address and write on the Avalon side, so if those
>> are brought out unmodified to the SRAM you have no guarantee there
>> either....but for an async SRAM you absolutely have to have that. If
>> address and write are nominally transitioning at the 'same' time then you
>> won't get reliable operation (or if you build enough of these they will
>> 'erratically' fail) because you can't guarantee that you've met the
>> timing requirements of the SRAM.
>
> As I set setup time to 0ns, you're right. There is a little issue (depends
> on the tco of the different pins) when wrn goes low before the address
> is stable. That's against the SRAM timing spec. (minimum wrn low after
> address is 0ns). However, I 'assume' that this does not matter. Setting
> Setup_Time to something >0ns will add one additional cycle.

That timing does matter, so 'as-is' you do have a timing issue that needs
fixing. You also need to ensure that at the trailing edge of the SRAM write
the address remains stable. Otherwise it's possible for the SRAM to
respond with a write to an address that you had not intended.
>
> To avoid this little issue and the additional cycle I do usually (with
> my SimpCon SRAM controller) clock the nwr with the inverted clock to
> shift it after address setup.
But now what about the trailing edge of the write? The address could start
changing while the write signal is still active.
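One conventional fix for getting both setup and hold margin around an async-SRAM write strobe is to register the strobe on the opposite clock edge, centring it between address transitions. A sketch with made-up signal names, assuming at least one idle cycle between back-to-back writes (not the poster's actual controller):

```vhdl
-- Address and data are registered on the rising edge and held for the
-- whole cycle; the active-low write strobe is registered on the falling
-- edge, so it sits half a clock away from every address transition:
-- half a cycle of setup before nwr asserts and half a cycle of hold
-- after it deasserts.
process(clk)
begin
    if rising_edge(clk) then
        sram_addr <= wr_address;
        sram_data <= wr_data;
        wr_active <= start_write;      -- one-cycle write request
    end if;
end process;

process(clk)
begin
    if falling_edge(clk) then
        sram_nwr <= not wr_active;     -- strobe shifted by half a cycle
    end if;
end process;
```

The cost is that the usable strobe width shrinks to roughly one clock period, so the approach only works when the SRAM's write pulse requirement fits within that window.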
>
>> Actually I was referring to what I had described as not being that much
>> work. I agree that what you've done doesn't take much work but I also
>> don't think that your 'sram_tristate_slave' component will work reliably
>> if used to interface with an external asynchronous SRAM. It probably
>> will work if you're interfacing it to a synchronous SRAM if you also then
>> bring out the Avalon clock as the SRAM clock.
>

>> design a component that would allow the master to continue on while the
>> SRAM operation is still in progress. The only time the master would then
>> need to be stalled is if it performed a subsequent access to the SRAM
>> while the previous one was still in progress.
>
> Not for the read (in my case) as I'm waiting for the read data in the
> processor. In some cases I can hide the latency by execution of additional
> code. However, in this case I need the data registered in the slave, which
> is again not possible....

It is if you write some code for your component to use the 'readdatavalid'
Avalon signal. Once the address and command are safely stored within the
component, waitrequest can be set to 0. This can happen even on a read
that hasn't completed (i.e. you don't have the data yet). For example,
let's say it takes 10 Avalon clock cycles to complete the read and provide
the data. Assuming that the SRAM is currently idle, when that read comes in
from Avalon you do not need to assert waitrequest at all. Since waitrequest
is not asserted, the master device is free to go off and start up another
transaction with any device (i.e. it has not been stalled).

Now, 10 Avalon clock cycles later the slave finally gets the data in from the
SRAM, shoves it out the 'readdata' output and asserts 'readdatavalid'. This
signal goes back to the master, which now actually has the data it was
looking for. During those 10 clock cycles while it was waiting it could
very well have been off initiating 10 other write cycles to anything else.
Unfortunately, if a second read is started (even if it is not to the SRAM,
even if it is to a device that has 0 wait state reads) that read will be
greeted by a waitrequest, because Avalon needs to ensure that read data is
supplied back in the order in which the master requested it. In order to do
this it will actually block the second read from even reaching the other
slave until the first one (to the SRAM that takes 10 clock cycles) completes.
If the second read happens to be to the same device (in this case the SRAM)
such delays won't happen, since each component must also supply readdata in
the order in which it was requested.
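A minimal sketch of such a fixed-latency slave, written in the same fragment style as the rest of the thread (generic, made-up signal names; a 10-cycle latency to match the example above, modelled with a shift register):

```vhdl
-- Pipelined Avalon slave sketch: waitrequest stays low, the read command
-- is accepted immediately, and readdatavalid pulses when the data shows
-- up ten cycles later.
signal rd_pipe : std_logic_vector(9 downto 0) := (others => '0');

process(clk)
begin
    if rising_edge(clk) then
        -- shift the pending-read token towards readdatavalid
        rd_pipe <= rd_pipe(8 downto 0) & av_read;
        if rd_pipe(9) = '1' then
            av_readdata <= sram_data_in;  -- data from the slow device
        end if;
        av_readdatavalid <= rd_pipe(9);
    end if;
end process;

av_waitrequest <= '0';  -- never stall the master
```

A real controller would of course also arbitrate against writes and a busy SRAM, but the shift register shows why the master never has to stall on the read itself.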

Sounds confusing but it isn't; the bottom line is that reads don't
necessarily cause the master device to stall. To see all of this clearly,
try putting together an SOPC system with a master device and an SDRAM or DDR
SDRAM controller out of the MegaCore library. Then watch the interaction
between the master and slave side read/write/waitrequest/readdatavalid
signals. It'll give you a real good feel for how this Avalon interface can
really perform.

KJ


From: KJ on

"Martin Schoeberl" <mschoebe(a)mail.tuwien.ac.at> wrote in message
news:44df9c2e$0$12126$3b214f66(a)tunews.univie.ac.at...
>> Well, there is always great debate between the one process and two
>> process folks about this but whether one thinks that one process or two
>> process is
>
> It's just a matter of style. I change it from case to case.
> We don't need to start a flame on it ;-)
Good ;)
>
>> -- Oops! Since this is a combinatorial process, then where is the 'else'
>> term on the above 'if' statement? Looks like it will cause a latch.
>> Score one for the one process folks ;)
>
> No, you did not catch me on this one ;-) See above the default
> assignment: next_state <= state
>
> Rule 1 for combinatorial processes: Assign defaults at the beginning
> to avoid latches.

You're right, erase the 'score one...'. Us one process folks don't like
writing unneeded code to do those default assignments to avoid something
that can't happen because we're in a clocked process.....OK, just couldn't
resist.

>
>>> You can do it when your template 'controls' the master logic but not
>>> the other way round.
>>>
>> Not sure what you mean by 'not the other way around'. This template is
>> only for the master side control logic.
>
> Yes, but you trigger the transaction 'within' your Avalon master
> template. However, for me the Avalon interface is just an
> interface. It has to react to the request from the CPU. And
> the CPU requests the transaction from 'outside' of the
> template/interface.
>
OK, lost again I think. Now it sounds like the CPU, even though embedded
within the FPGA, doesn't have a native Avalon interface, and you're talking
about a bridge to get you from the CPU interface over to Avalon. Such a
bridge, though, would typically not be terribly application specific but
instead is tailored to the signals on the CPU and on Avalon. Just like you
can make a bridge between Wishbone and Avalon. If the CPU design is your own
homebrew, though, a simpler approach is to simply give it an Avalon
compatible interface. When you get to writing that code is where my
template would be placed.

KJ

