From: rickman on
glen herrmannsfeldt wrote:
> In comp.arch.fpga rickman <gnuarm(a)gmail.com> wrote:
> > On Apr 17, 7:17 pm, glen herrmannsfeldt <g...(a)ugcs.caltech.edu> wrote:
> (snip on test benches)
>
> >> Yes, I was describing real world (hardware) test benches.
>
> >> Depending on how close you are to a setup/hold violation,
> >> it may take a long time for a failure to actually occur.
>
> > That is the point. Finding timing violations in a simulation is hard,
> > finding them in physical hardware is not possible to do with any
> > certainty. A timing violation depends on the actual delays on a chip
> > and that will vary with temperature, power supply voltage and process
> > variations between chips.
>
> But they have to be done for ASICs, and for all other chips, as
> part of the fabrication process. For FPGAs you mostly don't
> have to do that, relying instead on the specifications and on the
> chips having been tested appropriately in the factory.

I don't follow your reasoning. Why is finding timing violations in
ASICs any different from finding them in FPGAs? If the makers of
ASICs can't characterize their devices well enough for static timing
analysis to find the timing problems, then ASIC designers are screwed.


> > I had to work on a problem design once
> > because the timing analyzer did not work or the constraints did not
> > cover (I firmly believe it was the tools, not the constraints since it
> > failed on a number of different designs). We tried finding the chip
> > that failed at the lowest temperature and then used that at an
> > elevated temperature for our "final" timing verification. Even with
> > that, I had little confidence that the design would never have a
> > problem from timing. Of course on top of that the chip was being used
> > at 90% capacity. This design is the reason I don't work for that
> > company anymore. The section head knew about all of these problems
> > before he assigned the task and then expected us to work 70-hour
> > weeks. At least we got them to buy us $100 worth of dinner each
> > evening!
>
> One that I worked with, though not at all at that level, was
> a programmable ASIC (for a systolic array processor). For some
> reason that I never learned, the timing was just a little bit off
> on writes to the internal RAM. The solution was to use
> two successive writes, which seemed to work. In the usual operation
> mode, the RAM was initialized once, so the extra cycle wasn't much
> of a problem. There were also some modes where the RAM had to
> be written while processing data, such that the extra cycle meant
> that the processor ran that much slower.
>
> > The point is that if you don't do static timing analysis (or have an
> > analyzer that is broken), timing verification is nearly impossible.
>
> And even if you do, the device might still have timing problems.

You keep saying that, but you don't explain.

> >> Yes, I was trying to cover the case of not using static timing
> >> analysis but only testing actual hardware. For ASICs, it is
> >> usually necessary to test the actual chips, though they should
> >> have already passed static timing.
>
> > If you find a timing bug in the ASIC chip, isn't that a little too
> > late? Do you test at elevated temperature? Do you generate special
> > test vectors? How is this different from just testing the logic?
>
> It might be that it works at a lower clock rate, or other workarounds
> can be used. Yes, it is part of testing the logic.
>
> (snip)
>
> >> If you only have one clock, it isn't so hard. As you add more,
> >> with different frequencies and/or phases, it gets much harder,
> >> I agree. It would be nice to get as much help as possible
> >> from the tools.
>
> > The number of clocks is irrelevant. I don't consider timing issues of
> > crossing clock domains to be "timing" problems. There you can only
> > solve the problem with proper logic design, so it is a logic
> > problem.
>
> Yes, there is nothing to do about asynchronous clocks. It just has
> to work in all cases. But in the case of supposedly related
> clocks, you have to verify it. There are designs that have one
> clock a multiple of the other clock frequency, or multiple phases
> with specified timing relationship. Or even single clocks with
> specified duty cycle. (I still remember the 8086 with its 33% duty
> cycle clock.)
>
> With one clock you can run combinations of voltage, temperature,
> and clock rate, not so hard but still a lot of combinations.
> With related clocks, you have to verify that the timing between
> the clocks works.

But you can't verify timing by testing. You can never have any level
of certainty that you have tested all the ways the timing can fail.
If the clocks are related, what exactly are you testing, that they
*are* related? Timing is something that has to be correct by
design.

Rick
From: mike v. on
I also use separate sequential and combinatorial always blocks. At
first I felt that I should be able to have just a single sequential
block, but I quickly became accustomed to two blocks; it now feels
natural, and I don't think it limits my ability to express my intent
at all. Most of the experienced designers I work with use this style,
but not all of them.
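
A minimal sketch of what I mean, with made-up module and signal names,
just to illustrate the split:

module pulse_stretch (
    input  wire clk,
    input  wire rst,
    input  wire trigger,
    output reg  busy
);
    reg [3:0] count, count_next;
    reg       busy_next;

    // Combinatorial block: compute next-state values only.
    always @* begin
        count_next = count;              // defaults prevent latches
        busy_next  = busy;
        if (trigger) begin
            count_next = 4'd10;
            busy_next  = 1'b1;
        end else if (count != 4'd0) begin
            count_next = count - 4'd1;
            busy_next  = (count != 4'd1);
        end
    end

    // Sequential block: registers only.
    always @(posedge clk) begin
        if (rst) begin
            count <= 4'd0;
            busy  <= 1'b0;
        end else begin
            count <= count_next;
            busy  <= busy_next;
        end
    end
endmodule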
From: Chris Higgs on
On Apr 23, 3:02 pm, KJ <kkjenni...(a)sbcglobal.net> wrote:

> OK, but it doesn't compare it to the 'one process' approach, the
> comparison is to a 'traditional method'.  The 'traditional method'
> though is all about lumping all signals into records (refer to the
> 'Benefits' area of that document).  All of the comparisons are between
> 'traditional method' which has discrete signals and 'two-process
> method' which lumps signals into a record.

I think one of the points (implicitly) made by the paper is an
admission that the two-process method is a mess unless you use
records. I think it's also implied that 'traditional method' people
are more prone to using discrete signals rather than record types.

> The author does mention "No distinction between sequential and comb.
> signals" as being a "Problem".  Maybe it's a problem for the author,
> but it's somewhat irrelevant for anyone skilled in design.  The author
> presents no reason for why having immediate knowledge of whether a
> signal comes out of a flip flop or a gate is relevant...(hint:  it's
> not).  What is relevant is the logic that the signal represents and
> whether it is implemented properly or not.  Whether 'proper
> implementation' means the signal is a flop or not is of very little
> concern (one exception being when you're generating gated
> clocks...which is a different can of worms).

Sometimes it's necessary to use the combinatorial signal (for example,
to drive an unregistered output, as in the code below).

>
> Even the author's 'State machine' example demonstrates the flaw of
> using two processes.  Referring to slide #27 partially shown below,
> note that (the undeclared) variable v has no default assignment, so v
> will result in a latch.

Yes, that's just sloppy.

> You seem to have been caught up by his statement "A synchronous design
> can be abstracted into two separate parts; a combinational and a
> sequential" and the slide titled "Abstraction of digital logic" and
> thought that this was somehow relevant to the point of his
> paper...it's not...his point is all about combining signals into
> records...totally different discussion.

Well, combining state into records makes a "two-process" technique neat
enough to be feasible. Personally I use a similar style and I find it
very clear and understandable. As an example:

entity myentity is
  generic (
    register_output : boolean := true
  );
  port (
    clk  : in std_ulogic;
    srst : in std_ulogic;

    -- Input
    data : in some_type_t;

    -- Output
    result : out another_type_t
  );
end;

architecture rtl of myentity is

  -- some_type_t, another_type_t and invalid_result are assumed to be
  -- declared in a separate package.
  type state_enum_t is (IDLE, OTHER_STATES);

  type state_t is record
    state  : state_enum_t;
    result : another_type_t;
  end record;

  constant idle_state : state_t := (state  => IDLE,
                                    result => invalid_result);

  signal r, rin : state_t;

begin

  -- Combinatorial process: computes the next state 'rin' from the
  -- current state 'r' and the inputs.
  combinatorial : process(r, srst, data)
    variable v : state_t;
  begin

    -- DEFAULTS
    v := r;
    v.result := invalid_result;

    -- STATE MACHINE
    case v.state is
      when IDLE =>
        null;
      when OTHER_STATES =>
        null;
    end case;

    -- RESET
    if srst = '1' then
      v := idle_state;
    end if;

    -- OUTPUTS
    if register_output then
      result <= r.result;
    else
      result <= v.result;
    end if;

    rin <= v;
  end process;

  -- Sequential process: registers only.
  sequential : process(clk)
  begin
    if rising_edge(clk) then
      r <= rin;
    end if;
  end process;

end;

> The only conclusion to draw from this paper is that you shouldn't
> believe everything you read...and you shouldn't accept statements that
> do not stand up to scrutiny.

You can use only sequential processes, which makes it impossible to
infer a latch, but you lose the ability to use a combinatorially
derived signal. Alternatively, you can use a two-process technique,
which allows intermediate/derived signals to be used, but accept the
risk that bad code will introduce latches. We can argue forever about
which method is more 'correct', but it's unlikely to boil down to
anything other than personal preference.
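
To illustrate the latch risk, here is a hypothetical fragment (in
Verilog, since the same issue exists in both languages; 'req' and
'ack' are invented names):

// 'ack' is not assigned on every path through the block, so it must
// hold its old value when 'req' is low: synthesis infers a latch.
always @* begin
    if (req)
        ack = 1'b1;
end

// The usual cure is a default assignment at the top of the block,
// after which 'ack' is pure combinational logic:
always @* begin
    ack = 1'b0;
    if (req)
        ack = 1'b1;
end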

Thanks,

Chris
From: Patrick Maupin on
On Apr 23, 2:09 pm, Jonathan Bromley <s...(a)oxfordbromley.plus.com>
wrote:

> If you have had even half an eye on comp.lang.verilog
> these past few years you will have seen a number of
> posts clearly pointing out the very serious flaws in
> Cliff's otherwise rather useful paper.  In particular,
> the "guideline" (=myth) about not using blocking
> assignments in a clocked always block was long
> ago exposed for the nonsense it is.

One last note: In researching this, I found a posting by you with
rules and recommendations that I cannot disagree with:
http://groups.google.com/group/comp.lang.verilog/msg/a87ba28b6d68ecc8

I will note that, if faithfully followed, the two-process model can
make it very easy to ensure that none of these rules or guidelines are
broken. Finally, as I have posted elsewhere, these rules, combined
with my personal preference to always update related variables inside
the same always block, sometimes make it difficult to *not* use the
two-process model.

Regards,
Pat
From: Jan Decaluwe on
On Apr 22, 10:04 pm, Muzaffer Kal <k...(a)dspia.com> wrote:
> On Thu, 22 Apr 2010 08:08:59 -0700 (PDT), Jan Decaluwe

> >Quoting from the article:
> >"""
> >This example is more subtle and complex than it may seem at first
> >sight. As said before, variables dir and run are state variables and
> >will therefore require a flip-flop in an implementation. However, they
> >are also used “combinatorially”: when they change, they may influence
> >the counter operation “in the same clock cycle”, that is, before the
> >flip-flop output changes. This is perfectly fine behavior and no
> >problem for synthesis tools, but it tends to confuse a lot of
> >designers.
> >"""
>
> I am not sure who is really confused here.

You are: both about Verilog (surprising) and about RTL synthesis
(anticipated).

> What is suggested in the
> above paragraph is not really feasible; assuming by 'dir' one refers
> to the output of a flop.
>
> The problem with the last Verilog block shown is that dir and run are
> not flops anymore but combinational signals decoded from goleft and
> goright, so the last direction will not be remembered. If the last
> direction needs to be remembered, one needs to decode the
> 'instruction', use the decoded value, and remember the decoded value
> as above.

So this is now already the third post that I devote to explaining
to two seasoned Verilog designers how a very simple example in
their favourite language with the ultra-short learning curve
actually works. I'm beginning to think that Verilog designers
don't know how to use variables :-)

'dir' and 'run' keep their value over always block invocations, OK?
Depending on the conditions, they may or may not get a new
value. Basic Verilog stuff.

Then the implementation. Clearly there is both a sequential and
a combinatorial path from these variables. If you think this is
not feasible, don't argue with me, but just synthesize it and
simulate the result to verify.

Finally, RTL synthesis. No, 'dir' is not a flop and no, it isn't
combo logic either. It gives rise to both. Technically, the
same variable 'dir' creates multiple nodes in a multi-level
logic network that represents the code. Voilà.
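
In Verilog terms, the pattern is something like this (a simplified
sketch from memory, not the article's exact code):

module updown (
    input  wire       clk,
    input  wire       goleft,
    input  wire       goright,
    input  wire       stop,
    output reg  [7:0] count
);
    // State variables: each holds its value across always block
    // invocations when not reassigned, so each gets a flip-flop.
    reg dir, run;

    always @(posedge clk) begin
        if (goright)
            dir = 1'b1;   // blocking: the new value is visible below
        else if (goleft)
            dir = 1'b0;

        if (stop)
            run = 1'b0;
        else if (goleft | goright)
            run = 1'b1;

        // Here 'dir' and 'run' are also used "combinatorially": a
        // change on the inputs can steer the counter in the same clock
        // cycle, before the flip-flop outputs change.
        if (run)
            count <= dir ? count + 1'b1 : count - 1'b1;
    end
endmodule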

Jan