From: rickman on
On Jan 19, 1:51 pm, Mike Treseler <mtrese...(a)gmail.com> wrote:
> rickman wrote:
> > After struggling with VHDL type casting for some time, I finally
> > settled on using signed/unsigned for the majority of the vectors I
> > use.  I seldom use integers other than perhaps memory where it
> > simulates much faster with a lot less memory.  But nothing is
> > absolute.  I just try to be consistent within a given design.
>
> I use integer/natural ranges for small numbers
> and signed/unsigned for large.

Can you explain the idea behind that? Why integers for small numbers
and not large?


> > I have
> > never used bit types, but the discussion here about the advantages of
> > ulogic over logic is interesting.  I certainly like to speed up my
> > simulations.  But it is such a PITA to get away from std_logic.
>
> Vectors require some compromise.
> I only use std_logic_vector for non-numeric variables
> and for the device pins.
>
> For std_ulogic bits, there is no pain.
> However the advantages are not overwhelming either.
>
> Simulators are now very well optimized for standard types,
> and I would not expect much run-time speed up.

As optimized as they may be, a signal that requires resolution will
take longer to simulate than one that does not. For unresolved types,
multiple drivers are detected at compile time, while for resolved
types the resolution function runs at simulation time. I suppose if
there are no multiple drivers the difference may not be noticeable,
though.
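
To make the distinction concrete, here is a minimal sketch (entity and
signal names are just illustrative): with an unresolved std_ulogic, a
second driver is rejected before simulation ever starts, while the
resolved std_logic version compiles and invokes the resolution
function at run time.

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity driver_demo is
end entity;

architecture rtl of driver_demo is
  signal a : std_ulogic;  -- unresolved: a second driver is an error
  signal b : std_logic;   -- resolved: multiple drivers are legal
begin
  a <= '0';
  -- a <= '1';  -- uncommenting this is caught at compile/elaboration time
  b <= '0';
  b <= '1';     -- legal, but b resolves to 'X' at simulation time
end architecture;
```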


> Detecting multiple drivers at compile time is very useful
> for new users using many processes,
> but these errors can also be found at elaboration time.
>
>        -- Mike Treseler

Yeah, I can't say I have many issues with multiple drivers. I'm
pretty sure the tools I've been using give adequate warnings when I do
make a mistake and reuse a signal name.

Rick
From: Andy on
On Jan 21, 9:17 am, Jonathan Bromley <jonathan.brom...(a)MYCOMPANY.com>
wrote:
> On Thu, 21 Jan 2010 05:54:10 -0800 (PST), rickman wrote:
>
> [Mike Treseler]
>
> >> I use integer/natural ranges for small numbers
> >> and signed/unsigned for large.
>
> >Can you explain the idea behind that?  Why integers for small numbers
> >and not large?
>
> Can't speak for Mike, but my reasoning might be: integers are
> convenient because they allow mixing of signed and unsigned
> quantities, and all the sign extension and range checking
> gets sorted out for me automatically; but VHDL integers
> (inexcusably, for any time later than about 1990) don't
> support anything wider than 31 bits.  Yes, 31, that's not
> a typo for 32; you can't reliably get 32-bit signed
> behaviour from VHDL integers, and you can't get 32-bit
> unsigned behaviour at all.  Madness, and one of my top
> beefs with VHDL.
>
> Beyond 31 bits, the signed and unsigned types work well
> and don't put any insuperable obstacles in my way; but
> mixing signed and unsigned is tedious, and it's also
> irritating that you can't copy an integer number or
> expression directly into a signed or unsigned vector,
> because VHDL doesn't allow overloading of the
> assignment operator.
>
> My personal choice tends to be driven by whatever looks
> neatest in the specific application I have to hand;
> any cracks can easily be papered-over by creating some
> appropriate VHDL functions.
>
> >As optimized as they [data types in library ieee] may be, a
> > signal that requires resolution
> >will take longer to simulate than one that does not.  For unresolved
> >types, multiple drivers are detected at compile time, while resolution
> >runs at simulation time.  I suppose if there are no multiple drivers
> >the difference may not be noticeable, though.
>
> Right, a simulator can (and should!) optimise away the resolution
> function when only one driver exists on a signal.  I suspect the
> need to support multi-valued logic is a bigger cost of CPU power
> than the resolution function, in many cases.  No U/X/Z to worry
> about in an integer!
>
> >> Detecting multiple drivers at compile time is very useful
> >> for new users using many processes,
> >> but these errors can also be found at elaboration time.
>
> And also by synthesis tools.  When writing HDL code that
> aims to be synthesisable, it's a smart move to synthesise
> it early in the debug process - just as soon as you've
> got the syntax goofs out of it.  Synthesis tools do a lot
> of pretty smart static analysis and, obviously, can check
> for synthesis design rules that are of no concern to simulators.
> Some simulators have a "synthesis rule check" option on their
> compilers, but I've never found those to be useful; they miss
> far too much.
>
> >Yeah, I can't say I have many issues with multiple drivers.  I'm
> >pretty sure the tools I've been using give adequate warnings when I do
> >make a mistake and reuse a signal name.
>
> Sure, but you can easily end up with syntactically legal - and
> perfectly meaningful - code that drives a signal from more than
> one process, and waste a lot of debug time as a result.  In
> fairness, that tends to be more of a beginner's problem; the
> old hands around here probably get that sort of thing right
> first time, more often than not.
> --
> Jonathan Bromley

On Jan 21, 7:54 am, rickman <gnu...(a)gmail.com> wrote:
> As optimized as they may be, a signal that requires resolution will
> take longer to simulate than one that does not. For unresolved types,
> multiple drivers are detected at compile time, while resolution runs
> at simulation time. I suppose if there are no multiple drivers the
> difference may not be noticeable, though.

As Jonathan alluded, many simulators apply a default optimization
(which can be turned off) that omits the resolution function when only
one driver is present, on the assumption that the resolution function
is transparent (meaning that resolving a single driver always yields
that driver's value). The resolution function for std_logic is
transparent, but the LRM does not require resolution functions to be.
I have seen non-transparent resolution functions used in a library for
modelling the performance of systems with resource contention and the
like; as I recall, the arbitration scheme used a non-transparent
resolution function on the record type that represented a bus. So for
std_logic there is often no simulation penalty, though perhaps a small
elaboration-phase penalty.
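
As a sketch of what a non-transparent resolution function might look
like (hypothetical names; assumes VHDL-2008 for the predefined
integer_vector and the resolved-subtype declaration): this one returns
a count of active requesters, so even a single driver's value is
replaced by the count rather than passed through.

```vhdl
package contention_pkg is
  -- Non-transparent resolution: resolving one driver does NOT
  -- return that driver's value, so the single-driver optimization
  -- described above must not be applied to this subtype.
  function count_requests (drivers : integer_vector) return integer;
  subtype request_count is count_requests integer;  -- VHDL-2008
end package;

package body contention_pkg is
  function count_requests (drivers : integer_vector) return integer is
    variable n : integer := 0;
  begin
    for i in drivers'range loop
      if drivers(i) /= 0 then
        n := n + 1;          -- count active (nonzero) requesters
      end if;
    end loop;
    return n;                -- a count, not any driver's own value
  end function;
end package body;
```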

Integer arithmetic promotes results to at least 31-bit signed, and
only limits them upon assignment to a smaller subtype's range. Thus
the result of a natural - 1 can be less than zero (just don't try to
store it in a natural!), whereas the result of an unsigned - 1 cannot.
Synthesis tools are smart enough to optimize away any logic from the
intermediate promotion that is not needed.
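
The difference shows up directly in simulation; here is a small sketch
(entity and signal names are illustrative) where the integer
intermediate goes negative but the unsigned one wraps modulo 2**8:

```vhdl
library ieee;
use ieee.numeric_std.all;

entity promote_demo is
end entity;

architecture sim of promote_demo is
  signal n : natural := 0;
  signal u : unsigned(7 downto 0) := (others => '0');
begin
  process
  begin
    -- n - 1 evaluates to the integer -1; only assigning it back to a
    -- natural would fail a range check (the assert fires a note)
    assert not (n - 1 < 0)
      report "n - 1 went below zero as an integer intermediate"
      severity note;
    -- u - 1 wraps: 0 - 1 yields 255, never a negative value
    assert (u - 1) /= to_unsigned(255, 8)
      report "u - 1 wrapped around to 255"
      severity note;
    wait;
  end process;
end architecture;
```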

Integer arithmetic is MUCH faster to simulate than vector-based
arithmetic, even when using divide/modulo by powers of 2 to extract
"bit fields" or truncate values; divide/modulo by a power of 2 is
automatically optimized by synthesis. Integer operations usually map
directly onto the corresponding processor instructions, rather than
requiring the complex machinery of handling individual multi-valued
logic "bits" spread across different memory addresses for a vector.

The new fixed point package borrows some interesting features from
integers, but not all of them. Note that the fixed point types can be
used for integers simply by specifying non-negative index ranges. The
fixed point operators promote results to a size large enough to hold
the largest possible result without overflowing, but they do not
promote u_fixed to s_fixed for subtraction (an unfortunate oversight
for arithmetic completeness, which could also be handled by the
customary resizing). The general idea for using the fixed point system
is to let the operators promote intermediate values within expressions
to preserve accuracy, then resize the final result to fit the storage,
conceptually the same way integers work.
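
By way of illustration, a sketch assuming the VHDL-2008 ieee.fixed_pkg
(signal names are illustrative): the non-negative index range makes
the ufixed values integer-only, the addition grows the intermediate by
one bit so it cannot overflow, and resize brings the final value back
to the storage size.

```vhdl
library ieee;
use ieee.fixed_pkg.all;

entity fixed_demo is
end entity;

architecture sim of fixed_demo is
  -- non-negative index range => integer-valued fixed point
  signal a, b : ufixed(7 downto 0);
  signal sum  : ufixed(7 downto 0);
begin
  process (a, b)
  begin
    -- a + b is promoted to ufixed(8 downto 0): one extra bit,
    -- so the intermediate result cannot overflow; resize then
    -- fits the grown intermediate back into 8 bits of storage
    sum <= resize(a + b, sum'high, sum'low);
  end process;
end architecture;
```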

Andy