New keyword 'orif' and its implications

K

KJ

Andy said:
KJ,

I understand what you are trying to demonstrate, but I don't think
that is what we're asking for.

We? Marcus asked for an example that demonstrated enumeration conversion
functions that result in 0 logic cell usage when the two functions are in
separate entities. My code demonstrates that.
What we want is a way to take an input slv for which we are certain
that exactly one of those bits is set at any given time (one-hot,
mutually exclusive), and convert that slv into an output integer that
is the index of the bit that is set, with no priority encoding, etc.

What you want is not always logically achievable....in any case, you're
posing a different question, one that interests you, so read on for more on
why.
An integer is sufficient because integer encodings are by definition
mutually exclusive. Also enumerated types are impossible to define
with an arbitrary (parameterized) number of possible values.

Now, we know this can be done with boolean equations,

Depending on your starting point, no you can't....again read on for more on
why.
but can you do
it in behavioral code without resorting to boolean operations (and,
or, etc.). I think it can be done with a case statement (and std_logic
types), but I'm not sure whether most synthesis tools will "take the
bait":

Why are you insisting on a particular form for the solution? You're just
like Weng who is trying to fit it into an if statement. I'll use the
appropriate form for the solution.
case input is
   when "0001" => output <= "00";
   when "0010" => output <= "01";
   when "0100" => output <= "10";
   when "1000" => output <= "11";
   when others => output <= "--";
end case;
Now look at your code again. You have not really told the synthesis tool
that the input is one hot coded, but I'll bet you think you did. To do so,
you 'should' have specified the cases as "---1", "--1-", etc. You know you
can't really do this and get the function you want (it will fail sim because
"0001" is not equal to "---1"), so you chose (incorrectly) to write those
case alternatives the way you did because you chose to use a case
statement....which is the wrong tool....just like Weng would like to augment
the 'if' statement because he wants to see everything inside an 'if'
statement for whatever reason. In any case, you haven't 'told' the synthesis
tool that this is one hot encoded; in fact you've left it up to its
discretion how to implement 3/4 of the cases.

Run your code through a synthesis tool and check the resource usage. Now
convert the case statement to the if/elsif form using the std_match function
instead, where you match to "---1", "--1-", "-1--" and "1---", and compile
again. You'll probably find your logic resource usage drops way down (but
still not to 0 when paired with its 'inverse' decode function).
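That if/elsif std_match form could look like the following sketch; the entity wrapper and port names are assumptions added here to make it self-contained (std_match comes from ieee.numeric_std, and '-' positions are "don't care" in the match):

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;  -- provides std_match

-- Hypothetical encoder: port names are assumed, not from the post.
entity onehot_encode is
   port (
      input  : in  std_logic_vector(3 downto 0);
      output : out std_logic_vector(1 downto 0));
end entity onehot_encode;

architecture rtl of onehot_encode is
begin
   process (input) is
   begin
      -- Each std_match looks only at the one bit we care about.
      if    std_match(input, "---1") then output <= "00";
      elsif std_match(input, "--1-") then output <= "01";
      elsif std_match(input, "-1--") then output <= "10";
      elsif std_match(input, "1---") then output <= "11";
      else                                output <= "--";
      end if;
   end process;
end architecture rtl;
```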
Now, if I convert the integer encoding back to one hot:

result := (others => '0');
result(to_integer(unsigned(output))) := '1';


However, if this string of logic is synthesized, it will not reduce to
wires (because we had to use binary encoding, since we had an
arbitrary number of inputs).

So, I would presume that, for an arbitrary number of mutually
exclusive inputs, there is no conversion from one hot to any mutually
exclusive encoding and back that reduces to wires.

Therefore, we need a method to do that.
Well OK then, now we get into the reason why what you're asking for is not
achievable except under certain usage conditions. In an earlier post to
Marcus I stated

"The encoding and decoding functions can be lots of things but they could
simply be a one hot coding of the enumeration list or a binary coding or
whatever"

While I think it was clear from that post that the encode/decode functions
are functional inverses of each other, an important point that I had
neglected to mention is that these functions need to operate over the same
domain (i.e. all the possible input values) if you want to be able to apply
these functions as a pair in any order and incur no logic cost. I don't
have that as a proof but I'm pretty sure that it is both necessary and
sufficient. I haven't found a case where it is not true but I'll simply
call it "KJ's postulate" since that's about all it is at that point.

If you now have two functions f(x) and g(x) that are candidate conversion
functions (but the input 'x' is not necessarily over the same domain...or in
the digital logic case it would mean that the 'x' inputs into f() and g()
do not necessarily have the same number of bits) you can have the following
situations:
1. f(g(x)) = x
2. g(f(x)) /= x

I claim that if f() and g() operate over the same domain (same number of
bits), then condition #2 above will never be true and f(g(x)) = g(f(x)) = x
and the resulting implementation will be wires with no logic resource
required to synthesize.

I also claim that if f() and g() operate over different domains, then if the
application USAGE is only of form #1, the resulting implementation will
be wires with no logic resources. But if the usage is of form #2 there will
be some non-zero cost to implementing it and the result will not be just wires.

The example I originally put up used 'val and 'pos as the conversion
function pair. Those both operate on the same domain (i.e. the enumerated
type) so they will result in 0 logic when used as a pair in any order. I
could change that function pair to remap things however I wanted (gray code,
7 - 'pos, etc.) and this will always be true (per my first claim).
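As a hedged sketch of such a 'val/'pos pair (the enumerated type, package, and function names here are invented for illustration, not taken from the original example):

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

package state_conv is
   type state_t is (IDLE, RUN, WAIT_ACK, DONE);
   -- Encode: enumerated type -> std_logic_vector, via 'pos.
   function to_slv(s : state_t) return std_logic_vector;
   -- Decode: std_logic_vector -> enumerated type, via 'val.
   function to_state(v : std_logic_vector(1 downto 0)) return state_t;
end package state_conv;

package body state_conv is
   function to_slv(s : state_t) return std_logic_vector is
   begin
      return std_logic_vector(to_unsigned(state_t'pos(s), 2));
   end function;

   function to_state(v : std_logic_vector(1 downto 0)) return state_t is
   begin
      return state_t'val(to_integer(unsigned(v)));
   end function;
end package body state_conv;
```

Because both functions operate over the same four-value domain (two bits, four states), to_state(to_slv(s)) = s and to_slv(to_state(v)) = v, which is the condition under which the pair should dissolve to wires.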

Now take as an example something to test claim 2: let's say that g(x) is a
3->8 decoder and f(x) is an 8->3 mux. This is an example where the domains
of the two functions are not the same; one works with three bits, the other
with eight. Now implement f(g(x)). The primary input is the 3 bit input which
goes through the 3->8 decoder, which is fed into the 8->3 mux and out the
chip. This will result in a 0 logic, 'wires' solution.

Now turn it around and implement g(f(x)). The primary input is an 8 bit
input vector which gets encoded to a 3 bit number which then gets decoded to
produce an 8 bit output. Again, implement it, but now you'll find that you
get some non-zero logic. The reason is quite simple: the 3 bit code cannot
possibly reproduce all 256 possible input values, so g() and f() IN THIS
USAGE are not quite functional inverses of each other. This whole thread has
been about mutually exclusive stuff, but put that aside for a moment and
ponder: if you have an 8 bit input and happen to set all of them to 1 and
then feed them into the mux/decode logic, do you really expect that the
output will be "FF"? It might, but you definitely won't be able to cycle
through all the input codes from "00" to "FF" and produce those same outputs
on the 8 bit output bus.
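A hedged sketch of that decoder/mux pair (the function names are invented for illustration; the encoder relies on the input really being one hot):

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

package dec_enc is
   -- g(x): 3->8 decoder -- exactly one output bit set.
   function decode3to8(x : std_logic_vector(2 downto 0)) return std_logic_vector;
   -- f(x): 8->3 encoder -- index of the (assumed single) set bit.
   function encode8to3(x : std_logic_vector(7 downto 0)) return std_logic_vector;
end package dec_enc;

package body dec_enc is
   function decode3to8(x : std_logic_vector(2 downto 0)) return std_logic_vector is
      variable r : std_logic_vector(7 downto 0) := (others => '0');
   begin
      r(to_integer(unsigned(x))) := '1';
      return r;
   end function;

   function encode8to3(x : std_logic_vector(7 downto 0)) return std_logic_vector is
      variable r : unsigned(2 downto 0) := (others => '0');
   begin
      -- Sum-of-products style: correct only when x is one hot.
      for i in x'range loop
         if x(i) = '1' then
            r := r or to_unsigned(i, 3);
         end if;
      end loop;
      return std_logic_vector(r);
   end function;
end package body dec_enc;
```

Here encode8to3(decode3to8(x)) = x for every 3 bit x, so that ordering can collapse to wires; decode3to8(encode8to3(x)) = x only for the eight one-hot inputs out of 256 possible 8 bit values, so synthesis must keep some logic.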

You've been preaching on about how telling the synthesis tool that these are
mutually exclusive by way of an assertion could somehow magically produce
what you'd like to see, but I don't think it can (again depending on the
usage). In my case, if there is some known mutual exclusiveness information
I start from that point (whether it's an enumerated type or a coded slv) and
know that the decode/encode process pair costs nothing in logic (but
enhances readability and maintainability) but that the encode/decode process
pair does have a non-zero cost....and of course either function by itself
has a non-zero cost to it. I'd suggest you back off on your unfounded
claims that assertions could do what you seem to think they could until you
can produce anything tangible.

KJ
 
M

Marcus Harnisch

KJ said:
I think you read more into this than was intended. What Marcus had
asked for (or at least what I thought he had asked for) was example
code showing that the conversion of an enumerated type to and from a
std_logic_vector really took 0 logic elements.

Your posting suggested (but maybe I was reading too much into it) that
some synthesis tools would be able to figure out by themselves whether
the usage of conversion functions at either end of a connection is
redundant and make everything dissolve in a puff of logic
automagically. I strongly doubt that this is true unless the
conversion is one-to-one, where this could potentially work.
It really had nothing to do with Weng's 'orif'.

A functionality like that (if it existed) could be utilized to take
advantage of the inherent uniqueness of enum elements.

-- Marcus
 
K

KJ

Marcus Harnisch said:
Your posting suggested (but maybe I was reading too much into it) that
some synthesis tools would be able to figure out by themselves whether
the usage of conversion functions at either end of a connection is
redundant and make everything dissolve in a puff of logic
automagically. I strongly doubt that this is true unless the
conversion is one-to-one, where this could potentially work.
See my reply from yesterday to Andy on this thread for what I believe are
the necessary conditions for the conversion functions to dissolve away....as
well the reasoning why the 'one hot -> encoded -> one hot' will never
dissolve away completely regardless of the form that one chooses to write
the equations (i.e. and/or, if/end if, case) and why 'asserts' will not help
either.
A functionality like that (if it existed) could be utilized to take
advantage of the inherent uniqueness of enum elements.
Again, from my reply yesterday, the conversion 'encoded -> one hot ->
encoded' will always dissolve away, resulting in 0 logic usage. The basic
concept is that if some arbitrary collection of signals is truly 'one hot',
then there exists an encoding (whether as a vector or an enum) from which
those arbitrary signals can be decoded; that underlying encoding is the
basic underpinning of why the collection is truly a logical 'one hot'.

KJ
 
A

Andy

See my reply from yesterday to Andy on this thread for what I believe are
the necessary conditions for the conversion functions to dissolve away....as
well the reasoning why the 'one hot -> encoded -> one hot' will never
dissolve away completely regardless of the form that one chooses to write
the equations (i.e. and/or, if/end if, case) and why 'asserts' will not help
either.



Again, from my reply yesterday, the conversion 'encoded -> one hot ->
encoded' will always dissolve away, resulting in 0 logic usage. The basic
concept is that if some arbitrary collection of signals is truly 'one hot',
then there exists an encoding (whether as a vector or an enum) from which
those arbitrary signals can be decoded; that underlying encoding is the
basic underpinning of why the collection is truly a logical 'one hot'.

KJ

KJ,

Sorry; I did not make myself clear. Yes, you did demonstrate
specifically what you were asked to, and I thank you for it. I
apologize if my response was taken as a negative critique of your
demonstration.

By "what we wanted", I meant that the subject of the thread, taken as
a whole, has to do with methods for optimally dealing with encodings
or conditions that are not by definition mutually exclusive, but are
functionally guaranteed to be so. A binary encoding is by definition
mutually exclusive, so starting with that would not demonstrate what
the thread, as a whole, seeks.

In light of the thread subject, a more useful demonstration (which you
were not asked for) would be to demonstrate a conversion from a one-
hot vector to a binary or enumeration encoding, and back, with minimal
(preferably zero) logic. As you have pointed out, that is likely not
possible.

Several options have been proposed within the existing language and
tool capabilities, but they are all limited to either data types that
include a "don't care" value understood by synthesis, or that support
an OR operation between values. It is possible that an enum value
could be treated as a don't care if nothing is done in response to it,
but I have not tried that. Methods capable of dealing with
unconstrained dimensions of input and output would be required as
well.

Thanks,

Andy
 
A

Andy

In fact the guiding principle of logic synthesis is that the behaviour
exhibited by the hardware should conform to the language specification, not
'the simulator' (whatever that might be). Every tool that I've picked up
claims conformance to some flavor of VHDL (i.e. '87, '93, '02); I've yet to
pick one up that claims conformance to Modelsim 6.3a.



But the logic description (i.e. everything BUT the asserts) completely
describes the logic function by definition. The asserts are:
- Generally not complete behavioural descriptions
- Unverifiable as to correctness. In fact, a formal logic checker program
would take the asserts and the logic description and formally prove (or
disprove) that the description (the non-assert statements) accurately
produces the stated behaviour (the asserts). Since formal logic checkers can
'fail' (i.e. show that incorrect behaviour can result from the given logic
description) this would tend to drive a stake completely through your claim.




No, it is a (possibly incorrect) statement of what is believed to be correct
behaviour.




On that point, we most definitely do not agree. The non-asserts completely
define the logic function, the asserts incompletely (at least in VHDL)
describe correct behaviour.

KJ

KJ,

What happens if the VHDL description does not fully define the
behavior? The VHDL specification is written in terms of an executable
model. That model may be SW, in terms of an executable simulation or a
static (formal) analysis; or it may be physical, in terms of an FPGA
or ASIC implementation.

Let's look at the following example:

signal count, output : integer range 0 to 511;
....
process (rst, clk) is
begin
   if rst = '1' then
      count  <= 0;
      output <= 0;
   elsif rising_edge(clk) then
      if restart = '1' then
         output <= count;
         count  <= 0;
      else
         count <= count + 1;
      end if;
   end if;
end process;

The restart input above is externally guaranteed to be set at least
once every 500 clocks. If it is not, then an error has occurred
somewhere else, and we don't have to handle it or recover from it
without external assistance.

The question is what should the hardware do if count is 511, and
restart is not set? The logic description does not say what happens to
count, because the executable model stops, per the language
specification. Should the hardware model stop too? Or should the
synthesis tool interpret the situation as a "don't care"? If the
former, what does "stop" mean in the context of the hardware model? If
the latter, every synthesis tool I know of already implements a
rollover, which, while optimal from a resource POV, is not described
anywhere in the "logic description".

So what is the difference in the example above, and one in which a
user-defined assertion stops the model?

Andy
 
K

KJ

KJ,

What happens if the VHDL description does not fully define the
behavior?
I could say that you're at the mercy of how the synthesis tool wants
to generate it...but in reality, by definition, synthesis (and
simulation) assume that the logic description is complete. If it's
incomplete, the simulator can help you find that, just like it can help
you find any other design flaw. But synthesis is supposed to
implement it; if you have a design flaw in your logic, that's not its
concern.
Let's look at the following example:

signal count, output : integer range 0 to 511;
...
process (rst, clk) is
begin
   if rst = '1' then
      count  <= 0;
      output <= 0;
   elsif rising_edge(clk) then
      if restart = '1' then
         output <= count;
         count  <= 0;
      else
         count <= count + 1;
      end if;
   end if;
end process;

The restart input above is externally guaranteed to be set at least
once every 500 clocks. If it is not, then an error has occurred
somewhere else, and we don't have to handle it or recover from it
without external assistance.
OK...you're declaring this to be the case.
The question is what should the hardware do if count is 511, and
restart is not set? The logic description does not say what happens to
count
Sure it does...it says to add 1. No place in your code do you say to
not count if the count gets to 511.
, because the executable model stops, per the language
specification.
No, the simulator stops...
Should the hardware model stop too?
Why should it? Just because the simulator stops does not imply that
the synthesized form should as well. You're going back again to your
claim that synthesis implements what is simulatable, whereas in fact
synthesis and simulation are two totally different tools doing two
totally different things that are supposed to conform to the same
language specification...nowhere does anyone claim that different
tools should conform to some other tool.
Or should the
synthesis tool interpret the situation as a "don't care"?
No, it should do what you said and attempt to add 1....like it clearly
says to do. The fact that it is unable to produce '512' is your
design issue to resolve, not the synthesis tool's problem.
If the
former, what does "stop" mean in the context of the hardware model? If
the latter, every synthesis tool I know of already implements a
rollover, which, while optimal from a resource POV, is not described
anywhere in the "logic description".
Sure it is. You said to take the current value, add 1, and implement
this in digital logic with the given bits of precision. As soon as you
(implicitly) say "implement this in digital logic with the given bits
of precision" you've made certain design trade-offs that you don't
seem to be willing to accept.
So what is the difference in the example above, and one in which a
user-defined assertion stops the model?
Asserts don't stop synthesized hardware...nor do they change the
implemented code in any way....nor should they.

KJ
 
K

KJ

By "what we wanted", I meant that the subject of the thread, taken as
a whole, has to do with methods for optimally dealing with encodings
or conditions that are not by definition mutually exclusive, but are
functionally guaranteed to be so. A binary encoding is by definition
mutually exclusive, so starting with that would not demonstrate what
the thread, as a whole, seeks.
What 'the thread as a whole seeks' is to take an arbitrary collection
of signals, impart some supreme insight that this arbitrary collection
has some mutual exclusiveness property to it in some fashion OTHER
than coding it in that manner....and what I've been saying all along
(somewhat casually at first, a bit more forcefully in my Sept 7 post)
is that if you have some mutual exclusiveness then the functional code
(not the asserts) will directly support that by how it is coded...and
that means coding in a sum of products form or using an enumerated
type...period.
In light of the thread subject, a more useful demonstration (which you
were not asked for) would be to demonstrate a conversion from a one-
hot vector to a binary or enumeration encoding, and back, with minimal
(preferably zero) logic. As you have pointed out, that is likely not
possible.
I claimed that zero logic is not possible, 'minimal' would depend on
one's definition. The minimal way to do this is some form of the
simple sum of products formation (an example way of 8->3 encoding is
below)

Encoded_OneHotInps(0) <= OneHotInps(1) xor OneHotInps(3) xor
                         OneHotInps(5) xor OneHotInps(7);
Encoded_OneHotInps(1) <= OneHotInps(2) xor OneHotInps(3) xor
                         OneHotInps(6) xor OneHotInps(7);
Encoded_OneHotInps(2) <= OneHotInps(4) xor OneHotInps(5) xor
                         OneHotInps(6) xor OneHotInps(7);

Using the above encoding followed by a 3->8 decoder function results
in (using Quartus) exactly one logic element per output, for a total
of 8.

where 'OneHotInps' is an 8 input supposedly one hot input vector,
'Encoded_OneHotInps' is a 3 bit vector that encodes the information.
Just for grins I used 'xor', 'or' could also have been used. I can't
prove that it is 'minimal' but I'll bet you can play all kinds of
games with 'xor', 'or' or changing the function around a bit and get
the exact same result in terms of logic resource usage.
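The 3->8 decoder paired with that encoder could be as simple as the following sketch (the Decoded_OneHotInps name is an assumption added here; OneHotInps and Encoded_OneHotInps are the signals from the text above, and ieee.numeric_std is assumed to be in scope):

```vhdl
-- Assumes: use ieee.numeric_std.all;
-- signal Decoded_OneHotInps : std_logic_vector(7 downto 0);
decode : process (Encoded_OneHotInps) is
   variable r : std_logic_vector(7 downto 0);
begin
   r := (others => '0');
   -- Set the single bit indexed by the 3 bit code.
   r(to_integer(unsigned(Encoded_OneHotInps))) := '1';
   Decoded_OneHotInps <= r;
end process;
```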

The mechanism for telling the compiler about the 'one hot' nature of
the input is not through an assertion (as you claim) or through a new
language keyword (as Weng claims is required) but through the actual
functional logic that is written. Any other formulation in VHDL using
a case statement or an if statement(s) (without Weng's proposed
'orif') is doomed because it will inherently encode either a priority
encoding (i.e. if/elsif/elsif/endif) or dependency on inputs that, in
fact, declares that maybe, just maybe, the inputs are not one hot
(i.e. using a case statement where the cases are something like "0001"
when REALLY you would like to say "---1")

The sum of products formation (or the product of sums for the DeMorgan
fans) is likely the best one that truly encodes the one hot nature of
the input, because it logically depends on those inputs being one hot
and will produce an incorrect coding if somehow two or more of those
puppies ever got set. It does this by including multiples of those
supposedly one hot inputs in single equations, taking full advantage
of the higher knowledge that only one term should ever fire among all
of the terms that are being 'or'-ed or 'xor'-ed together.

There was great debate on just what Weng's 'orif' would do in certain
circumstances, even to the extent that proper indenting and color
coding would be needed but the bottom line is that 'orif' would have
to synthesize down to a sum of products implementation. My point back
to Weng, that he never responded to either time, was that he was
claiming superior performance or resource utilization. I seriously
doubt that, and he was unable to demonstrate anything that was an
improvement over what I believe to be best practice (enumerated types
or vector coding for the 'unconstrained number of enumerations'
situation). At best, 'orif' is a clearer and more productive way of
coding...not convinced that it is, but that's what it amounts to.

By the way, another interesting way to encode the one hots could be
the following....
Encoded_OneHotInps_var := (others => '0');
for i in OneHotInps'range loop
   Encoded_OneHotInps_var := Encoded_OneHotInps_var +
      to_unsigned(i * to_integer(unsigned'("0" & OneHotInps(i))),
                  Encoded_OneHotInps_var'length);
end loop;

Functionally this produces the exact same result given one hot inputs
as the previous example using 'xor' or 'or'. But the synthesized
result is larger (14 logic cells in Quartus), even though the above
code snippet treats each of the inputs as an independent thing and
depends on the inputs really being one hot in order to produce a
correct result (just like the 'xor' implementation). At the moment I
don't quite understand why it is sub-optimal compared to a simpler sum
of products (maybe it's just a Quartus thing)...but it is.
Several options have been proposed within the existing language and
tool capabilities, but they are all limited to either data types that
include a "don't care" value understood by synthesis,
Simulation doesn't really like the "don't care" either...witness your
use of "0001" as a case value instead of "---1" that you would really
have liked to say. But sim and synth both are treating them
appropriately per the language standard and the rules of boolean
logic.
or that support
an OR operation between values.
And what's wrong with that?
It is possible that an enum value
could be treated as a don't care if nothing is done in response to it,
but I have not tried that. Methods capable of dealing with
unconstrained dimensions of input and output would be required as
well.
The unconstrained (i.e. parameterizable) enumerated type is simply a
coded vector. The number of 'enumerations' would be 2**n. Your root
type in this case would be an 'n bit' vector which represents the
coding of the various possibilities.
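A hedged sketch of that idea, with N as the parameter (the entity and port names here are invented for illustration):

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- An N bit coded vector standing in for a 'parameterized enumerated
-- type' with 2**N possible values; the coded form is the root type.
entity coded_enum_demo is
   generic (N : positive := 3);
   port (
      code    : in  std_logic_vector(N - 1 downto 0);      -- the coded 'enumeration'
      one_hot : out std_logic_vector(2**N - 1 downto 0));  -- its one hot decode
end entity coded_enum_demo;

architecture rtl of coded_enum_demo is
begin
   process (code) is
      variable r : std_logic_vector(2**N - 1 downto 0);
   begin
      r := (others => '0');
      r(to_integer(unsigned(code))) := '1';
      one_hot <= r;
   end process;
end architecture rtl;
```

Starting from the coded form and decoding (encoded -> one hot), as above, is the direction argued to be free; claiming 'one hot-ness' without that underlying code is where the cost appears.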

Starting from unencoded form and trying to claim 'one hot-ness'
without realizing that there really is a coded form that underlies the
'one hot-ness' is the fallacy. Not explicitly defining this coded
type as the basis for the claims of 'one hot-ness' results in sub-
optimal (larger logic resource usage) implementations. Once coded in
that manner, synthesis needs no additional help telling it about the
'one hot-ness'....such help is about as useful as getting coding tips
from the boss.

Asserting such knowledge is fruitless because synthesis' job is to
implement the functional logic (not the asserts) correctly. At best
the assertion is true, and therefore redundant since the functional
logic defines the same thing that is being described by the
assertion. At worst, the assertion is not always true and you don't
have a formal logic checker to find that out ;)

KJ
 
A

Andy

I could say that you're at the mercy of how the synthesis tool wants
to generate it...but in reality, by definition, synthesis (and
simulation) assume that the logic description is complete. If it's
incomplete the simulator can help you find it, just like it can help
you find any other design flaw. But synthesis is supposed to
implement it, if you have a design flaw in your logic that's not it's
concern.

No, neither assumes the description is complete. Both abide by the
entire description (executable and declarative sections), logically
complete or not.

Synthesis does implement the described behavior, you and I just have a
disagreement over just what constitutes the described behavior.
OK...you're declaring this to be the case.




Sure it does...it says to add 1. No place in your code do you say to
not count if the count gets to 511.

Partial credit: It says to add one to count, but "signal count,
output : integer range 0 to 511;" also says "don't store anything less
than 0 or more than 511 in count or output."

It can add 1 to the count all day, but it cannot store the result back
in count unless it is in range. This is a vital difference.

You want to be bound by only the executable statements, but the
language (and the total description) includes the declarations and
their limitations.

Do not confuse "vector math" with "integer math". Vector math rolls
over, integer math does not.
No, the simulator stops...


Why should it? Just because the simulator stops does not imply that
the synthesized form should as well. You're going back again to your
claim that synthesis implements what is simulatable, whereas in fact
synthesis and simulation are two totally different tools doing two
totally different things that are supposed to conform to the same
language specification...nowhere does anyone claim that different
tools should conform to some other tool.


No, it should do what you said and attempt to add 1....like it clearly
says to do. The fact that it is unable to produce '512' is your
design issue to resolve, not the synthesis tool's problem.

So what you just said is that synthesis needs to make count big enough
to store 512? How big should it be? 1024? 2048?

It is not a design issue, I accounted for it in the built-in
assertion.

The language cannot store the 512 back into count. That is undefined
per the LRM.

If what you say was true (it should just add 1), then the following
code would synthesize the same way whether count was integer range 0
to 511, or unsigned(8 downto 0):

if count - 1 < 0 then
   count  <= 511; -- or (others => '1')
   output <= '1';
else
   count  <= count - 1;
   output <= '0';
end if;

But if count is an unsigned(8 downto 0), then the condition is by
definition false (unsigned subtraction returns unsigned), and output
never gets set.

If OTOH, count is an integer range 0 to 511, then the condition has
meaning (because integer operations are not limited to the ranges of
their operands), and output will be asserted when the counter is set
to zero (which would be optimized to a rollover).

This example proves that "count - 1" does not always mean the same
thing, depending on what the data type is.

I can do the same thing with an up counter:

if count + 1 > 511 then
   output <= '1';
   count  <= 0; -- or (others => '0')
else
   output <= '0';
   count  <= count + 1;
end if;

With unsigned(8 downto 0), output will never get set, because the
maximum value for the result of the operation is still 511. So count +
1 is not the same thing either: adding one to an integer is never
DEFINED as a rollover; however, the storage may be optimized to effect
a rollover.

By your understanding, the "definition" of "count <= count + 1" should
be the same no matter what data type is used, but in fact, there are
differences caused by declarations and their effects (built in
assertions, operator definitions, etc.).

There's more to the logic description than just the "executable part".

Andy
 
K

KJ

Andy said:
No, neither assumes the description is complete.
Yes they do...simulators simulate the logical description code that is
written, synthesis tools synthesize the logical description code that is
written. Neither grabs for anything beyond the written code to complete
its task...it is assumed by both tools to be logically complete. Whether
or not it is what you want, or has some other logic flaw, is your problem to
figure out. Whether you choose to use a formal logic checker to look for
logical discrepancies between what you've written in the logical description
and your assertions is up to you; maybe it will help you find your design
errors.
Both abide by the
entire description (executable and declarative sections), logically
complete or not.
No, synthesis does not look at assertions except (possibly) for ones that
are statically determinate. You keep claiming this but do not back it up in
any way. Name one synthesis tool and post one design that gets synthesized
one way when there are no assertions but synthesizes to something different
with the inclusion of a single assert. Post it up here for all to see.

Your previous post on this thread in regards to the 0-511 counter example
said...
every synthesis tool I know of already implements a
rollover, which, while optimal from a resource POV, is not described
anywhere in the "logic description".
If that is the case, then every synthesis tool that you know of is not
compliant with the LRM, so surely you've opened a service request with them
on this, right? If not, why not? Or isn't it just remotely possible that
you're incorrect, and that a failed assertion is a design or modelling error
on the part of the person who wrote the code and is NOT the fault of any
tool because it failed to detect your flawed logic?
Synthesis does implement the described behavior, you and I just have a
disagreement over just what constitutes the described behavior.
Well we certainly do agree on that point.
Partial credit: It says to add one to count, but "signal count,
output : integer range 0 to 511;" also says "don't store anything less
than 0 or more than 511 in count or output."
integer range 0 to 511 does not say anything about storing or not storing
any result that is outside the declared range. What it says is that YOU are
guaranteeing that no matter how this code is used, YOU will make sure
that count stays in the proper range. It is your responsibility to make
sure that happens; it is synthesis' responsibility to generate hardware that
implements the stated function. Its only use for "integer range 0 to 511" is
to allow it to set aside enough storage space to represent an integer in
that range. It will not implement anything to "don't store anything less
than 0 or more than 511 in count" as you've stated...you're way off on that
one.

Synthesis doesn't care that you have failed to provide adequate storage. It
is YOU that have misused the device since...
- You declared count to be in the range from 0 to 511
- You used the synthesized hardware in a manner that allowed it to try to
operate outside that range.
- You provided no mechanism in the logical description to guarantee that, no
matter how the hardware is operated, that count would remain in the range
that YOU defined.
It can add 1 to the count all day, but it cannot store the result back
in count unless it is in range. This is a vital difference.
Kind of conflicts with your earlier statement that every synthesis tool you
know WILL store it back in count by rolling it over. The result does get
stored, the fact that the result is wrong is a design/usage flaw on your
part. Are you now saying that every synthesis tool is wrong for storing
your specified result incorrectly?
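The two behaviors being argued over here can be modeled in Python (an illustrative sketch only; the function names are invented for this example, and Python stands in for both the synthesized hardware and a range-checking simulator):

```python
# Illustrative Python model (not VHDL) of the two behaviors under debate:
# the synthesized 9-bit adder simply wraps modulo 2**9, while a simulator
# enforcing "integer range 0 to 511" raises a runtime error at 512 instead
# of storing it back.

def hardware_increment(count: int, width: int = 9) -> int:
    """Model of the synthesized adder: the result wraps modulo 2**width."""
    return (count + 1) % (2 ** width)

def simulation_increment(count: int, low: int = 0, high: int = 511) -> int:
    """Model of a simulator's range check on 'integer range 0 to 511'."""
    result = count + 1
    if not (low <= result <= high):
        raise ValueError(f"range check failed: {result} not in {low}..{high}")
    return result

print(hardware_increment(511))       # wraps to 0

try:
    simulation_increment(511)
except ValueError as e:
    print("simulation:", e)
```

Neither model enforces the range on behalf of the designer in hardware; keeping count within 0 to 511 remains the responsibility of the surrounding logic.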
You want to be bound by only the executable statements, but the
language (and the total description) includes the declarations and
their limitations.
Simulators simulate just fine and the synthesis tools synthesize just fine
when there is only the logical description and no asserts.

On the other hand, simulators have nothing to simulate and synthesis tools
will synthesize nothing when you have no logical description but only
asserts.

If what you claim held any nugget of truth at all, both sets of code
would perform in the same manner....but they don't...at least not in VHDL,
now do they? That's because the asserts play absolutely no role in
determining how a signal should change (for a simulator) or the boolean
logic that needs to be implemented (for a synthesizer).

Take this example
c <= a and b;
assert ((a or b) = c)
report "OOPS!"
severity ERROR;

Run it through any synthesis tool and it will produce a bitstream that
implements "c <= a and b". Yet according to you there is an obvious paradox
here in that I've asserted that c should be equal to "a or b". Following
your logic, no synthesis tool in the world should be able to complete its
task because of the logical conflict between the logical description and the
assertion. The reason it synthesizes is that there is no paradox: the
synthesis tool looks ONLY at the logical description, not the assertions.
But feel free to take this code and run it through any synthesis tool and
then open a service request to the supplier on their supposedly deficient
tool. Post the replies you get up here so we can all learn.

Now take the above code into any simulator, do a "run -all" and it will run
just fine, no assertion will be hit....but Doooooh, that's because we need a
testbench to generate stimulus for it huh? Because 'U' and 'U' (the logic)
DOES happen to equal 'U' or 'U' (the assert). So add the following testbench
code and run again
a <= '1';
b <= '1';
Dooooooh, the simulator ran AGAIN without failing the assertion. So I guess
we'll need to open service requests to all the simulator vendors for not
catching this too? Simulators do NOT validate that the assert statement is
logically correct. All a simulator does with an assert is check that the
condition is true for any inputs that you just happen to throw at it. If
not, then it throws an exception. Don't give it the right conditions and
the assert will never fail. In fact, in cases like this the simulator
doesn't even have to 'stop'.
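KJ's a/b/c example can be mimicked in Python (a rough analogue only, not VHDL semantics; `evaluate` is an invented name for this sketch): the assertion is only checked against stimulus actually applied, so contradictory logic sails through when the inputs happen to agree.

```python
# Rough Python analogue (not VHDL) of KJ's example: the "design" computes
# c = a and b, while the assertion claims c = a or b. The check runs only
# on stimulus actually applied, so contradictory logic can pass unnoticed.

def evaluate(a: bool, b: bool) -> bool:
    c = a and b                       # the logical description
    assert (a or b) == c, "OOPS!"     # the (contradictory) assertion
    return c

# a = b = '1': 'and' and 'or' agree, so the assert never fires -- just as
# the simulator "ran AGAIN without failing the assertion" above.
print(evaluate(True, True))   # True, no assertion error

# Only stimulus that exposes the mismatch trips the assert:
try:
    evaluate(True, False)
except AssertionError as e:
    print("assertion failed:", e)
```

The point carries over directly: an assert validates nothing by itself; it checks only the cases you happen to drive through it.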

I can tell the simulator to only stop on 'FAILURE' level severity in which
case it will run to completion again....Dooooooh. I shudder to think what
you think the synthesis tool should do as a function of severity level on
the assert.

Now take the original code and run it through a formal logic checker. Here
I'll take a guess since I've never used such a tool but I'll bet that it
would come back and say something like...
"Your logic does not correctly handle the case when a=1, b=0 but you've
asserted that the result should be 1"
"Your logic does not correctly handle the case when a=0, b=1 but you've
asserted that the result should be 1"
Not always, see above example. It will only stop if given the proper input.
It is not a formal logic checker.
So what you just said is that synthesis needs to make count big enough
to store 512? How big should it be? 1024? 2048?

No, YOU told it that 9 bits would be enough by declaring that YOU would
guarantee that count would never stray from the range of 0 to 511. Surely
you don't expect the synthesized result to be correct when operated outside
of the specified range do you? Do you also expect a 3.3V part to work when
connected to a 24V supply? Both cases indicate usage outside of the
specified range, so don't expect things to 'work'.
It is not a design issue, I accounted for it in the built-in
assertion.
It's not a design issue that you let a counter get out of the range that YOU
defined??!! You have a very misguided view of what assertions do and don't
do.
The language cannot store the 512 back into count. That is undefined
per the LRM.
OK, so your previous statement is that this is not a design issue on your
part because of the built-in assertion and now you (correctly) say that a
count of 512 can not be stored back into count without violating the
LRM....so you've written code that under certain conditions is not compliant
with the VHDL LRM, and you do not consider this to be a design issue on your
part???? Amazing.
If what you say was true (it should just add 1), then the following
code
<SNIP>
No more examples please.....nearly everything you've had to say on this
thread has been incorrect, why compound it further? Suffice it to say that,
for your examples, the rules for '+' and '-' are different for unsigned
than they are for integers...and all of that behaviour is properly defined
in the LRM or the IEEE numeric_std package. Use the tools properly and they
will do what you intend; misuse them and they will bite you.

KJ

Jonathan Bromley

hi Kevin and Andy,

I don't want to get involved in the fisticuffs :)
but I honestly think Kevin is missing precisely the
point of the original (way, way back original)
discussion, which is that new language features
can provide a form of assertion that can be both
tested in simulation and exploited in synthesis.

[KJ]
No, synthesis does not look at assertions except (possibly) for ones that
are statically determinate.

Agreed, for current VHDL tools.
You keep claiming this but do not back it up in
any way. Name one synthesis tool and post one design that gets synthesized
one way when there are no assertions but synthesizes to something different
with the inclusion of a single assert. Post it up here for all to see.

Here it is: in SystemVerilog, because it's the only
way I can do it, and with the very important caveat
that this is NOT the only way to achieve the desired effect.
The point is, though, that it works, and it precisely
illustrates the idea that Andy and I have been trying
to elucidate.

module onehot_to_binary
( input logic [3:0] onehot_vector
, output logic [1:0] binary_code
);

always @(onehot_vector)
unique if (onehot_vector[3])
binary_code = 3;
else if (onehot_vector[2])
binary_code = 2;
else if (onehot_vector[1])
binary_code = 1;
else if (onehot_vector[0])
binary_code = 0;

endmodule

The "unique" prefix to "if" is, for simulation, an
assertion that exactly one branch of the "if" is
accessible. At runtime in simulation, *all*
branch conditions of the "if...else if" are evaluated
at the outset, and the simulator checks (asserts)
that exactly one of those expressions is true. If
not, it throws a runtime error.

Synthesis knows that this is the semantics of "unique if",
and therefore knows that it can create logic that gives
the right answers if exactly one branch's condition is true,
but if more than one branch were true, the logic would not
match the if statement's behaviour in the absence of "unique".
The assertion also permits the synthesis tool to assume that
assignment to the output is complete - every branch of the
if makes an assignment to "binary_code", and we know
by assertion that precisely one branch will be executed -
so it is unnecessary and inappropriate to add the latches
that would be needed in the absence of "unique".
You can see this working today in at least two synth tools and
at least two simulators. In synthesis, there is a real
saving of gates because the whole thing collapses to a
bunch of ORs. (Yes, I know that in an FPGA each output bit
would be a 4-input LUT function anyway, whatever the logic,
but that would no longer be true if there were more input
bits or if there were some downstream combinational logic.)
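The "unique if" semantics described above can be sketched in Python (illustrative only; Python stands in for the simulator's runtime check, and the function names are invented): evaluate all branch conditions up front, assert exactly one is true, and note that with one-hotness guaranteed each output bit collapses to an OR of input bits.

```python
# Illustrative Python sketch (not SystemVerilog) of "unique if" semantics
# for the one-hot-to-binary example: evaluate every branch condition first,
# assert that exactly one fires, then produce that branch's assignment.

def unique_if_onehot_to_binary(onehot):
    """onehot given MSB-first as [bit3, bit2, bit1, bit0]; returns 0..3."""
    true_branches = [i for i, bit in enumerate(onehot) if bit]
    # The 'unique' check: a runtime error unless exactly one branch fires.
    assert len(true_branches) == 1, "unique violation: input is not one-hot"
    # MSB-first: list index 0 corresponds to binary code 3
    return 3 - true_branches[0]

# Because one-hotness is guaranteed, the whole thing is just ORs:
#   binary_code[1] = bit3 | bit2,  binary_code[0] = bit3 | bit1
def binary_via_ors(bit3, bit2, bit1, bit0):
    return ((bit3 or bit2) << 1) | (bit3 or bit1)

print(unique_if_onehot_to_binary([False, True, False, False]))  # 2
print(binary_via_ors(False, True, False, False))                # 2
```

The OR form is the "real saving of gates" mentioned above: no priority chain and no latches, because the assertion licenses the tool to assume exactly one branch executes.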

Yes, I know it's not an assertion in the conventional syntax,
but it is PRECISELY an assertion in simulation; an assertion
dressed-up in syntax that can reliably be handled by synthesis.

This is PRECISELY the problem that Weng's "orif" aims to
solve in VHDL.

There are serious limitations and difficulties both in
SystemVerilog's approach and in Weng's, but that doesn't
mean that KJ can dismiss them as meaningless or useless.
--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
(e-mail address removed)
http://www.MYCOMPANY.com

The contents of this message may contain personal views which
are not the views of Doulos Ltd., unless specifically stated.

Andy

KJ,

I understand that you prefer to code the implementation explicitly,
and not rely on anything else to optimize it. That's fine; you and
others have aptly demonstrated how this problem can be coded
explicitly and implemented efficiently within the existing tools and
language standard. For many of the rest of us, interested in using
higher levels of abstraction in synthesis, we would prefer a method to
do so, and have identified a couple of potential solutions to that
end.

And there are other types of problems (e.g. synthesis of 'others' in
enumerated type state machines) for which most of us would also like
solutions (that already exist if you code at a low enough level). But
that's the whole point: what can we do to handle common problems
encountered while coding at higher levels of abstraction? We do not
want to do anything that would limit someone from coding at a lower,
more explicit level, but by the same token, we do not want to be
limited from coding at a higher level with similar efficiency (if
possible/practical).

I value your contributions to these discussions, and to the art of
VHDL in general, so I wanted to tone this down a bit...

Please accept my sincere apologies if anything I have said has
offended or mis-characterized you in any way; it certainly was not my
intent.

Andy

KJ

hi Kevin and Andy,

I don't want to get involved in the fisticuffs :)
but I honestly think Kevin is missing precisely the
point of the original (way, way back original)
discussion
Ummm...to be fair though Jon, while I do keep the point of the
original post in mind, threads take on sub-topics and go off on
various tangents. When I reply to something I generally keep the
text from the post so it is clear what I'm referring to, so I
wouldn't be referring to something 'way, way back' that wasn't in the
post to which I'm replying. The tangent that Andy and I were off on
had to do with claims he was making regarding assertions in the
current language standard that I feel are incorrect.

I presented my reasoning and provided examples demonstrating what I
see, Andy has presented his reasoning and has not provided any
examples that I haven't then shown to be false. If you feel you can
answer my specific objections then please do...but I'm guessing this
whole thread is dragging on all of us now, so this will be my last one
on the subject.
, which is that new language features
can provide a form of assertion that can be both
tested in simulation and exploited in synthesis.
Exploited in synthesis compared to what though is the rub. But that's
been beaten to death here.
[KJ]
No, synthesis does not look at assertions except (possibly) for ones that
are statically determinate.

Agreed, for current VHDL tools.
Then it would appear that you and Andy do not agree on the role of
asserts in synthesis either.
Here it is: in SystemVerilog, because it's the only
way I can do it, and with the very important caveat
that this is NOT the only way to achieve the desired effect.
The point is, though, that it works, and it precisely
illustrates the idea that Andy and I have been trying
to elucidate.
Good play, man. I DID forget to say that I wanted a VHDL example now
didn't I? ;)

The "unique" prefix to "if" is, for simulation, an
assertion that exactly one branch of the "if" is
accessible. At runtime in simulation, *all*
branch conditions of the "if...else if" are evaluated
at the outset, and the simulator checks (asserts)
that exactly one of those expressions is true. If
not, it throws a runtime error.

Synthesis knows that this is the semantics of "unique if",
and therefore knows that it can create logic that gives
the right answers if exactly one branch's condition is true,
but if more than one branch were true, the logic would not
match the if statement's behaviour in the absence of "unique".
The assertion also permits the synthesis tool to assume that
assignment to the output is complete - every branch of the
if makes an assignment to "binary_code", and we know
by assertion that precisely one branch will be executed -
so it is unnecessary and inappropriate to add the latches
that would be needed in the absence of "unique".
You can see this working today in at least two synth tools and
at least two simulators. In synthesis, there is a real
saving of gates because the whole thing collapses to a
bunch of ORs.
Savings compared to what? Compared to using a priority encoding if/
elsif/endif...which by inspection does not encode the one-hotness? Or
savings compared to the simple forms that I've shown? I'm guessing
it's the former, not the latter.
This is PRECISELY the problem that Weng's "orif" aims to
solve in VHDL.
And if you read my postings, the only real objection I had to
Weng's "orif" was his claims of improved performance. It's easy to
improve performance over an incorrect 'others' solution. You would
also note that I said what he might be able to claim is that 'orif' is
possibly a productivity improvement over commonly misused forms (i.e.
designer productivity in working lines of code per unit of time).
There are serious limitations and difficulties both in
SystemVerilog's approach and in Weng's, but that doesn't
mean that KJ can dismiss them as meaningless or useless.
I don't recall saying things to Weng that would be interpreted in that
manner, apologies to him if they were taken in that manner. I was
trying to get information on the basis for what appears to be
unsubstantiated claims.

KJ

KJ

KJ,

I understand that you prefer to code the implementation explicitly,
and not rely on anything else to optimize it.
Not at all, I routinely use all of the 'higher level' stuff...and have
the history of service requests to the synthesis tool suppliers to
prove it. I insist on logical correctness and clarity of code, and
using the wrong tool (if/elsif/endif or case) to handle a one-hot is
logically incorrect in my opinion....and I've demonstrated how
handling it properly with vector coding and enumerated types works
quite well...I rely on the synthesis tool to do its job but I don't
expect it to clean up any mess of my own doing.
That's fine; you and
others have aptly demonstrated how this problem can be coded
explicitly and implemented efficiently within the existing tools and
language standard. For many of the rest of us, interested in using
higher levels of abstraction in synthesis, we would prefer a method to
do so, and have identified a couple of potential solutions to that
end.
You make me sound like an outcast. In the one hot case, the higher
level of abstraction is the enumerated type or coded vector...use it,
it's free.
And there are other types of problems (e.g. synthesis of 'others' in
enumerated type state machines) for which most of us would also like
solutions (that already exist if you code at a low enough level). But
that's the whole point: what can we do to handle common problems
encountered while coding at higher levels of abstraction?
I don't think Weng's 'orif' was an example of any higher level of
abstraction. It was an example of a possible designer productivity
improvement.
We do not
want to do anything that would limit someone from coding at a lower,
more explicit level, but by the same token, we do not want to be
limited from coding at a higher level with similar efficiency (if
possible/practical).
I agree, but you're elevating use of an 'if' statement as a solution
as being some higher level abstraction...it's not.
I value your contributions to these discussions, and to the art of
VHDL in general, so I wanted to tone this down a bit...
OK...I'm done with this thread.
Please accept my sincere apologies if anything I have said has
offended or mis-characterized you in any way; it certainly was not my
intent.
Wasn't taken that way....and apologies to you if my rebuttals to your
points were taken in any negative way also.

Kevin Jennings

Mike Treseler

Jonathan said:
The "unique" prefix to "if" is, for simulation, an
assertion that exactly one branch of the "if" is
accessible. At runtime in simulation, *all*
branch conditions of the "if...else if" are evaluated
at the outset, and the simulator checks (asserts)
that exactly one of those expressions is true. If
not, it throws a runtime error.

I like the idea that a "unique" keyword
in vhdl might eliminate the need
to deal with '-' characters while capturing
exactly the same design intent.

If such a keyword were added to vhdl,
there would be two ways to handle don't cares,
as you have pointed out. However there would also be
a non-trivial gain in code clarity,
and a new way for the computer to do
a bit more of the heavy lifting.

Having said that, we have probably already spent
enough time on this thread to pay for enough LUTs
to convert all '-' bits to '0'
for the next ten years :)

-- Mike Treseler