Encapsulation is often less about which code can modify or drive a variable/signal than about which code can read it, depend on it, and quit working when it is modified.
For example, say I have a couple of processes in an architecture. One uses a counter to step through some data, and since the order in which the data is processed does not matter, the author decides to use an up-counter.
Another process that can see that counter uses its value to control something else, and in doing so it depends on the first process's implementation decision to use an up-counter.
What happens if the first process is modified to optimize the counter by converting it to a down-counter? If the counter had been a local variable, nothing outside that process could have depended directly on its behavior, and changing the direction of the counter would have no impact elsewhere. But if it is a signal, then the entire architecture has to be understood to make sure that any change to the counter's behavior does not have an unforeseen impact.
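A minimal sketch of the difference (the entity and names here are mine, for illustration only):

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity counter_scope is
      port (clk : in std_logic; wrapped : out std_logic);
    end entity;

    architecture rtl of counter_scope is
      -- Architecture-level signal: every process below can read it.
      signal count_s : unsigned(7 downto 0) := (others => '0');
    begin
      stepper : process (clk)
        -- Process-local variable: invisible outside this process, so
        -- converting it to a down-counter cannot break other code.
        variable count_v : unsigned(7 downto 0) := (others => '0');
      begin
        if rising_edge(clk) then
          count_v := count_v + 1;
          count_s <= count_s + 1;  -- exposed; anything below may depend on it
        end if;
      end process;

      watcher : process (clk)
      begin
        if rising_edge(clk) then
          -- Hidden dependency: this only works if count_s counts up.
          if count_s = 255 then
            wrapped <= '1';
          else
            wrapped <= '0';
          end if;
        end if;
      end process;
    end architecture;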
That makes no sense to me. Whether the counter implementation affects
other logic depends on whether the other logic uses the value of the
counter. Why wouldn't you know this if a signal is used?
Sometimes shared counters are a good thing; great, make them signals so that it is clear they are intended to be shared. Otherwise, keep the counter local so that it cannot be shared. Better yet, if the counter is shared among only two of the processes, put those two processes in a block, and declare the counter signal locally within the block, as sketched below. This protects the counter from dependencies in the other processes in the architecture.
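Roughly like this (a sketch; the entity and names are assumptions):

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity block_scope is
      port (clk : in std_logic; msb : out std_logic);
    end entity;

    architecture rtl of block_scope is
    begin
      shared_pair : block
        -- Visible only to the processes inside this block, not to
        -- the rest of the architecture.
        signal count : unsigned(3 downto 0) := (others => '0');
      begin
        producer : process (clk)
        begin
          if rising_edge(clk) then
            count <= count + 1;
          end if;
        end process;

        consumer : process (clk)
        begin
          if rising_edge(clk) then
            msb <= count(3);  -- the intended shared use
          end if;
        end process;
      end block;
    end architecture;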
Sometimes??? A counter is part of a design, created by a designer. If
the counter is intended to be shared it is shared, otherwise it is not.
You are talking about a totally different situation from the one the
OP is describing with procedures.
Al's solution of passing state variables around between different processes is another example. Generally, state variables are pure implementation, and should not be shared. A better solution might be to define the interfaces between the procedures as explicit control (start) and status (finished) parameters, so that one procedure's FSM can be reworked while the interface signals' behavior is maintained, and the other procedures would not be impacted. A sketch of the idea follows.
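Something along these lines (my sketch of the idea, not alb's actual code; the start/done names are assumptions):

    library ieee;
    use ieee.std_logic_1164.all;

    entity handshake is
      port (clk : in std_logic);
    end entity;

    architecture rtl of handshake is
      signal start, done : std_logic := '0';
    begin
      worker : process (clk)
        type state_t is (idle, busy, finish);
        -- The FSM state is a local variable: pure implementation detail.
        variable state : state_t := idle;
      begin
        if rising_edge(clk) then
          done <= '0';
          case state is
            when idle   => if start = '1' then state := busy; end if;
            when busy   => state := finish;  -- real work would go here
            when finish => done <= '1'; state := idle;
          end case;
        end if;
      end process;

      master : process (clk)
      begin
        if rising_edge(clk) then
          -- Only start/done are visible here; the worker's FSM can be
          -- restructured freely without touching this process.
          if done = '1' then
            start <= '0';
          else
            start <= '1';
          end if;
        end if;
      end process;
    end architecture;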
I don't follow exactly. My problem with alb's implementation is that
the order of the procedure calls affects the values read by each
procedure, all within *one process*. That has got to be clumsy if not
impossible to make work. Or maybe I read his example wrong. If they
are separate processes then they communicate by signals, no?
If designers are using a single process per entity, then yes, there is no practical difference in scope between a signal and a variable. Most designers use multiple processes per entity, so there is a difference for most designers.
Yeah, but I don't buy into the idea that using signals creates problems
from lack of isolation. Modularization allows isolation. I use
entities, you want to use processes, I don't see much difference. I put
different state machines into different processes for clarity, I think
you (or alb) are putting different state machines into different
procedures in the same process. But I can't see how this would work the
way he shows it with one variable for each state variable. With no
isolation between the present state and next state I can't see how to
code separate procedures.
This is a matter of how most designers are taught HDL: by examples of which code structures create which circuits. They then write concurrent islands of code that generate those circuits, and wire them up (in code).
Sure, it is important to know what kind of circuit will be created from a certain piece of code. But the synthesis tool analyzes the behavior of the code, not its structure, and infers the circuit from that behavior. The problem is that designers are taught that "code that looks like this" creates a register, and "code that looks like that" creates a combinatorial circuit.
Designers should be taught that "code that BEHAVES like this" creates a register, etc. It is amazing to me how many different approaches to avoiding latches in RTL are based on a fundamental misunderstanding of the behavior that infers a latch (which is very similar to the behavior that creates a register).
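For instance (a minimal illustration; the names are mine):

    library ieee;
    use ieee.std_logic_1164.all;

    entity infer_demo is
      port (clk, en, d : in std_logic;
            q_latch, q_reg : out std_logic);
    end entity;

    architecture rtl of infer_demo is
    begin
      -- Latch: q_latch must HOLD its value whenever en = '0', and the
      -- process is not gated by a clock edge, so a latch is inferred.
      latchy : process (en, d)
      begin
        if en = '1' then
          q_latch <= d;
        end if;
      end process;

      -- Register: the same hold-when-not-assigned behavior, but gated
      -- by a clock edge, infers a flip-flop instead.
      reggy : process (clk)
      begin
        if rising_edge(clk) then
          q_reg <= d;
        end if;
      end process;
    end architecture;

In both cases the inferred storage comes from the requirement to remember a value when no assignment is made; the only difference is whether that requirement is qualified by a clock edge.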
Design productivity can only progress so far by continuing to focus on describing the circuitry (gates and registers). To improve design productivity, we have to start designing more at the behavioral level (functions, throughput and latency). Why do you think high level synthesis tools (that can synthesize untimed models in C, etc.) are becoming so popular? I don't think it is the language as much as it is the concept of describing behavior separate from throughput and latency (those are provided to the HLS tool separately), and getting working hardware out the other end.
I don't agree really. RTL doesn't describe literal registers and gates.
It describes behavior at the level of registers. If you need that
level of control, which many do just so they can understand what is
being produced, then there is nothing wrong with RTL. Abstractions
obscure the hardware that gets produced. I often have trouble
predicting and controlling the size and efficiency of a design. What
you are describing would likely make that much worse.
Of course, any output from a process must be a signal. But for that signal to be a combinatorial function of registered values in the same process, the registers must be inferred from variables. If you use a signal for the register in the process, you have to use a separate process for the combinatorial function.
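A sketch of that single-process idiom (names are mine):

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity one_proc_demo is
      port (clk : in std_logic; terminal : out std_logic);
    end entity;

    architecture rtl of one_proc_demo is
    begin
      one_proc : process (clk)
        variable count : unsigned(3 downto 0) := (others => '0');
      begin
        if rising_edge(clk) then
          count := count + 1;      -- the variable infers the register
        end if;
        -- Outside the clock-edge test: terminal is a combinatorial
        -- function of the registered count, with no second process.
        if count = 15 then
          terminal <= '1';
        else
          terminal <= '0';
        end if;
      end process;
    end architecture;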
I don't have a problem with that, although I would like to learn the
technique better; I might end up liking it. I have seen it, but never
used it. I'm usually too busy designing the circuit in my head to worry
about the coding really. I just don't see problems with the coding.
I'd like to be better at test benches though. There I sometimes code
purely behaviorally. But only when timing is not such an issue.
Perhaps so, but my background is hardware design (analog and digital circuit cards and, later, FPGAs), not SW. My first few XC3090 FPGA designs were by schematic entry. I did not immediately embrace HDL design (I actually lobbied management against it), but once I tried it, I was hooked. My first VHDL training was for simulation, not synthesis, so maybe that too has influenced the way I use VHDL even for synthesis. Over the decades I have seen first hand the value of designing the desired behavior of a circuit, rather than describing the circuit itself. There are times where performance or synchronization still require focus on the circuit. But even for those, I tend to tweak the behavior I am describing to get the circuit I need (using the RTL & Technology viewers in the synthesis tool), rather than brute-force the circuit description.
There is also the issue that I don't use FPGAs and HDL every day, or any
other tool for that matter. I move around enough that I want to learn a
way to use a tool and then tend to stick with it so I don't have to keep
relearning. The tools change enough as it is.
Among the causes of slow simulation, using signals where variables would work is pretty low on the list of big hitters. Using lots of combinatorial processes is a much bigger hitter (gate-level models are the extreme example of this). Some simulators can merge execution of processes that share the same sensitivity list, saving the overhead of separately starting and stopping the individual processes. Combinatorial processes rarely share the same sensitivities, so they are rarely combined, and the performance shows it.
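For example (a sketch; whether processes actually get merged depends on the simulator):

    library ieee;
    use ieee.std_logic_1164.all;

    entity merge_demo is
      port (clk, b, d, sel : in std_logic; y : out std_logic);
    end entity;

    architecture rtl of merge_demo is
      signal a, c, x : std_logic;
    begin
      -- These two processes share the sensitivity list (clk), so a
      -- simulator that merges processes can schedule them as one:
      p1 : process (clk) begin if rising_edge(clk) then a <= b; end if; end process;
      p2 : process (clk) begin if rising_edge(clk) then c <= d; end if; end process;

      -- These combinatorial processes have different sensitivities,
      -- so they are started and stopped separately:
      c1 : process (a, c)
      begin
        x <= a and c;
      end process;

      c2 : process (x, sel)
      begin
        if sel = '1' then
          y <= x;
        else
          y <= '0';
        end if;
      end process;
    end architecture;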
Of course!
Andy
Well, like I said, next design I do I will try the combinatorial output
from a clocked process to see how I like it. Not sure when that will be.