You have to look at every situation. There are general principles, but it's
not as simple as blindly applying a rule.
Let me explain further.
We've got this situation:
typedef struct
{
    int a;
    int b;
} KERNELSTRUCTURE;

#define NKERNELSTRUCTS 10
KERNELSTRUCTURE kernelstructs[NKERNELSTRUCTS]; /* writing to these structures has side effects! */
So how do we handle it?
#include <stdbool.h>

typedef struct
{
    int N;
    KERNELSTRUCTURE *kernel; /* the bits that manipulate the kernel */
    KERNELSTRUCTURE *copy;   /* what we'll be reading and writing from */
    bool *dirty;             /* possibly keep track of dirty structures */
} ABSTRACTION;
Then we set one up by reading the kernel at program start.
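A minimal sketch of that setup, using the structures above. abstraction_create is my name for it, not anything from the real code, and I'm assuming you can read the kernel structures by plain structure assignment, since only writes have side effects:

#include <stdbool.h>
#include <stdlib.h>

ABSTRACTION *abstraction_create(void)
{
    ABSTRACTION *abs = malloc(sizeof *abs);
    if (!abs)
        return NULL;
    abs->N = NKERNELSTRUCTS;
    abs->kernel = kernelstructs;             /* points at the real kernel structures */
    abs->copy = malloc(abs->N * sizeof *abs->copy);
    abs->dirty = calloc(abs->N, sizeof *abs->dirty);
    if (!abs->copy || !abs->dirty) {
        free(abs->copy);
        free(abs->dirty);
        free(abs);
        return NULL;
    }
    for (int i = 0; i < abs->N; i++)
        abs->copy[i] = abs->kernel[i];       /* snapshot the kernel once, at program start */
    return abs;
}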
Then we pass an ABSTRACTION * to all our subroutines. So what we've done is we've converted
the subroutines from IO routines, which have side effects, to pure bit-shuffling functions. All
they're doing is shuffling bits about in copy.
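For example, a subroutine that used to poke the kernel directly might now look like this (set_b is just an illustrative name I've made up):

void set_b(ABSTRACTION *abs, int index, int value)
{
    if (index < 0 || index >= abs->N)
        return;                        /* out of range: ignore it, or report an error */
    if (abs->copy[index].b == value)
        return;                        /* no change, so don't mark it dirty */
    abs->copy[index].b = value;        /* pure bit shuffling: only copy is touched */
    abs->dirty[index] = true;          /* remember that synch has work to do */
}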
Then we have one function.
void synch(ABSTRACTION *abs)
{
    /* go through updating dirty kernel entries */
    /* maybe you also need to read the kernel again */
}
That's the one place we update the kernel; it's the only function which can have any side
effects.
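Filled out, synch might look something like this. It's only a sketch, and it assumes a plain structure assignment is how you write a kernel entry:

void synch(ABSTRACTION *abs)
{
    for (int i = 0; i < abs->N; i++) {
        if (abs->dirty[i]) {
            abs->kernel[i] = abs->copy[i];   /* the one write with side effects */
            abs->dirty[i] = false;
        }
    }
    /* If the kernel can change underneath you, re-read it here too:
       for (int i = 0; i < abs->N; i++) abs->copy[i] = abs->kernel[i]; */
}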
What's the advantage? Well, now we've decoupled testing and debugging from the
particular kernel. We run our subroutines on a UNIX box and verify that for every kernel
state we're interested in, they put copy into the correct state. All the bit shuffling is
correct.
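On the UNIX box that check can be as plain as this, reusing the hypothetical set_b from above (fake stands in for the target's kernelstructs):

#include <assert.h>
#include <stdbool.h>
#include <string.h>

int main(void)
{
    KERNELSTRUCTURE fake[NKERNELSTRUCTS] = {0};    /* pretend kernel state */
    KERNELSTRUCTURE copy[NKERNELSTRUCTS];
    bool dirty[NKERNELSTRUCTS] = {false};

    memcpy(copy, fake, sizeof copy);               /* "read the kernel at program start" */
    ABSTRACTION abs = { NKERNELSTRUCTS, fake, copy, dirty };

    set_b(&abs, 3, 42);                            /* the bit shuffling under test */

    assert(abs.copy[3].b == 42);                   /* copy ended up in the right state */
    assert(abs.dirty[3]);                          /* and it got flagged for synch */
    assert(fake[3].b == 0);                        /* the "kernel" was never touched */
    return 0;
}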
It simply remains to hook up synch to a real kernel on real hardware.
Now of course you can have all sorts of problems, such as needing to update the
kernel and get back a result from the system immediately. There might not be
time to go all the way back up the call tree, call synch, then go back down again to
process the answer. This design is not a magic bullet which can be blindly applied.
But it's the sort of thing I'm talking about.
And you think I don't "live in the real world"?