names, values, boxes and microchips


Rainer Weikusat

Instead of trying another explanation of the 'it is a name given to a
value' versus 'it is a register' issue, I thought I should rather try
an example.

Introductory remarks: The subroutine whose code is included below
takes two sorted lists of 'DNS [reverse] domain objects' as arguments
(represented as references to arrays[*]) and is supposed to produce an
output list of all objects on the first input list which don't
'conflict' with the items on the filter list. One application this is
put to is to determine which RFC1918 reverse zones still need to be
'terminated' on some DNS server, given a list of reverse domains
configured by a customer which may cover some or all of the RFC1918
address space(s).

The subroutine could be regarded as a software emulation of a
seriously 'high-level' special purpose IC performing the same
operation. In particular, it has a set of 'working registers'
(represented as 'my' variables) whose contents change as the algorithm
proceeds. This is something totally different from 'assigning a
[transient] name to a value' and not at all closely related to
'programming a general purpose IC using its machine language'.

The return value of the net_compare method call is stored using list
assignment because this method returns a list of two values in some
cases: the first ('is it before or behind') is interesting to this
subroutine; the second ('is it immediately adjacent to the other
object') is used by another subroutine.
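A minimal sketch of why the parentheses matter (fake_compare and its return values are made up for illustration; the real net_compare is not shown in this post):

```perl
use strict;
use warnings;

# Hypothetical stand-in for net_compare: returns a position code
# plus an adjacency flag, mimicking the two-value case above.
sub fake_compare { return ('R_BEFORE', 1) }

# List assignment keeps only the first returned value; the
# adjacency flag is silently discarded.
my ($rc) = fake_compare();          # $rc is 'R_BEFORE'

# Plain scalar assignment would instead evaluate the returned
# list in scalar context and keep the *last* value.
my $last = fake_compare();          # $last is 1
```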

[*] Contrary to another, immensely popular myth, using linked
lists instead of arrays, based on using anonymous 2-element
arrays for representing 'cons cells', is actually faster for
algorithms like this, especially if the lists become
large. For this particular case, they're expected to be small
and the operation performed infrequently (whenever someone
changes the configuration in the GUI), hence, the more
convenient approach was chosen.
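The 'cons cell' representation mentioned in the footnote might be sketched like this (the helper name 'cons' is made up; nothing of this appears in the code below):

```perl
use strict;
use warnings;

# Each 'cons cell' is an anonymous 2-element array: [value, next].
sub cons { my ($car, $cdr) = @_; return [$car, $cdr] }

# Build the list (3, 2, 1) by consing onto the empty list (undef).
my $list = cons(3, cons(2, cons(1, undef)));

# Walk the cells, collecting the values.
my @values;
for (my $cell = $list; defined $cell; $cell = $cell->[1]) {
    push @values, $cell->[0];
}
# @values is now (3, 2, 1)
```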

sub filter_against
{
    my ($in, $filter) = @_;
    my ($in_item, $in_pos, $filter_item, $filter_pos);
    my (@out, $step_in, $step_filter, $rc);

    $in_pos = 0;
    $in_item = $in->[0];
    $step_in = sub {
        last LOOP if ++$in_pos == @$in;
        $in_item = $in->[$in_pos];
    };

    $filter_pos = 0;
    $filter_item = $filter->[0];
    $step_filter = sub {
        last LOOP if ++$filter_pos == @$filter;
        $filter_item = $filter->[$filter_pos];
    };

    LOOP: {
        ($rc) = $in_item->net_compare($filter_item);

        p_alloc('in %s, filter %s, rc %u',
                $in_item, $filter_item, $rc);

        given ($rc) {
            when (R_AFTER) {
                push(@out, $in_item);
                &$step_in;
            }

            when ([R_BEFORE, R_SUB]) {
                &$step_filter;
            }

            when ([R_SAME, R_SUPER]) {
                p_alloc('%s: dropping %s (%s)', __func__,
                        $in_item, $filter_item);

                &$step_in;
            }
        }

        redo;
    }

    push(@out, @$in[$in_pos .. $#$in]);
    return \@out;
}

[This code is copyrighted by my employer and cited here for
educational purposes].
 

Rainer Weikusat

Ben Morrow said:
Quoth Rainer Weikusat <[email protected]>:
[...]
sub filter_against
[...]

There are times when it's necessary to write in that sort of state-
machine style, but it's always clearer to avoid it when you can.

use List::MoreUtils qw/part/;

my %rc = (
    R_AFTER  => 0,
    R_SAME   => 1,
    R_SUPER  => 1,
    R_BEFORE => 2,
    R_SUB    => 2,
);

sub filter_against {
    my ($in, $filter) = @_;

    my @out;
    for my $f (@$filter) {
        (my $out, my $drop, $in) = part {
            my ($rc) = $_->net_compare($f);
            $rc{$rc} // die "bad return from net_compare";
        } @$in;

        push @out, @$out;
        p_alloc("%s: dropping %s (%s)", __func__, $_, $f)
            for @$drop;
    }

    return \@out;
}

This does more compares than yours, but given small-lists-infrequently-
compared I doubt that matters.

Judging from a cursory look at the documentation, it also does a lot
of other stuff it shouldn't be doing and/or doesn't need to do,
including that I have serious doubts if this works at all. But I'm not
really in the mood to figure out if this hypercomplicated attempt at
reverse dicksizing (Look! Mine is shorter than yours!) is semantically
equivalent to the fairly straight-forward list merge based algorithm
I'm using. In any case, this is the kind of code I'd delete without
even trying to fix it as soon as I'd have to look at it for the first
time.
 

Rainer Weikusat

Rainer Weikusat said:
Ben Morrow said:
Quoth Rainer Weikusat <[email protected]>:
[...]
sub filter_against
[...]

There are times when it's necessary to write in that sort of state-
machine style, but it's always clearer to avoid it when you can.

use List::MoreUtils qw/part/;

my %rc = (
    R_AFTER  => 0,
    R_SAME   => 1,
    R_SUPER  => 1,
    R_BEFORE => 2,
    R_SUB    => 2,
);

sub filter_against {
    my ($in, $filter) = @_;

    my @out;
    for my $f (@$filter) {
        (my $out, my $drop, $in) = part {
            my ($rc) = $_->net_compare($f);
            $rc{$rc} // die "bad return from net_compare";
        } @$in;

        push @out, @$out;
        p_alloc("%s: dropping %s (%s)", __func__, $_, $f)
            for @$drop;
    }

    return \@out;
}

This does more compares than yours, but given small-lists-infrequently-
compared I doubt that matters.
[...]

But I'm not really in the mood to figure out if this
hypercomplicated attempt at reverse dicksizing
[...]

is semantically equivalent to the fairly straight-forward list merge
based algorithm I'm using.

I've nevertheless been curious enough that I spent some time thinking
about this while walking back to my flat after a trip to the
supermarket: Provided I understand this correctly, this is basically
the bubblesort algorithm: It compares each element of the filter list
to all elements on the input list in turn, moving the input elements
which are in front of the current filter item to a temporary output
list and the others onto the input list for the next iteration. It
then does a 2nd pass through the temporary output list in order to
move the elements on there to the real output list and a second pass
through the 'drop' list in order to print the tracing
information. These two fixup steps are necessary because the 'part'
routine is really totally unsuitable for this particular task: it
always creates a bunch of new anonymous arrays to store its output.

Possibly, there's an even worse way to perform what is essentially a
merge operation of two sorted lists but if the idea was to produce the
most inefficient implementation of this, the code above is certainly a
promising contender for the 'baddest of the worst' title.

"It must 're-use' code someone else already wrote" is not a sensible,
overriding concern for creating software

Joke I invented a while ago: Imagine software companies would be
constructing lorries, how would these look like?

Version 1: Like a regular lorry, except that the wheels aren't round
but elliptical, because the ellipsoids were left over from a previous
project.

Version 1.5: Like a lorry with elliptical wheels but it has an
additional jet engine on the back to achieve Superior(!) speed when
compared with the lorries built by competitors using round wheels.

Version 2: Comes with an advanced AI auxiliary steering system
supposed to enable smooth movement despite the combined effects of the
jet engine and the elliptical wheels.

And so forth. The only thing which is never going to happen is that
someone goes back to step one and redesigns the autobody to
accommodate round wheels.
 
 
R

Rainer Weikusat

Ben Morrow said:
There's really no need to be rude.

Indeed. In particular, there was no need to ignore the reason why I
posted this example and produce this absolutely atrocious
demonstration that some people are really too clever for their own
good instead. That was not only rude but - as a sort of reverse
categorical imperative - it could be regarded as morally repellent as
well.

[...]
Who cares how efficient it is? You said yourself this is run
infrequently over short lists; it's far more important that the
algorithm be simple and the code be comprehensible than that it run as
fast as possible.

There's no point in deliberately using algorithms which are badly
suited for a certain task, especially when said 'badly suited'
algorithms rely on outright bizarre 'homegrown' abuses of
pre-existing functions instead of simple and well-known 'standard
operations' like merging two sorted lists. That's thoroughly
'undergraduate' stuff and one can expect people to be sufficiently
familiar with it that they refrain from totally outlandish flights
of fancy of this kind.

In any case, if you feel like disputing the results of some sixty
years (probably) of research into elementary sorting algorithms based
on NIH and "But I don't care for that!", please write a
book. Suggestion for a suitable cover:

http://cheezburger.com/4354656256
 

Rainer Weikusat

Rainer Weikusat said:
Rainer Weikusat said:
Ben Morrow <[email protected]> writes:
[...]
use List::MoreUtils qw/part/;

my %rc = (
    R_AFTER  => 0,
    R_SAME   => 1,
    R_SUPER  => 1,
    R_BEFORE => 2,
    R_SUB    => 2,
);

sub filter_against {
    my ($in, $filter) = @_;

    my @out;
    for my $f (@$filter) {
        (my $out, my $drop, $in) = part {
            my ($rc) = $_->net_compare($f);
            $rc{$rc} // die "bad return from net_compare";
        } @$in;

        push @out, @$out;
        p_alloc("%s: dropping %s (%s)", __func__, $_, $f)
            for @$drop;
    }

    return \@out;
}

This does more compares than yours, but given small-lists-infrequently-
compared I doubt that matters.
[...]

But I'm not really in the mood to figure out if this
hypercomplicated attempt at reverse dicksizing
[...]

is semantically equivalent to the fairly straight-forward list merge
based algorithm I'm using.

I've nevertheless been curious enough that I spent some time thinking
about this while walking back to my flat after a trip to the
supermarket: Provided I understand this correctly, this is basically
the bubblesort algorithm:

I've used the catchup function of my newsreader to throw everything in
this group away since my less-than-reasoned two postings of
yesterday. While I've grown somewhat more accustomed to ordinary human
meanness, the amount of totally irrational[*] vitriolic rage some
people are capable of when being confronted with concepts alien to
them (such as using 'lexically scoped subroutines' with meaningful
names instead of duplicated code) is still a little bit too much for
me. But there are two points I'd like to clear up a little.

1. 'Bubblesort': This is not, in fact, the bubblesort algorithm but an
insertion sort variant, somewhat obscured by the need to work around
behaviour built into the 'part' subroutine which isn't really useful
for this case: It works by searching for the place where the item from
the filter list would have to be inserted into the other list. A
somewhat less contorted implementation could look like this
(uncompiled/untested) example:

my ($f, $i, $next, @out);

for $f (@$filter) {
    $next = [];

    for $i (@$in) {
        given ($i->net_compare($f)) {
            when (R_AFTER) {
                push(@out, $i);
            }

            when ([R_BEFORE, R_SUB]) {
                push(@$next, $i);
            }
        }
    }

    $in = $next;
}

This is still rather bizarre, given that both input lists are sorted
and free of items which are a subset of other items on the list,
because there's no reason to continue comparing $f with elements on
@$in after its place in the list has been found, ie, after a
comparison either returned '$f is in front of this' (R_BEFORE) or '$f
is a subset of this' (R_SUB) since the result of all remaining
invocations will be R_BEFORE. That's a direct result of the
unfortunate choice to use a general purpose 'partition a list' routine
for the inner loop: That's not quite what is really needed for an
insertion sort but 'clarity of the implementation be damned,
('clarity' in the sense that all operations which are part of the
algorithm are actually useful steps wrt accomplishing the desired
end) we can Cleverly(!) save two lines of code here' ...

This is really, seriously bad for writing something which can be
maintained by someone other than the original author because nobody
except him knows which parts of the implemented algorithm are
functional and which are accidental --- this will need to be
rediscovered by everyone who is tasked with making sense of this code
(which includes the original author after a sufficient amount of time
has passed).

2. __func__: That's a feature of C which Perl unfortunately lacks, namely, a
'special expression' containing the name of the current
subroutine. This is useful for diagnostic messages because it enables
someone reading through the debug output to identify the code which
produces a particular 'diagnostics statement' more easily than by
grepping for the string. The equivalent Perl expression is

(caller(1))[3]

when invoked as an actual subroutine or

(caller(0))[3]

when the expression is 'inlined' instead. __func__ is defined as a
subroutine,

sub __func__()
{
    return (caller(1))[3];
}

to make the perl compiler happy when being run for syntax/strictness
checking and warnings alone. Prior to installation (into a debian/
directory, actually), all code files are preprocessed via m4 -P using
an 'init file' containing

m4_define(`__func__', `(caller(0))[3]')

m4_changequote(`', `')

because this code quickly prints tens of thousands of messages for
'large' installations (several thousand users) and they're necessary
for identifying and fixing errors in it.
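Putting the caller-based definition to use, as a self-contained sketch (do_work is a made-up example subroutine):

```perl
use strict;
use warnings;

# caller(1) reports the frame that called __func__, i.e. the
# subroutine a diagnostic message was issued from.
sub __func__ { return (caller(1))[3] }

sub do_work {
    return __func__;    # fully qualified name of this sub
}

my $name = do_work();   # 'main::do_work'
```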

[*] Eg, referring to use of an O(n) algorithm for merging two sorted
lists as 'pointless microoptimization which would be better
accomplished by using C instead of Perl' --- that's not 'an
optimization' aka 'attempt to cover the well after the
baby drowned' at all but the obvious 'textbook' choice.
 

gamo

2. __func__: That's a feature of C Perl unfortunately lacks, namely, a
'special expression' containing the name of the current
subroutine. This is useful for diagnostic messages because it enables
------------------

There is a new function that could be what you want:
perldoc -f __SUB__

__SUB__ A special token that returns a reference to the current
        subroutine, or "undef" outside of a subroutine.

        The behaviour of "__SUB__" within a regex code block (such
        as "/(?{...})/") is subject to change.

        This token is only available under "use v5.16" or the
        "current_sub" feature. See feature.

Best regards,
 

Rainer Weikusat

Ben Morrow said:
Quoth Rainer Weikusat <[email protected]>:
[...]
my ($f, $i, $next, @out);

for $f (@$filter) {
$next = [];

for $i (@$in) {
given ($i->net_compare($f)) {
when (R_AFTER) {

(Smartmatch and given/when are deprecated in 5.18, so it's probably not
a good idea to use them in new code.)

Not really. They're re-classified as 'experimental', mainly because
the 'smart match' operation is considered to be ill-defined/
confusing, something I wholeheartedly agree with: The problem with all
'do what I mean'-algorithms is that they're "Damn Warren's Infernal
Machine" for people other than the 'I' who wrote down what he meant.

[...]
I agree, it's not ideal. However, even your implementation above still
obscures the most important assumption of the algorithm, which is that
the R_AFTER entries all come before the R_SAME entries which all come
before the R_BEFORE entries. You, I am sure, regard this as obvious,
since you wrote the code, but it's not at all clear to someone else
reading it.

How about something more like this (untested)?

There's something in here that you seemingly failed to get so far,
namely, the operation performed by the algorithm I posted is
(conceptually) a 'simple' single-pass merge operation of two sorted
lists. That's the 'merge' part of the mergesort algorithm invented
(according to Wikipedia) by John von Neuman in 1945. Abstractly
described, it works like this:

while neither input list is empty
    compare the two items at the front
    move the smaller one to the output list
elihw
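The abstract loop above, written out as a plain Perl merge of two sorted arrays (ordinary numeric comparison standing in for the five-way net_compare):

```perl
use strict;
use warnings;

# Single-pass merge of two sorted lists, as described above.
# Plain numeric comparison stands in for the five-way net_compare.
sub merge_sorted {
    my ($left, $right) = @_;
    my @out;
    my ($i, $j) = (0, 0);

    while ($i < @$left && $j < @$right) {
        if ($left->[$i] <= $right->[$j]) { push @out, $left->[$i++] }
        else                             { push @out, $right->[$j++] }
    }

    # One list is exhausted; the rest of the other is already in
    # order and can be appended wholesale.
    push @out, @{$left}[$i .. $#$left], @{$right}[$j .. $#$right];
    return \@out;
}

my $merged = merge_sorted([1, 3, 5], [2, 3, 6, 9]);
# $merged is [1, 2, 3, 3, 5, 6, 9]
```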

This is slightly more complicated for the actual case because the
items I'm dealing with are IPv4 (sub-)networks, hence, the comparison
routine has five possible results (2nd operand comes after 1st, before
1st, is identical to 1st, is a subset of 1st or is a superset of 1st)
and 'duplicates' (input item is identical to or a subset of filter
item) are dropped instead of becoming part of the output. It also
deviates from a 'plain merge' because the items on the filter list
aren't really moved to the output list, just removed from filter as
soon as they become 'the smallest item'.

This is a simple, well-known algorithm which is part of an elementary
sorting algorithm first published 68 years ago and I absolutely don't
think inventing a more complicated 'new one', especially if that is
also very likely going to be 'less efficient' in the sense that it
does more comparisons and/or move operations, is appropriate.

[...]
My aim is not to save lines of code, it's to make the algorithm as clear
as possible. My algorithm may not have been the best choice, but it took
me quite a bit of thought to work out what yours was in the first
place.

That should have been abundantly clear from the leading statement
which talked about 'processing two sorted input lists in order to
produce an output list' ('should have been' here meaning I expected
that the first thing anybody reading this would think of would be
"aha, a mergesort").
Loops which iterate over two arrays at once are not easy to follow,
especially when the iteration and termination steps are hidden inside
callbacks.

These are not 'callbacks' as they're not passed to other code which
'calls back into the calling code'; they're lexically scoped
subroutines performing a part of the algorithm whose details don't
matter for 'the main loop', namely, "move to the next item in this
list if there is one, otherwise, terminate the loop" which exist so
that the same (or an 'almost same') code sequence isn't contained in
there in three different places. Instead, it is 'called upon'
using a sensible name, ie, 'step_in' for 'step the in list' or
'step_filter' for 'step the filter list'. The names may not be all
that sensible but if so, that's because of my lacking English, not
because of deficiencies of the idea itself.

[...]
[*] Eg, referring to use of an O(n) algorithm for merging two sorted
lists as 'pointless microoptimization which would be better
accomplished by using C instead of Perl' --- that's not 'an
optimization' aka 'attempt to cover the well after the
baby drowned' at all but the obvious 'textbook' choice.

The O() classifications refer to *limiting* complexity, so they are most
important in the limit; that is, when n is large. In this case n is
small.

For the 'common case', this will still make 17 or 34 pointless
comparisons and as many copy operations. That's in itself no big deal
but they don't buy anything, considering that the merge algorithm is
both old and 'popular' despite dealing with two lists in one loop.
 

Charles DeRykus

Quoth Rainer Weikusat:
Ben Morrow <[email protected]> writes:
...
Let's look at the main loop from your original post:

LOOP: {
    ($rc) = $in_item->net_compare($filter_item);

    p_alloc('in %s, filter %s, rc %u',
            $in_item, $filter_item, $rc);

    given ($rc) {
        when (R_AFTER) {
            push(@out, $in_item);
            &$step_in;
        }

        when ([R_BEFORE, R_SUB]) {
            &$step_filter;
        }

        when ([R_SAME, R_SUPER]) {
            p_alloc('%s: dropping %s (%s)', __func__,
                    $in_item, $filter_item);

            &$step_in;
        }
    }

    redo;
}

When does it terminate? How does it make progress? It's not even
immediately clear it's a *loop*: while (1) or for (;;) are the
conventional idioms.
...

Even a more expressive 'do' might help a dizzy brain:


do {
    ....
} until $in_pos == @$in or $filter_pos == @$filter;
 

Rainer Weikusat

Ben Morrow said:
Quoth Rainer Weikusat:
Ben Morrow <[email protected]> writes:
[...]
This is a simple, well-known algorithm which is part of an elementary
sorting algorithm first published 68 years ago and I absolutely don't
think inventing a more complicated 'new one', especially if that is
also very likely going to be 'less efficient' in the sense that it
does more comparisons and/or move operations, is appropriate.

I have never questioned the appropriateness of the algorithm. My latest
example uses the *same* algorithm, it is just (IMHO) a great deal
clearer about what is happening.

Including the 'shift_while' I counted no fewer than six loops or 'loopy
constructs' in there. I have - plainly - no idea what this code does
(or would do) and I'm not really interested in determining that,
either: It is exceedingly unlikely that it is an improvement on the von
Neumann algorithm and even if it was, all the complications in order
to avoid something as simple as an addition, a comparison and an
assignment would hardly be worth the effort.

[...]
These are not 'callbacks' as they're not passed to other code which
'calls back into the calling code'; they're lexically scoped
subroutines performing a part of the algorithm whose details don't
matter for 'the main loop', namely, "move to the next item in this
list if there is one, otherwise, terminate the loop" which exist so
that the same (or an 'almost same') code sequence isn't contained in
there in three different places. Instead, it is 'called upon'
using a sensible name, ie, 'step_in' for 'step the in list' or
'step_filter' for 'step the filter list'. The names may not be all
that sensible but if so, that's because of my lacking English, not
because of deficiencies of the idea itself.

Let's look at the main loop from your original post:

LOOP: {
    ($rc) = $in_item->net_compare($filter_item);

    p_alloc('in %s, filter %s, rc %u',
            $in_item, $filter_item, $rc);

    given ($rc) {
        when (R_AFTER) {
            push(@out, $in_item);
            &$step_in;
        }

        when ([R_BEFORE, R_SUB]) {
            &$step_filter;
        }

        when ([R_SAME, R_SUPER]) {
            p_alloc('%s: dropping %s (%s)', __func__,
                    $in_item, $filter_item);

            &$step_in;
        }
    }

    redo;
}

When does it terminate? How does it make progress?

In the appropriate way, as indicated by the auxiliary subroutines. The
details of 'the appropriate way' don't matter for this loop which is
closer to the abstract algorithm than it could have been when
repeating the mechanics three times.
It's not even immediately clear it's a *loop*:

With a block label of 'LOOP:', that shouldn't be too difficult to guess.
while (1) or for (;;) are the conventional idioms.

I've had people complaining about that in the past as well: "But
that's not the way how I would have done it!" is a powerful impediment
to understanding: The guy who always writes for (;;) will freak out
when seeing 'while (1)' simply because it looks different. Actually,
the guy who always writes 'for (...)' will freak out upon encountering
a loop using while for the first time and will be more strongly inclined
to 'fix' the code by bringing it in line with his style of writing than
to come to the conclusion that his knowledge of $language is less
complete than it ought to be[*].

[*] I had an excursion into a Lisp implementation written by someone
to support his 'web application development' business a while ago
(incredible as it may sound) and this code was written in a style
completely alien to me. Nevertheless, after understanding what it was doing
for which reasons, I came to the conclusion that - while I would
never write anything in this way - there were no objective reasons for
doing so, just a set of radically different but equally sensible
conventions colliding with some of my (unjustified) prejudices.
Probably it would have helped to use an ordinary sub with arguments
rather than trying to be clever with closures, and a return value rather
than an implicit last. With a call like

step_array $in, \$in_pos, \$in_item
or last;

These aren't closures, either: They exist inside a certain, lexical
environment but don't 'capture' any part of it beyond its 'natural
lifetime'. The fact that the lists this algorithm works with are
represented as arrays are as irrelevant to it as the names of the
variables used to step through them: This is the 'abstraction' you've
repeatedly judged me of being incapable of understanding. It means
leaving out irrelevant details in favor of concentrating on a
higher-level description of what is going on: &$step_in instead of

last LOOP if ++$in_pos == @$in;
$in_item = $in->[$in_pos]

Perl 5.18 adds explicit support for lexically-scoped subroutines
without having to resort to references to anonymous subroutines so
constructs of this kind should become more common in future (IMHO,
they're rarely useful but this case happens to be one of the
exceptions).
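A sketch of what such a stepper could look like as a true lexical subroutine under 5.18's experimental feature (variable names chosen to mirror the original; returning a flag instead of doing 'last' from inside the sub keeps loop control in the loop itself):

```perl
use strict;
use warnings;
use feature 'lexical_subs';
no warnings 'experimental::lexical_subs';

my @in = qw(a b c);
my ($in_pos, $in_item) = (0, $in[0]);

# A true lexically scoped subroutine (perl >= 5.18): no code
# reference and no &$step_in dereference syntax needed.
my sub step_in {
    return 0 if ++$in_pos == @in;   # list exhausted
    $in_item = $in[$in_pos];
    return 1;
}

# The loop itself decides when to stop, based on the flag.
my @seen;
do { push @seen, $in_item } while step_in();
# @seen is now ('a', 'b', 'c')
```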

[...]
Even then, this is so close to just being 'shift' that it
seems like unnecessary reinvention;

'shift' destroys the arrays it works with. I have to keep them intact.
in fact, I think it's equivalent to

($in_pos, $in_item) = each @$in
or last;

For the perl I'm mostly interested in (5.10.1), this complains loudly
about the fact that @$in is not a hash. Also, the index variables are
solely used to step through the arrays in a non-destructive way and
since they do make the implementation more complicated, being able to
get rid of them would be good.
 

Rainer Weikusat

Charles DeRykus said:
...
Let's look at the main loop from your original post:

LOOP: {
[...]
    given ($rc) {
        when (R_AFTER) {
            push(@out, $in_item);
            &$step_in;
        }
[...]
    redo;
}

When does it terminate? How does it make progress? It's not even
immediately clear it's a *loop*: while (1) or for (;;) are the
conventional idioms.
...

Even a more expressive 'do' might help a dizzy brain:


do {
    ....
} until $in_pos == @$in or $filter_pos == @$filter;

That's what I originally did. But I didn't like the fact that I had to
assign something to the 'top of the list' variable after the
termination condition for the loop became true only to postpone the
check for that to the 'more conventional' location, especially
considering that I also write code in languages (PL/pgSQL) where the
most basic looping construct is really just

loop
    -- do something
end loop

Loops with multiple termination conditions, the truth of which only
becomes known as the processing of the 'loop body' proceeds, are not
that uncommon, and I thought it might be clearer to omit the 'mock
loop condition' altogether since 'blocks are just single-iteration
loops' (but they don't have to be single-iteration). I haven't yet
come to a final opinion on this.
 

Rainer Weikusat


[...]
Instead, it is 'called upon' using a sensible name, ie, 'step_in'
for 'step the in list' or 'step_filter' for 'step the filter
list'. The names may not be all that sensible but if so, that's
because of my lacking English, not because of deficiencies of the
idea itself.

Loosely related anecdote (I feel like telling it because this particular
phenomenon should really be documented for its example value): Some
years ago, I used the time between Christmas and New Year's Eve for
investigating a network driver which occasionally caused serious
performance issues customers had been complaining about in an
on-and-off way for one or two years. The code was a pretty typical
example of a 'vendor written "board support" device driver' (judging
from the comments, it originally targeted SVR4, had been sort-of
ported to every other operating system under the sun, and was supposed
to support countless generations of wildly incompatible hardware),
that is, a total mess even the people who worked on it didn't
understand anymore. One of the rather harmless 'strange sights' I
found in there was that the RX DMA ring[*] was indexed using a
variable named index while the TX ring used an outdex.

[*] Array treated as ring buffer with the help of index = (index + 1)
% ring_size stepping code used to inform the DMA (direct memory
access) engine on the NIC of the locations of memory buffers set aside
for storing incoming ethernet frames or picking up outgoing ones. The
R in RX stands for reception, the T in TX for transmission. The
acronyms are conventionally used in this way.
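The stepping idiom from the footnote, sketched in Perl (buffer size and frame names are made up for illustration):

```perl
use strict;
use warnings;

# Ring-buffer stepping as described in the footnote: a fixed
# array reused cyclically via modular index arithmetic.
my $ring_size = 4;
my @ring;
my $index = 0;

for my $frame (qw(f1 f2 f3 f4 f5 f6)) {
    $ring[$index] = $frame;              # overwrite the oldest slot
    $index = ($index + 1) % $ring_size;  # wrap around at the end
}
# After six insertions into four slots, f5 and f6 have
# overwritten f1 and f2: @ring is ('f5', 'f6', 'f3', 'f4')
```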
 
