BCD List to HEX List

John Machin

Grant said:
Or perhaps it may not.

Methinks it was all just a rather good troll.

--

Now we have a few more questions, i.e. apart from what CPU is in
Philippe's device:
1. WHO was Philippe replying to -- Simon or me?
2. WHAT was cute?
3. Grant thinks WHAT might have been a rather good troll by WHOM?

Ah well never mind ... I think I'll just report the whole thread to
thedailywtf and move on :)
 
bryanjugglercryptographer

John said:
Yeah yeah yeah
i.e. y = y * 10 + n
he's been shown that already.

Problem is that the OP needs an 8-decimal-digit (32-bit) answer, but
steadfastly maintains that he doesn't "have access to" long (32-bit)
arithmetic in his C compiler!!!

And he doesn't need one. He might need the algorithms for shift and
add.
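
For reference, here is the loop under discussion in the form it takes
when a wide enough integer type is available: a minimal sketch assuming
a 32-bit unsigned long, using the thread's 12345678 example as input.

#include <stdio.h>

int main(void)
{
    /* One decimal digit per byte, most significant first. */
    unsigned char digits[] = {1, 2, 3, 4, 5, 6, 7, 8};
    unsigned long y = 0;   /* 8 decimal digits need only 27 bits */
    size_t i;

    for (i = 0; i < sizeof digits; i++)
        y = y * 10 + digits[i];

    printf("%lX\n", y);    /* prints BC614E */
    return 0;
}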
 
John Machin

And he doesn't need one. He might need the algorithms for shift and
add.

I hate to impose this enormous burden on you but you may wish to read
the whole thread. He was given those "algorithms". He then upped the
ante to 24 decimal digits and moved the goalposts to some chip running
a cut-down version of Java ...

TTFN
John
 
bryanjugglercryptographer

John said:
I hate to impose this enormous burden on you but you may wish to read
the whole thread. He was given those "algorithms".

Quite some long-winded code and arguing about platforms in the rest
of the thread. My version assumes three subroutines: extracting
nibbles, shifting, and adding. Those are pretty simple, so I asked
if he needed them rather than presenting them. Assuming we have
them, the algorithm is three lines long. Don't know why people
have to make such a big deal of a BCD converter.
He then upped the
ante to 24 decimal digits and moved the goalposts to some chip running
a cut-down version of Java ...

He took a while to state the problem, but was clear from the start
that he had lists of digits rather than an integer datatype.
 
John Machin

My version assumes three subroutines: extracting
nibbles, shifting, and adding. Those are pretty simple, so I asked
if he needed them rather than presenting them.
Assuming we have
them, the algorithm is three lines long.

Perhaps you could enlighten us by publishing (a) the spec for each of
the get_nibble(s), shift, and add subroutines (b) the three-line
algorithm (c) what the algorithm is intended to achieve ...
He took a while to state the problem, but was clear from the start
that he had lists of digits rather than an integer datatype.

Yes, input was a list [prototyping a byte array] of decimal digits. The
OUTPUT was also a list of something. A few messages later, it became
clear that the output desired was a list of hexadecimal digits. Until
he revealed that the input was up to 24 decimal digits, I was pursuing
the notion that a solution involving converting decimal to binary (in a
32-bit long) then to hexadecimal was the way to go.

What is apparently needed is an algorithm for converting a "large"
number from a representation of one base-10 digit per storage unit to
one of a base-16 digit per storage unit, when the size of the number
exceeds the size (8, 16, 32, etc bits) of the "registers" available. Is
that what you have?

Cheers,
John
 
bryanjugglercryptographer

John said:
Perhaps you could enlighten us by publishing (a) the spec for each of
the get_nibble(s), shift, and add subroutines (b) the three-line
algorithm (c) what the algorithm is intended to achieve ...

"For each nibble n of x" means to take each 4 bit piece of the BCD
integer as a value from zero to sixteen (though only 0 through 9
will appear), from most significant to least significant. "Adding"
integers and "shifting" binary integers is well-defined
terminology. I already posted the three-line algorithm. It
appeared immediately under the phrase "To turn BCD x to binary
integer y," and that is what it is intended to achieve.
He took a while to state the problem, but was clear from the start
that he had lists of digits rather than an integer datatype.

Yes, input was a list [prototyping a byte array] of decimal digits. The
OUTPUT was also a list of something. A few messages later, it became
clear that the output desired was a list of hexadecimal digits. Until
he revealed that the input was up to 24 decimal digits, I was pursuing
the notion that a solution involving converting decimal to binary (in a
32-bit long) then to hexadecimal was the way to go.

What is apparently needed is an algorithm for converting a "large"
number from a representation of one base-10 digit per storage unit to
one of a base-16 digit per storage unit, when the size of the number
exceeds the size (8, 16, 32, etc bits) of the "registers" available.

I read his "Yes I realized that after writing it." response to
Dennis Lee Bieber to mean Bieber was correct and what he wanted
was to go from BCD to a normal binary integer, which is base 256.

The point of posting the simple high-level version of the
algorithm was to show a general form that works regardless of
particular languages, register sizes and storage considerations.
Those matters can affect the details of how one shifts a binary
integer left one bit, but shifting is not complicated in any
plausible case.
Is that what you have?

I'm sorry my post so confused you, and possibly offended you.
 
John Machin

"For each nibble n of x" means to take each 4 bit piece of the BCD
integer as a value from zero to sixteen (though only 0 through 9
will appear), from most significant to least significant.

The OP's input, unvaryingly through the whole thread, even surviving to
his Javacard implementation of add() etc, is a list/array of decimal
digits (0 <= value <= 9). Extracting a nibble is so simple that
mentioning a "subroutine" might make the gentle reader wonder whether
there was something deeper that they had missed.
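
Viz., even for genuinely packed BCD (which is not what the OP has), the
whole "subroutine" would be something like the following one-liner
(hypothetical names, for illustration only):

/* Hypothetical sketch: i-th decimal digit (0 = most significant) of a
   packed-BCD byte array, two digits per byte. */
unsigned char get_nibble(const unsigned char *bcd, unsigned int i)
{
    return (i & 1) ? (unsigned char)(bcd[i / 2] & 0x0F)
                   : (unsigned char)(bcd[i / 2] >> 4);
}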
"Adding"
integers and "shifting" binary integers is well-defined
terminology.

Yes, but it's the *representation* of those integers that's been the
problem throughout.
I already posted the three-line algorithm. It
appeared immediately under the phrase "To turn BCD x to binary
integer y," and that is what it is intended to achieve.

Oh, that "algorithm". The good ol' num = num * base + digit is an
"algorithm"???

The problem with that is that the OP has always maintained that he has
no facility for handling a binary integer ("num") longer than 16 bits
-- no 32-bit long, no bignum package that didn't need "long", ...
He took a while to state the problem, but was clear from the start
that he had lists of digits rather than an integer datatype.

Yes, input was a list [prototyping a byte array] of decimal digits. The
OUTPUT was also a list of something. A few messages later, it became
clear that the output desired was a list of hexadecimal digits. Until
he revealed that the input was up to 24 decimal digits, I was pursuing
the notion that a solution involving converting decimal to binary (in a
32-bit long) then to hexadecimal was the way to go.

What is apparently needed is an algorithm for converting a "large"
number from a representation of one base-10 digit per storage unit to
one of a base-16 digit per storage unit, when the size of the number
exceeds the size (8, 16, 32, etc bits) of the "registers" available.

I read his "Yes I realized that after writing it." response to
Dennis Lee Bieber to mean Bieber was correct and what he wanted
was to go from BCD to a normal binary integer, which is base 256.

Where I come from, a "normal binary integer" is base 2. It can be
broken up into chunks of any size greater than 1 bit, but practically
according to the wordsize of the CPU: 8, 16, 32, 64, ... bits. Since
when is base 256 "normal" and in what sense of normal?

The OP maintained the line that he has no facility for handling a
base-256 number longer than 2 base-256 digits.

The dialogue between Dennis and the OP wasn't the epitome of clarity:

[OP]
My apologies, I clearly made a mistake with my calculator, yes the
resulting array I would need is [0xb,0xc,0x6,0x1,0x4,0xe]
[Dennis]
Take note that this[**1**] is NOT a BCD form for "12345678". BCD (typically
packed) uses four bits per decimal digit. That would make "12345678" =>
0x12, 0x34, 0x56, 0x78 (ignoring matters of big/little end).

The binary representation of 12345678, in bytes, is 0xBC, 0x61, 0x4E

0xb, 0xc... is really 0x0B, 0x0C... 8 bits per byte, with the most
significant nibble set to 0000.

Compare:
BCD     00010010 00110100 01010110 01111000
binary  10111100 01100001 01001110
your    00001011 00001100 00000110 00000001 00000100 00001110

[OP]
Yes I realized that [**2**] after writing it.

[**1**] Dennis's "this" refers to the OP's *output*, which is
patently not what the OP was calling BCD.

[**2**] The referent of the OP's "that" can't be determined
unambiguously, IMHO.

The point of posting the simple high-level version of the
algorithm was to show a general form that works regardless of
particular languages, register sizes and storage considerations.
Those matters can affect the details of how one shifts a binary
integer left one bit, but shifting is not complicated in any
plausible case.


I'm sorry my post so confused you, and possibly offended you.

It didn't confuse me. I was merely wondering whether you did in fact
have a method of converting from base b1 (e.g. 10) to base b2 (e.g. 16)
without assembling the number in some much larger base b3 (e.g. 256).

Offended? Experts have tried repeatedly, and not succeeded :)

Cheers,
John
 
H J van Rooyen

| (e-mail address removed) wrote:
|
| >My version assumes three subroutines: extracting
| > nibbles, shifting, and adding. Those are pretty simple, so I asked
| > if he needed them rather than presenting them.
| > Assuming we have
| > them, the algorithm is three lines long.
|
| Perhaps you could enlighten us by publishing (a) the spec for each of
| the get_nibble(s), shift, and add subroutines (b) the three-line
| algorithm (c) what the algorithm is intended to achieve ...
|
| >
| > He took a while to state the problem, but was clear from the start
| > that he had lists of digits rather than an integer datatype.
|
| Yes, input was a list [prototyping a byte array] of decimal digits. The
| OUTPUT was also a list of something. A few messages later, it became
| clear that the output desired was a list of hexadecimal digits. Until
| he revealed that the input was up to 24 decimal digits, I was pursuing
| the notion that a solution involving converting decimal to binary (in a
| 32-bit long) then to hexadecimal was the way to go.
|
| What is apparently needed is an algorithm for converting a "large"
| number from a representation of one base-10 digit per storage unit to
| one of a base-16 digit per storage unit, when the size of the number
| exceeds the size (8, 16, 32, etc bits) of the "registers" available. Is
| that what you have?
|
| Cheers,
| John

I actually read most of this thread as it happened and could not really figure
out what the OP was on about.

If the above is a true statement of the problem, then it's more difficult to do
in a high-level language when the results exceed the native size that the
compiler or interpreter writers thought was a reasonable number of bits.

- ten to the 24 needs of the order of 80 binary bits ...

So you need a (say) twelve-byte result field for the binary... (that's three 32-bit
values concatenated)
you clear the result field out to zero.
Then you feed the decimal digits, from the most significant side, into a
routine that multiplies the result by ten and then adds the digit. (Yes, you have
to write this twelve-byte ASCII/binary thing yourself.)
When you have done this for all the digits, you have a binary number, and
getting hex from binary a nibble at a time is easy...
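
A sketch of that multiply-by-ten-and-add routine over such a twelve-byte
big-endian field might look like this (names made up for illustration,
untested):

#define RESULT_BYTES 12

/* result = result * 10 + digit, big-endian.  The intermediate value
   fits comfortably in 16 bits (at most 255*10 + 9). Feed the decimal
   digits most significant first. */
void mul10_add(unsigned char *result, unsigned char digit)
{
    unsigned int carry = digit;
    int i;
    for (i = RESULT_BYTES - 1; i >= 0; i--) {
        unsigned int t = result[i] * 10u + carry;
        result[i] = (unsigned char)(t & 0xFF);
        carry = t >> 8;   /* at most 9, so it never overflows a byte */
    }
}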

Well, it's easy in assembler, even on a crippled little 8-bit processor, anyway...
In Python I would take a hard look at what I could do with the decimal module -
doing the reverse of the above, but dividing by 16 repeatedly and using the
remainder or the fraction to give the hex values in LSB-to-MSB order, and doing
a lookup (probably using a dict) to get the hex digits...
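
Expressed in C rather than Python, for continuity with the rest of the
thread, that repeated division by 16 might look like the following sketch
(all names hypothetical, untested). It never forms the number in any base
other than 10 and 16:

static int all_zero(const unsigned char *d, int n)
{
    int i;
    for (i = 0; i < n; i++)
        if (d[i] != 0)
            return 0;
    return 1;
}

static int div16(unsigned char *dec, int n)
/* In place, divide the base-10 digit array (MSD first) by 16;
   return the remainder, 0..15. */
{
    int i, r = 0;
    for (i = 0; i < n; i++) {
        int cur = r * 10 + dec[i];   /* at most 15*10 + 9 = 159 */
        dec[i] = (unsigned char)(cur / 16);
        r = cur % 16;
    }
    return r;
}

static int dec_to_hex(unsigned char *dec, int ndec, unsigned char *hex)
/* Destroys dec[]; fills hex[] MSD first; returns the digit count.
   24 decimal digits yield at most 20 hex digits. */
{
    unsigned char tmp[24];
    int count = 0, i;
    while (!all_zero(dec, ndec))
        tmp[count++] = (unsigned char)div16(dec, ndec);
    if (count == 0)
        tmp[count++] = 0;
    for (i = 0; i < count; i++)   /* reverse LSD-first to MSD-first */
        hex[i] = tmp[count - 1 - i];
    return count;
}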

just my $0.02...

- Hendrik
 
bryanjugglercryptographer

John said:
The OP's input, unvaryingly through the whole thread, even surviving to
his Javacard implementation of add() etc, is a list/array of decimal
digits (0 <= value <= 9). Extracting a nibble is so simple that
mentioning a "subroutine" might make the gentle reader wonder whether
there was something deeper that they had missed.

Yes, it's simple; that was the point. The most complex routine I
assumed is integer addition, and it's not really hard. I'll
present an example below.
Yes, but it's the *representation* of those integers that's been the
problem throughout.

Right. To solve that problem, I give the high-level algorithm and
deal with the representation in the shift and add procedures.
Oh, that "algorithm". The good ol' num = num * base + digit is an
"algorithm"???

You lost me. The algorithm I presented didn't use a multiply
operator. It could have, and of course it would still be an
algorithm.
The problem with that is that the OP has always maintained that he has
no facility for handling a binary integer ("num") longer than 16 bits
-- no 32-bit long, no bignum package that didn't need "long", ...

No problem. Here's an example of an add procedure he might use in
C. It adds modestly-large integers, as base-256 big-endian
sequences of bytes. It doesn't need an int any larger than 8 bits.
Untested:

typedef unsigned char uint8;
#define SIZEOF_BIGINT 16

uint8 add(uint8* result, const uint8* a, const uint8* b)
/* Set result to a+b, returning carry out of MSB. */
{
    uint8 carry = 0;
    unsigned int i = SIZEOF_BIGINT;
    while (i > 0) {
        --i;
        result[i] = (a[i] + b[i] + carry) & 0xFF;
        carry = carry ? result[i] <= a[i] : result[i] < a[i];
    }
    return carry;
}
Where I come from, a "normal binary integer" is base 2. It can be
broken up into chunks of any size greater than 1 bit, but practically
according to the wordsize of the CPU: 8, 16, 32, 64, ... bits. Since
when is base 256 "normal" and in what sense of normal?

All the popular CPUs address storage in bytes. In C all variable
sizes are in units of char/unsigned char, and unsigned char must
hold zero through 255.
The OP maintained the line that he has no facility for handling a
base-256 number longer than 2 base-256 digits.

So he'll have to build what's needed. That's why I showed the
problem broken down to shifts and adds; they're easy to build.
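
And a matching shift, in the same style and equally untested, might be:

uint8 shift_left1(uint8* result, const uint8* a)
/* Set result to a shifted left one bit, returning the bit shifted
   out of the MSB. Reads a[i] before writing result[i], so result
   may alias a. */
{
    uint8 carry = 0;
    uint8 t;
    unsigned int i = SIZEOF_BIGINT;
    while (i > 0) {
        --i;
        t = a[i];
        result[i] = ((t << 1) | carry) & 0xFF;
        carry = (t >> 7) & 1;
    }
    return carry;
}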
The dialogue between Dennis and the OP wasn't the epitome of clarity:

Well, I found Dennis clear.

[...]
I was merely wondering whether you did in fact
have a method of converting from base b1 (e.g. 10) to base b2 (e.g. 16)
without assembling the number in some much larger base b3 (e.g. 256).

I'm not sure what that means.
 