From: ?a/b on
On Sat, 08 Oct 2005 09:43:33 GMT, "Richard Cooper"
<spamandviruses(a)rr.com> wrote:
>On Sat, 08 Oct 2005 02:18:48 -0400, ?a\/b <al(a)f.g> wrote:
>> i think the division in the book "Handbook of Applied Cryptography"
>> with the base 2^32 is 100x faster than its binary base form
>
>I don't like the way those cryptography people do math. According to
>them, you can solve x = a + b with the following code:
>
> mov eax, [a]
> mov edx, [b]
> or eax, edx
> mov [x], eax

in the few algorithms I implemented from that book (squaring, the
Miller-Rabin probabilistic primality test, square roots modulo a prime
p, the extended Euclidean algorithm, the Jacobi symbol, factoring a
number of the form k^i), '+' means '+' and not 'or',
and those routines seem very good, nice, and magical in front of my
eyes
From: Richard Cooper on
On Sun, 09 Oct 2005 02:21:43 -0400, ?a\/b <al(a)f.g> wrote:

> 14.20 Algorithm Multiple-precision division
> INPUT: positive integers x = (x[n] ... x[1] x[0])_b, y = (y[t] ... y[1] y[0])_b
> with n >= t >= 1, y[t] != 0.
> OUTPUT: the quotient q = (q[n-t] ... q[1] q[0])_b and remainder r = (r[t] ...
> r[1] r[0])_b such that
> x = qy + r, 0 <= r < y.
> 1. For j from 0 to (n - t) do: q[j] <- 0.
> 2. While (x >= y*b^(n-t)) do the following: q[n-t] <- q[n-t] + 1,
> x <- x - y*b^(n-t).
> 3. For i from n down to (t + 1) do the following:
>
> 3.1 If x[i] = y[t] then set q[i-t-1] <- b - 1; otherwise set
> q[i-t-1] <- floor((x[i]*b + x[i-1]) / y[t]).
>
>
> #######################
> 3.2 While (q[i-t-1]*(y[t]*b + y[t-1]) > x[i]*b^2 + x[i-1]*b + x[i-2])
> do: q[i-t-1] <- q[i-t-1] - 1.
> #######################
>
> 3.3 x <- x - q[i-t-1]*y*b^(i-t-1).
>
>
> #######################
> 3.4 If x < 0 then set x <- x + y*b^(i-t-1) and q[i-t-1] <- q[i-t-1] - 1.
> #######################
> 4. r <- x.
> 5. Return(q,r).

And that's why I don't like those cryptography people. They encrypt
everything they write.
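Decrypted into code, the quoted algorithm is less mysterious. Here is a rough Python sketch (a transcription for illustration, not code from the thread; digit lists store the least significant digit first, and Python's big integers stand in for the multi-precision subtractions):

```python
def mp_divide(x, y, b):
    """HAC Algorithm 14.20: multiple-precision division in base b.

    x and y are digit lists, least significant digit first, with
    len(x)-1 = n >= t = len(y)-1 >= 1 and y[t] != 0.
    Returns (q, r) as digit lists with x = q*y + r, 0 <= r < y.
    """
    def to_int(d):        # digit list -> integer
        return sum(di * b**i for i, di in enumerate(d))

    def to_digits(v, n):  # integer -> at least n digits
        d = []
        while v or len(d) < n:
            d.append(v % b)
            v //= b
        return d

    n, t = len(x) - 1, len(y) - 1
    X, Y = to_int(x), to_int(y)
    q = [0] * (n - t + 1)
    # step 2: the leading quotient digit
    while X >= Y * b**(n - t):
        q[n - t] += 1
        X -= Y * b**(n - t)
    # step 3: one quotient digit per iteration
    for i in range(n, t, -1):
        xd = to_digits(X, n + 1) + [0]   # so xd[i] is always defined
        # 3.1: estimate from the top two digits of x
        if xd[i] == y[t]:
            qi = b - 1
        else:
            qi = (xd[i] * b + xd[i - 1]) // y[t]
        # 3.2: refine against the top two digits of y
        while qi * (y[t] * b + y[t - 1]) > xd[i] * b * b + xd[i - 1] * b + xd[i - 2]:
            qi -= 1
        # 3.3 and 3.4: subtract, and add back once if we overshot
        X -= qi * Y * b**(i - t - 1)
        if X < 0:
            X += Y * b**(i - t - 1)
            qi -= 1
        q[i - t - 1] = qi
    return q, to_digits(X, t + 1)        # steps 4 and 5
```

For example, mp_divide([8, 9, 1, 7], [3, 5], 10) divides 7198 by 53 and returns the digits of 135 remainder 43. The book notes that the step 3.1 estimate behaves best when the divisor is normalized so that y[t] >= b/2.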
From: ?a/b on
On Sun, 09 Oct 2005 08:21:43 +0200, "?a\\/b" <al(a)f.g> wrote:

>On Sat, 08 Oct 2005 17:15:15 GMT, "Richard Cooper"
><spamandviruses(a)rr.com> wrote:
>

>#######################
>3.4 If x < 0 then set x <- x + y*b^(i-t-1) and q[i-t-1] <- q[i-t-1] - 1.
>#######################

I think it should be "x = x - y*b^(i-t-1)" here

>4. r <- x.
>5. Return(q,r).

From: ?a/b on
On Sun, 09 Oct 2005 08:13:42 GMT, "Richard Cooper"
<spamandviruses(a)rr.com> wrote:
>On Sun, 09 Oct 2005 02:21:43 -0400, ?a\/b <al(a)f.g> wrote:
>> 14.20 Algorithm Multiple-precision division
>> INPUT: positive integers x = (x[n] ... x[1] x[0])_b, y = (y[t] ... y[1] y[0])_b
>> with n >= t >= 1, y[t] != 0.
>> OUTPUT: the quotient q = (q[n-t] ... q[1] q[0])_b and remainder r = (r[t] ...
>> r[1] r[0])_b such that
>> x = qy + r, 0 <= r < y.
>> 1. For j from 0 to (n - t) do: q[j] <- 0.
>> 2. While (x >= y*b^(n-t)) do the following: q[n-t] <- q[n-t] + 1,
>> x <- x - y*b^(n-t).
>> 3. For i from n down to (t + 1) do the following:
>>
>> 3.1 If x[i] = y[t] then set q[i-t-1] <- b - 1; otherwise set
>> q[i-t-1] <- floor((x[i]*b + x[i-1]) / y[t]).
>>
>>
>> #######################
>> 3.2 While (q[i-t-1]*(y[t]*b + y[t-1]) > x[i]*b^2 + x[i-1]*b + x[i-2])
>> do: q[i-t-1] <- q[i-t-1] - 1.
>> #######################
>>
>> 3.3 x <- x - q[i-t-1]*y*b^(i-t-1).
>>
>>
>> #######################
>> 3.4 If x < 0 then set x <- x + y*b^(i-t-1) and q[i-t-1] <- q[i-t-1] - 1.
>> #######################
>> 4. r <- x.
>> 5. Return(q,r).
>
>And that's why I don't like those cryptography people. They encrypt
>everything they write.

if you want to understand it,
it is in chapter 14 of the book "Handbook of Applied Cryptography" by
A. Menezes, P. van Oorschot and S. Vanstone, which can be found on the
internet (you'll find it if you google it).
I have not implemented that version, but I think it could be ok and
fast.
anyway, I don't think I'm smart enough to speak about division:
I have been thinking about it for 3-4 years, and last week I found
another error in my implementation ... but it seems I like finding
errors more than programming :)

if I have + - * / for signed numbers,
and I have defined "<<" and ">>" shift operators for unsigned ones,

then fixed point numbers look very easy to me.

A fixed point number "fnum" has the same data representation as an
"snum" == signed number, which has a digits array
with the same data representation as a "num" == unsigned number
(only the name changes).

if "a" is an fnum, then "a.sn" is the signed number in it and
"a.sn.sn" is the unsigned number in it,
so I define + - * / for fixed point numbers:

fnum a, b;

a+b = a.sn+b.sn;
a-b = a.sn-b.sn;
a*b =(a.sn*b.sn).sn>>(32*precision)
a/b =(a.sn.sn<<(2*32*precision))/b.sn;

where precision == 1 means 32 bit of fractional part
precision == 2 means 64 bit of fractional part
etc

to see it in base 10:
12.9/3.4 = 3.7[94]
1290/34 = 37.[94]
12.9*3.4 = 43.8[6]
129 *34 = 4386

Do you see any error?

it should be ok, but I don't know how to find the position of the "."
in the translation base 2^32 --> base 10
and base 10 --> base 2^32,
which is needed to read and write the fixed point number for
input/output.
Does someone have an idea of how to find that position?
Thanks
From: Richard Cooper on
On Sun, 09 Oct 2005 15:35:04 -0400, ?a\/b <al(a)f.g> wrote:

> if you want to understand it, it is in chapter 14 of the book

Yes, but with stuff like this:

>>> 4. r <- x.

I wouldn't know where to begin to understand it. I mean, why would I "r
<- x" when I could "q <- t" or "v <- m" or "p <- s" or maybe even "d <-
e w g d x v"?

Not that I understood any of the rest of it...

> if "a" is an fnum, then "a.sn" is the signed number in it and
> "a.sn.sn" is the unsigned number in it,
> so I define + - * / for fixed point numbers:
>
> fnum a, b;
>
> a+b = a.sn+b.sn;
> a-b = a.sn-b.sn;
> a*b =(a.sn*b.sn).sn>>(32*precision)
> a/b =(a.sn.sn<<(2*32*precision))/b.sn;

Now you see, I have no idea what you just said there.

Now since we're talking about fixed point numbers, I think that you're
talking about how to add, subtract, multiply and divide them. E.g., if
you have 32 bit integers, and the 24 most significant bits are the
integer part, and the 8 least significant bits are the fractional part,
then you do this for each operation:

add: a + b
subtract: a - b
multiply: ( a * b + (1 << 7) ) >> 8
divide: ( (a << 8) + (1 << 7) ) / b

Now that I've written that out, I can kind of see where what you wrote
came from, but before doing so it made no sense whatsoever to me.

That's what I hate about the way that most people explain mathematics.
They come up with such an unusual way to represent what they're doing
that I don't understand it even when it's something that I already know.
In the case of the division algorithm presented earlier, I don't already
know how to do it, so there's virtually no chance whatsoever of me ever
understanding it. On the other hand, the way I presented the
add/subtract/multiply/divide formulas, I'm sure everyone who knows what
'>>' and '<<' mean can implement it if not understand it.
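Written out as code, those four formulas look like this (a Python sketch with made-up names like fx_mul, not from the post; it assumes non-negative operands and keeps the "+ (1 << 7)" rounding term):

```python
FRAC = 8              # 8 fractional bits: values are stored times 256
HALF = 1 << (FRAC - 1)

def to_fix(v):
    """float -> 24.8 fixed point, rounding to nearest."""
    return int(round(v * (1 << FRAC)))

def fx_add(a, b): return a + b
def fx_sub(a, b): return a - b
def fx_mul(a, b): return (a * b + HALF) >> FRAC       # (a*b + (1<<7)) >> 8
def fx_div(a, b): return ((a << FRAC) + HALF) // b    # ((a<<8) + (1<<7)) / b
```

For instance, fx_mul(to_fix(12.5), to_fix(2.0)) gives to_fix(25.0), and fx_div(to_fix(12.5), to_fix(2.0)) gives to_fix(6.25).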

> for see it in 10 base
> 12.9/3.4 = 3.7[94]
> 1290/34 = 37.[94]
> 12.9*3.4 = 43.8[6]
> 129 *34 = 4386

Again, I'm a bit confused, so I'll write out my own. We'll use four
digits, one for each byte in the dword sized numbers:

For 012.9 / 003.4, we've basically got the numbers represented as
integers, 0129 and 0034. Now if you remember your 4th grade math, since
both numbers are multiplied by 10, you get the correct answer when you
divide, which is going to be 4 in correctly rounded integer math.
However, we don't want the correct answer, we want the answer times 10,
so that we've still got one fractional digit. To do this, we just
multiply the numerator by 10 before we do the division. That way the
numerator is multiplied by 100, which is divided by the denominator
multiplied by 10, and so the answer comes out multiplied by 10.

So we take the formula:

divide: ( (a << 8) + (1 << 7) ) / b

a = 0129
b = 0034

For the a << 8 step, since we've got our numbers multiplied by 10 instead
of shifted left 8 bits, we just multiply the number by 10. So this makes
it 1290. However, of course, to take care of four digit numbers, we'd use
the eax:edx pair in assembly, so it'd be more like 00001290 instead.

For the 1 << 7, the point is to add half of the least significant bit.
In our numbers, 000.1 is effectively the least significant bit, and so
000.05 is half of that. Of course, the 00001290 is multiplied by 100,
so we add to it 000.05 multiplied by 100, which is 0005.

So for ( (a << 8) + (1 << 7) ), we get ( 00001290 + 0005 ) which is
00001295 of course.

We then take 00001295 and divide it by b, which is 0034. The answer we
get is 0038. This is the correct answer, since 12.9 / 3.4 = 3.7941, which
when rounded is 3.8.
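The same steps with plain integers (a quick sketch; in the decimal analog, "<< 8" becomes multiply by 10 and "1 << 7" becomes add 5):

```python
a, b = 129, 34        # 12.9 and 3.4, each stored times 10
num = a * 10 + 5      # the "a << 8" step, then the "1 << 7" style term
print(num)            # 1295
print(num // b)       # 38, i.e. 3.8, since 12.9 / 3.4 = 3.794... rounds to 3.8
```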

> 129 *34 = 4386
>
> Do you see some error?

Well, nothing except that you still need to shift the result there. 0129
* 0034 will be 00004386. You then add the '1 << 7' style term, which in
our decimal math is 0005, to get 00004391, then you do the '>> 8' step,
which in our decimal math is a division by 10, which gives you 0439. This
is the correct answer, since 12.9 * 3.4 is 43.86, which when rounded is
43.9.

That rounding step is important. I get really annoyed when I see
programs doing integer math, but not getting good results because the
programmer didn't bother to round the results, and the errors just pile
up on each other until the answer is way off.

With the rounding, 0129 * 0034 / 0034 (12.9 * 3.4 / 3.4) comes out to
0129 (12.9), but without the rounding it comes out to 0128 (12.8). Now
there's a small error even when you do round; for example, 99.0 / 50.0
* 50.0 will come out to 100.0 if you round, but if you don't round it
turns into 95.0.
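Both claims can be checked in the decimal analog (a sketch, not code from the thread; the rounded divide here adds half the divisor, which is the round-to-nearest behavior that makes the 99.0 example come out to 100.0):

```python
SCALE = 10   # one decimal fractional digit: values stored times 10

def mul_round(a, b): return (a * b + SCALE // 2) // SCALE
def mul_trunc(a, b): return (a * b) // SCALE
def div_round(a, b): return (a * SCALE + b // 2) // b
def div_trunc(a, b): return (a * SCALE) // b

# 12.9 * 3.4 / 3.4: rounding gets back 12.9, truncating drifts to 12.8
print(div_round(mul_round(129, 34), 34))    # 129
print(div_trunc(mul_trunc(129, 34), 34))    # 128

# 99.0 / 50.0 * 50.0: a small error survives even with rounding
print(mul_round(div_round(990, 500), 500))  # 1000, i.e. 100.0
print(mul_trunc(div_trunc(990, 500), 500))  # 950, i.e. 95.0
```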

> it should be ok, but I don't know how to find the position of the "."
> in the translation base 2^32 --> base 10
> and base 10 --> base 2^32,
> which is needed to read and write the fixed point number for
> input/output.
> Does someone have an idea of how to find that position?

I always convert to integer before displaying.

Say we've got the '<< 8' style numbers. 123456.789 in this type of
representation would be 0x01E240CA, which converted back to decimal is
123456.789063...

Since 8 bits gets us 256 possible fractional values, that's equivalent
to 2.408 decimal digits. So we'll want to output the number with either
two or three decimal places. Usually you want to throw away a little
bit of the number, since it likely just contains rounding errors anyway;
if we display 2 decimal places, that's equivalent to discarding the
1.356 least significant bits. Normally you'd throw away more than that,
but since we've only got 32 bits to begin with, it might be nice to keep
them even if that last digit might be a little inaccurate.

So now we need to convert the number to decimal. To do this, we first
multiply it by 100, since we want two decimal places; this way the last
two digits of our number are the fractional digits. Then we shift it 8
bits to the right to undo the '<< 8' stuff we've represented the number
as. Of course, we add '1 << 7' to it first to round it.

Now 100 in our number format is 25600 decimal, or 0x00006400 hex.

multiply: (a * b + (1 << 7)) >> 8

(0x01E240CA * 0x00006400 + 0x00000080 ) >> 8 = 0xBC614EE8

Of course, the whole event of taking "100 << 8" and then taking the answer
">> 8" doesn't have a net effect, even with the "1 << 7" in there, so for
this specific multiplication you can just do "a * b" and be done with it.

0x01E240CA * 0x00000064 = 0x00000000BC614EE8

(At this point we want it to be a 64 bit number, in case of overflow,
although in this particular instance, the number still fits in 32 bits as
long as it isn't signed.)

Now we add 1 << 7 and then shift it right 8 bits. This has the effect of
dividing it by 256, which we're doing to convert the number to a number
with no fractional bits, which makes it easy to convert to decimal.

0x00000000BC614EE8 + (1 << 7) = 0xBC614F68

0x00000000BC614F68 >> 8 = 0x00BC614F

(At this point, we're back into 32 bits again, but that's because we
chose to discard 1.356 bits from our number, and so now our 32 bit
number is a 30.644 bit number. If we had chosen 3 decimal places
instead, then we would have added 1.966 bits, and so at this point we'd
have a 33.966 bit number, and so we'd have to continue doing the math
in 64 bits.)

(BTW, if you're wondering how I'm calculating how many bits our numbers
are, all you do is take the base 2 log of the number. For example, to
represent 2 digits of decimal accuracy, you do log_2(0.01) which is
-6.6438, and so you need 6.6438 fractional bits. For a more obvious
example, to have your numbers accurate to 1/64, you calculate
log_2(1/64) which is -6, and so you need 6 fractional bits.)

Now, 0x00BC614F in decimal is 12345679, and since that is our number times
100 (because of the step where we multiplied by 100) the last two decimal
digits are after the decimal point, so the number is 123456.79, which is
123456.789063 rounded to two decimal places.

To convert from decimal to fixed point, you basically just do the
reverse. With 123456.789 as input, you convert it to an integer,
123456789, remembering that you multiplied by 1000 to make it an
integer. Then you convert 123456789 to binary, which is 0x75BCD15.
Then you shift that 8 bits to the left, to make it 0x000000075BCD1500,
which is the multiplied integer in the fixed point representation. To
convert it back to a fraction, you divide it by 1000 using the formula
"((a << 8) + (1 << 7)) / 1000", which gives you 0x01E240CA, which is
the most accurate representation of that number in the fixed point math.
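That round trip can be sketched as a pair of helpers (hypothetical names, not from the thread; the string-to-fixed direction rounds by adding half the divisor instead of the "1 << 7" term, which produces the same 0x01E240CA here):

```python
FRAC = 8  # '<< 8' style numbers: 8 fractional bits

def fix_to_str(f, places=2):
    """Fixed point -> decimal string with the given number of places."""
    scaled = (f * 10**places + (1 << (FRAC - 1))) >> FRAC  # scale, then round
    s = str(scaled).rjust(places + 1, '0')
    return s[:-places] + '.' + s[-places:]

def str_to_fix(s):
    """Decimal string -> fixed point, rounded to nearest."""
    whole, _, frac = s.partition('.')
    scale = 10 ** len(frac) if frac else 1
    v = int(whole + frac)        # the integer, times 10**len(frac)
    return ((v << FRAC) + scale // 2) // scale

print(hex(str_to_fix("123456.789")))   # 0x1e240ca
print(fix_to_str(0x01E240CA))          # 123456.79
```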