From: Mok-Kong Shen on
Phoenix wrote:
[snip]

Wouldn't it be desirable to perform some statistical tests?

M. K. Shen

From: Phoenix on
On 3 Jun, 21:41, Mok-Kong Shen <mok-kong.s...(a)t-online.de> wrote:
> Phoenix wrote:
>
> [snip]
>
> Wouldn't it be desirable to perform some statistical tests?
>
> M. K. Shen

Already did.

Read the link on:

RANDOMNESS
Several statistical tests were run with George Marsaglia's Diehard
battery; none of them produced any 0.00000 or 1.00000 p-values. See the
results of the example test there. Comparative tests against binary
files containing true random numbers generated by physical sources,
such as atmospheric noise from RANDOM.ORG, atomic radiation from
HotBits, and the chaotic source from LAVArnd, also gave good results.


There is another fresh one:

c= .976318359375
Speed performance: 29954716 Bytes/sec

Value Char Occurrences Fraction
0 536882586 0.500011
1 536859238 0.499989

Total: 1073741824 1.000000

Entropy = 1.000000 bits per bit.

Optimum compression would reduce the size
of this 1073741824 bit file by 0 percent.

Chi square distribution for 1073741824 samples is 0.51, and randomly
would exceed this value 47.61 percent of the times.

Arithmetic mean value of data bits is 0.5000 (0.5 = random).
Monte Carlo value for Pi is 3.141228007 (error 0.01 percent).
Serial correlation coefficient is -0.000020 (totally uncorrelated = 0.0).

From: Noob on
Maaartin wrote:

> It's compiler and system dependent: A C compiler may choose to use
> 80-bit representation for doubles as long as they stay in registers
> (this additional precision is for free), or not.

Right.

cf. GCC's most reported non-bug.
http://gcc.gnu.org/bugs/#nonbugs_general
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=323
From: Phoenix on
On 4 Jun, 12:07, Noob <r...(a)127.0.0.1> wrote:
> Maaartin wrote:
> > It's compiler and system dependent: A C compiler may choose to use
> > 80-bit representation for doubles as long as they stay in registers
> > (this additional precision is for free), or not.
>
> Right.
>
> cf. GCC's most reported non-bug.
> http://gcc.gnu.org/bugs/#nonbugs_general
> http://gcc.gnu.org/bugzilla/show_bug.cgi?id=323

Yes, that's true, but what do you do when you need to scale big
integers into (0;1) fp numbers?
From: Maaartin on
On Jun 4, 1:34 pm, Phoenix <ribeiroa...(a)gmail.com> wrote:
> On 4 Jun, 12:07, Noob <r...(a)127.0.0.1> wrote:
>
> > Maaartin wrote:
> > > It's compiler and system dependent: A C compiler may choose to use
> > > 80-bit representation for doubles as long as they stay in registers
> > > (this additional precision is for free), or not.
>
> > Right.
>
> > cf. GCC's most reported non-bug.
> > http://gcc.gnu.org/bugs/#nonbugs_general
> > http://gcc.gnu.org/bugzilla/s...
>
> Yes, that's true, but what do you do when you need to scale big
> integers into (0;1) fp numbers?

The scaling itself is no problem. An integer PRNG works independently
of all the floating-point issues; the resulting real numbers may
differ slightly across platforms, but that should play no role in,
e.g., simulation or equation solving (if it did, the whole result
would be garbage anyway). For crypto you need no floating point at
all.

OTOH, using a floating-point based PRNG means that the generated
sequences may deviate substantially across architectures, compilers,
and/or rounding modes. This should be no big problem for
non-cryptographic applications, but it makes the output not exactly
repeatable. For crypto it's simply unusable.