From: Mok-Kong Shen on
WTShaw wrote:
> On Apr 11, 1:14 am, David Eather<eat...(a)tpg.com.au> wrote:
>> You wrote a whole program and even years later thought it something that
>> everyone should know about.
>>
>> You wrote a whole program to calculate this:
>>
>> equivalent number of bits = log(N!) / log(2)
>>
>> where N is the number of objects to be permuted.
>>
>> Sorry, anyone who did any formal maths already knows that, as does
>> anyone who studied cryptography for any length of time and they don't
>> need a table of repeated calculations. They can already directly
>> calculate it for any value they desire. Even better, by replacing log(2)
>> with log(X) they can calculate it to any value to base X as well.
>>
>> You don't even know what you don't know. You would be best served by
>> listening and asking genuine questions.
>
> While your figures are correct in general, the route I took was to
> actually simulate the real process. Your figures will not always match
> the real experimental values.

If two persons with computers "compute" a mathematical entity then,
assuming that there are no programming logic errors and the hardware
word sizes and rounding schemes are identical, the result should be the
same. "Simulation", on the other hand, is generally based on simplifying
assumptions. So, if a simulation and an exact computation differ, it is
the simulation that carries the error relative to theory.

M. K. Shen
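The quoted formula is easy to check numerically. A minimal Python sketch (not from the original thread) that uses the log-gamma function to evaluate log(N!) without computing the huge factorial:

```python
import math

def permutation_bits(n: int, base: float = 2.0) -> float:
    """Equivalent information content of a random permutation of n
    objects, in digits of the given base: log(n!) / log(base)."""
    # lgamma(n + 1) == ln(n!), avoiding the enormous intermediate n!.
    return math.lgamma(n + 1) / math.log(base)

# A permutation of the 26-letter alphabet carries about 88.38 bits.
print(round(permutation_bits(26), 2))
```

Changing the `base` argument gives the same quantity in any base X, exactly as the quoted post says.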


From: bmearns on
On Apr 14, 2:55 am, WTShaw <lure...(a)gmail.com> wrote:
> On Apr 11, 1:14 am, David Eather <eat...(a)tpg.com.au> wrote:
>
>
>
> > You wrote a whole program and even years later thought it something that
> > everyone should know about.
>
> > You wrote a whole program to calculate this:
>
> > equivalent number of bits = log(N!) / log(2)
>
> > where N is the number of objects to be permuted.
>
> > Sorry, anyone who did any formal maths already knows that, as does
> > anyone who studied cryptography for any length of time and they don't
> > need a table of repeated calculations. They can already directly
> > calculate it for any value they desire. Even better, by replacing log(2)
> > with log(X) they can calculate it to any value to base X as well.
>
> > You don't even know what you don't know. You would be best served by
> > listening and asking genuine questions.
>
> While your figures are correct in general, the route I took was to
> actually simulate the real process. Your figures will not always match
> the real experimental values.

No, his figures are correct absolutely. There are N-factorial possible
permutations of length N, as I'm sure even you know. Therefore any
given permutation conveys a maximum of log_2(N!) bits of information.
Or log_e(N!) nats of information. Or any other base you care to use.
If the permutations are constructed randomly, then that is exactly how
much information it contains. And that is how we typically talk about
key strength, when the key is chosen randomly from the set of
possibilities. If it is not chosen randomly, that is a failure of the
person choosing the key.

-Brian
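The condition in the post above, that the key reach its full log_2(N!) bits, holds only when the permutation is drawn uniformly. A sketch (assumed code, not from the thread) of drawing one uniformly with a CSPRNG via a Fisher-Yates shuffle:

```python
import secrets

def random_permutation(items):
    """Draw a uniformly random permutation using a CSPRNG
    (Fisher-Yates shuffle driven by secrets.randbelow)."""
    perm = list(items)
    for i in range(len(perm) - 1, 0, -1):
        j = secrets.randbelow(i + 1)  # uniform in [0, i]
        perm[i], perm[j] = perm[j], perm[i]
    return perm

# One of 8! = 40320 equally likely orderings.
print("".join(random_permutation("ABCDEFGH")))
```

Any bias in choosing `j` (e.g. a modulo-reduced raw byte) would shave bits off the effective key strength, which is the failure mode described above.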
From: David Eather on
On 14/04/2010 4:55 PM, WTShaw wrote:
> On Apr 11, 1:14 am, David Eather<eat...(a)tpg.com.au> wrote:
>> You wrote a whole program and even years later thought it something that
>> everyone should know about.
>>
>> You wrote a whole program to calculate this:
>>
>> equivalent number of bits = log(N!) / log(2)
>>
>> where N is the number of objects to be permuted.
>>
>> Sorry, anyone who did any formal maths already knows that, as does
>> anyone who studied cryptography for any length of time and they don't
>> need a table of repeated calculations. They can already directly
>> calculate it for any value they desire. Even better, by replacing log(2)
>> with log(X) they can calculate it to any value to base X as well.
>>
>> You don't even know what you don't know. You would be best served by
>> listening and asking genuine questions.
>
> While your figures are correct in general, the route I took was to
> actually simulate the real process.

Which just shows you wasted a lot of time.


> Your figures will not always match
> the real experimental values.

Oh? Please give an example.
From: WTShaw on
On Apr 14, 3:48 am, Mok-Kong Shen <mok-kong.s...(a)t-online.de> wrote:
> WTShaw wrote:
> > On Apr 11, 1:14 am, David Eather<eat...(a)tpg.com.au>  wrote:
> >> You wrote a whole program and even years later thought it something that
> >> everyone should know about.
>
> >> You wrote a whole program to calculate this:
>
> >> equivalent number of bits = log(N!) / log(2)
>
> >> where N is the number of objects to be permuted.
>
> >> Sorry, anyone who did any formal maths already knows that, as does
> >> anyone who studied cryptography for any length of time and they don't
> >> need a table of repeated calculations. They can already directly
> >> calculate it for any value they desire. Even better, by replacing log(2)
> >> with log(X) they can calculate it to any value to base X as well.
>
> >> You don't even know what you don't know. You would be best served by
> >> listening and asking genuine questions.
>
> > While your figures are correct in general, the route I took was to
> > actually simulate the real process. Your figures will not always match
> > the real experimental values.
>
> If two persons with computers "compute" a mathematical entity then,
> assuming that there are no programming logic errors and the hardware
> word sizes and rounding schemes are identical, the result should be the
> same. "Simulation", on the other hand, is generally based on simplifying
> assumptions. So, if a simulation and an exact computation differ, it is
> the simulation that carries the error relative to theory.
>
> M. K. Shen

Yes, it's the old problem of squaring the circle, sort of. On an
experimental basis, generalizations almost never match the experimental
data exactly. One time I did a chemical analysis and hit it right on
the nose: a challenge to find the percent copper in an American silver
dime. It involved dissolving the coin in nitric acid, neutralizing the
solution, precipitating the silver as silver chloride, drying, and
weighing. The result was 10% copper, 90% silver. I was accused of
faking the result until I presented my data and the instructor
reweighed the silver salt.

Experimental results, in this case doing all the steps, are better than
assuming. The more data points the better. As the recent process went,
the point of validity was that the ratio of bits to trits got closer to
the calculated value as the accumulated lengths grew larger. Many years
ago it was my posting of the relationships between bases, as a general
formula based on logs, that attracted attention, since common knowledge
never got that far; so now my data is attacked with my own results.
Curious.
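The bits-to-trits ratio mentioned above does converge to the calculated value log(3)/log(2) ≈ 1.585 as lengths grow, which a short sketch (assumed illustration, not the poster's program) shows:

```python
import math

def bits_per_trit_ratio(n: int) -> float:
    """Whole bits needed to index every base-3 string of length n,
    divided by n; approaches log2(3) ~ 1.585 as n grows."""
    return math.ceil(n * math.log2(3)) / n

# The ceiling's rounding overhead shrinks as the length accumulates.
for n in (5, 50, 500):
    print(n, bits_per_trit_ratio(n))
```

For short strings the ceiling inflates the empirical ratio, which is why measured values only approach the log-formula figure at larger accumulated lengths.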
From: WTShaw on
On Apr 14, 8:23 am, bmearns <mearn...(a)gmail.com> wrote:
> On Apr 14, 3:16 am, WTShaw <lure...(a)gmail.com> wrote:
>
> > On Apr 12, 11:38 am, bmearns <mearn...(a)gmail.com> wrote:

>
> > As there are almost infinite ways to do this, you reject any method
> > that you cannot control.
>
> No, I reject any method that is demonstrated to weaken the key.

So you'd rather walk than ride? The so-called secret beyond anything
else would be an initial-state permutation. In tests, I tend to use
the normal alphabet for base 26, but there is no reason not to start
with a permutation. Therefore a different domain would be created
with the same entries using different initial states.

So, if you took that base64 coder scheme and changed the pasted-in set
string, you would do the same thing whether or not my way of varying
the permutation on the fly was used at all. We have no idea what a
person might do to get such a permutation as desired. The character
set could also be different as long as it had 64 or 65 characters with
identical algorithm treatment; maybe then it would be a derivative
algorithm.
>
> > My answer is that it is right to encourage
> > private generation of complex keys because that is often where real
> > strength lies and always does in ideal systems, or should. Making keys
> > has nothing to do with the actual encryption algorithm except meeting
> > operational runtime criteria.
>
> So you're suggesting security by obscurity. That's fair enough; some
> people support that. I personally do not, but that's just a difference
> of opinion. The problem is that you're putting up this "web
> application" to generate permutations with an algorithm you've
> designed, and so you've lost the obscurity. You're sharing your
> algorithm with others and apparently trying to convince others to use
> it, so it needs to be an intrinsically strong algorithm, not just
> strong because no-one knows what you're doing.
>
Now, go back to the classic definition, where keys are supposed to hold
the nature of strength given a realistic encryption scheme. It's not
security by obscurity if you must reveal the key you use, or how you
got it, remembered it, or hid it; quite the contrary. You can't have
it both ways.
>
> > > The factoradic number system is the normal way to generate random
> > > permutations from seed data. Factoradic is a mixed-radix number system
> > > where each item in a permutation represents a different digit in the
> > > number. It defines a one-to-one mapping between all possible
> > > permutations and all non-negative integers so any digital input (seed)
> > > data can be represented by a permutation.
>
> > > -Brian
>
.....
>
> It is a non-trivial conversion that requires arbitrary precision
> integer operations, I'll grant you that. However, it is extremely
> efficient at preserving entropy and can yield maximum randomness in
> the permutation. Of course, if you have entropy to burn, you can
> randomly choose each element of the permutation independently, but
> that will waste a terrific amount of entropy unless you have an
> entropy source for each integer base from 2 to N, inclusive.
>
> I don't grok what you mean by "descriptive". A key should not be
> descriptive, it should be random.

Random is good, as all keys can be so. However, your qualifying a key
as one you allow is not random at all but is descriptive.
>
.....
>
> I'm quite comfortable with the mathematics of any rational base. But I
> don't see what that has to do with the current discussion.
>
> -Brian

Bases differ as to characteristics. I prefer those with more native
promise than others. Bad bases can make otherwise good algorithms
produce less than optimum results.
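The factoradic mapping bmearns described earlier in the thread, a one-to-one correspondence between non-negative integers and permutations, can be sketched as follows (an illustrative implementation, not code from the thread):

```python
def int_to_permutation(k: int, items):
    """Map a non-negative integer k < len(items)! to a unique
    permutation via the factoradic (mixed-radix) representation,
    where the digit at position i ranges over 0..i."""
    pool = list(items)
    n = len(pool)
    # Extract factoradic digits: k = sum(d_i * i!), 0 <= d_i <= i.
    digits = []
    for radix in range(1, n + 1):
        digits.append(k % radix)
        k //= radix
    if k:
        raise ValueError("k too large for this many items")
    # Consume digits from most significant down, selecting from the pool.
    perm = []
    for d in reversed(digits):
        perm.append(pool.pop(d))
    return perm

# The 24 permutations of "ABCD" correspond to integers 0..23.
print("".join(int_to_permutation(0, "ABCD")))   # ABCD
print("".join(int_to_permutation(23, "ABCD")))  # DCBA
```

Because the mapping is a bijection, seed entropy is preserved exactly, which is the efficiency point made in the quoted passage about arbitrary-precision integers.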