From: Mok-Kong Shen on

The following is essentially commonplace and nothing special, but
o.k., as I was told by a couple of mathematicians. I am posting it
here nonetheless, because I surmise it could eventually be of some
use to somebody.

If f11(x) etc. are functions of x, one can, for example, use

      | f11  f12 |     | x1 |
      |          |  *  |    |
      | f21  f22 |     | x2 |

to express the vector

      | f11(x1) + f12(x2) |
      |                   |
      | f21(x1) + f22(x2) |

where the resulting elements are non-linear if the functions are. That
is, one could in this way fairly conveniently express certain
non-linear computations using the familiar operator notation of matrix
calculus in linear algebra.
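As a minimal sketch of this notation (the helper name apply_fmatrix and the particular entry functions are invented here for illustration, not part of the post): the scheme is an ordinary matrix-vector product in which each "multiplication" f_ij * x_j is read as the function application f_ij(x_j).

```python
def apply_fmatrix(F, x):
    """y_i = sum_j F[i][j](x[j]) -- the '+' of ordinary matrix algebra,
    but with function application in place of multiplication."""
    return [sum(f(xj) for f, xj in zip(row, x)) for row in F]

# A 2x2 "matrix" of (non-linear) functions, entries chosen arbitrarily.
F = [[lambda x: x * x,  lambda x: x + 1],
     [lambda x: 3 * x,  lambda x: x ** 3]]

y = apply_fmatrix(F, [2, 5])
# row 0: 2*2 + (5+1) = 10; row 1: 3*2 + 5**3 = 131
print(y)  # [10, 131]
```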

A practical example is to take full-cycle higher-order permutation
polynomials mod 2^n as the functions fij, for which inverses may be
numerically computed and hence non-singular matrices may be defined.
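A minimal numerical sketch of that remark (n = 8 and the polynomial are chosen here purely for illustration): p(x) = x + 2x^2 is a permutation polynomial mod 2^n by Rivest's characterization, and for small n its inverse can simply be tabulated, i.e. "numerically computed".

```python
N = 8
MOD = 1 << N  # 2^n

def p(x):
    # x + 2*x^2 meets Rivest's criterion for a permutation polynomial
    # mod 2^n: odd linear coefficient, and the even-degree coefficients
    # (here just 2) sum to an even number.
    return (x + 2 * x * x) % MOD

# Numerically computed inverse: a lookup table over all residues.
inv = {p(x): x for x in range(MOD)}

assert len(inv) == MOD     # p really permutes {0, ..., 2^n - 1}
assert inv[p(123)] == 123  # inv undoes p
```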

Thanks.

M. K. Shen

From: Tom St Denis on
On Jun 1, 7:15 am, Mok-Kong Shen <mok-kong.s...(a)t-online.de> wrote:
> The following is essentially commonplace and nothing special
> but o.k., as I was told by a couple of mathematicians. I am
> posting it nonetheless here, because I surmise it could eventually
> be of some use to somebodies.
>
> If f11(x) etc. are functions of x, one can namely e.g. use
>
>       | f11  f12 |     | x1 |
>       |          |  *  |    |
>       | f21  f22 |     | x2 |
>
> to express the vector
>
>       | f11(x1) + f12(x2) |
>       |                   |
>       | f21(x1) + f22(x2) |
>
> where the elements are non-linear, if the functions are so. That is,
> one could in this way fairly conveniently express certain non-linear
> computations using the familiar operator notations of matrix calculus
> in linear algebra.

Unfortunately, it's not understood that way, though. Your matrix could
have polynomials in it, but then you're multiplying out to produce
polynomials as the output, not scalars. And why would sigma notation
be harder?

e.g.

y_i = \sum_{j=1...2} f_{ij}(x_j)

Tom
From: Mok-Kong Shen on
Mok-Kong Shen wrote:
>
> If f11(x) etc. are functions of x, one can namely e.g. use
>
>       | f11  f12 |     | x1 |
>       |          |  *  |    |
>       | f21  f22 |     | x2 |
>
> to express the vector
>
>       | f11(x1) + f12(x2) |
>       |                   |
>       | f21(x1) + f22(x2) |

[Addendum] One could evidently, if desired, also adopt the convention
that the + in the vector above be replaced by XOR, while leaving the
functions unchanged. On the symbolic level, one could then write and
discuss M1*M2, M^(-1), C = M * P, etc., as is familiar in linear
algebra.
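Under that convention only the row combination changes, from + to XOR. A sketch (the helper name and the entry functions are again invented for illustration):

```python
from functools import reduce
from operator import xor

def apply_fmatrix_xor(F, x):
    # Same scheme as before, but the row entries f_ij(x_j) are
    # combined with XOR instead of integer addition.
    return [reduce(xor, (f(xj) for f, xj in zip(row, x))) for row in F]

F = [[lambda x: x ^ 3, lambda x: 2 * x],
     [lambda x: x + 1, lambda x: x]]

y = apply_fmatrix_xor(F, [5, 6])
# row 0: (5^3) XOR (2*6) = 6 XOR 12 = 10; row 1: 6 XOR 6 = 0
print(y)  # [10, 0]
```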

M. K. Shen



From: Tom St Denis on
On Jun 1, 9:18 am, Mok-Kong Shen <mok-kong.s...(a)t-online.de> wrote:
> Mok-Kong Shen wrote:
>
> > If f11(x) etc. are functions of x, one can namely e.g. use
>
> > | f11 f12 |   | x1 |
> > |         | * |    |
> > | f21 f22 |   | x2 |
>
> > to express the vector
>
> > | f11(x1) + f12(x2) |
> > |                   |
> > | f21(x1) + f22(x2) |
>
> [Addendum] One could evidently, if desired, also adopt the convention
> that in the vector above + be substituted by xor, while leaving the
> functions unchanged. On the symbolic level, one could write/discuss
> M1*M2, M^(-1), C = M * P, etc., as is familiar in linear algebra.

Addition by XOR is just addition over Z_2; if your matrix elements are
vectors of bits, just work in Z^w_2.

Instead of trying to invent new terminology, why not, oh I don't know,
learn what is already out there?

Given that you've floated around MKS for the last decade or so I would
have hoped that you ended up picking up even a modest amount of actual
knowledge...

Tom
From: Mok-Kong Shen on
Mok-Kong Shen wrote:
> Mok-Kong Shen wrote:
>>
>> If f11(x) etc. are functions of x, one can namely e.g. use
>>
>>       | f11  f12 |     | x1 |
>>       |          |  *  |    |
>>       | f21  f22 |     | x2 |
>>
>> to express the vector
>>
>>       | f11(x1) + f12(x2) |
>>       |                   |
>>       | f21(x1) + f22(x2) |
>
> [Addendum] One could evidently, if desired, also adopt the convention
> that in the vector above + be substituted by xor, while leaving the
> functions unchanged. On the symbolic level, one could write/discuss
> M1*M2, M^(-1), C = M * P, etc., as is familiar in linear algebra.

I would also like to mention, though it is trivial, that for practical
computational reasons one would in the present context preferably
consider a matrix to be an operator acting on a vector, so that
M1*M2*V is always computed as M1*(M2*V), rather than first computing a
matrix M3 = M1*M2 and then evaluating M3*V. To obtain a pseudo-random
non-singular matrix, which could e.g. be useful for encryption
purposes, one would preferably generate it as a product L*U.
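Both remarks can be sketched for ordinary matrices over Z_{2^n} (dimension, modulus, and seed chosen here arbitrarily): a unit lower-triangular L times an upper-triangular U with odd diagonal has odd determinant, hence a unit mod 2^n, so M = L*U is non-singular; and M1*M2*V is evaluated as two matrix-vector products.

```python
import random

DIM, N = 3, 8
MOD = 1 << N  # arithmetic mod 2^n
rng = random.Random(12345)

def matvec(M, v):
    # Matrix as operator: M1*M2*V is computed as matvec(M1, matvec(M2, V)),
    # i.e. two O(DIM^2) products instead of forming M1*M2 first.
    return [sum(M[i][j] * v[j] for j in range(DIM)) % MOD
            for i in range(DIM)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(DIM)) % MOD
             for j in range(DIM)] for i in range(DIM)]

# Unit lower-triangular L (det = 1) and upper-triangular U with odd
# diagonal (det = product of odd numbers, hence odd): det(L*U) is a
# unit mod 2^n, so M is non-singular.
L = [[1 if i == j else (rng.randrange(MOD) if i > j else 0)
     for j in range(DIM)] for i in range(DIM)]
U = [[(rng.randrange(MOD) | 1) if i == j else
      (rng.randrange(MOD) if i < j else 0)
     for j in range(DIM)] for i in range(DIM)]
M = matmul(L, U)
```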

I would also like to take this opportunity to say that the present
theme occurred to me during discussions in a thread on the Hill
cipher, where one participant seemed to be extremely "sensitive" to
the linearity of matrices, which led me to look for possibilities of
exploiting the convenience of the matrix formalism in non-linear
computational matters as well. (Please, however, don't carry over any
comments on my proposal of the dynamic Hill cipher to this thread.
Such comments, if any, should be posted there in order to avoid
confusing the general readers.)

M. K. Shen
--------------------------------------------------------------------------

[OT] In an attempt to reduce annoyance to the general readers, I am
unfortunately forced to forgo any opportunity of discussion with
those who have the unpleasant impulse to overload their posts with
bandwidth-wasting personal remarks and/or bad words, by placing them
in my kill-file. Those who dislike my posts, for whatever reason, are
kindly requested to put me in their kill-files as well.