From: Luna Moon on
Hi all,

I have a vector of real numbers in Matlab. How do I compress them? Of
course this has to be lossless, since I need to be able to recover
them.

The goal is to study the Shannon rate and entropy of these real
numbers, so I decided to compress them and see what compression
ratio I can achieve.

I don't need to write the result into compressed files, so file
headers, etc. are just overhead that would skew my entropy
calculation... I just need the bare compression ratio...

Any pointers?

Thanks a lot!
From: John on
On Apr 2, 3:50 pm, Luna Moon <lunamoonm...(a)gmail.com> wrote:
> [...]

Consider the array of numbers in binary form. Rearrange the bits so
all the ones are sequential, and do the same for the zeros. The number
of ones followed by the number of zeros is your compressed file.

John
From: Michael Plante on
John wrote:
>On Apr 2, 3:50 pm, Luna Moon <lunamoonm...(a)gmail.com> wrote:
>> [...]
>
>Consider the array of numbers in binary form. Rearrange the bits so
>all the ones are sequential, and do the same for the zeros. The number
>of ones followed by the number of zeros is your compressed file.

That's hardly optimal (effectively Run-Length Encoding (RLE)), and will, in
general, result in a falsely high estimate of "information content". How
many PCX images do you see floating around?

From: Michael Plante on
Michael wrote:
>John wrote:
>>On Apr 2, 3:50 pm, Luna Moon <lunamoonm...(a)gmail.com> wrote:
>>> [...]
>>
>>Consider the array of numbers in binary form. Rearrange the bits so
>>all the ones are sequential, and do the same for the zeros. The number
>>of ones followed by the number of zeros is your compressed file.
>
>That's hardly optimal (effectively Run-Length Encoding (RLE)), and
>will, in general, result in a falsely high estimate of "information
>content". How many PCX images do you see floating around?
>

Sorry, I should have said "it's throwing away information, and then RLE".
So it's going to give nonsense.
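
A two-line Python sketch (mine, not from the thread; "count_code" is
just an illustrative name) makes the loss concrete: two different
inputs map to the same pair of counts, so no decoder could ever tell
them apart.

```python
def count_code(bits):
    """The proposed 'compressed file': (number of ones, number of zeros)."""
    ones = sum(bits)
    return (ones, len(bits) - ones)

a = [1, 0, 1, 0]
b = [0, 0, 1, 1]

assert a != b                                    # different data...
assert count_code(a) == count_code(b) == (2, 2)  # ...identical "compressed" output
```

Since the map is many-to-one, it cannot be lossless, and any
"compression ratio" read off it says nothing about entropy.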

From: robert bristow-johnson on
On Apr 2, 3:50 pm, Luna Moon <lunamoonm...(a)gmail.com> wrote:
> [...]

do you know about Huffman coding? it's in Wikipedia.

if the floating-point numbers are sorta random, not derived from a
"normal-looking" signal, there is not much you can do to compress. if
the range of the numbers is limited (at least probabilistically) then
Huffman coding might help a little. but i tend to think that it
would be only the exponent bits that would be compressible and there
is not much to gain, since the exponent bits are a small portion of
the floating-point word. the mantissa bits will look pretty random,
and there is not much a lossless scheme can do about that.
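
to make the huffman point concrete, here is a rough sketch (python,
not matlab, and every name in it is mine, not anything standard):
build huffman code lengths for the sign+exponent byte of each double
and total them up. for a vector with a limited range that byte is
nearly constant and codes way down; the mantissa bytes would not.

```python
import heapq
import struct
from collections import Counter

def huffman_lengths(symbols):
    """Return {symbol: code length in bits} for a Huffman code."""
    freq = Counter(symbols)
    if len(freq) == 1:                 # degenerate: a single symbol
        return {next(iter(freq)): 1}
    # heap entries: (weight, unique tiebreak, {symbol: depth so far})
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    n = len(heap)
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)
        w2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (w1 + w2, n, merged))
        n += 1
    return heap[0][2]

def coded_bits(symbols):
    """Total bits if each symbol is replaced by its Huffman code."""
    lengths = huffman_lengths(symbols)
    return sum(lengths[s] for s in symbols)

# limited-range data: the sign+exponent byte ('>d' big-endian, byte 0)
# is nearly constant, so it Huffman-codes far below 8 bits/symbol
data = [0.5 + 0.001 * k for k in range(1000)]
sign_exp_bytes = [struct.pack('>d', x)[0] for x in data]
assert coded_bits(sign_exp_bytes) < 8 * len(sign_exp_bytes)
```

but per the above, the exponent byte is only 1 of 8 bytes in the
word, so the overall saving stays small.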

if the signal is reasonably bandlimited, you can use LPC, predict the
next samples (from the previous N samples), and encode the
*difference* between the predicted value and what you really have. if
the prediction is good, the difference should be small and the number
of bits needed to represent it should be small (and you might Huffman
code those).
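
a minimal sketch of the predict-and-code-the-difference idea (again
python, names mine), using the simplest predictor of all, "next
sample = previous sample", on an integer-quantized signal so the
round trip is exact:

```python
import math
from collections import Counter

def residuals(x):
    """r[0] = x[0], r[n] = x[n] - x[n-1] (first-order prediction error)."""
    return [x[0]] + [x[n] - x[n - 1] for n in range(1, len(x))]

def reconstruct(r):
    """Running sum inverts the transform, so the scheme is lossless."""
    x = [r[0]]
    for d in r[1:]:
        x.append(x[-1] + d)
    return x

def entropy_bits(symbols):
    """Empirical (plug-in) entropy in bits per symbol."""
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in Counter(symbols).values())

# a slowly varying, bandlimited-ish signal, quantized to integers
x = [round(1000 * math.sin(0.01 * n)) for n in range(1000)]
r = residuals(x)

assert reconstruct(r) == x                # lossless round trip
assert entropy_bits(r) < entropy_bits(x)  # residuals need far fewer bits
```

the residuals live in a tiny range compared to the raw samples, so
their plug-in entropy comes out much lower; that gap is the bit
saving the predictor buys you (and those small residuals are what
you would then huffman-code).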

i know that for audio, lossless compression doesn't save a lot of
space. it might save maybe 50%.


> Thanks a lot!

FWIW,

r b-j