
From: robert bristow-johnson on 4 Apr 2010 20:23

On Apr 4, 1:58 pm, glen herrmannsfeldt <g...(a)ugcs.caltech.edu> wrote:

> Continuing, the output of a linear-congruential random number
> generator is also easy to predict if you know the constants of
> the generator.

yeah, i guess you need a couple of constants and the initial seed
value. but don't you also need to somehow encode the rng algorithm,
too?

> If you don't, and you have a big enough sample,
> then you can likely find the pattern. (If you have the bits
> exactly, though I am not sure how long it would take.)
>
> If you have, say, sin() of the linear-congruential number
> stream then it is likely much more difficult.

it will look different in a histogram. suppose the rng was scaled to
be uniformly distributed over a segment as long as any multiple of
2*pi; then the p.d.f. would go up as it approaches +1 or -1.

r b-j
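A quick numerical sketch of r b-j's histogram point (written in Python rather than Matlab; all names here are my own, nothing in the thread specifies them): if u is uniform over a whole number of periods of sin(), then x = sin(u) has the arcsine-shaped density 1/(pi*sqrt(1 - x^2)), which rises sharply as x approaches +1 or -1.

```python
import math
import random

random.seed(1)
N = 200_000
# Uniform over exactly one period of sin(); any integer multiple of
# 2*pi gives the same distribution for sin(u).
samples = [math.sin(random.uniform(0.0, 2.0 * math.pi)) for _ in range(N)]

# Compare two bands of equal width 0.1: near the edges vs. near zero.
edge = sum(1 for x in samples if abs(x) > 0.9)     # |x| in (0.9, 1.0]
middle = sum(1 for x in samples if abs(x) < 0.1)   # |x| in [0.0, 0.1)

# Under a flat p.d.f. the two counts would roughly match; here the
# density piles up at the edges (about 29% vs. 6% of samples).
print(edge, middle)
```

So the output of sin(LCG) is visibly non-uniform, which is one way an observer could tell it apart from a raw uniform generator.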
From: Jerry Avins on 4 Apr 2010 20:34

On 4/4/2010 2:42 PM, glen herrmannsfeldt wrote:
...
> That shows that the transform can be reversed, not necessarily
> the fastest way to do it.

Great! Too clever by half, though. No wonder it's fairly new.

Jerry
--
"It does me no injury for my neighbor to say there are 20 gods, or no
God. It neither picks my pocket nor breaks my leg."
  Thomas Jefferson to the Virginia House of Delegates in 1776.
From: glen herrmannsfeldt on 5 Apr 2010 00:39

In comp.dsp robert bristow-johnson <rbj(a)audioimagination.com> wrote:
> On Apr 4, 1:58 pm, glen herrmannsfeldt <g...(a)ugcs.caltech.edu> wrote:
>> Continuing, the output of a linear-congruential random number
>> generator is also easy to predict if you know the constants of
>> the generator.
>
> yeah, i guess you need a couple of constants and the initial seed
> value. but don't you also need to somehow encode the rng algorithm,
> too?

Well, linear congruential pretty much means multiply by a constant,
add a constant (possibly zero), and modulo a constant. I am not
actually sure how long it takes, given a sufficiently long sample of
the output, to find the constants.

>> If you don't, and you have a big enough sample,
>> then you can likely find the pattern. (If you have the bits
>> exactly, though I am not sure how long it would take.)
>>
>> If you have, say, sin() of the linear-congruential number
>> stream then it is likely much more difficult.
>
> it will look different in a histogram. suppose the rng was scaled to
> be uniformly distributed over a segment as long as any multiple of
> 2pi, then the p.d.f. would go up as it approaches +1 or -1.

Yes, you could do that. But assuming that you have the ability to
find the constants for an LCG from the output, it is much harder if
you don't have all the bits of the generator output. If, for example,
you have only the single-precision sine, then you likely don't have
enough bits left after taking the arcsine.

-- glen
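Glen's "easy to predict" claim can be made concrete. For an LCG x[n+1] = (a*x[n] + c) mod m, two consecutive differences satisfy x2 - x1 = a*(x1 - x0) mod m, so if the modulus m is known (it is often a power of two), a and c fall out of three raw outputs by modular arithmetic. A sketch, using glibc's rand() constants purely as a demo choice (the thread fixes no particular generator):

```python
# Known generator, playing the role of the "unknown" source.
m, a, c = 2**31, 1103515245, 12345

def lcg(x):
    """One step of the linear congruential recurrence."""
    return (a * x + c) % m

x0 = 123456789
x1 = lcg(x0)
x2 = lcg(x1)
x3 = lcg(x2)

# Recover a and c from three consecutive outputs and the known m.
# pow(b, -1, m) is the modular inverse (requires gcd(x1 - x0, m) == 1;
# for m a power of two that just means the difference is odd).
a_rec = (x2 - x1) * pow(x1 - x0, -1, m) % m
c_rec = (x1 - a_rec * x0) % m

# Predict the generator's next output from the recovered constants.
x3_pred = (a_rec * x2 + c_rec) % m
print(a_rec == a, c_rec == c, x3_pred == x3)
```

This is exactly why glen's follow-up matters: the arithmetic above needs the exact integer outputs, and a single-precision sin() of the stream destroys the low bits it depends on.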
From: Luna Moon on 6 Apr 2010 13:54

On Apr 5, 5:29 pm, "Jan Simon" <matlab.THIS_Y...(a)nMINUSsimon.de> wrote:
> Dear Luna!
>
> > I have a vector of real numbers in Matlab. How do I compress them?
> > Of course this has to be lossless, since I need to be able to
> > recover them.
> >
> > The goal is to study the Shannon rate and entropy of these real
> > numbers, so I decided to compress them and see how much compression
> > ratio I can get.
> >
> > I don't need to write the result into compressed files, so those
> > headers, etc. are just overhead for me which affects my calculation
> > of the entropy... so I just need a bare version of the compression
> > ratio...
>
> Michael Kleder's function compresses data in the memory with the zlib:
> http://www.mathworks.com/matlabcentral/fileexchange/8899
>
> E.g. for sin(1:1e5) this saves 5% memory. 7-zip reduces the file by
> at least 25%.
>
> Good luck, Jan

So how about this approach: I first write the floating-point numbers
to a TEXT file, then call WinZip or 7-Zip from within Matlab, measure
the file size before and after compression, and compute the ratio.
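Jan's in-memory suggestion sidesteps the file-header overhead Luna Moon is worried about. A sketch of that idea in Python (plain zlib on the raw bytes of the doubles, not Michael Kleder's Matlab function, and with no file I/O in the ratio):

```python
import math
import struct
import zlib

# Something like Jan's sin(1:1e5) example.
data = [math.sin(x) for x in range(1, 100001)]

# Pack into raw IEEE 754 doubles: 8 bytes per value, little-endian.
raw = struct.pack('<%dd' % len(data), *data)

# Deflate in memory; no archive headers enter the measurement.
compressed = zlib.compress(raw, 9)
ratio = len(compressed) / len(raw)
print('raw: %d bytes, compressed: %d bytes, ratio: %.3f'
      % (len(raw), len(compressed), ratio))
```

Because the compression is lossless, `zlib.decompress(compressed)` recovers the bytes exactly, so the original vector is recoverable bit-for-bit, which is the constraint Luna Moon stated up front.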
From: Luna Moon on 6 Apr 2010 13:55
On Apr 5, 7:32 pm, TideMan <mul...(a)gmail.com> wrote:
> On Apr 6, 9:29 am, "Jan Simon" <matlab.THIS_Y...(a)nMINUSsimon.de>
> wrote:
>
> [same quote as in the previous post snipped]
>
> An entirely different approach is "wavelet shrinkage".
> Google it.
> It's easy to do in Matlab if you have the wavelet toolbox.
> I use the techniques for denoising and despiking, but I've never
> tried to compress data with them.
> I use Shannon entropy to figure out the optimum mother wavelet.

Sounds good. I guess the question is: how do I compute the Shannon
entropy of a sequence of floating-point numbers?
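One common answer to that closing question, sketched here as an assumption rather than anything the thread settles on: quantize the floats into equal-width bins and take the entropy of the bin histogram, H = -sum(p_i * log2(p_i)). Note the estimate depends on the (arbitrary) bin count, which is precisely why entropy of continuous data needs a quantization choice:

```python
import math
from collections import Counter

def shannon_entropy(values, bins=64):
    """Entropy in bits/sample of values quantized into equal-width bins."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0   # guard against all-equal input
    counts = Counter(min(int((v - lo) / width), bins - 1) for v in values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

data = [math.sin(x) for x in range(1, 100001)]
h = shannon_entropy(data)
# With 64 bins the maximum possible value is log2(64) = 6 bits/sample;
# the arcsine-shaped sin() histogram comes in below that.
print('%.2f bits/sample' % h)
```

A constant sequence gives 0 bits/sample, and a perfectly uniform one approaches log2(bins), which brackets what the estimate can say about Luna Moon's data.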