From: Skybuck Flying on
I am working out the idea/plan/pseudo code in my head right now and in text

And it's really nifty... the encoder can be as usual, quite fast and it
actually doesn't require any interleaving...

which means it doesn't require any extra pointers ! =D

That will make it very fast ! :)

This posting does leak a bit of information about the new algorithm
though... if you read carefully you will understand
where the "innovation" is at ;) :)

Skybuck =D

From: Skybuck Flying on

I just finished implementing my newest/latest RLE RGB algorithm =D

(You may call it a RLE interleaving algorithm) ;)

And I have done some quick testing... and debugging and it seems to work
perfectly/flawlessly =D

"Absolute perfection" comes to mind, at all fronts:

Speed, Compression, Correctness ;) :)

I haven't actually added the bit compression yet... but that should work as
well.

So my final conclusion is:

Yes RLE RGB compression can work for interleaving streams with the correct
algorithm ! =D

Which requires exploiting a certain property of the RLE algorithm when it is
interleaved ! ;)

So the RLE algorithm actually obtains a new property when multiple RLE
streams are interleaved ! =D

Yup that's the secret... something interesting happens when RLE algorithms
are interleaved ! =D

And that property can be used to solve the problem ! =D
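The property itself is never spelled out in the post, so the following Python sketch is only my guess at one mechanism that fits the description: if per-channel RLE runs are interleaved in a fixed, deterministic order (here: always give the next run to whichever channel has reconstructed the fewest pixels so far), the decoder can recover which run belongs to which channel without any extra stream pointers. The function names and the tie-breaking rule are my assumptions, not Skybuck's actual code.

```python
# Hypothetical sketch of interleaved per-channel RLE -- NOT Skybuck's code.
# Assumption: all three channels have equal length, and both sides use the
# same deterministic rule (furthest-behind channel, ties broken by index),
# so the interleaved stream needs no per-channel pointers.

def encode_interleaved(r, g, b):
    """RLE-encode three equal-length channel lists into one run list."""
    def runs(chan):
        out, i = [], 0
        while i < len(chan):
            j = i
            while j < len(chan) and chan[j] == chan[i]:
                j += 1
            out.append((j - i, chan[i]))   # (count, value)
            i = j
        return out

    streams = [runs(r), runs(g), runs(b)]
    pos = [0, 0, 0]            # pixels emitted so far per channel
    idx = [0, 0, 0]            # next unconsumed run per channel
    out, emitted = [], 0
    total = len(r) + len(g) + len(b)
    while emitted < total:
        c = min(range(3), key=lambda k: pos[k])   # furthest-behind channel
        count, value = streams[c][idx[c]]
        idx[c] += 1
        pos[c] += count
        emitted += count
        out.append((count, value))
    return out

def decode_interleaved(run_list):
    """Invert encode_interleaved using the same furthest-behind rule."""
    chans = [[], [], []]
    for count, value in run_list:
        c = min(range(3), key=lambda k: len(chans[k]))  # mirror encoder rule
        chans[c].extend([value] * count)
    return chans
```

A round trip like `decode_interleaved(encode_interleaved(r, g, b))` should return the original three channels, which is the "no extra pointers" effect the post hints at.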

I am now 99.9999999999999999999999999% certain that the final bit
compression is going to work as well.

However the proof is in the pudding.

For now I have implemented a conceptual implementation which uses a byte and
a longword.

The byte and longword will be replaced by an 8-bit bitfield and Skybuck's
universal code/bitfield.

These bitfields can then be bit-packed onto the output.
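Skybuck's universal code is not described anywhere in the thread, so purely as a stand-in, here is what bit-packing the runs could look like with a generic universal code (Elias gamma) for the counts and a fixed 8-bit field for the value; every name here is an illustrative assumption, not the actual code/bitfield from the post:

```python
# Stand-in sketch: Elias gamma is used here ONLY as a generic example of a
# universal code; Skybuck's own code is not specified in the thread.

def elias_gamma(n):
    """Elias gamma code for n >= 1: (len-1) zero bits, then n in binary."""
    assert n >= 1
    b = bin(n)[2:]                     # binary digits without '0b'
    return '0' * (len(b) - 1) + b

def pack_runs(run_list):
    """Pack (count, value) pairs: gamma-coded count + fixed 8-bit value."""
    bits = []
    for count, value in run_list:
        bits.append(elias_gamma(count))
        bits.append(format(value, '08b'))   # value as a fixed 8-bit field
    return ''.join(bits)
```

Small counts get short codes (a run of 1 costs a single bit for the count), which is the usual reason to pick a universal code over a fixed-width count field.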

Then a final test will be done to say with 100% certainty if it's possible.

The universal code will have to be slightly changed to match the property...
or I could create a general universal code which does it on both sides, hint

Then maybe later, sometime in the far future, I might replace the universal
code with a dynamic Huffman code for perhaps the counts and perhaps the
colors. The counts could be risky though because there could be many different
ones... the colors might compress better.

So maybe my universal code for this particular compression method is pretty
good already.

This whole experiment is just to see how to create a good RLE RGB
interleaved algorithm, to learn from it.

Now that I seem to have perfected it I can move on to other things like
perhaps doing it at the bitplane level and perhaps using GPGPU/OpenGL to try
and create an even faster version.

I want to release a super fast lossless codec sometime in the near (?)
future for 1. fun and 2. speed and 3. compression.

Speed is most important because having to re-encode/transcode sucks and
takes a lot of time with current codecs.

Maybe it could also be used for real-time recording of video losslessly.

Second in importance is compression, to be able to store video for the
future and look back on it.

For now I wonder how my codec will compare to, for example, the Fraps codec...
it seems the Fraps codec is fast and achieves pretty good lossless
compression ! ;) :)

As soon as I have some figures available I will perhaps let you guys know !
;) =D

Skybuck =D

From: Skybuck Flying on
Also during the development I sometimes took a little pause or had
breakfast/lunch and then I listened to for example this music:

Which I have fond memories of ! =D

Much fun coding this stuff and much fun listening to this music ! ;) =D

Skybuck =D

From: Skybuck Flying on

"Skybuck Flying" <IntoTheFuture(a)> wrote in message
> Also during the development I sometimes took a little pause or had
> breakfast/lunch and then I listened to for example this music:

Oh yeah before they removed that link again, the music title is:

"Primal Scream - Can't Go Back" ;) :)


Skybuck =D

From: MitchAlsup on
I once wrote a compression algorithm that would take vectors for a
VLSI tester and compress them. It turns out that when one looks at the
vectors there is very little change in the vertical direction, and
much change in the horizontal direction.

RLE in the horizontal direction got 40%-ish lossless compression.
RLE in the vertical direction was getting 99.3% lossless compression.

The compression was so good, it was actually faster to read these
multi-GByte files into the tester in compressed form and expand them
on the fly inside the tester than to read the image directly off the
disk.
Moral: compress on the dimension that has commonality.
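Mitch's figures come from a real tester image; as a toy illustration only (made-up data, not his code), here is a small Python check that counting RLE runs along each dimension shows why the vertical direction wins when the columns are nearly constant:

```python
# Toy demonstration of "compress on the dimension that has commonality":
# fewer RLE runs means better compression, so compare run counts per axis.

def run_count(seq):
    """Number of RLE runs in a sequence (fewer runs = better compression)."""
    return sum(1 for i, v in enumerate(seq) if i == 0 or v != seq[i - 1])

rows = [[0, 1, 0, 1, 1, 0],
        [0, 1, 0, 1, 1, 0],
        [0, 1, 0, 1, 1, 0],
        [0, 1, 1, 1, 1, 0]]   # columns are nearly constant, rows are busy

horizontal = sum(run_count(r) for r in rows)         # RLE along each row
vertical = sum(run_count(c) for c in zip(*rows))     # RLE along each column
# vertical needs far fewer runs than horizontal for this data
```

On this made-up grid the column-wise pass produces 7 runs against 18 for the row-wise pass, mirroring the direction-dependent gap Mitch describes.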