From: Nicolas Bonneel on
Skybuck Flying wrote:
>> They also give a good review of the previous work which should allow you
>> to find all the infos you want, including a STAR report [EHK*06].
>
> ?

A state-of-the-art report. Just check the reference [EHK*06].


>> Finally, storing hollow closed surfaces as volumetric data instead of
>> polygons or parametric surfaces is not that clever (except if you're
>> dealing with implicit surfaces simulation etc.).
>
> Polygons need to be rendered/pixelated with complex formulas and routines.

Projection is a division by z. It's not complex.
Rasterization is not complex either, whether you rasterize a quad and
test the triangle inside/outside condition, or do a Bresenham-like
rasterization.
It can be costly for tiny polygons, though.
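To make this concrete, here is a minimal sketch of both ideas (toy
Python, not an optimized rasterizer; the function names are mine):

```python
def project(x, y, z, focal=1.0):
    # Perspective projection really is just a division by z
    # (assumes z > 0, i.e. the point is in front of the camera).
    return (focal * x / z, focal * y / z)

def edge(ax, ay, bx, by, px, py):
    # Signed area of triangle (a, b, p): positive if p lies to the
    # left of the edge a->b.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def inside_triangle(p, a, b, c):
    # A pixel is inside a counter-clockwise triangle if it is on the
    # same side of all three edges -- the inside/outside test used
    # when rasterizing a quad of candidate pixels.
    return (edge(a[0], a[1], b[0], b[1], p[0], p[1]) >= 0 and
            edge(b[0], b[1], c[0], c[1], p[0], p[1]) >= 0 and
            edge(c[0], c[1], a[0], a[1], p[0], p[1]) >= 0)
```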

> Voxels can be ray-traced and massively parallel too ?!?

They can be ray-marched. This is much more costly than ray tracing,
where ray/triangle intersections have an analytic formula.
Usually you just use giant voxels (i.e. octrees, kd-trees...) as an
acceleration structure to ray trace polygons.
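For comparison, the analytic ray/triangle test is only a handful of
arithmetic operations; a Python sketch of the standard
Moeller-Trumbore formula (not from the paper under discussion):

```python
def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
    # Moeller-Trumbore: analytic ray/triangle intersection. Returns
    # the ray parameter t of the hit point, or None on a miss.
    def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def cross(a, b): return (a[1]*b[2]-a[2]*b[1],
                             a[2]*b[0]-a[0]*b[2],
                             a[0]*b[1]-a[1]*b[0])
    def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(d, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:            # ray parallel to triangle plane
        return None
    inv = 1.0 / det
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) * inv     # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(d, qvec) * inv        # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) * inv
    return t if t > eps else None
```

Ray marching through a volume, by contrast, must take many small steps
per ray, which is where the extra cost comes from.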

> Like John Carmack said: When the polygons become that small it doesn't make
> much sense to use polygons anymore...

I agree. I've seen a PowerPoint presentation given at SIGGRAPH a few
years ago where he evaluated the cost of transferring the data to the
GPU against the cost of rendering the pixels on the CPU and
transferring the resulting pixels, given that data transfer is the
bottleneck rather than the cost of rendering. If a triangle is less
than a few pixels wide, it's more useful to transfer the rasterized
pixels directly than to send the three vertices + normals + UVs and
ask the GPU to rasterize.
That's in part why billboards exist.
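A back-of-the-envelope version of that argument (the byte counts are
my own illustrative assumptions, not the figures from the talk):

```python
# Assume each vertex carries position (3 floats) + normal (3 floats)
# + UV (2 floats), 4 bytes per float:
bytes_per_vertex = (3 + 3 + 2) * 4      # 32 bytes
triangle_upload = 3 * bytes_per_vertex  # 96 bytes for one triangle

# Sending the triangle pre-rasterized instead costs 4 bytes (RGBA8)
# per covered pixel, so below this many pixels the pixels win:
pixel_payload = 4
break_even = triangle_upload // pixel_payload
print(break_even)  # 24
```

So under these (assumed) formats, a triangle covering fewer than about
24 pixels is cheaper to ship as pixels than as geometry.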

>
> Points are not an option, they're too small.

No. Points *are* an option. More precisely, you don't render "points"
but "splats" which are basically stretched discs. Look for example at:
http://www.irit.fr/~Gael.Guennebaud/docs/deferred_splatting_eg04.pdf
They render forests with points.

>
> Voxels seem OK, they lie in a grid... so they're more like boxes.

They are also prone to filtering issues, are harder to render than
splats, are harder to ray trace than triangles or even splats, can
require a lot of memory for simple surfaces, can result in artefacts
(a kind of stratification), are harder to edit or model, and are harder
to texture/parametrize (look at the paper "Tile Trees" at i3D 2007 for
example, which works for surfaces:
http://www-sop.inria.fr/reves/Basilic/2007/LD07/LD07.pdf )...

There is no magic in it. I agree it is a good area of research; I just
don't believe it is the solution to everything.


Cheers
From: Nicolas Bonneel on
Skybuck Flying wrote:
> "Nicolas Bonneel" <nbonneel(a)cs.ubc.ca> wrote in message
>> This paper deals both with the acceleration structure (octree), the way to
>> use and update it efficiently on the GPU (during camera motion for
>> example, if the goal is rendering) and the rendering itself (ray marching
>> and filtering).
>
> As long as it renders one or two objects, that's not very impressive...
>
> It needs to render entire scenes... and then the scenes need to have physics
> as well.

Are you joking? It needs to make the coffee as well??
They render billions of voxels, which is more than enough to render full
scenes. When they show the camera going into the human body, it *is* a
full scene being drawn. Replace the flesh with Quake 4 walls if you
want. Same for the Sierpinski sponge.

Physics is out of scope.

> Voxels/Volumes can be compressed well, but need to be decompressed to do
> computations on...
>
> I don't think CPU's are suited to handle that kind of work...
>
> Nor do graphics cards seem really suited for it...

The reference I gave shows that they are. Voxels are "decompressed", as
you said, since ray marching is performed on them. Replace the ray
marching step with anything else you want.

> The computations could be done inside the new technology as well so that the
> big volumes don't have to be transferred.

This looks like a crank sentence.

>
> The volumes get decompressed only when needed.

Nobody has waited for you to do that.


From: Skybuck Flying on

"Nicolas Bonneel" <nbonneel(a)cs.ubc.ca> wrote in message
news:hti1mq$6bn$1(a)swain.cs.ubc.ca...
> Skybuck Flying wrote:
>> "Nicolas Bonneel" <nbonneel(a)cs.ubc.ca> wrote in message
>>> This paper deals both with the acceleration structure (octree), the way
>>> to use and update it efficiently on the GPU (during camera motion for
>>> example, if the goal is rendering) and the rendering itself (ray
>>> marching and filtering).
>>
>> As long as it renders one or two objects, that's not very impressive...
>>
>> It needs to render entire scenes... and then the scenes need to have
>> physics as well.
>
> Are you joking ?

No.

> It needs to make the coffee as well??

No.

> They render billions of voxels which is more than enough to render full

Let's see, a billion you say, a few billion you say?

What is a billion?

A billion is:

1,000,000,000

Eye-opener for you:

That's not much in 3D.

That's only a resolution of 1000x1000x1000.

That won't even fill my 1920x1200 monitor!

The moral of the story: 1D figures/numbers mean nothing when it comes to
3D and are very deceptive.
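The arithmetic, spelled out:

```python
# A billion voxels is only a 1000-cube...
voxels = 1000 ** 3
print(voxels)            # 1000000000

# ...and a single 1000x1000 face of that cube is smaller than a
# 1920x1200 frame:
face = 1000 * 1000
pixels = 1920 * 1200
print(face < pixels)     # True
```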

> scenes. When they show the camera going into the human body, it *is* a
> full scene which is being drawn. Replace the flesh by Quake 4 walls if you
> want. Idem for the Sierpinsky sponge.
>
> Physics is out of the scope.

Physics needs to be done for games as well... even if it's something as
simple as collision/intersection detection.

>> Voxels/Volumes can be compressed well, but need to be decompressed to do
>> computations on...
>>
>> I don't think CPU's are suited to handle that kind of work...
>>
>> Nor do graphics cards seem really suited for it...
>
> The reference I gave show that they are.

Nope, I know very well what my hardware from 2006 can do.

It has a maximum resolution of 4096x4096 for 2D textures, which in
itself isn't even that big.

And for 3D it gets worse: for 3D it's actually 512x512x512.

That's even smaller than the small 1000x1000x1000 example!
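The memory arithmetic behind those limits (assuming 4 bytes per texel,
which is my assumption; actual formats vary):

```python
# Uncompressed sizes at 4 bytes (RGBA8) per texel:
tex_2d = 4096 * 4096 * 4        # max 2D texture
vol_3d = 512 * 512 * 512 * 4    # max 3D texture
print(tex_2d // 2**20)          # 64  (MiB)
print(vol_3d // 2**20)          # 512 (MiB)
```

Even the "small" 512^3 volume already fills the entire memory of a
typical 2006 card if stored uncompressed, which is why compression and
streaming matter.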

> Voxels are "decompressed" as you said, since ray marching is performed on
> them. Replace the ray marching step by anything else you want.

I guess for now I would be mostly interested in the decompression itself.

Leave out the whole rendering/opengl and what not stuff... and just focus on
the decompression if possible...

First explain it conceptually... give it a name if possible.

Try to explain the basic concepts of the compression if possible.

For the time being I am not interested in actually writing any volume
software or whatever...

Though I am slightly interested in it, in the technology, and maybe for
the future, if/when I might want to write some software for it.

So for now I want to see "light" documentation which is easy to read and
explains the basic concepts as simple as possible...

It doesn't seem your document does that... it could help if it had some
pseudocode in a Pascal-like language or so... some comments or so.

Maybe some more pictures... whatever helps to make it more understandable ;) :)

I want it/the documentation/document to be modular, not integrated...

I don't want a document describing the total renderer.

I want separate documents focusing on each part of it.

For now I am only interested in the compression/decompression of volumes.

So everything in the document can be "thrown away" except the compression
side of it.

So my advice to you is:

Make a new document describing the compression/decompression only.

Maybe then I will start to take it a bit more seriously... because how
else would you transfer 1 billion bytes?

Just once is nice for slide shows/PowerPoint... but for games I expect
a lot more traffic... that's why the compression is the most important
part of it.

I think the PCI bus was actually limited to a few billion bytes per
second... so that's very low for uncompressed stuff.

I think I made my point clear...

>> The computations could be done inside the new technology as well so that
>> the big volumes don't have to be transferred.
>
> This looks like a crank sentence.

No, it's just like images.

Images are read from the hard disk in compressed form... then they
enter memory/the CPU and are decompressed there, and they can stay
there for a while, until the next compressed image needs to be
decompressed and room is needed.

>>
>> The volumes get decompressed only when needed.
>
> Nobody has waited for you to do that.

The technology could be intelligent and automatically throw away
decompressed volumes if it needs to make room for new ones.

It could also remain dumb and require the programmer to explicitly
"release" volumes/memory.

Or maybe even both: maximum control when desired, ease of use
otherwise.
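A toy sketch of what that could look like (purely hypothetical; the
class and names are made up for illustration, this is not the paper's
scheme):

```python
from collections import OrderedDict

class VolumeCache:
    """Decompress volumes on demand, evict the least-recently-used
    ones automatically when room is needed, and also allow explicit
    release for manual control."""

    def __init__(self, capacity, decompress):
        self.capacity = capacity      # max volumes kept decompressed
        self.decompress = decompress  # user-supplied decompressor
        self.cache = OrderedDict()

    def get(self, key, compressed):
        if key in self.cache:
            self.cache.move_to_end(key)         # recently used
        else:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # automatic eviction
            self.cache[key] = self.decompress(compressed)
        return self.cache[key]

    def release(self, key):
        # Explicit release, for "maximum control".
        self.cache.pop(key, None)
```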

Bye,
Skybuck.


From: Skybuck Flying on
>> Points are not an option, they're too small.
>
> No. Points *are* an option. More precisely, you don't render "points" but
> "splats" which are basically stretched discs. Look for example at:
> http://www.irit.fr/~Gael.Guennebaud/docs/deferred_splatting_eg04.pdf
> They render forests with points.

I guess the "splat" idea is the same as the "vector balls" idea from long,
long, long ago:

If I wanted to try it I would probably try to drive this concept to the
limit:

"UltraForce Vector Demo:"

http://www.youtube.com/watch?v=2suZ1KkZ9HI

See 3:07, where the vector balls start ;) :)

Is that the same concept as a "splat"? ;) :)

To me vector balls seem to represent reality the closest...

Aren't we all built up of tiny little molecules/atoms anyway?

They were always pictured as "balls" in high school...

Though later on they showed protons/neutrons/electrons...

And ultimately it might be energy strings or whatever...

But so far "atoms"/"tiny little balls" seem a good approximation of
reality for "matter"?

Bye,
Skybuck.


From: Nicolas Bonneel on
Skybuck Flying wrote:
> "Nicolas Bonneel" <nbonneel(a)cs.ubc.ca> wrote in message
> news:hti1mq$6bn$1(a)swain.cs.ubc.ca...
>> Skybuck Flying wrote:
>>> "Nicolas Bonneel" <nbonneel(a)cs.ubc.ca> wrote in message
>>>> This paper deals both with the acceleration structure (octree), the way
>>>> to use and update it efficiently on the GPU (during camera motion for
>>>> example, if the goal is rendering) and the rendering itself (ray
>>>> marching and filtering).
>>> As long as it renders one or two objects, that's not very impressive...
>>>
>>> It needs to render entire scenes... and then the scenes need to have
>>> physics as well.
>> Are you joking ?
>
> No.
>
>> It needs to do the coffee as well ??
>
> No.
>
>> They render billions of voxels which is more than enough to render full
>
> Let's see, a billion you say, a few billion you say ?
>
> What is a billion ?
>
> A billion is:
>
> 1000.000.000
>
> Eye-opener for you:
>
> That's not in 3D
>
> That's a resolution of 1000x1000x1000.

First, I said billionS.

Then, just *read* the paper, and see that they render from 8192^3
resolution voxels for real data, and up to 8.4M^3 virtual resolution
(Sierpinski).
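The arithmetic:

```python
# 8192^3 is not "a billion" -- it is roughly 550 billion voxels,
# more than 500x the 1000^3 example:
print(8192 ** 3)  # 549755813888
```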

>
> That won't even fill my 1920x1200 monitor !

Where did you see that 1 voxel = 1 pixel?


>>> Voxels/Volumes can be compressed well, but need to be decompressed to do
>>> computations on...
>>>
>>> I don't think CPU's are suited to handle that kind of work...
>>>
>>> Nor do graphics cards seem really suited for it...
>> The reference I gave shows that they are.
>
> Nope, I know very well what my hardware from 2006 can do.
>
> It has a resolution of 4096x4096 for 2D textures which in itself isn't even
> that big.
>
> And for 3D it gets worse, for 3D it's actually 512x512x512.
>
> That's even smaller than the small 1000x1000x1000 example !

For *THEIR* 8192^3 resolution, THEY use an 8800 GTS graphics card with
512MB of memory. That was basically quite high-end hardware in 2006, or
at worst 2007, but it could be found. Today this hardware is outdated
(how much does an 8800 GTS cost now? 20 bucks?).

They didn't make up their results (and I personally know all of the
authors well, and have published with one of them). They just...
compress data! That is precisely the scope of the paper, but you don't
seem willing to read it.

I suggest you read a little bit more before posting your "innovative" ideas.


> It doesn't seem your document does that... it could help if it had some
> pseudo code in a pascal like language or so... some comments or so.

lol, writing Pascal code just for you? Just *read* papers, and not
just one.


> Maybe some more pictures... whatever helps to make it more understandable ;) :)

There is an accompanying video.

> I want it/the documentation/document to be modular, not integrated...
>
> I don't want a document describing the total renderer.
>
> I want separate documents focusing on each part of it.

If *you* want something, do it yourself! Researchers are not here to
fulfill your personal desires because you are too lazy to read.


> Maybe then I will start to take it a bit more seriously... because how else
> would you transfer 1 billion bytes ?

Read it. It was also presented at SIGGRAPH 2009 (but it is an i3D
paper).


>>> The volumes get decompressed only when needed.
>> Nobody has waited for you to do that.
>
> The technology could be intelligent and automatically throw away
> decompressed volumes if it needs to make room for new ones.

You definitely need to read.