I defended my Ph.D. thesis on GigaVoxels last July, and the document is now online.
You can download it here:
GigaVoxels: A Voxel-Based Rendering Pipeline For Efficient Exploration Of Large And Detailed Scenes
You can also check my other publications on my Ph.D. webpage.
Ph.D. thesis: GigaVoxels
January 26, 2012 at 2:38 PM
Congratulations Cyril, really nice work! I never thought about the overshading problem in the context of micro-polygons. Great summary of voxel vs. polygon rendering, and of course a great contribution with the GigaVoxels approach. Thanks!
January 27, 2012 at 4:58 PM
This is excellent work. Do you plan to release any source code packages based on GigaVoxels?
January 27, 2012 at 11:09 PM
Congrats! That looks awesome.
January 30, 2012 at 12:56 PM
Congrats, well done :)
February 14, 2012 at 10:41 AM
Working through this right now (page 77!), as a voxel newbie. Amazing work on all levels! One question -- in one sentence, since you defended this last July (6 months ago after all), has anything significantly "changed" that would affect an OpenGL implementation of your GigaVoxels pipeline? Encountered any issues since then, or keeping any errata?
This is truly outstanding work... from the focus on identifying and building on fundamentals, to the scalability upwards and downwards, to the extensibility and integrability with existing mesh-based data or custom procedural generation or lighting techniques of one's choosing. I'm seriously inspired and motivated to put this to work to capture and/or create more beauty out there =)
March 10, 2012 at 5:13 PM
A silly n00b question -- for an N³-tree, each node has 8 child nodes and they are arranged sequentially -- why do you need to store a pointer to the sub-nodes at all? Would computing the "location" (index/offset really) of a node's sub-nodes be too expensive?
March 10, 2012 at 6:03 PM
Hi Phil.
To answer your first question, many things changed, but the main one I can give you is that I am now using one-voxel borders on bricks (only on one side of each axis), but with node-centered values. This is sampled correctly by fetching the value from a neighboring brick when the sample ends up in a region with no border.
For your second question, sub-nodes are arranged sequentially, and that's why we only store one pointer to the 8 sub-nodes. But this is not a real pointer, just an offset into the linear buffer encoded in 30 bits. However, we do need to store this reference because the octree is sparse and updated dynamically, and thus there is no predictable way to "compute" this offset.
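[Editor's note] The layout described here can be sketched as follows: a 32-bit word holding a 30-bit offset into the linear node buffer, with the remaining bits left for flags. The flag names (`has_brick`, `is_terminal`) and their positions are assumptions for illustration, not the exact GigaVoxels node format.

```python
# Hypothetical packing of a child-tile reference into a 32-bit word:
# a 30-bit offset into the linear node buffer, plus two flag bits.
# (Flag semantics are illustrative assumptions, not the thesis layout.)
OFFSET_MASK = (1 << 30) - 1

def pack_node(child_offset, has_brick=False, is_terminal=False):
    assert 0 <= child_offset <= OFFSET_MASK
    flags = (has_brick << 31) | (is_terminal << 30)
    return flags | child_offset

def child_of(packed, i):
    # The 8 children of a node are stored contiguously, so one 30-bit
    # offset addresses the whole tile; child i lives at offset + i.
    return (packed & OFFSET_MASK) + i
```

Because the tree is sparse and tiles are allocated dynamically from a pool, the offset stored here is the only record of where a node's children ended up.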
Hope it helps :)
March 11, 2012 at 12:47 PM
Thanks for the clarification! Another quick question: if N=2 and M=8, each brick contains 512 voxels, and hence (since one node addresses at most one brick) a total of 512 node tiles (4096 nodes) would be sufficient to address 2 million voxels -- which in turn should be quite sufficient to cover a screen resolution of, say, 1920*1080 (also roughly 2 million pixels), correct? But if a node needs 8 bytes, that gives just 32KB of VRAM usage for the node pool (excluding additional cached node pools here). That number seems low -- the octree storage numbers in section 5.3 seem higher than that, as in 1-9 MB. Surely I'm missing something, what is it? :)
March 11, 2012 at 4:07 PM
In fact you need more, for two reasons. The first is that in your 8x8x8 brick you count 8 voxels in depth, so to cover the 2M-pixel screen you need more bricks. Also, the depth complexity per pixel is usually >1, which means that you need more than 1 voxel per pixel. Moreover, your bricks do not necessarily align with the voxels actually needed for each pixel, so you often need more than one brick in depth per pixel. These two reasons come down to the granularity of the bricks: reducing the brick size lets storage converge toward the exact number of voxels actually needed per pixel. But then the borders become very costly...
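[Editor's note] A quick back-of-envelope check of this reply: an 8x8x8 brick covers only an 8x8-pixel footprint on screen at roughly one voxel per pixel, so the 4096 bricks computed in the question fall far short. The average depth complexity of 2 bricks per ray below is an assumed value, purely for illustration.

```python
# Rough estimate of bricks needed for a 1920x1080 view, following the
# reasoning above (depth_complexity is an illustrative assumption).
width, height = 1920, 1080
brick_side = 8                 # M = 8 voxels per axis
depth_complexity = 2           # assumed average bricks per viewing ray

footprint_bricks = (width * height) // brick_side ** 2  # one depth layer
needed_bricks = footprint_bricks * depth_complexity

print(footprint_bricks)        # 32400 bricks for a single depth layer
print(needed_bricks / 4096)    # ~15.8x the 4096 bricks assumed above
```

Even a single depth layer already needs about 32K bricks, which is why the node pool sizes reported in the thesis land in the megabyte rather than kilobyte range.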
March 12, 2012 at 1:04 AM
Ah of course ... the second I started reading your reply it clicked! :) Merci!
April 3, 2012 at 12:32 PM
Impressive work!
At the end of the thesis you provide the FragSniffer tool, which unfortunately I can't compile with Visual Studio 2010 because the precompiled nemographics.lib binary was built with an earlier Visual Studio version I don't have access to (I believe VS2005). This makes it impossible to build your tool with other versions of Visual Studio.
I wonder if you could provide the sources or recompile this library with the Visual Studio 2010 compiler?
October 17, 2012 at 10:20 AM
Hi Cyril, great job!
It's interesting that I only read through it recently. In your paper you mention that you chose to store isotropic Gaussian lobes characterized by an averaged vector D and a standard deviation σ. By Toksvig's paper, we have σ² = (1 − |D|)/|D|. To understand this, I found Toksvig's paper, Mipmapping Normal Maps (http://developer.download.nvidia.com/whitepapers/2006/Mipmapping_Normal_Maps.pdf). However, I don't quite understand how he derives the curve relating σ and |D| from the Gaussian distribution. I mean, if |D| = f(σ), what is f(σ)?
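[Editor's note] The relation here should read σ² = (1 − |D|)/|D|. Simply inverting that algebraically gives |D| = f(σ) = 1/(1 + σ²); this is just algebra on the stated relation, not Toksvig's underlying Gaussian derivation, which the whitepaper covers. A minimal sketch of both directions:

```python
import math

def sigma_from_D(d_len):
    # Toksvig's relation: sigma^2 = (1 - |D|) / |D|
    return math.sqrt((1.0 - d_len) / d_len)

def D_from_sigma(sigma):
    # Inverting the relation above: |D| = f(sigma) = 1 / (1 + sigma^2)
    return 1.0 / (1.0 + sigma ** 2)

print(sigma_from_D(0.5))   # 1.0: a half-length average normal gives sigma = 1
print(D_from_sigma(1.0))   # 0.5: round trip back to |D| = 0.5
```

As |D| shrinks (normals spread out, so the averaged vector shortens), σ grows without bound; as |D| approaches 1, σ goes to 0.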
November 9, 2012 at 11:56 AM
Hi Cyril,
I've noticed in your demo video of the Sponza scene that you only use bricks (the voxels in red) at the lowest level - is this correct?
January 16, 2013 at 1:35 PM
In your thesis you describe how to cone trace soft shadows for point lights and area lights -- how would the technique differ for a directional light, like the one in the Sponza scene that you show?