OK. Let's start in 2D. Imagine you had a blank sheet of white paper, some coloured paint, and a list of instructions in front of you. Here are those hypothetical instructions:
Now, assuming you followed the instructions right, and assuming I followed them right when I wrote this tutorial, you should end up with a page looking more or less like this:
All pretty easy. That is, in principle, how the PostScript page description language works: you specify the colours and coordinates of shapes in two dimensions. If you happen to be using a PostScript printer, the program you print from generates a PostScript-format list of shapes and colours and transmits it to the printer. The only difference between a typical printout and the example here is the complexity of the shapes that get sent.
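To make this concrete, here is a minimal sketch of what such a list of shapes looks like in PostScript – a hypothetical fragment of my own, not the output of any particular program – which draws a filled red circle:

```postscript
%!PS
% Select red (RGB components range from 0 to 1)
1 0 0 setrgbcolor
% A circle: centre (300, 400), radius 80, swept from 0 to 360 degrees
newpath
300 400 80 0 360 arc
fill
showpage
```

Every shape on the page is just more of the same: pick a colour, describe a path, then fill or stroke it.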
What happens when you extend things to 3D? Well now, things do get more complicated. Obviously, you have to specify 3D coordinates for all the objects in the scene. Less obviously, there are properties solid objects possess that 2D objects do not. Wood, for example, has a three-dimensional texture. Yes, you can easily imagine a flat sheet of paper or a laminate work surface with a convincing 2D wood texture on it. But what would happen if you drilled a hole through it? Real wood is wood all the way through, so you would expect to see a different part of the wood grain revealed inside the drill hole. If it's just a sheet of laminate, that wouldn't happen. A raytracer or 3D renderer would be able to cope with both such eventualities – assuming it was programmed with the correct information!
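In PoVRay, for instance, wood is a solid 3D texture: the grain pattern is defined throughout space, not painted onto the surface. A hypothetical snippet (the colours and scale here are my own invention) might look like this:

```pov
// A wooden block with a solid 3-D texture: cut or drill into it
// and the grain continues inside, just like real wood.
box {
  <-1, -1, -1>, <1, 1, 1>
  pigment {
    wood
    color_map {
      [0.0 color rgb <0.65, 0.45, 0.25>]  // light grain
      [1.0 color rgb <0.35, 0.20, 0.10>]  // dark grain
    }
    turbulence 0.1  // make the grain slightly irregular
  }
}
```

Carve away part of that box with another shape and the renderer will happily show you the grain inside – something no 2D picture of wood can do.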
Another difference: 3D objects can refract light – think of a glass of water, for example. So in a 3D renderer you don't just specify the colour and dimensions of an object; you may have to specify many other properties of the material, such as whether it's grainy like wood, glossy and slightly reflective like plastic, highly polished and reflective like metal, wispy and translucent like fog, or whatever other options the renderer may provide. Of course, 'highly polished and reflective, like metal' is a gross over-simplification – what about dull, rusty iron? So a good raytracer has to let you specify all sorts of parameters that vary from one material to another. A major part of the art of raytracing is creating surface textures that actually look like the material they're supposed to be.
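In PoVRay, these surface properties mostly live in the finish (and interior) blocks of an object. The numbers below are illustrative guesses of mine, not calibrated material values:

```pov
// Three spheres of identical shape, but different 'materials'.
sphere {  // matte, plastic-like
  <-2.5, 1, 0>, 1
  pigment { color rgb <1, 0, 0> }
  finish { diffuse 0.8 phong 0.3 }
}
sphere {  // polished metal-like: strongly reflective, tight highlight
  <0, 1, 0>, 1
  pigment { color rgb <0.8, 0.8, 0.9> }
  finish { diffuse 0.3 reflection 0.7 specular 0.9 roughness 0.01 }
}
sphere {  // glass-like: mostly transparent, and it refracts light
  <2.5, 1, 0>, 1
  pigment { color rgbf <0.9, 0.9, 1, 0.9> }  // f = filter (transparency)
  finish { reflection 0.1 }
  interior { ior 1.5 }  // index of refraction, roughly that of glass
}
```

Tuning parameters like these until a surface genuinely reads as 'metal' or 'glass' is exactly the art referred to above.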
Two final points, so subtle they're often taken for granted: When you look at a drawing on a piece of paper, it's assumed that there's enough light for you to be able to see by! Also, you don't generally try to look at the paper exactly edge-on and then complain that you can't see what's on it. So in 3D, by analogy, you also need to define what light sources exist, and you need to specify your viewpoint. Novices frequently overlook both of these requirements.
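In PoVRay terms, those two requirements map directly onto the light_source and camera statements. A hypothetical example, with positions chosen arbitrarily:

```pov
// Without a light source the scene renders pitch black;
// without a camera there is no viewpoint to render from.
camera {
  location <0, 2, -5>   // where the eye is
  look_at  <0, 1, 0>    // what it is pointed at
}
light_source {
  <10, 10, -10>         // position of a point light
  color rgb <1, 1, 1>   // white light
}
```

Forget either of these and you'll get a black image (no light) or an error or an arbitrary default view (no camera) – hence the novice complaints.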
Let's now recast our simple 2D tutorial in 3D terms, adding new 3D coordinates where needed. The newly invented z-coordinates are in italics:
Now, if you programmed all those instructions into a raytracer you might get something like the following:
This scene, simple as it is, has a lot more to it than the 2D version. The shapes are slightly reflective, so you can see the green triangular prism reflected in the surface of the sphere and vice versa. What's more, as this is virtual reality, the computer has raised no objection to the fact that all three shapes overlap. That's not an optical illusion – the green prism really does impinge slightly on the red sphere, and the blue rod intersects both of the other two shapes. The blue line of the 2D version is now a blue column, and as it's rather a dark colour it doesn't show up very well. There are shadows. The rest of the surroundings are an oppressive black – because there's nothing there unless you explicitly define it – so the scene is a bit darker than the 2D diagram we started from. But without these extra details, you just don't get a convincing 3D environment.
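Pulling the pieces together, a complete scene along these lines might look like the sketch below. This is my own reconstruction with invented coordinates, not the actual source file behind the picture:

```pov
// A minimal complete scene: three overlapping, slightly reflective
// shapes, one white light, one camera. The background defaults to black.
camera { location <0, 3, -8> look_at <0, 1, 0> }
light_source { <5, 10, -10> color rgb <1, 1, 1> }

#declare Shiny = finish { reflection 0.25 phong 0.5 }

sphere {                       // the red sphere
  <-1, 1, 0>, 1
  pigment { color rgb <1, 0, 0> }
  finish { Shiny }
}
prism {                        // the green triangular prism
  linear_spline 0, 2, 4,       // swept from y=0 to y=2; 4 points close the triangle
  <-0.5, -0.5>, <1.5, -0.5>, <0.5, 1>, <-0.5, -0.5>
  pigment { color rgb <0, 1, 0> }
  finish { Shiny }
}
cylinder {                     // the dark blue column
  <0, 0, 0>, <0, 2.5, 0>, 0.3
  pigment { color rgb <0, 0, 0.4> }
  finish { Shiny }
}
```

Note that nothing forbids the shapes from occupying the same space – the renderer simply draws whatever surfaces are visible from the camera.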
There – you now understand how raytracing is done, from the user's point of view. The only real differences are that in these examples I've used plain English and left out some technical bits in the interest of clarity and simplicity. But convert my examples into the appropriate computer-ese (adding back the technical bits I've left out), give them to a computer, and after a short delay – or a long one, depending on the complexity of the scene – back will come a 3D computer rendition of your instructions.
Click here to see a computer-friendly version of these instructions. On most browsers you can use shift-click or control-click if you want to save this so-called source code to look at later. It's easy to see which bit produces the red sphere, the green prism and so on, because they're prominently marked. See if you can relate the object coordinates in the file to the coordinates of the shapes in the instructions given above. Then try to work out where the light source must be to cast the shadows it does, and where the camera viewpoint is. If you wanted a yellow sphere instead of a red one, or an orange light source, what would you change? If you can deduce the answers to these questions on your own, you've made a good start to understanding how PoVRay works.
There are rich sources of PoVRay programming tutorials on the net. In fact, there is also a pretty good tutorial supplied with the PoVRay program itself. It talks you through creating a demo scene that starts out quite similar to the example I've given here. If I come across any really good web tutorials, I'll link to them. I would also suggest you check out the Internet Raytracing Competition – http://www.irtc.org. Most of the entrants to that competition use PoVRay, and the quality of images that PoVRay can produce in skilled hands is quite astonishing.