Documentation
There is not much documentation available yet! I prefer to spend my time adding new
features to the program instead of writing documentation. If you are interested in WinOSi
and how it works, please let me know. If there is enough demand, I will try to satisfy it. For
now, here is a simple overview:
How WinOSi works:
First, random light rays are traced from all light sources into
the scene. Where a light ray hits an object, the hit (its position, direction, wavelength and
energy) is stored in the hitbuffer memory. Then the light ray is reflected and/or refracted (diffusely
and/or specularly, according to the object's material properties) and the further hits are stored. When the
preallocated hitbuffer memory is filled with hits, the second stage starts:
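This first stage can be sketched roughly as follows. All names here (`trace_ray`, the buffer capacity, the bounce limit) are illustrative assumptions, not WinOSi's actual code; `trace_ray` stands in for the scene's intersection and material handling.

```python
import math
import random

# Hypothetical sketch of the light pass: trace random rays from a point
# light and store every hit (position, direction, wavelength, energy) in
# a fixed-size hit buffer until the buffer is full.

HITBUFFER_CAPACITY = 4  # tiny for illustration; real buffers hold millions
MAX_BOUNCES = 8         # assumed bounce limit per light ray

def random_direction():
    """Uniform random direction on the unit sphere."""
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def light_pass(light_pos, trace_ray, hitbuffer):
    """Fill `hitbuffer` with light hits, one random ray at a time."""
    while len(hitbuffer) < HITBUFFER_CAPACITY:
        wavelength = random.uniform(380.0, 780.0)  # nm, one per light ray
        pos, direction = light_pos, random_direction()
        for _ in range(MAX_BOUNCES):
            hit = trace_ray(pos, direction)  # None = ray left the scene
            if hit is None or len(hitbuffer) >= HITBUFFER_CAPACITY:
                break
            pos, direction, energy = hit  # reflected/refracted continuation
            hitbuffer.append((pos, direction, wavelength, energy))
```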
A standard raytracing pass casts scan-rays from the camera through each pixel into the scene. Where
a ray hits an object, all previously stored hitpoints within a certain radius around the
intersection point are evaluated to give the reflected intensity from that point towards the viewing
direction. This (wavelength-dependent) intensity is transformed into RGB color space and stored in
the accumulation buffer.
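The gathering step of the second stage could look like this minimal sketch. The simple sum-over-disc-area weighting is an assumption for illustration; WinOSi's actual estimate will be more involved (material response, hit direction, wavelength).

```python
import math

# Hypothetical density estimate: for one camera-ray hit point, sum the
# energy of all stored light hits within a search radius and normalise
# by the disc area the hits were gathered over.

def estimate_intensity(point, hitbuffer, radius):
    """Return energy per unit area around `point` from nearby hits."""
    r2 = radius * radius
    total = 0.0
    for hit_pos, _direction, _wavelength, energy in hitbuffer:
        d2 = sum((a - b) ** 2 for a, b in zip(point, hit_pos))
        if d2 <= r2:
            total += energy
    return total / (math.pi * r2)  # energy / gathering-disc area
```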
Then all hitbuffers are released and the process starts again: the first pass runs, then the
second pass, and the new intensities are added to the accumulation buffer.
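The outer loop tying the two passes together might be sketched like this, under the assumption that `light_pass` and `render_pass` stand in for the two stages described above:

```python
# Hypothetical progressive-rendering loop: fill the hit buffer, render
# one noisy RGB frame from it, add the frame to the accumulation buffer,
# release the hit buffer, and repeat.

def progressive_render(light_pass, render_pass, width, height, iterations):
    """Average many noisy passes into one accumulation buffer."""
    accum = [[(0.0, 0.0, 0.0)] * width for _ in range(height)]
    for _ in range(iterations):
        hitbuffer = light_pass()          # stage 1: fill the hit buffer
        frame = render_pass(hitbuffer)    # stage 2: one RGB frame
        for y in range(height):
            for x in range(width):
                r, g, b = accum[y][x]
                fr, fg, fb = frame[y][x]
                accum[y][x] = (r + fr, g + fg, b + fb)
        # hitbuffer goes out of scope here, i.e. is released
    # normalise by the iteration count to get the converged estimate
    return [[(r / iterations, g / iterations, b / iterations)
             for (r, g, b) in row] for row in accum]
```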
With each iteration, the image in the accumulation buffer slowly converges to the final image. At
first, single light hits can be distinguished in the image. As more and more hits are integrated into
the image, the spots form a kind of image noise, with the contours of the scene appearing out
of the noise. Then with every iteration the noise is reduced and the scene content becomes
clearer.
Typical rendering time for a simple scene in low resolution is 1 - 2 days (depending on CPU power,
here about 1 GHz) before the noise becomes invisible, leaving a perfectly illuminated, smooth and
shiny image.
Note 1:
This seems very simple, but the problems are in the details!
Note 2:
Because the light-hit distribution depends on the true shape, distance and orientation of a
surface, and not on its normal vector, fakes like Phong interpolation and bump mapping won't work
in WinOSi - you have to use real curved surfaces and displacement mapping!
Note 3:
Depending on how much memory your machine has left for the hitbuffer, many thousands of
iterations are often needed for the final image. If you choose a different scanning wavelength for
each iteration, use a random pixel jitter for the raytracing pass, or slightly modify
camera parameters or object positions in the scene, you get perfect spatial and color
anti-aliasing, motion blur, depth of field, and other similar effects with a super-sampling
factor of 10000 or higher, giving a very smooth image.
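The per-iteration variation described above can be sketched as a tiny helper; the wavelength range and the half-pixel jitter window are illustrative assumptions:

```python
import random

# Hypothetical per-iteration parameters: each accumulation pass scans
# with its own random wavelength and a sub-pixel jitter, so averaging
# many passes yields colour and spatial anti-aliasing for free.

def iteration_parameters(rng):
    """Draw one pass's scanning wavelength (nm) and pixel jitter."""
    wavelength = rng.uniform(380.0, 780.0)             # visible range
    jitter = (rng.random() - 0.5, rng.random() - 0.5)  # offset in pixels
    return wavelength, jitter
```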