About Me

A pure game programmer at heart, and luckily by profession as well. I have been into serious game programming for the last two years; before that I was a gamer and hobbyist game programmer. I wrote chunks of game code during my learning phase, and every programming language I learned, I made a game out of it. Apart from gaming I love reading books, writing, and watching theater.

Tuesday, February 24, 2009

The stunning image you are looking at is the result of hundreds of hours of work with shaders.

Today, shaders have found their place in nearly every industry, most notably the entertainment industry, where they showcase themselves in the form of stunning visual effects, blurring the line between fiction and reality. Their single task is to define the appearance of surfaces. For example, the above spheres are all identical with the exception of the shader applied to them. Shaders have the ability to make any surface look like it is made of anything, and because of this, they have played a central role in the design of cinema special effects and game graphics.

 When a computer draws a 3D object, it does it with something called a renderer. The renderer's job, put simply, is to talk to the shaders about all of the objects in the scene and draw what the shaders say to draw. The renderer figures out which objects are 'underneath' which pixels. It then asks an object's shader what color the object should be at a given pixel, and the shader answers back. The renderer plots that color, and moves onto the next. That is the basic idea behind it all.
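That back-and-forth can be sketched in a few lines of Python. The names here (object_under_pixel, shader, and so on) are hypothetical stand-ins for the idea, not any real renderer's API:

```python
# A toy renderer loop: for each pixel, find the front-most object
# and ask its shader what color to plot there. Purely illustrative.

def render(scene, width, height):
    image = [[(0.0, 0.0, 0.0)] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            obj = scene.object_under_pixel(x, y)   # which object is 'underneath'?
            if obj is not None:
                image[y][x] = obj.shader(x, y)     # the shader answers with a color
    return image
```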

Most shaders are based on lights. That is, they determine how to color a surface by looking at how the surface is oriented toward lights in the scene (see the image at the left). This shading model, or algorithm, is called the Lambertian shading model. The Lambertian shading model says the amount of light hitting a given point on a surface depends on the position of the lights around the surface and the direction the surface is facing. This is quite intuitive when you think about it. Bring a candle up to an apple, and the apple will be lit where it more directly faces the candle's flame.

A shader, of course, doesn't have to literally 'shade'. It could just as easily make every spot on a surface the same color. These kinds of shaders actually have a use, and are usually called "flat" or "constant" shaders (see the image on the right). For example, they are useful for making the background behind some subjects one solid color, or for giving the appearance that something is emitting light. Sometimes it is useful to set some objects one solid color so that you can find their contours easily without getting confused by their 3D shape. This is usually only done during production, though, so you almost never see the output from flat shaders in a final image.                                      



As I said a moment ago, the Lambertian shading model lives at the heart of nearly every shader in use. This is because, quite simply, people want their shaders to reflect the real-world quality of lighting. They want their objects to be shaded according to the lights around them. In order to understand how this model works, and how exactly you can shade by using lights, you have to learn a little about vectors. Vectors are really very simple in concept. They are simply things that point. Yep. That's it. They are like arrows pointing in a direction. There are some nice operations for comparing vectors too, which you will see soon.

Remember, Lambertian shading compares the lighting direction with the direction the surface faces. Well, those are actually two vectors. The surface's facing direction at any point is called its surface normal. The surface normal is a vector that points in the direction the surface is facing. That's all it ever does.

Those green lines are the surface normals of the sphere. Now, we also have our second vector. This vector points from the surface to a light. This is the orange line you see to the left. By comparing the angle between a surface normal and a lighting vector, you can determine how much light is hitting that point on our surface (see the image to the right). The method used to compare the angle between two vectors is called the dot product. When the two vectors are pointing in very different directions, the dot product gives a very low number. When the vectors are pointing in similar directions, the dot product approaches the number one. So, knowing this, if we were to take the dot product between the surface normal and the lighting vector and wind up with the number one, we would know that spot on the surface is facing directly at a light.
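As an illustrative sketch of that idea, here is the Lambert term computed with a dot product in Python (the helper names are mine, not from any particular graphics API):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert(normal, to_light):
    # Clamp at zero: surfaces facing away from the light get no light.
    return max(0.0, dot(normalize(normal), normalize(to_light)))

# A surface facing directly at the light is fully lit...
print(lambert((0, 1, 0), (0, 1, 0)))   # 1.0
# ...and one facing away from it gets nothing.
print(lambert((0, 1, 0), (0, -1, 0)))  # 0.0
```

The clamp matters: without it, points facing away from the light would receive "negative" light and darken the rest of the shading math.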


First off, I am going to introduce to you the major components of a "modern" shader. These components work together to give a surface the overall appearance of resembling some material. The first components I'm going to cover here are the most basic and common: Color, Incandescence, Specularity, and Reflectivity.

 

Color

Color is perhaps the most basic attribute of any shader. You get the color of a point on a surface by multiplying a solid color by the output of the Lambert shading function. For example, if you want to color an object solid bright red (R=1, G=0, B=0), and the point you're shading has 65% light falling on it (0.65 from the Lambert function), then you plot the color (1*0.65, 0*0.65, 0*0.65) to get (0.65, 0, 0). So that point is 65% red. By doing this, you have both Lambert shading and color working together.

Shading formula with just color
Final Color = Shading * Color

Where "Shading" is the value from the Lambert formula.

 

Incandescence

Incandescence is actually very, very simple. It ignores lighting entirely, and simply adds a single color to every point on a surface. This gives the appearance that the surface is actually luminous.

Shading formula including incandescence:
Final Color = Incandescence + (Shading * Color)

 

Specularity

Specularity is an extension of the Lambert shading method. It attempts to mimic the highlights you see on shiny objects. Unlike the Lambert shading formula, specular highlights are dependent upon the location of the camera. They use the location of the light source, the surface normal, and the location of the camera to figure out how much light from the light source is being reflected towards the camera. Specular highlights are added back to the color of an object much like Incandescence. Specular highlights are also usually multiplied by a color to give them a tint.

Shading formula including Specularity:
Final Color = (Specular Value * Specular Color) + Incandescence + (Shading * Color)

 

Reflectivity

Reflectivity is used to integrate light reflected from the scene into the final shaded color. Reflectivity, like Specularity and Incandescence, is also merely added. Reflected colors are usually multiplied by the amount of specularity as well, since the specularity of a surface is related to how reflective it would be.

Shading formula including Reflectivity:
Final Color = Reflection + (Specular Value * Specular Color) + Incandescence + (Shading * Color)
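Taken together, the four formulas compose per color channel. A minimal Python sketch, with invented sample inputs:

```python
def final_color(shading, color, incandescence, spec_value, spec_color, reflection):
    # Per channel: Reflection + Specular*SpecColor + Incandescence + Shading*Color
    return tuple(
        r + spec_value * sc + i + shading * c
        for r, sc, i, c in zip(reflection, spec_color, incandescence, color)
    )

# A 65%-lit red surface with a faint white highlight, no glow, no reflection:
print(final_color(
    shading=0.65,
    color=(1.0, 0.0, 0.0),
    incandescence=(0.0, 0.0, 0.0),
    spec_value=0.2,
    spec_color=(1.0, 1.0, 1.0),
    reflection=(0.0, 0.0, 0.0),
))
```

The red channel lands at 0.65 from shading plus 0.2 from the highlight, while green and blue pick up only the 0.2 highlight tint.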

If you were to look at all of the vectors used to figure out all of these shading components, it would look like this:

Here you can see the first two vectors we covered earlier, the Lighting Vector (L) and the Normal Vector (N). Remember that comparing the directions of these two vectors tells us how much light is falling on that point. I will briefly describe these vectors again, as well as the new ones we see.

Normal Vector (N) - Points in the direction the surface faces at a given spot.

Lighting Vector (L) - Points from a spot on the surface to a light source.

Camera Vector (V) - Points from a spot on the surface to the camera viewing it.

Reflection Vector (R) - This vector is used to determine what part of the scene is being reflected off of a certain spot on the surface. It points from the surface out into the scene. Whatever it hits is reflected at that spot. It can be found by reversing the camera vector (V) so that it points at the surface, then adding 2 * ( N * dot(N, V) ).
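As a step-by-step sketch of that construction (plain Python tuples, helper names my own, N and V assumed unit-length):

```python
def reflect(N, V):
    # Step 1: reverse the camera vector so it points at the surface.
    I = tuple(-v for v in V)
    # Step 2: add back twice the normal component: R = I + 2 * N * dot(N, V).
    d = sum(n * v for n, v in zip(N, V))
    return tuple(i + 2 * n * d for i, n in zip(I, N))

# Looking straight down the normal reflects straight back out:
print(reflect((0, 1, 0), (0, 1, 0)))  # (0, 1, 0)
```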

Half Vector (H) - The Half Vector is used for calculating the brightness of the specular highlight. By comparing its direction to the Normal Vector (N) (using the dot product), the specular value can be found. As H gets closer to N, the specular highlight at that point gets brighter. It is created by adding the Camera Vector (V) to the Lighting Vector (L) and dividing by 2. Thus, this vector is the average between L and V. Specular Value = dot(N, H), and H = (L + V) / 2.
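And the half-vector recipe itself, as an illustrative Python sketch following the H = (L + V) / 2 average described above (helper names are mine):

```python
def half_vector(L, V):
    # Average the lighting and camera vectors.
    return tuple((l + v) / 2 for l, v in zip(L, V))

def specular_value(N, L, V):
    # Compare the half vector's direction to the surface normal.
    H = half_vector(L, V)
    return max(0.0, sum(n * h for n, h in zip(N, H)))

# Light and camera both straight above the surface: H lines up with N.
print(specular_value((0, 1, 0), (0, 1, 0), (0, 1, 0)))  # 1.0
```

Real shaders usually raise this specular value to a power (a "shininess" exponent) to tighten the highlight; that refinement is omitted here.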

Tuesday, February 3, 2009

Shader Series Primer: Fundamentals of the Programmable Pipeline in XNA Game Studio Express

Level: Intermediate

Area: Graphics Programming

Summary

This document is an introduction to the series of samples, tutorials, and articles known as the Shader Series. This is a serial set of educational documentation and sample code that should allow an intermediate 3D developer to begin to explore the programmable graphics pipeline.

Audience

This document will be most useful for developers with some previous experience with either the fixed-function pipeline or the BasicEffect type in the XNA Framework. This document assumes no previous experience writing shaders or effects.

Background

History

In 1999, Microsoft introduced an important new feature in DirectX 7 that came to be known as hardware transformation and lighting, or hardware T&L. The new technology moved the expensive vertex transformation and lighting calculations from the CPU to the GPU. The DirectX 7 API exposed an elaborate set of state values that allowed a fixed number of lights, textures, and other states to be applied to a given draw function.

While hardware texturing and lighting had an incredible impact on the quality of 3D graphics on the personal computer, there was a significant drawback. The calculations used to light and display the world were hard-wired on the GPU. As a result, games of that time began to look very similar, since they couldn’t differentiate their lighting models except by applying different states. The era of universal 3D acceleration had raised the overall quality bar, but at the cost of the flexibility afforded by software rendering.

In the field of offline software image rendering for movie special effects, small programs or functions called shaders were increasingly being used to define custom interactions between a variety of materials and lighting conditions. Real-time applications were soon to follow; in 2002, the first consumer level programmable GPUs became available. Game developers were eager to make use of the new functionality. Most early consumers of programmable GPUs associated shaders with the classic rippling water effect, which at the time was considered the height of real-time graphical “eye candy.”

 

DirectX 8.0 was the first Microsoft graphics API to support programmable shaders, though initially, all shader programs had to be written in assembly code. As shader hardware increased in complexity, so did the programs. High-level shading languages were introduced to make shader development manageable. Today, Microsoft high-level shader language (HLSL) is the standard language used by all Microsoft 3D APIs, including the XNA Framework. The language compiles directly to the byte code used to execute shaders on GPUs.

 

Shaders have evolved greatly over the years. Generations of shader hardware are usually categorized by the DirectX shader models they support. Early shader models had extreme limits on the kinds and number of instructions that could be run on each vertex or pixel. Later models defined more instructions, added larger numbers of instructions per shader program, and enabled looping and branching functionality. Many XNA Windows materials are written with a minimum bar of Shader Model 2.0, while the Xbox 360 platform supports its own version of Shader Model 3.0. These models have the flexibility to support a huge number of rendering and optimization scenarios.

 

High-Level Shader Language (HLSL)

HLSL is the programming language created by Microsoft for writing shaders. It is similar to many C-style languages. The DirectX SDK is a great place to get more information about HLSL. A very complete set of documentation on HLSL can be found here:

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/directx9_c/HLSL_Shaders.asp.

A full description of HLSL is outside the scope of this document. Familiarity with HLSL is not required for the rest of this document, but the link above is a great starting point for further reading.

Often the best way to learn something is to jump straight in. All of the upcoming example shaders are labeled and commented in such a way to make comprehension more straightforward. This document assumes no previous experience with HLSL and attempts to clarify new HLSL grammars as they come up.

Other Shader Languages

There are plenty of other high-level shader languages available, but HLSL is the only language supported intrinsically by XNA and DirectX. Other common languages are GLSL (the OpenGL Shading Language) and NVIDIA's Cg language. When non-XNA materials reference these languages, they essentially fill the role that HLSL does for DirectX.

Effects


Introduction to Vertex Shaders

Vertex shaders expose functionality that was originally hard-coded into fixed-function hardware texture and lighting. Vertex shader programs are functions run once on each vertex passed into a Draw call. They are responsible for transforming raw, unlit vertices into processed vertex data usable by the rest of the graphics pipeline. The input of the vertex shader corresponds to the untransformed data in the vertex buffer.

At the bare minimum, a vertex shader only needs to return a transformed position.

Introduction to Pixel Shaders

Pixel shaders add a level of control not available in classic fixed-function pipelines. To understand a pixel shader, you have to know a bit about what happens after the vertex shader runs. The processed vertex data is used to set up triangles, which in turn are used to determine which pixels on the screen will be drawn. The input to the pixel shader is calculated using the vertex shader outputs from each of the triangle's three vertices.

The inputs of a pixel shader function are therefore bound to the outputs of the vertex shader. So if a vertex shader returns color data, the pixel shader inputs may include this color data. The data is typically interpolated using the three surrounding vertices. For example, imagine an equilateral triangle. Each of its three vertices is a different color. One is red, one is green, and one is blue. The color input to the pixel shader will be calculated by linear interpolation on those three colors. Pixels that are close to the red vertex will be mostly red, pixels that are closer to the blue vertex will be blue, and the pixel in the exact center of the triangle will have equal parts of red, green, and blue.
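The interpolation described above amounts to a weighted average using barycentric weights. An illustrative sketch in Python (not XNA code; the helper is hypothetical):

```python
def interpolate_color(weights, colors):
    # weights: barycentric coordinates of the pixel (they sum to 1)
    # colors:  the three vertex colors being blended
    return tuple(
        sum(w * c[i] for w, c in zip(weights, colors))
        for i in range(3)
    )

red, green, blue = (1, 0, 0), (0, 1, 0), (0, 0, 1)
# A pixel at the exact center of the triangle gets equal parts of each color;
# a pixel sitting on the red vertex is pure red.
print(interpolate_color((1/3, 1/3, 1/3), [red, green, blue]))
print(interpolate_color((1, 0, 0), [red, green, blue]))
```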

At a minimum, pixel shaders must output color data. Pixel shader outputs translate directly into the colors seen on the screen. Pixel shaders are primarily used for a number of per-pixel color operations, including texturing, lighting, and image processing.

Introduction to Effects

Effects combine the ideas of vertex shaders, pixel shaders, and graphics device states into one common file format. XNA supports shaders primarily through effects, so they are central to the idea of writing your own shader programs. An effect file is a text file that contains the HLSL code used by any number of vertex and pixel shaders. 

An effect contains one or more techniques, which are in turn made up of at least one pass. Each pass usually contains one vertex shader function, one pixel shader function, and any number of render state and sampler state settings. In the next section, we’ll look at a simple example effect and go over each line in detail.

Examining a Simple Effect

float4x4 mWorldViewProj;  // World * View * Projection transformation
 
 
float4 Vertex_Shader_Transform( 
    in float4 vPosition : POSITION ) : POSITION
{
    float4 TransformedPosition;
 
    // Transform the vertex into projection space. 
    TransformedPosition = mul( vPosition, mWorldViewProj );
    
    return TransformedPosition;
}

 

float4 Pixel_Shader() : COLOR0
{
    return float4(1,1,1,1);
}
 
technique ColorShaded
{
    pass P0
    {          
        VertexShader = compile vs_1_1 Vertex_Shader_Transform();
        PixelShader  = compile ps_1_4 Pixel_Shader(); 
    }
}

This effect is one of the simplest effects that will produce a usable render. This shader will take a world-view-projection matrix and render white geometry based on its vertex positions. We’ll now break this shader down and explain each part in detail.

HLSL Semantics

In the code listing above, one piece of syntax that may be somewhat unfamiliar is the capitalized keyword that follows a variable and a colon. Consider the following line of HLSL code:

in float4 vPosition : POSITION

The POSITION keyword is called a semantic, which has an important place in HLSL code. These keywords indicate to the shader compiler how to map the inputs and outputs of the graphics data to the shader variables. In this example, vertex position data is being mapped to an argument called vPosition.  This informs the shader that the vPosition argument will contain position data from the vertex buffer. 

This document will explain the usage of semantics as they come up in the effect code.

HLSL Types

float4 TransformedPosition;

One aspect of HLSL programming that will quickly become intuitive is the set of primitive types available when declaring variables. In this case, a float4 primitive is used, indicating a vector of four floats. A float4x4, by contrast, is a matrix of four floats by four floats, and a Vector3, which is a structure of three floats, is a float3 in HLSL. HLSL defines a number of primitive types, and thorough documentation can be found here: http://msdn2.microsoft.com/en-us/library/bb206325.aspx.

Also provided here is a table mapping some basic HLSL types to their .NET or XNA Framework equivalents.

HLSL Type       XNA or .NET Framework Type
float           float
float2          Vector2
float3          Vector3
float4          Vector4, Quaternion, Color
float4x4        Matrix
int             Int32

 

Effect Parameters

Effect parameters are the uniform data that remains constant for every vertex or pixel processed by the Draw call. These can be initialized in the effect, though many times it’s only appropriate to set these values in the render loop. Effect constants are used to represent a variety of things, but most commonly they’ll represent transformation data, light settings, and material information.

float4x4 mWorldViewProj;  // World * View * Projection transformation

Only one constant has been specified in the example effect. In this case, it’s the world-view-projection matrix used to transform the vertices drawn from object space into clip space.

By itself, this uninitialized parameter isn’t all that helpful. The application must provide this data. The XNA Framework API facilitates this assignment using the EffectParameter type, which is used to get or set the value of the parameter in the effect. The following condensed example shows how one might set the above matrix in C# code.

//Initialize the parameter
Effect exampleEffect = content.Load<Effect>("ExampleEffect");
EffectParameter worldViewProjParameter =
   exampleEffect.Parameters["mWorldViewProj"];

Matrix worldViewProj = Matrix.Identity *  //world transform
   Matrix.CreateLookAt(                   //view transform
      new Vector3(0f, 0f, -10f),
      Vector3.Zero,
      Vector3.Up) *
   Matrix.CreatePerspectiveFieldOfView(   //projection transform
      MathHelper.PiOver4,
      1.333f,
      0.1f,
      100f);

//Set the world-view-projection matrix
worldViewProjParameter.SetValue(worldViewProj);

Uniform and Varying Inputs

The data that makes shaders function comes in two flavors: varying and uniform. Varying data is unique to each execution of the shader function. In the case of vertex shaders, it’s the data that comes from the vertex buffer. For pixel shaders, it is the data specific to the individual pixel being rendered.

The other type of data is uniform, and it includes data that applies across the entire draw call. This is also referred to as constant data, and is treated differently. The developer can set the values of any of the shader constants through the Effect API. In the previous example, one of the constants was a float4x4 (a 4x4 matrix of floating-point values) called mWorldViewProj. In the XNA Framework API, the developer can look up the mWorldViewProj parameter by name and set it to a matrix available in the application. In this example, the matrix being set is the world-view-projection matrix required by nearly every basic vertex shader.

Vertex Shader Function

Vertex shaders take a variety of inputs, and the values of these inputs vary for each vertex rendered. Usually, there’s a one-to-one correspondence between a vertex shader’s inputs and the structure of the vertices in the supplied vertex buffer.

float4 Vertex_Shader_Transform( 
         in float4 vPosition : POSITION ) : POSITION

In the provided example shader, the vertex shader takes a single input: the untransformed vertex position. The way the shader informs Direct3D of the purpose of each variable is through semantics. In this case, the POSITION semantic is applied to vPosition, meaning that vPosition will correspond to the x, y, and z coordinates of a vertex.

There is a second POSITION semantic declared after the function declaration. This semantic applies to the float4 return value of the vertex shader function. This is an output semantic that informs the effect compiler that the return value is a transformed position.

Next, the body of the vertex shader function will be examined, starting with the first line:

    float4 TransformedPosition;

Here, we’re initializing a variable that will hold the results of the vertex shader. This is a structure of the type float4. The syntax for declaring a local variable is similar to variable initialization in C# or other C-style languages.

Transformation Pipeline

In the fixed-function pipeline, the actual transformation function was hidden from the developer.  In the programmable pipeline, shader flexibility is contingent on allowing the developer to apply transforms as needed.

In this example, the vertex shader is responsible for transforming the incoming vertex data. This requires a calculation in the shader that multiplies a position vector by a world-view-projection matrix. 

    // Transform the vertex into projection space. 
    TransformedPosition = mul( vPosition, mWorldViewProj );

This calculation transforms the vertex position from model space to projection space. These transformed positions are used by the geometry processing portion of the Direct3D pipeline to define the triangles that make up primitives on the screen. This is a matter of a simple multiply (the mul function in HLSL). That function in the shader is identical to calling Vector4.Transform(vPosition, mWorldViewProj) in the XNA Framework.
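For intuition, here is what mul does for a row vector and a 4x4 matrix, sketched in Python with plain lists standing in for the HLSL types (the row-major layout and example matrix are illustrative assumptions):

```python
def mul(v, m):
    # Row-vector times 4x4 matrix, as in HLSL's mul(vPosition, mWorldViewProj).
    return tuple(sum(v[i] * m[i][j] for i in range(4)) for j in range(4))

identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
translate = [row[:] for row in identity]
translate[3][0] = 5.0  # move 5 units along x (translation sits in the last row)

# Transforming the origin (w = 1) by the translation moves it to x = 5:
print(mul((0.0, 0.0, 0.0, 1.0), translate))  # (5.0, 0.0, 0.0, 1.0)
```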

For more information about world-view-projection transforms, see the “Coordinate Spaces” article.

Returning Data from the Vertex Shader

The last part of the vertex shader function simply returns the output from the shader. Like C++ and C#, the return keyword is used to return the vertex shader outputs.

    return TransformedPosition;

 

The Pixel Shader Function

float4 Pixel_Shader( ) : COLOR0
 

The first thing to note is that the pixel shader is returning a float4 value. This value represents the color of the pixel after the draw call. A pixel shader’s primary function is to return the color of the current pixel. Like the vertex shader, a semantic (COLOR0) is defined for the return value of the function.

Most simple pixel shader functions will only ever return an RGBA color. In most shaders, color values are represented by floating-point vectors with 0.0 being completely dark and 1.0 as the maximum or “fully-on” state. The graphics hardware then translates this value into a color that is meaningful in the context of the current back-buffer format.

 return float4(1,1,1,1);

There’s not much to this pixel shader, since the vertex shader has done all the hard work. The pixel shader simply returns a white color. This means that all of the triangles drawn with this shader will appear flat-white.

State Setting

The last part of the effect is used to set state on the GraphicsDevice. It tells the device how the shader functions should be used.

technique ColorShaded
{
    pass P0
    {          
        VertexShader = compile vs_1_1 Vertex_Shader_Transform();
        PixelShader  = compile ps_1_4 Pixel_Shader(); 
    }
}

This section informs the effect of which shaders to apply using a given technique or pass. An effect file may contain several techniques. However, for this example, the effect is limited to a single technique. Passes are included to allow multiple-pass renders, which are common in more complex shaders. In this example, “P0” refers to the name of the pass.

There are only two states being set in this technique: the pixel shader and the vertex shader. The compile command also indicates which shader model the shader should be compiled against. For now, it’s best not to get bogged down in shader model specifics. The samples all use appropriate shader models for the techniques being employed.

There are lots of other states that can be set in the technique. For example, nearly all of the render states and sampler states available on the GraphicsDevice can be set in the technique code. This is a useful way to organize specific states with their accompanying shaders. Later samples in the Shader Series will cover more of these states in depth.

Monday, February 2, 2009
Current Mood:
thoughtful


Simulating Fire

Computer graphics researchers have long placed a significant emphasis on rendering aesthetically pleasing fire in 3D. Until now, efforts have failed to realistically capture the physical characteristics of fire. The natural randomness and turbulence of fire typically lead to a rendering solution based on well-understood calculations, such as Perlin noise and Gaussian distribution-based particle systems. These visual effects all point to a common model of fire but do not demonstrate the physical properties of fire beyond a single flame source. Intel’s Smoke demo combines a traditional particle system that captures the visual effects of fire with a secondary system that treats fire as a heat source. This heat emitter dictates how a fire spreads by following a fuel source, how intense the fire is at any particular position, and the proximity of the heat source to another fuel source.

Implementation

Objects in Smoke

The Smoke demo architecture [1] uses individual components defined as systems that house typical game engine features, such as physics, graphics, audio, and AI. A typical game entity in Smoke is an abstract object linked to several systems. For example, a horse in Smoke’s farm scene has the following systems:

• Graphics for the model and skeletal animation

• Geometry to control position and orientation

• Physics for collision detection

• Audio for sound effects

The structural logic is the same for every object in the scene, such as the meteors that rain down from the sky, which include the graphics, physics, and audio systems as well as the fire system. The fire system is responsible for the demo’s physical and visual properties of fire.

Fire System

Smart Particles

The procedural fire system consists of two discrete parts: a particle emitter based on a particle system inspired by Luna [2] that includes billboard flame textures (fire particles), and a heat emitter system that models the heat property of fire (heat particles). These graphics are shown in later figures to differentiate the two particle systems. A fuel source, such as a tree in the Smoke demo, consists of multiple branches and canopies (one for each leaf cluster). However, any geometry, including meteor objects, can be a potential fuel source. Each branch and canopy of a tree can serve as a host to the fire’s smart particle system.

The system can use the host object’s axis-aligned bounding box (AABB) not only to determine where the visual flame particles should be positioned but also to conduct collision checks in the heat emitter. Just as in a real fire, heat tends to spread upward and away from a heat source, moving up a tree from branch to branch, finally reaching the canopy. At the same time, fire can occasionally spread downward, following the path of a canopy to a branch, as indicated in Figure 1.

Figure 1. The smart particle system: fuel source detection as a heat particle intersects a bounding box (fire particles vs. heat particles).

Spreading Fire

Meteor objects are the only objects in the scene that are initially burning. They also serve as hosts for the fire system, and as they fall through the scene some may pass near an element, such as a tree, that is also associated with the fire system. Trees in the demo consist of branches and canopies, each of which is an individual geometric object. If that object is not already burning, the fire system’s collision-detection algorithm will set it on fire, further propagating fire throughout the tree and potentially to other trees in the scene. The heat emitter’s collision check is a time-intensive computation well-suited to run within Smoke’s highly parallel, task-oriented architecture. Each burning object consists of several individual fires that represent a visual particle system and a physical fire particle from the heat emitter. In terms of the collision check, the fire particle is treated like a ray and is tested for intersection against an adjacent fuel source’s AABB. The magnitude of this ray approximates the physical reach of a fire’s heat to set that object on fire. Objects that are outside the range of this ray are not affected.

Figure 2 shows how a collision between a heat emitter and a branch will mark it as burning. This activates that object’s visual particles and heat emitter, allowing the fire to spread. The process repeats over time until all fuel sources within range are on fire. Each fire object is iterated against all branches that are on fire and checked against each heat emitter to detect if there is a collision with an adjacent fuel source.

The algorithm for this check iterates over each burning object’s heat emitters and tests every heat ray against adjacent fuel sources.
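A rough Python sketch of the nested check being described; every name here is my own guess at the structure, not Intel’s actual code:

```python
def spread_fire(burning_objects, fuel_sources):
    # For each burning object, test every heat ray from every emitter
    # against every adjacent fuel source that is not yet burning.
    newly_ignited = []
    for obj in burning_objects:
        for emitter in obj.heat_emitters:
            for ray in emitter.heat_rays:
                for fuel in fuel_sources:
                    if not fuel.burning and ray.intersects_aabb(fuel.aabb):
                        fuel.burning = True
                        newly_ignited.append(fuel)
    return newly_ignited
```

Calling this once per frame lets the fire propagate over time: each newly ignited object contributes its own emitters on the next pass.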

The deepest piece of this code performs the actual intersection test between the heat emitter’s heat rays in Tree A and a branch in Tree B, as shown in Figure 3.

Because the heat particle belongs to a heat emitter in a different object, that fire’s heat particle must be translated into the local space of the other geometric object (see Figure 3).


Figure 3. A translation and collision test to translate heat particles from Tree A to Tree B.

Figure 4. A meteor spreading fire to a standing tree.

Fire System Parameters

The fire system includes a set of input values for the visual and heat-emitter particle systems that control individual particles:

• Type (spherical fire for meteors, line for branches, patch for canopies)

• Fire density (how many heat emitter or visual particles exist for a single burning element)

• Size (the rendered billboard’s size)

• Impulse (impulse velocity vector applied to guide fire in a default direction)

• Shift (upward position adjustments to guide fire direction)

• Lifetime (how long the heat emitter and/or visual particles live)

The following images depict the initial spherical fire from the meteor catching a tree on fire (Figure 4), then spreading throughout that tree (Figure 5), and ending with the fire spreading to another tree (Figure 6).

Rendering Fire, Smoke, and Embers

The visual particle system used to render the flame billboards uses a sequence of textures alternating over the lifetime of the particle. This creates the transition of a growing flame expiring into smoke and embers that rise above the tree in a towering plume, implemented in Ogre* 3D [3] shader code.

Figure 5. The fire system spreads upwards, following the fuel sources.

Figure 6. Once the initial tree is fully ablaze, the fire spreads to the next tree.


Water

The Smoke demo includes a water hose (Figure 8) that allows users to move around the scene and extinguish the fire caused by the falling meteors. As already noted, each fire object, in this case an entity bound to the geometric tree object, is iterated over every branch that is on fire. The fire object is checked against each heat emitter contained in the fire for a collision with an adjacent non-burning branch. Water is a natural extension of the fire system, and additional checks are used to extinguish a burning element in the collision checking code and to prevent the water from spreading, as the fire does, to another object. Much like the geometric meteors use an attached fire system that is parameterized to start fires as a heat emitter, the invisible geometric cold object has a fire system instance set as a cold emitter, enabling it to participate in collision detection and to interact with other fire system objects.

The Smoke demo uses the Particle FX plug-in provided in the Ogre 3D package for its script-based particle system. Collision detection occurs through the fire system that is bound to an invisible geometric object fired from the camera’s position like a rapid-fire projectile. As the water object passes by or collides with a burning object, the attached fire system determines if any of the cold emitter’s rays intersect with a burning object, thereby extinguishing the fire. An added bonus to this effect is that the invisible water object is also bound to Smoke’s multi-core-compatible Havok [4] physics system, allowing the water object to act as a projectile and to interact with destructible elements in the scene, such as the farmhouse.

Figure 8. The fire system: water introduced as a cold emitter.