Tutorial
Introduction: Creating a Spinning Box
This example creates the spinning textured box shown below. It uses several JavaScript utilities for common WebGL tasks: webgl-utils.js, which contains some basic WebGL helpers; J3DI.js, which contains general utilities; and J3DIMath.js, which provides matrix functions. These utilities allow the discussion to focus on the main phases of a WebGL program.
Creating the Context
WebGL is built on top of the HTML5 <canvas> element. As with a 2D canvas, you start by getting a WebGLRenderingContext with a call to the getContext() method of the <canvas> element, passing the string "experimental-webgl". (This string is temporary and will eventually change to "webgl".) The returned object has a set of functions very similar to OpenGL ES 2.0.
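For orientation, here is a minimal sketch of that call; the canvas id "example" is only an assumption, chosen to match the init() code shown later in this tutorial:

var canvas = document.getElementById("example");
var gl = canvas.getContext("experimental-webgl");
if (!gl) {
    // The context request returns null when WebGL is unavailable.
    alert("This browser does not appear to support WebGL.");
}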
Creating the Shaders
Nothing happens in WebGL without shaders. Shaders take shape data and turn it into pixels on the screen. When using GLSL, you define two separate shaders. The vertex shader runs on each corner of every triangle being rendered. Here, you transform the points, pass along the texture coordinates, and use the normals to compute a lighting factor for each triangle. (For more information on lighting, see this helpful GLSL Tutorial.) GLSL gives you one special variable to store the transformed corner point, gl_Position. The value stored there for each corner of a triangle is used to interpolate all the pixels being output. The texture coordinates and lighting factor are passed in varying variables created for the purpose.
All these values are passed to the fragment shader, which runs on each pixel of every transformed triangle passed in. This is where you get the appropriate pixel from the texture, adjust its lighting, and output the pixel. GLSL gives you a special variable for this, gl_FragColor. Whatever color you store there will be the color of that pixel.
The following code defines the vertex and fragment shaders used in the spinning box example. This example uses HTML script notation, which is a useful way to include GLSL. (The browser ignores script elements with an unrecognized type, so they make a convenient container for shader source.) The contents of each script are passed as a string to the shaderSource() function:
<script id="vshader" type="x-shader/x-vertex">
uniform mat4 u_modelViewProjMatrix;
uniform mat4 u_normalMatrix;
uniform vec3 lightDir;
attribute vec3 vNormal;
attribute vec4 vTexCoord;
attribute vec4 vPosition;
varying float v_Dot;
varying vec2 v_texCoord;
void main()
{
gl_Position = u_modelViewProjMatrix * vPosition;
v_texCoord = vTexCoord.st;
vec4 transNormal = u_normalMatrix * vec4(vNormal, 1);
v_Dot = max(dot(transNormal.xyz, lightDir), 0.0);
}
</script>
<script id="fshader" type="x-shader/x-fragment">
precision mediump float;
uniform sampler2D sampler2d;
varying float v_Dot;
varying vec2 v_texCoord;
void main()
{
vec2 texCoord = vec2(v_texCoord.s, 1.0 - v_texCoord.t);
vec4 color = texture2D(sampler2d, texCoord);
color += vec4(0.1, 0.1, 0.1, 1);
gl_FragColor = vec4(color.xyz * v_Dot, color.a);
}
</script>
The vertex shader in this example simply transforms the vertex position, vPosition, by a composite model-view/projection matrix (see Setting Up the Viewport) and stores the result in gl_Position. It also passes along the texture coordinate, vTexCoord, and uses the normal in vNormal to compute a lighting factor, v_Dot, for the fragment shader. The fragment shader is even simpler. It just gets a pixel from the texture (after flipping the texture coordinate so the image is right-side up) and multiplies it by the lighting factor passed in from the vertex shader. This makes the pixels brighter when a side of the cube is facing you and darker when it is at an angle, giving a realistic lighting effect.
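The utility library reads these script elements and compiles them for you, but it helps to see the underlying WebGL calls. The helper below is only an illustrative sketch (its name is not part of the tutorial's libraries); it shows roughly how a script element's text becomes a compiled shader object:

function loadShaderFromScript(gl, scriptId, shaderType)
{
    // Pull the GLSL source text out of the <script> element.
    var source = document.getElementById(scriptId).text;

    // Create a shader object, attach the source, and compile it.
    var shader = gl.createShader(shaderType);
    gl.shaderSource(shader, source);
    gl.compileShader(shader);

    // Report compile errors instead of failing silently.
    if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
        console.log(gl.getShaderInfoLog(shader));
        gl.deleteShader(shader);
        return null;
    }
    return shader;
}

// For example: var vs = loadShaderFromScript(gl, "vshader", gl.VERTEX_SHADER);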
Initializing the Engine
The next step is to get WebGL up and running. The init() function uses the utility libraries described in the introduction:
function init()
{
    // Initialize
    var gl = initWebGL("example");
    if (!gl) {
        return;
    }
    g.program = simpleSetup(
        gl, "vshader", "fshader",
        [ "vNormal", "vColor", "vPosition"], [ 0, 0, 0, 1 ], 10000);

    // Set some uniform variables for the shaders
    gl.uniform3f(gl.getUniformLocation(g.program, "lightDir"), 0, 0, 1);
    gl.uniform1i(gl.getUniformLocation(g.program, "sampler2d"), 0);

    // Create a box, with the BufferObjects containing the arrays
    // for vertices, normals, texture coords, and indices.
    g.box = makeBox(gl);

    // Load an image to use. Returns a WebGLTexture object
    spiritTexture = loadImageTexture(gl, "resources/spirit.jpg");

    // Create some matrices to use later and save their locations in the shaders
    g.mvMatrix = new J3DIMatrix4();
    g.u_normalMatrixLoc = gl.getUniformLocation(g.program, "u_normalMatrix");
    g.normalMatrix = new J3DIMatrix4();
    g.u_modelViewProjMatrixLoc = gl.getUniformLocation(g.program, "u_modelViewProjMatrix");
    g.mvpMatrix = new J3DIMatrix4();

    // Enable all of the vertex attribute arrays.
    gl.enableVertexAttribArray(0);
    gl.enableVertexAttribArray(1);
    gl.enableVertexAttribArray(2);

    // Set up all the vertex attributes for vertices, normals and texCoords
    gl.bindBuffer(gl.ARRAY_BUFFER, g.box.vertexObject);
    gl.vertexAttribPointer(2, 3, gl.FLOAT, false, 0, 0);

    gl.bindBuffer(gl.ARRAY_BUFFER, g.box.normalObject);
    gl.vertexAttribPointer(0, 3, gl.FLOAT, false, 0, 0);

    gl.bindBuffer(gl.ARRAY_BUFFER, g.box.texCoordObject);
    gl.vertexAttribPointer(1, 2, gl.FLOAT, false, 0, 0);

    // Bind the index array
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, g.box.indexObject);

    return gl;
}
The simpleSetup() utility function takes the following parameters:
- gl - the WebGL context
- "vshader", "fshader" - the ids of the vertex and fragment shaders
- [ "vNormal", "vColor", "vPosition" ] - the vertex shader attribute names used by the shaders
- [ 0, 0, 0, 1 ], 10000 - the clear color and clear depth values
This initialization loads the shaders and attaches them to a GLSL program, which is how you define the interface to your shaders. You pass uniform parameters to a shader for values that don't change and vertex attributes for things that do change, like vertices. Most of this is taken care of in the utility library, but you can pass additional values here, as with the lightDir and sampler2d uniforms. This code also tells WebGL to use the arrays that the makeBox() function sets up, containing the vertices, normals, and texture coordinates.
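To make that division of labor concrete, here is a rough approximation of the raw WebGL calls a helper like simpleSetup() has to make. This is only a sketch (the function name is illustrative, and the real library source may differ):

// Sketch only: approximates the program setup a helper like simpleSetup() performs.
function setupProgramSketch(gl, vertexShader, fragmentShader, attribNames, clearColor, clearDepth)
{
    // Attach the compiled shaders to a program object.
    var program = gl.createProgram();
    gl.attachShader(program, vertexShader);
    gl.attachShader(program, fragmentShader);

    // Bind each attribute name to a fixed location (0, 1, 2, ...),
    // which is why init() can refer to attributes simply by index.
    for (var i = 0; i < attribNames.length; ++i) {
        gl.bindAttribLocation(program, i, attribNames[i]);
    }

    gl.linkProgram(program);
    if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
        console.log(gl.getProgramInfoLog(program));
        return null;
    }
    gl.useProgram(program);

    // Set the clear state and enable depth testing for 3D rendering.
    gl.clearColor(clearColor[0], clearColor[1], clearColor[2], clearColor[3]);
    gl.clearDepth(clearDepth);
    gl.enable(gl.DEPTH_TEST);

    return program;
}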
Setting Up the Viewport
Before you can render, you have to tell the canvas how to map the objects from modeling space to screen space. Initially, an object's geometry is described in modeling coordinates, local coordinates that describe the shape itself. These coordinates are transformed into other types of coordinates as follows:
MODELING COORDINATES -> WORLD COORDINATES -> VIEW COORDINATES -> VIEWPORT COORDINATES
- World coordinates are the global coordinate system that takes into account all objects in the scene.
- View coordinates are the coordinate system that incorporates a virtual camera's view of the scene.
- Viewport coordinates are the coordinate system that describes the camera projection for the scene (for example, orthographic or perspective) and fits the projected scene into screen space. This projection takes the scene from 3D to 2D so that it can be displayed on the screen. The textured spinning box example uses a perspective projection, which makes closer objects look larger than those farther away, just as in the real world.
A transformation matrix is used to perform the calculations from one coordinate system to the next. In this example, the transformation from modeling to world to view coordinates is performed by the model-view matrix, which combines two transformations into one matrix. The perspective matrix (g.perspectiveMatrix in the code) performs the transformation from view coordinates to viewport coordinates. It is created in the reshape() function and saved for later use at the end of the transformation pipeline.
The reshape() function uses the matrix function utility library (J3DIMath.js):
function reshape(gl)
{
    // If the display size of the canvas has changed,
    // change the size we render at to match.
    var canvas = document.getElementById('example');
    if (canvas.clientWidth == canvas.width && canvas.clientHeight == canvas.height) {
        return;
    }
    canvas.width = canvas.clientWidth;
    canvas.height = canvas.clientHeight;

    // Set the viewport and projection matrix for the scene
    gl.viewport(0, 0, canvas.clientWidth, canvas.clientHeight);
    g.perspectiveMatrix = new J3DIMatrix4();
    g.perspectiveMatrix.lookat(0, 0, 7, 0, 0, 0, 0, 1, 0);
    g.perspectiveMatrix.perspective(30, canvas.clientWidth / canvas.clientHeight, 1, 10000);
}
Drawing the Box
Now you're all set up and you can finally draw your box. Most of the hard work is done, but you still have to tell the box to spin, and to do that you define a model-view matrix, which transforms modeling coordinates to view coordinates. This transformation tells the box where and at what angle you want it to appear. Then you multiply the model-view matrix by the perspective matrix that was saved earlier to complete the transformation all the way from modeling coordinates to viewport coordinates. Note that the order of transformations is significant (that is, matrix multiplication is not commutative). You can also turn the model-view matrix into a normal matrix so it can be used to compute the proper lighting on the box:
function drawPicture(gl)
{
    // Make sure the canvas is sized correctly.
    reshape(gl);

    // Clear the canvas
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

    // Make a model/view matrix.
    g.mvMatrix.makeIdentity();
    g.mvMatrix.rotate(20, 1, 0, 0);
    g.mvMatrix.rotate(currentAngle, 0, 1, 0);

    // Construct the normal matrix from the model-view matrix and pass it in
    g.normalMatrix.load(g.mvMatrix);
    g.normalMatrix.invert();
    g.normalMatrix.transpose();
    g.normalMatrix.setUniform(gl, g.u_normalMatrixLoc, false);

    // Construct the model-view * projection matrix and pass it in
    g.mvpMatrix.load(g.perspectiveMatrix);
    g.mvpMatrix.multiply(g.mvMatrix);
    g.mvpMatrix.setUniform(gl, g.u_modelViewProjMatrixLoc, false);

    // Bind the texture to use
    gl.bindTexture(gl.TEXTURE_2D, spiritTexture);

    // Draw the cube
    gl.drawElements(gl.TRIANGLES, g.box.numIndices, gl.UNSIGNED_BYTE, 0);

    // Show the framerate
    framerate.snapshot();

    currentAngle += incAngle;
    if (currentAngle > 360) {
        currentAngle -= 360;
    }
}
Finally, you simply add a JavaScript call to requestAnimationFrame to keep changing the angle and rendering the box in its new position—and you have a spinning box!
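A minimal sketch of that driver code follows, assuming the globals used above (g, currentAngle, incAngle, framerate, spiritTexture) are defined elsewhere in the page; the start() and frame() names here are only illustrative:

function start()
{
    var gl = init();
    if (!gl) {
        return;
    }

    // Redraw on every animation frame; drawPicture() advances currentAngle itself.
    function frame() {
        drawPicture(gl);
        window.requestAnimationFrame(frame);
    }
    window.requestAnimationFrame(frame);
}

// Kick everything off once the page (and its <canvas>) has loaded.
window.onload = start;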
What's Next?
For a nontextured version of the spinning cube, see the Spinning Box example. Also see the Demo Repository for more WebGL samples.
Khronos WebGL Spinning Box Tutorial is licensed under a Creative Commons Attribution 3.0 Unported License.