Saturday, December 27, 2014

My OpenGL 4.4 tutorial at FIT2014

I gave an invited tutorial at the 12th International Conference on Frontiers of Information Technology (FIT2014), held at the Serena Hotel Islamabad, Pakistan from 17th to 19th December 2014. It was a beginner-level tutorial on how to get started with OpenGL v 4.4. I had set up a tutorial web page and source code repository especially for this tutorial.

Source Codes:
Github Repo:

Tuesday, August 19, 2014

Resolving the "Visual Studio samples fail to load" error when upgrading from CUDA v 5.5 to CUDA v 6.0

I recently upgraded my CUDA installation from v 5.5 to v 6.0. I uninstalled the old v 5.5 and then installed v 6.0. To my surprise, as soon as I tried to load the Visual Studio sample project solution files, they failed to load. An error dialog popped up saying something along these lines:
"Unable to read the project file. The imported project was not found..."
A closer inspection revealed that the CUDA build tools were still referring to the old CUDA v 5.5 SDK build tools. Since I had already uninstalled the previous CUDA version, the system could not find the path.

You just have to update the CUDAPropsPath environment variable to point to the new SDK's MSBuildExtensions folder. Once that is done, all of the sample projects load fine.

Hope this tip will help others as well.

Sunday, May 18, 2014

Video course review: Building Android Games with OpenGL ES

Packt Publishing invited me to review their video course Building Android Games with OpenGL ES. Here is my complete review, broken down chapter by chapter, so let's get started.

The Review:
Chapter 1:
Section 1.1 starts with an introduction to the development environment: Eclipse and the Android ADT plugin. The author goes through all the steps rather briskly, showing the URLs from which the libraries are downloaded and how they are set up. He also shows how to set up the Android SDK manager as well as the Eclipse IDE to load the ADT plugin. It concludes with the creation of an emulator for OpenGL ES code debugging/testing.

Section 1.2 gets our hands dirty with a basic clear-screen program that outputs a blue-coloured screen on the Android emulator. The author starts by saying "add this code to the Activity class". As a beginner, I don't know what an Activity is in the first place, or why I should use one. Why should I add the GLSurfaceView? There is no mention of what a Toast popup is. Another issue concerns the overridden methods of our Renderer implementation class: when are they called, and is it necessary to implement each of these functions? What if I am only interested in some of them?

Section 1.3 gives an overview of shaders and defines two: a vertex shader, which simply assigns the per-vertex position to gl_Position, and a fragment shader, which writes the given uniform u_Color to the output. One thing to note here is that the author mentions that the fragment shader is sometimes also called a pixel shader. This is only correct in the context of DirectX. In OpenGL we never use the term pixel shader, because there is a clear distinction between a fragment and a pixel: a fragment carries more information than just colour. After the colour is assigned in the fragment shader, the fragment travels through the raster operations to its final destination in the framebuffer, and the colour of the eventual pixel is subject to the depth test/stencil test/blending outcome. So in OpenGL parlance we cannot work on pixels, only on fragments; hence the term fragment shader.

Section 1.4 shows how to compile and link shader programs. The author mentions the steps required to compile and link them, and also details how to get the locations of the attributes and upload data to the GPU. One thing that I did not understand is the 4th parameter of glVertexAttribPointer, which is given as false. What does it do? The author skipped this parameter without detailing what it does or what happens if we set it to true. Another thing I don't understand is that the output looks more like a rectangle than a square to me. Why is that so, even though we input coordinates for a square? I think this should have been discussed in detail. The reason is the viewport, which covers a non-square window. To draw a geometrically correct square, we must take the aspect ratio of the window/viewport into account when passing vertex coordinates to OpenGL, rather than passing in constant values.
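To answer my own questions, here is a quick sketch (in C++ for brevity, with hypothetical helper names — this is my own illustration, not code from the course) of what the normalized flag means for integer data and how the aspect ratio can be factored into the coordinates:

```cpp
#include <array>

// The 4th parameter of glVertexAttribPointer ("normalized") only matters for
// integer attribute data: when GL_TRUE, e.g. an unsigned byte 255 is mapped
// to 1.0f before it reaches the shader; when GL_FALSE it arrives as 255.0f.
// For float data (as in the course) the flag is ignored, which is why the
// author could pass false without any visible effect.
float normalizeUByte(unsigned char v) {
    return v / 255.0f; // what normalized = GL_TRUE does for GL_UNSIGNED_BYTE
}

// Why the "square" renders as a rectangle: clip-space x and y both span
// [-1, 1], but the viewport is not square, so equal clip-space extents map to
// unequal pixel extents. Dividing x by the aspect ratio compensates.
std::array<float, 2> aspectCorrect(float x, float y, int viewportW, int viewportH) {
    float aspect = static_cast<float>(viewportW) / viewportH;
    return { x / aspect, y };
}
```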

Chapter 2:
Section 2.1 shows how to implement two basic classes, one for matrices and the other for vectors. Basic functions like the identity matrix and vector addition, subtraction, multiplication and division are implemented. This section appears to be the shallowest of all the sections in the course.

Section 2.2 explains how to implement the dot and cross product functions in the vector class. The author first implements the normalization routine. One thing that should have been emphasized and discussed is the special case where the length of the vector is 0. The code shown does not handle this case, so if it is run on a zero vector (0,0,0), the division by zero will silently produce NaN/Infinity components (float division in Java does not throw), corrupting all subsequent math. The proper code should be this:

static final float EPSILON = 0.0001f;

public void normalize() {
    float length = (float) Math.sqrt(x*x + y*y + z*z);
    if (length <= EPSILON)
        return; // zero (or near-zero) vector: leave it unchanged
    x /= length;
    y /= length;
    z /= length;
}

In addition to the normalization code, the lecture also covers how to implement the dot and cross product functions.
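For readers following along, a minimal self-contained sketch of these vector operations, including the zero-length guard discussed above, looks like this (my own C++, not the course's Java):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Dot product: measures how much one vector points along another; 0 means perpendicular.
float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Cross product: a vector perpendicular to both inputs, following the right-hand rule.
Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Normalization with the zero-length guard.
Vec3 normalize(const Vec3& v) {
    const float EPSILON = 1e-6f;
    float len = std::sqrt(dot(v, v));
    if (len <= EPSILON) return v; // zero vector: return unchanged, no NaN
    return { v.x / len, v.y / len, v.z / len };
}
```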

Section 2.3 shows how to implement orthographic and perspective projection matrices. It also details how to implement the camera transform matrix. I find the projection matrix discussion nicely concise, but the camera transform matrix is hardly detailed. More information should have been given about the orthonormal vectors that are obtained from the camera position, target and up vectors, and about the elements stored in the camera matrix.
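To fill that gap, here is roughly how the orthonormal camera basis is derived from the eye, target and up vectors; these three vectors form the rotation part of the view matrix. This is my own C++ sketch, not code from the course:

```cpp
#include <cmath>

struct V3 { float x, y, z; };

V3 sub(V3 a, V3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
float dot3(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
V3 cross3(V3 a, V3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
V3 norm3(V3 v) {
    float l = std::sqrt(dot3(v, v));
    return { v.x / l, v.y / l, v.z / l };
}

// Build the camera's orthonormal basis from eye, target and world-up.
struct Basis { V3 right, up, forward; };
Basis cameraBasis(V3 eye, V3 target, V3 worldUp) {
    Basis b;
    b.forward = norm3(sub(target, eye));            // viewing direction
    b.right   = norm3(cross3(b.forward, worldUp));  // perpendicular to forward and up
    b.up      = cross3(b.right, b.forward);         // re-orthogonalized up vector
    return b;
}
```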

Section 2.4 shows how to create the scaling and translation matrices. The details of their implementation are given, along with the definition of a Pipeline class that stores the required properties and transforms.

Section 2.5 details how to implement the rotation and combined model-view-projection matrices. Although the author shows in detail how the functions are implemented, the analysis part is missing completely.

Section 2.6 covers Quaternions. It shows how to implement the quaternion class to do transformation using an axis and an angle. It also details how to find the conjugate of a quaternion as well as other required functions.
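As a quick refresher for readers, a bare-bones axis-angle quaternion with conjugate and rotation can be sketched as follows (my own C++, assuming the usual (w, x, y, z) layout and a unit-length axis):

```cpp
#include <cmath>

struct Quat { float w, x, y, z; };

// Build a rotation quaternion from an axis (assumed unit length) and an angle.
Quat fromAxisAngle(float ax, float ay, float az, float angleRad) {
    float s = std::sin(angleRad * 0.5f);
    return { std::cos(angleRad * 0.5f), ax * s, ay * s, az * s };
}

// The conjugate negates the vector part; for unit quaternions it is the inverse.
Quat conjugate(Quat q) { return { q.w, -q.x, -q.y, -q.z }; }

// Hamilton product of two quaternions.
Quat mul(Quat a, Quat b) {
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

// Rotate a point p by q via q * p * conjugate(q); the result's (x,y,z) is the rotated point.
Quat rotate(Quat q, float px, float py, float pz) {
    Quat p{ 0, px, py, pz };
    return mul(mul(q, p), conjugate(q));
}
```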

Chapter 3
Section 3.1 starts with an introduction to the lighting model; in particular it details the ambient light calculation. It then shows how to create a new LightingProgram wrapper class by inheriting from the abstract ShaderProgram class. One design decision that I cannot digest is why the location variables are not stored in the parent class as protected members. That way, every inheriting class could use the same variables without any modification. As implemented, the user has to provide new location variables for each shader, when the basic locations (like the MVP location, a_pos etc., which we are certain will always be in all shaders) could be stored as protected base-class members. Another thing, more a performance note, is that the glUniform4f call in the onDrawFrame function could be moved into the onSurfaceCreated function. It is not a good idea to upload a constant uniform variable every frame, and doing so can cost performance.

Section 3.2 covers the diffuse lighting model. It adds additional attributes both in the LightShader program and in the shaders. One thing that is really strange is that the example given in this video does not show diffuse lighting: it simply outputs the interpolated colour from the rasterizer as the fragment colour. Considering that the section is on diffuse lighting, the given code example should have shown diffuse lighting.

Section 3.3 details the specular lighting model theory. It then details how to add the Camera and Pipeline objects to the existing class to allow positioning of the camera and setting up of its projection transformation.

Section 3.4 shows how to implement a directional light. It creates a new class, DirectionalLight, and then modifies the shader program to contain structs mirroring the DirectionalLight class. Then it details how to bind attributes from the class to the shader program. While the details and implementation of the light are very good, the given example barely shows any lighting effect at all. To me it appears the same as the output from the last section. A much better example would have been a light circling around the object.

Section 3.5 describes the implementation of a point light with attenuation. The author adds additional attributes, classes and structs to implement the point light, as in the previous sections. The details are given very hastily, without giving the viewer time to digest them. The output render shown from the code does not show the lighting effect at all; I cannot distinguish between the results shown in sections 3.3, 3.4 and 3.5. There are no visual cues like a visible light position, nor any light shading anywhere. The author is in such a hurry that he just blinks the result on screen and leaves in a flash. The viewer wants to understand this material; he is new to OpenGL ES and Android, so please hold his hand rather than leaving him in the wild.

Section 3.6 implements a spot light using a new class. While the explanation of the shader is nicely done, in the introductory part (0:25-0:38) the author says that "we compare the dot product to the cutoff value, and if it is larger than the cutoff value, then we light the pixel". This is wrong: we light the pixel if the dot product is less than the cutoff value, as shown later in the video. Details are given on how to add the class and the required attribute and uniform variables to the shaders. The final snapshot shows the light spot, but honestly a more distinct example would have been better.
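The confusion is easy to untangle once you notice which space the comparison happens in: cosine is a decreasing function on [0, pi], so the test flips direction depending on whether the cutoff is stored as an angle or as its cosine. A small sketch of my own (not the course's code):

```cpp
#include <cmath>

// Comparing in angle space: a smaller angle to the spot axis means the
// fragment is inside the cone.
bool litByAngle(float angleToAxisRad, float cutoffAngleRad) {
    return angleToAxisRad <= cutoffAngleRad;
}

// Comparing in cosine space: a larger dot product (cosine of the same angle)
// means the fragment is inside the cone, so the inequality flips.
bool litByCosine(float dotWithAxis, float cosCutoff) {
    return dotWithAxis >= cosCutoff;
}
```

Both tests agree for any angle; they just look like opposite comparisons, which is exactly the kind of thing the video should have spelled out.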

Chapter 4
Section 4.1 shows how to create a simple Texture class that loads a given texture file from the resources and creates an OpenGL texture object from it. It uses the BitmapFactory.decodeResource function to load the image. The required OpenGL texture functions, their parameter values and the image loading functions are introduced without any further details.

Section 4.2 details the changes required to pass the texture object from the Android application to the OpenGL shader. It covers the additional per-vertex attribute, the texture coordinate, and the sampler that represents the loaded texture in the shader. One thing I would like to rectify here: in the code sample shown in the video, the setUniform function passes two parameters to glUniform1i — first the location of the texture sampler in the shader, and second the texture id that OpenGL returned when the texture was loaded. This is wrong. The second parameter of glUniform1i is the texture unit to which the texture is bound; it is not the texture id. The correct call is therefore to pass 0 as the second parameter, since the texture was bound to texture unit 0.

glUniform1i(uTexUnitLocation, 0);

Section 4.3 details how to draw the texture. It details how to pass the additional attributes (the texture coordinates) along with each vertex and then how they are passed in the draw function. Quite surprisingly, the main shader texture sampling function is not discussed at all.

Section 4.4 shows how to add filtering parameters to the loaded texture to improve texture quality/performance. It details how to set the OpenGL texture parameters to enable bilinear, trilinear, nearest-neighbour and mipmap filtering. These are just basic OpenGL texture states, so the contents of this section in particular are very shallow.

Chapter 5
Chapter 5 is all about creating a very basic particle system class and associated classes like Emitter and Explosion to create some particle effects. The chapter starts with Section 5.1, which details how to create a particle system shader program class to hold the required attributes and uniforms for a stateless particle system; the relevant changes and the generated class are detailed. Section 5.2 shows how to create two classes: a particle system class to create and manage particles, and a particle emitter class to add/remove particles from the particle system at a particular position in a particular direction. Section 5.3 shows how to draw the particle system; the output displays a stream of particles. Section 5.4 refines the particle system to look more like an explosion. The output generated from the example is not that impressive, but it serves its purpose well, showing a bunch of particles that are simulated and respond to gravity.
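The core of such a particle update is just Euler integration under gravity. A minimal sketch (my own C++, not the course's Java) of the behaviour described above:

```cpp
// A 2D particle with position and velocity.
struct Particle {
    float px, py;  // position
    float vx, vy;  // velocity
};

// One Euler integration step: gravity accelerates the particle downwards,
// then the position advances along the updated velocity.
void update(Particle& p, float dt, float gravity = -9.8f) {
    p.vy += gravity * dt;
    p.px += p.vx * dt;
    p.py += p.vy * dt;
}
```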

Chapter 6
Section 6.1 shows how to create the background asset for the Breakout game. The required image is copied into the Android drawable folder and the necessary changes are made in the XML file. Next, a new activity is created, on which the background image and buttons are added. Then the manifest file is modified to load the background activity at startup rather than the main activity.

Section 6.2 details how to add UI elements like the score and lives of the player. For these text elements, TextView is used, and the corresponding member variables for score and lives are read from the main activity class. The final change is to add the UI elements to the layout rather than the content view.

Section 6.3 shows how to load the bricks for the game. The required additions are detailed, but important information, such as the space in which the brick vertex positions are defined, is not discussed.

Section 6.4 details how to create the ball and paddle objects. There are some design decisions here which I question. First, the limit values are given as constant literals, and it is very hard to discern which space they are defined in. I also personally feel that defining the ball class with two triangle lists is overkill, considering that this code will run on a mobile device. I was hoping the author would use a single particle to render the ball, which can easily be achieved using shaders.

Section 6.5 adds sound effects and music to the game. The author creates a wrapper class that uses system services to play sound files stored in the res folder of the project.

Chapter 7
Section 7.1 is on ball movement, in particular how to create frame-rate independent movement. For this, the author creates two variables, currentTime and elapsedTime. At startup, he initializes currentTime, and in the render function he subtracts the stored currentTime value from the current time and divides by 10^9 to get the elapsed time in seconds. The ball movement direction is then multiplied by the elapsed time, and the resulting movement is frame-rate independent.
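The idea fits in a few lines. A sketch of my own (in C++; the course works with nanosecond timestamps, represented here as plain numbers rather than a real clock):

```cpp
// Convert a nanosecond timestamp delta to seconds: 10^9 ns per second.
double elapsedSeconds(long long nowNs, long long previousNs) {
    return (nowNs - previousNs) / 1e9;
}

// Scale movement by elapsed time so the ball covers the same distance per
// second regardless of how many frames were rendered in that second.
float moveBall(float position, float speedPerSecond, double elapsed) {
    return position + speedPerSecond * static_cast<float>(elapsed);
}
```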

Section 7.2 shows how to handle touch events to enable paddle movement. To enable listening of touch events, a custom function is hooked into the onTouchListener. The function forwards the event to the appropriate class of the OpenGL renderer. The function determines the touch event properties to see if the screen was pressed or released. Based on the result, the paddle is moved.

Section 7.3 details how to handle accelerometer input. This is done by requesting a system service. The relevant changes are detailed and the code is adjusted to accommodate the new input.

Section 7.4 covers the broad-phase collision data structure, a spatial hash grid, which is first populated and then queried for possible narrow-phase collisions. Details of the spatial grid are not given.

Section 7.5 describes the narrow-phase collision check on the bounding boxes of two game entities that might collide. This is done after each movement so that no collisions are missed. The potential colliders list is updated, and the ball is then checked for collision against all entities in the same cell as the ball. If there is a collision, the ball rebounds.
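For reference, the usual axis-aligned bounding box overlap test at the heart of such a narrow phase is tiny; here is a 2D sketch with a hypothetical Aabb type (my own code, not the course's):

```cpp
// Axis-aligned bounding box in 2D.
struct Aabb { float minX, minY, maxX, maxY; };

// Two AABBs overlap iff their intervals overlap on every axis.
bool overlaps(const Aabb& a, const Aabb& b) {
    return a.minX <= b.maxX && b.minX <= a.maxX &&
           a.minY <= b.maxY && b.minY <= a.maxY;
}
```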

Section 7.6 details how to adjust scores and lives based on the outcome of the narrow-phase collision. The explosion and scores are handled in the collision code. Lives are handled based on the ball position, namely whether it falls below the paddle's y value. If the available lives fall to zero, a new GameOver activity is created and launched, and its onTouchListener is handled to allow the game to return to the main menu.

Chapter 8
Section 8.1 adds the final touches to the game, first creating a local high-scores table. Section 8.2 details how to publish the game on the Google Play store; the required settings and options are detailed nicely. Section 8.3 describes some possible optimizations for the game, for example removing unused variables and marking constants as static final. The final section is on how to enable in-app purchases for game-related items like extra lives.

Now that I have gone through the videos, I can say that the author has done a commendable job with these video tutorials. The breadth of topics covered is tremendous, but I feel the depth is severely lacking. Following are my main concerns about these video lectures.
1) I was hoping that the videos would cover OpenGL ES 3.0 but they are covering OpenGL ES 2.0.
2) I find the videos a bit brisk. The author dumps loads of code on the viewer. Even though I have programmed on Android before, it is very difficult even for me to follow along with the author. In addition, the way he refers to the code is rather vague most of the time, e.g. he says "add this line after this line and you will see ...". Why should I add that line, and what will happen if I don't?
3) It seems that the author is merely dumping information without explaining why a particular piece of code is written the way it is. I find the "why" to be missing from most of the videos.

From the way they are done, these video lectures assume that you have already programmed on Android before. I would not recommend them to a complete beginner, as they are severely lacking in analysis and reasoning. You should have some basic Android development experience to appreciate the course content.

All in all, these tutorials are great for getting you up and running with Android game development. The author details the entire process, from the creation of the game to hosting it on the Google Play store, as well as in-app purchases. Overall I rate these video tutorials 4 out of 5 stars. Thanks to the author for sharing his insights, and to Packt Publishing for giving me the opportunity to review this video course.

Friday, May 16, 2014

A new course on Building Android Games with OpenGL ES

Packt Publishing has launched a new course, Building Android Games with OpenGL ES. I will be reviewing it on my blog in a couple of days; it looks like an interesting course to me. Here is the table of contents from the course.
  1. Getting Started with OpenGL ES [14:53 minutes]
    • Setting Up OpenGL ES in Eclipse
    • Creating an OpenGL ES Environment
    • Creating Your First Shaders
    • Loading and Compiling the Shaders

  2. OpenGL ES Math [16:23 minutes]
    • OpenGL ES Matrix System
    • Vector Math
    • Projection Matrix and Camera View
    • Transformation Matrix - Scale and Translate
    • Transformation Matrix - Rotation and Final
    • Theory - Quaternions

  3. Lighting [16:28 minutes]
    • Ambient Lighting
    • Diffuse Lighting
    • Specular Lighting
    • Directional Light
    • Point Light
    • Spot Light

  4. Texturing [08:46 minutes]
    • Loading Textures
    • Creating New Shaders for Texturing
    • Drawing the Texture
    • Texture Filtering

  5. Particle Systems [07:34 minutes]
    • Shaders for a Particle System
    • Adding a Particle System
    • Drawing the Particle System
    • Customizing the Particles

  6. Breakout – Assets and UI [12:25 minutes]
    • Menu Screens
    • Game Interface
    • Creating the Bricks
    • Creating the Ball and Paddle
    • Sound Effects

  7. Breakout – Gameplay [14:46 minutes]
    • Ball Movement
    • Paddle Input - Touch
    • Paddle Input - Accelerometer
    • Collisions - Broad Phase
    • Collisions - Narrow Phase
    • Scoring and Lives

  8. Breakout – Finishing Touches [11:21 minutes]
    • Creating a Local HighScores Table
    • How to Publish Your Game
    • Optimization Techniques
    • How to Add In-App Purchases

Tuesday, May 6, 2014

SIBGRAPI 2012 Tutorials (The most up to date course on Modern OpenGL development with Qt)

I am utterly impressed by the quality and applied nature of the tutorials presented at SIBGRAPI 2012. I particularly like Tutorial T3, Interactive Graphics Applications with OpenGL Shading Language and Qt. The course slides as well as the course survey paper are worth checking out. The tutorial presenter, Joao Paulo Gois, has maintained the source code of this tutorial for older and newer Qt versions on his own web site.

This is by far the best and most accessible tutorial I have ever read on modern OpenGL development with Qt. Have a look at it yourself.

Thursday, May 1, 2014

Qt loading shaders from resource files (call qmake after creating the resource file)

I was recently playing with Qt and its support for shaders. Qt provides a bunch of useful classes for shader handling, so I thought I would get my hands dirty. The first thing I tried was to load my simple triangle demo using shaders. I started out using the QGLShaderProgram class, loading the shaders as resources, but I kept getting errors saying that the shader files could not be loaded from the specified path. I then realized I had to create a Qt resource file and manually add my shaders to it.

After doing all this, when I tried to run the demo again, it still told me that the shader did not exist at the specified path. After many tries I found the reason: once the resource file is created and the shader files are added to it, we need to re-run qmake on the project. Without calling it, I kept getting the file-not-found error. I could not find any information on this, so I thought I would include it here on my blog for others who fall into the same trap.

Monday, April 28, 2014

Learning Physics Modeling with PhysX: A Review

Recently, Packt Publishing released a new book on game physics called Learning Physics Modeling with PhysX. Examining the accompanying code tells me that the author has shamelessly copied some of my tutorial code into this book. While I am not asking anything from either the author or the publisher, at the very least an acknowledgement or a reference to the original tutorials/source code should have been given in the book, or somewhere in the book's source code.

Here is my review of this book. The book starts with a gentle introduction to PhysX v 3.3.0 in Chapter 1. Chapter 2 is a copy of my simple box tutorial, with the only difference being that the author does not render anything on screen; he simply outputs the box position to the console. Chapters 3 through 10 are ripped from the PhysX guide: all of the code snippets as well as the figures are taken from it, and no concrete use cases are given in any of these chapters. I would suggest the reader follow the official PhysX guide rather than reading through these chapters, as the former is more elaborate. Apart from the simple box tutorial, the author has given a few examples of his own, for instance on particles, the character controller, joints and queries, but these too are based on the PhysX guide.

All in all, I would ask readers not to buy this book. Save your money; the information it contains is available for free, both online and in the free PhysX guide. The book's sample code is good, and you can read it alongside the PhysX guide, which should be enough to understand what is going on inside.

Friday, April 18, 2014

C++ port of the TraerPhysics library

I have ported the famous Processing Physics library called TraerPhysics to C++. I have put it on github for others to use. Refer to the file for details.


Tuesday, April 1, 2014

Havok Physics Engine Tutorial Series: Cloth

Before I start this tutorial, a short disclaimer: this is not the best way to model cloth. The Havok Physics SDK has a separate Havok Cloth package which is optimized for cloth simulation, and I recommend you give it a try. This tutorial shows one possible way to create cloth using a distance constraint.
In this tutorial, I will show you how to create a simple cloth using the simple distance constraint, which is wrapped in the hkpStiffSpringConstraint object in the Havok Physics SDK. This tutorial builds on top of the Simple Distance Constraint, Picking and Chain tutorials that we did earlier. OK, so let's get started.

Creating a cloth using simple distance constraint
A cloth can be modeled simply as a set of masses linked with massless springs. The springs can be approximated using hkpStiffSpringConstraint in the Havok Physics SDK. There are three types of springs: structural, shear and bending springs. The details of the cloth model are beyond the scope of this tutorial; interested readers may drop a comment and I will happily point you to relevant resources on cloth modeling. The first thing we need is a set of masses, which we create using the following code snippet.

This is similar in spirit to the Multiple Bouncing Boxes Tutorial. The only difference is in the position of the boxes so that they lie in the XZ plane at a given Y value.

Setting up structural springs between the masses
OK, once we have laid down our masses, we can create a bunch of springs between them using the following code snippet. We basically iterate over each pair of adjacent rigid bodies and create a stiff spring constraint between them, first horizontally and then vertically.

I store the indices in a vector so that I can render the cloth springs easily in the OnRender function.
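For readers who want the gist without the Havok boilerplate, the index bookkeeping for the structural springs of a rows × cols grid of masses can be sketched like this (my own C++ with a hypothetical function name, mirroring the horizontal-then-vertical iteration described above):

```cpp
#include <vector>
#include <utility>

// Returns the index pairs of masses to connect with structural springs.
// Masses are laid out row-major: mass (r, c) has index r * cols + c.
std::vector<std::pair<int, int>> structuralSprings(int rows, int cols) {
    std::vector<std::pair<int, int>> springs;
    for (int r = 0; r < rows; ++r)          // horizontal neighbours
        for (int c = 0; c + 1 < cols; ++c)
            springs.push_back({ r * cols + c, r * cols + c + 1 });
    for (int r = 0; r + 1 < rows; ++r)      // vertical neighbours
        for (int c = 0; c < cols; ++c)
            springs.push_back({ r * cols + c, (r + 1) * cols + c });
    return springs;
}
```

A rows × cols grid yields rows·(cols-1) horizontal plus (rows-1)·cols vertical springs; shear and bending springs would add diagonal and skip-one neighbours in the same fashion.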

Rendering of Cloth
For this simple cloth model, I render the masses as boxes as was shown in the Multiple Boxes tutorial.

For springs, I iterate through the vector (indices), and create a line between each pair of indices as shown in the code below.

That's all. After compiling and building the given code, you will see a piece of cloth. The masses at the two ends of the top row are fixed to ensure that the cloth does not fall down; the rest of the boxes can be picked.

You can get the full source code from my github repo

Left click to rotate, left click and drag on a mass to pick and reposition
Middle click to zoom
Right click to pan

What's next:
In the next tutorial, I will show you how to create a basic vehicle.


Monday, March 31, 2014

Havok Physics Engine Tutorial Series: Chain

In this tutorial, I will show you how to create a rigid body chain using the simple distance constraint, which is wrapped in the hkpStiffSpringConstraint object in the Havok Physics SDK. This tutorial builds on top of the Simple Distance Constraint and Picking tutorials that we did earlier.

Creating a Chain using a Simple Distance Constraint
In the previous tutorial, we saw how we could use a distance constraint to link a pair of rigid bodies. In this tutorial, I will show you how to use pairs of distance constraints to create a chain of rigid bodies. You can pick any rigid body and move it around using the mouse. So let's get started.

The AddRigidBodies function
This function is defined as follows

Let's have a look at this function piece by piece. The AddRigidBodies function first creates a number of blocks, similar to how we created the boxes in the Multiple Bouncing Boxes Tutorial.

Next, we create another loop but this time we create a pair of stiff spring constraints. The constraints are placed in parallel between each pair of boxes. The offsets are calculated based on the size of the box.

Finally, we create the ground rigid body as was seen in earlier tutorials.

The picking and rigid body manipulation functions are virtually unchanged. I simply copied the code from the picking tutorial and it worked without a problem.

That's all. After compiling and building the given code, you will see a chain of boxes. The topmost box is fixed to ensure that the chain does not fall down; the rest of the boxes can be picked. You can pick a box by left clicking and dragging the mouse to reposition it, as shown in the following figure.

You can get the full source code from my github repo

Left click to rotate, left click on box to pick and reposition
Middle click to zoom
Right click to pan

What's next:
In the next tutorial, I will see if I can do a simple cloth using the simple distance constraint object.

Havok Physics Engine Tutorial Series: Simple Distance Joint

In this tutorial, I will show you how to render a simple distance joint (a basic stiff spring). We will add to the picking code we covered in the last tutorial so that we can displace a given box using the mouse.

The scene in this tutorial contains two boxes: a static box and a dynamic box, linked with a distance constraint. We can move the dynamic box using the mouse, and due to the spring constraint it maintains a certain distance from the static box. So let's get started. For this demo, the AddRigidBodies function is as follows.

Adding a spring constraint
The only difference here is the addition of the stiff spring constraint. To create it, we first create the hkpStiffSpringConstraintData object and pass it the world-space positions where the spring attaches, along with the two rigid bodies between which the constraint is created. Next, the hkpStiffSpringConstraintData object and the two rigid bodies are passed to the hkpConstraintInstance constructor, and then the hkpWorld::addConstraint function is called with the hkpConstraintInstance object, as shown in the following code snippet.

Rendering of the spring constraint
Another change is in the render function, which now renders the spring constraint along with the two rigid bodies. The dynamic rigid body is rendered in green when picked.

That's all. After compiling and running, you will see two boxes linked with a spring constraint. You can pick the lower (dynamic) box by left clicking and dragging the mouse to reposition it, as shown in the following figure.

After displacement, the spring constraint acts on the dynamic rigid body to ensure that it stays at the fixed distance that was given when the constraint was instantiated.

You can get the full source code from my github repo

Left click to rotate, left click on box to pick and reposition
Middle click to zoom
Right click to pan

What's next:
In the next tutorial, I will show you how to create a simple chain.

Sunday, March 30, 2014

Havok Physics Engine Tutorial Series: Picking

In this tutorial, I will show you how to pick a rigid body using the picking features available in the Havok Physics SDK. This tutorial adds to the SimpleBox tutorial that we did earlier, augmenting it with mouse picking.

Picking can be implemented in numerous ways. We will cast a ray from the clicked position on the screen into the 3D world using the hkpWorld::castRay function provided by the Havok Physics SDK. The user clicks on the screen at a position (x, y). We implement this in the PickActor function. The first thing we need is to determine the ray that starts at the clicked position (x, y) on screen and ends at the far clip plane.

Determining the ray from the clicked position
We first determine the z position of the clicked screen-space point. We know from the projection transformation that a given point (x, y) has z=0 at the near clip plane and z=1 at the far clip plane. We use this to create two 3D points from the given screen-space point: one at the near clip plane (x, y, 0) and another at the far clip plane (x, y, 1). We then unproject these two points to obtain their world-space positions. These are the ray start and ray end points that we pass to the hkpWorld::castRay function.

The ViewUnProject is a utility function that we define as follows. Note that this function definition will be modified if you use DirectX or any other graphics library.

This function uses the current viewport to determine the screen size. It then passes the given x, y, and z position, the current modelview and projection matrices, and the current viewport to the gluUnProject function from the OpenGL Utility Library. Of course, if you are using any other library, you will have to find the appropriate function or write your own. After the unproject call, we get the world-space point for the given screen-space point.

At the end of these calls, we have the ray start and ray end points.

Casting picking ray using hkpWorld::castRay function
Now that we have our ray, we can call the g_pWorld->castRay function. The Havok SDK reference shows that castRay takes two parameters: an hkpWorldRaycastInput and an hkpClosestRayHitCollector. You will have to include the following new headers for these:

The hkpWorldRaycastInput object has three fields: m_from, the ray start point; m_to, the ray end point; and m_filterInfo, which we set to 0. We then lock the hkpWorld, call castRay, and unlock the hkpWorld, as shown below.

If any rigid body in the physics world intersects the given ray, the hkpClosestRayHitCollector::hasHit() function returns true; we call it to check for a hit. Calling the hkpClosestRayHitCollector::getHit() function returns an hkpWorldRaycastOutput object, whose m_rootCollidable field identifies the intersected collidable. To get a rigid body pointer from m_rootCollidable, we call the hkpGetRigidBody function. Since our world contains both static and dynamic rigid bodies and we are only interested in the dynamic ones, we check the motion type of the returned rigid body.

After we are sure that we have a dynamic rigid body, we find the intersection point on it; this is used only to render the hit point. Next, we set the motion type of the picked rigid body to MOTION_FIXED, making it static. This ensures that the body does not undergo simulated motion while we move it around.

In the mouse move event handler, we determine if the user has picked a rigid body. If there is a picked rigid body, we set the translation field of the rigid body's transform to the unprojected mouse point as was shown earlier.

Finally, when the mouse button up event is raised, we restore the motion type of the picked rigid body so that it is physically simulated again.

That's all. You will see a box falling under gravity. You can pick the box by left-clicking and dragging the mouse to reposition it, as shown in the following figure.

You can get the full source code from my github repo

Left click to rotate, left click on box to pick and reposition
Middle click to zoom
Right click to pan

What's next:
In the next tutorial, I will show you how to create a simple distance joint.

Havok Physics Engine Tutorial Series: Multiple Bouncing Boxes

Now that we know how to create one box, we will add multiple boxes and let them fall under gravity and collide with each other. The InitializeHavok function remains the same; the changes are in the AddRigidBodies, ShutdownHavok, and OnRender functions. Let's go through each of these one by one.

We create a global vector to store all of our rigid bodies (boxes).

The AddRigidBodies function
The AddRigidBodies function is changed to the following.

The only difference from the previous SimpleBox demo is that we now reuse the box shape to create several rigid bodies, altering their positions using the loop variable. Each rigid body is added to the global vector, as shown in the code snippet below.

The ShutdownHavok function
This function is changed to the following.

We simply run a loop and call removeReference on each rigid body one by one.

The OnRender function
The only difference in this function is that instead of one box, we run a loop to access the ith box rigid body and pass it to the DrawBox function as follows.

That's all. The accompanying tutorial code should give you a number of boxes falling under gravity onto the grid plane and colliding with each other, as shown in the following figure.

You can get the full source code from my github repo

Left click to rotate
Middle click to zoom
Right click to pan

What's next:
In the next tutorial, I will show you how to pick a rigid body.


Friday, March 28, 2014

Havok Physics Engine Tutorial Series: A Simple Bouncing Box

By now we know how to set up the Havok SDK in Visual Studio 2012; if not, you may want to go through my first tutorial, which shows how to get started with the new Havok SDK. In this tutorial, similar to my PhysX tutorials, I start with a very simple box falling onto the floor due to gravity.

In this tutorial, the InitializeHavok function is changed to the following.

I have highlighted the new additions in bold. Similar to the first tutorial, we initialize the memory, the physics world settings, and the visual debugger. After that, we lock the Havok physics world, add our rigid bodies to the world, and unlock the world. This ensures that no two threads access the world at the same time. All functions that may affect the physics world should be sandwiched between the world lock/unlock calls.

Adding rigid bodies:
Similar to other physics engines like PhysX and Bullet, Havok adds rigid bodies to the physics world by first specifying a shape and its properties, such as the moment of inertia tensor. In this tutorial, I add them in the AddRigidBodies function, which is defined as follows.

Let's take a closer look at this function piece by piece. In this tutorial, we will let a box fall under gravity onto a flat floor. We have purposely created two scopes to highlight the two added rigid bodies. In the first scope, we create a dynamic rigid body: a simple box that falls under gravity.

In the above lines, we first specify the box's shape (hkpBoxShape) by passing it the half extents of the box.

In the above lines, after specifying the box's shape, we fill in the rigid body info structure, which stores the shape as well as other rigid body dynamics properties like mass, moment of inertia, position, and motion type. We set its motion type to dynamic (MOTION_DYNAMIC); to make a rigid body static, we would set it to MOTION_FIXED instead. For every physics object, a collision margin is specified that controls the offset at which collisions are detected. We set this value to 0.001, which creates a very thin offset; if we left this line out, the box would collide at a much higher position than what is visible.

In the above lines, we first calculate the box's moment of inertia and then specify the mass properties.
Once the rigid body cinfo structure is filled, we can create a rigid body from it using the following code.

Note that once the rigid body has been created, we can delete the box shape. We do so by calling its removeReference function; this is the recommended way to delete objects in Havok, rather than calling delete. After this, we add the rigid body to the physics world by calling the g_pWorld->addEntity function, passing it the rigid body. On success, the returned reference contains the created rigid body entity. After this call, we can safely call removeReference on the rigid body to ensure that we always hold a single reference to it.

In the case of the static box (our floor), the only difference is the motion type, which is set to MOTION_FIXED. The rest of the code is the same.

Obtaining the box's transform:
If all goes well, once we add the rigid body to the world, it starts to simulate. The step function calculates the next transform for every rigid body in the world, taking into account collisions and collision responses. For rendering or other purposes, we need the current transform of the dynamic box. To prevent two threads from accessing the same object, Havok mandates that all world objects and their properties be read inside an hkpWorld->lockForRead/unlockForRead function pair, as follows.

To access the matrix of the given box, we first call the box rigid body's approxCurrentTransform function, which calculates the box's transformation matrix. Next, we store the matrix in a local variable and use it to fill a float array, which we then pass to the rendering API.

Drawing the box:
If we are successful, the mat array contains our 4x4 matrix. In OpenGL, we can multiply this matrix with the current modelview matrix so that the object is placed and oriented in the 3D graphics world using the transform calculated by the Havok physics engine.

For all the modern OpenGL lovers: I know this is legacy OpenGL code, but my point is to present the concept. You can convert this to modern OpenGL without a problem by using a math library like glm.

That's all. The accompanying tutorial code should give you a simple box falling under gravity onto the grid plane, as shown in the following figure.

You can get the full source code from my github repo

Left click to rotate
Middle click to zoom
Right click to pan

What's next:
In the next tutorial, I will show you how to add multiple boxes.


Monday, March 24, 2014

Havok Physics Engine Tutorial Series: Getting Started

Hi all,
I am starting a new tutorial series on Havok, an industry-standard physics engine. Two of its components, the Havok Physics Engine and the Havok Animation SDKs, were recently released for free (binary only) under Intel's sponsorship with the following terms and conditions.
Havok's Intel® sponsored free binary-only PC download can be used during development to evaluate, prototype, and commercially release any PC game. There are some basic licensing rules to follow:
  • PC titles sold for a retail value of less than $10.00 USD do not require a Havok distribution license to be executed.
  • PC titles sold for a retail value of $10.00 USD or more do require a Havok license to be executed, but at no additional cost.

Details here:

As always, here is the disclaimer: I am not a Havok employee, nor do I represent Havok. I am a hobbyist programmer trying to fill the online void in the OpenGL world, which I have noticed in the case of the Havok physics engine as well. I am trying to make it easier for other OpenGL programmers to get up and running with the Havok Physics SDK. All information in these tutorials is based on concepts gained from the two tutorials cited below, the excellent Havok physics user guide, and the sample demos. These tutorials are written with clarity in mind, showing clearly what is required to get started with the Havok Physics SDK in Visual Studio 2012 on Windows 7. Note that there might be better and more optimized approaches, and I hope readers will point those out in the comments below the tutorials.

When I started out with the Havok Physics SDK, I was really impressed by the detailed documentation that comes with it, including many sample demos showing a wide variety of real-time physics concepts. Unfortunately, the Havok Physics SDK uses its own DirectX-based framework; there are no OpenGL demos. Of course, hiding the details behind a framework is good, but it makes understanding the minute details difficult, and you have to dive into the code to see what is really required. Like other programmers, I started out and was soon frustrated. I went online to find some information and luckily found these two links

Both of these cover the basics really well, including how to get started from scratch with Havok Physics. I hope readers of this blog will follow these two tutorials first before proceeding. The issue is that the Havok SDK has changed a bit, and some additional changes are required to get up and running with the latest free Havok Physics SDK. So here are the missing links.

For this whole tutorial series, I will assume that Visual Studio 2012 is used and that you have downloaded the Havok SDK and the freeglut library somewhere on your hard disk. To make it smoother to follow, I suggest you create two environment variables:
  1. HAVOK_ROOT (pointing to the root folder of the Havok SDK, typically named by date, e.g. hk2013_1_0_r1)
  2. LIBRARIES_ROOT (a generic folder, e.g. E:\Libraries, containing the freeglut root folder, e.g. E:\Libraries\freeglut-2.8.1)
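On Windows, these can be created once from a command prompt with setx. The paths below are hypothetical examples; substitute the folders you actually extracted to.

```shell
:: Persist the two environment variables (example paths -- adjust to yours).
setx HAVOK_ROOT "E:\Libraries\hk2013_1_0_r1"
setx LIBRARIES_ROOT "E:\Libraries"
```

Restart Visual Studio afterwards so it picks up the new variables.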
Compiler Settings
Once this is done, you need to add the following paths to the include directories (C/C++->General->Additional Include Directories):

In addition, add HK_CONFIG_SIMD=1 in preprocessor definitions (C/C++->Preprocessor->Preprocessor Definitions)

Also change the C/C++->Code Generation page so that it appears as shown in the figure below

Linker Settings
Change Additional Libraries Directories (Linker->General->Additional Libraries Directories) to $(LIBRARIES_ROOT)\freeglut-2.8.1\lib\x86\Debug\;$(HAVOK_ROOT)\Lib\win32_vs2012_win7\debug_dll;%(AdditionalLibraryDirectories)

In Linker->Input->Additional Dependencies, add

That's it for the compiler and linker settings; now to the real code. First, the include files.

Include Files
To get our very first tutorial up, we need to include the following headers

Havok Initialization
A lot of code is required to initialize Havok; I will go through it step by step. Basically, all Havok code needs to initialize at least two main components (the Havok memory settings and the Havok physics world settings) and an optional third component (the visual debugger). I create an InitializeHavok function, which is defined as follows.

Basically, this code is just calling the three initialization functions. Here we will look at each of these one by one.

(a) Initializing Havok Memory routines       
I create a simple function InitMemory which does the memory initialization.  Here is how the InitMemory function is defined.

Let's look at the function piece by piece. First is the macro call _MM_SET_FLUSH_ZERO_MODE, which ensures that there are no subnormal numbers (numbers very close to zero), since they can lead to slower performance. If you want to know more about this macro, have a look at the Wikipedia entry.

The above calls initialize the memory subsystem, allocating 0.5 MB for physics calculations with a memory router using the default allocator. We also call the Havok base system initialization function, passing it our memory router and an error callback function. I name this error callback OnError and define it globally as follows; it just dumps the passed msg to the standard error stream (std::cerr).

Next, we create a thread pool with the given number of threads. We query the current device's capabilities to obtain the maximum number of hardware threads available on the platform we are running on.

We then create a job queue for Havok modules to run multithreaded work. Finally, the function ends with the creation of a stream of monitors for the thread pool.

(b) Initialize Physical World
The InitPhysicalWorld is defined as follows

In Havok, the physics simulation world is represented by an hkpWorld* object, initialized by calling the hkpWorld constructor and passing it an hkpWorldCinfo structure. This structure stores global physics world settings such as gravity. We first set the simulation type to multi-threaded simulation. We then set the broadphase border behavior, which tells the Havok physics engine to remove an entity if it goes outside the border. We pass the modified hkpWorldCinfo structure to the hkpWorld constructor to create our Havok physics world object. After this call, we set the deactivation flag of the hkpWorld to false to ensure that rigid bodies are never deactivated. At this point, our physics world object is created.

The next few calls modify the hkpWorld. To ensure that no two threads modify the shared hkpWorld instance at the same time, we first call the hkpWorld::markForWrite function; after it, we can issue all calls that modify the state of the physics world. Here we register the collision dispatchers and the created job queue. Note that the hkpWorld::markForWrite call is paired with an hkpWorld::unmarkForWrite call, which is issued in the InitVDB function detailed below.

(c) Initialize Visual Debugger
Usually, you will need some mechanism to ensure that your physics world is behaving as expected. For debugging purposes or for checking physics simulation states, Havok provides a very useful application called the Visual Debugger in the SDK. We need to establish a connection to the running instance of the Havok Visual Debugger. This connection is established by creating an instance of the hkVisualDebugger object. This is done in the InitVDB function which is defined as follows.

We first create a Havok physics context object (hkpPhysicsContext) and call its static registerAllPhysicsProcesses function. We then add the Havok physics world to the created hkpPhysicsContext by calling hkpPhysicsContext::addWorld, and store the hkpPhysicsContext pointer in an hkArray. Next, we call the hkpWorld::unmarkForWrite function that was paired with hkpWorld::markForWrite in InitPhysicalWorld. The hkArray containing the hkpPhysicsContext object is passed to the hkVisualDebugger constructor, and then the hkVisualDebugger::serve function is called to initialize the connection. The connection will be established with the running instance of the visual debugger.

Stepping the Havok Physics Engine and the Havok Visual Debugger 
In order to move the Havok physics engine and the visual debugger forward in time, we need to make a call to the step function in each frame before calling the render function. I name this function StepHavok, and it is defined as follows.

We first call the hkpWorld::stepMultithreaded function, passing it the job queue and a constant timestep of 1/60. Next, if the visual debugger is enabled, we step it using the StepVDB function, which is implemented as follows.

The StepVDB function first syncs timers in the thread pool and then calls the hkVisualDebugger::step function passing it the time step value which is also a constant step size of 1/60. Finally, the hkMonitorStream is reset and the time data values in the thread pool are cleared.

Havok Shutdown
The Havok Physics engine shutdown is carried out in the ShutdownHavok function. This function is defined as follows.

We first ensure thread-safe deletion by calling the hkpWorld::markForWrite function. Then we call hkpWorld::removeReference, which deletes the hkpWorld object. Since Havok keeps reference counts internally, the recommended way to delete any Havok object is to call removeReference on its pointer instead of delete. Next, we delete the job queue and call removeReference on the thread pool object. If the hkVisualDebugger is enabled, we call the ShutdownVDB function. Finally, we call the hkBaseSystem and hkMemoryInitUtil interfaces' quit functions. The ShutdownVDB function is defined as follows.

We first call the hkVisualDebugger::removeReference function to close the connection to the hkVisualDebugger instance, and then delete the context pointer, again by calling hkpPhysicsContext::removeReference.

Running this tutorial does not show anything interesting. We just get a simple 3D grid rendered on screen as shown below. 

The console output shows the Havok initialization messages; it should display only the messages shown in the following figure.

If you get any other output, such as errors or stack-trace information, you are probably doing something incorrectly.

That's it for the first getting started tutorial. You can get the full source code from my github repository here:



Saturday, March 8, 2014

GPU ray march renderer for PVR (Production Volume Rendering)

I have managed to integrate my GPU ray marcher with the awesome Production Volume Renderer (PVR) by Magnus Wrenninge on Windows 7 using Visual Studio 2012. The results are amazing as can be seen in the image below.
GPU based Ray Marcher for Production Volume Renderer
Here is the video

I had to put the video on Vimeo, as YouTube is occasionally blocked in my country. The PVR code to model this volume is as follows; it is based on the example Python code from Chapter 1.

#include <pvr/Modeler.h>
#include <pvr/Primitives/Rasterization/PyroclasticPoint.h>
#include <pvr/Renderer.h>
#include <pvr/RaymarchSamplers/PhysicalSampler.h>
#include <pvr/Raymarchers/UniformRaymarcher.h>
#include <pvr/Camera.h>
#include <pvr/Occluders/OtfTransmittanceMapOccluder.h>
#include <pvr/Lights/PointLight.h>
#include <pvr/Volumes/VoxelVolume.h>
#include <Imath/ImathVec.h>
#include <pvr/VoxelBuffer.h>

void GenerateVolumeData(std::vector<GLubyte>& buffer, int& xdim, int& ydim, int& zdim) {
    pvr::Model::Modeler::Ptr modeler = pvr::Model::Modeler::create();
    pvr::Model::ModelerInput::Ptr input = pvr::Model::ModelerInput::create();
    pvr::Geo::Particles::Ptr parts = pvr::Geo::Particles::create();
    pvr::Geo::Geometry::Ptr geo = pvr::Geo::Geometry::create();
    pvr::Model::Prim::Rast::PyroclasticPoint::Ptr prim = pvr::Model::Prim::Rast::PyroclasticPoint::create();
    parts->add(1);
    geo->setParticles(parts);
    pvr::Util::ParamMap map;
    map.floatMap["amplitude"] = 0.5f;
    prim->setParams(map);

    input->setGeometry(geo);
    input->setVolumePrimitive(prim);

    modeler->addInput(input);
    modeler->updateBounds();
    modeler->setResolution(200);
    modeler->execute();

    pvr::VoxelBuffer::Ptr buf = modeler->buffer();
    Imath::V3i res = buf->dataResolution();
    xdim = res.x;
    ydim = res.y;
    zdim = res.z;
    for (pvr::VoxelBuffer::iterator i = buf->begin(); i != buf->end(); ++i) {
        Imath::V3f value = *i;
        // The original listing was truncated here; a typical conversion of
        // the voxel density into a byte for the GPU texture looks like this:
        buffer.push_back((GLubyte)(std::min(value.x, 1.0f) * 255));
    }
}

Saturday, March 1, 2014

Making an OpenGL object look at another object in three different ways: quaternions, matrices and gluLookAt

Recently, I was trying to work out how to make one object look at another in OpenGL. As always, I started with Google, whose first result suggested using quaternions; the second showed how to use vectors and matrices. I wanted to use gluLookAt, but to my surprise none of the results seemed to provide that answer. After some time, I got it working, so here are the three methods I found and how they worked for me. I will be using the glm library for math utilities.

Method 1: Using Quaternions
Main reference:
You first need a function, RotationBetweenVectors, that returns the rotation between two vectors as a quaternion. We define this function as given in the link above.

glm::quat RotationBetweenVectors(glm::vec3 start, glm::vec3 dest){
   start = glm::normalize(start);
   dest = glm::normalize(dest);

   float cosTheta = glm::dot(start, dest);
   glm::vec3 rotationAxis;

   if (cosTheta < -1 + 0.001f){
      //special case when the vectors point in opposite directions
      rotationAxis = glm::cross(glm::vec3(0.0f, 0.0f, 1.0f), start);
      if (glm::length(rotationAxis) < 0.01)
         rotationAxis = glm::cross(glm::vec3(1.0f, 0.0f, 0.0f), start);
      rotationAxis = glm::normalize(rotationAxis);
      return glm::angleAxis(180.0f, rotationAxis);
   }

   rotationAxis = glm::cross(start, dest);

   float s = sqrt( (1+cosTheta)*2 );
   float invs = 1 / s;

   return glm::quat(
       s * 0.5f,
       rotationAxis.x * invs,
       rotationAxis.y * invs,
       rotationAxis.z * invs);
}
After the RotationBetweenVectors function is defined, we can use it as follows: targetPosition is the position of the object you want to look at, and objectPosition is the position of the object that is going to look at the target.

glm::vec3 delta =  (targetPosition-objectPosition);
glm::vec3 desiredUp(0,1,0.00001);
glm::quat rot1 = RotationBetweenVectors(glm::vec3(0,0,1), delta);
glm::vec3 right = glm::cross(delta, desiredUp);
desiredUp = glm::cross(right, delta);
glm::vec3 newUp = rot1 * glm::vec3(0.0f, 1.0f, 0.0f);
glm::quat rot2 = RotationBetweenVectors(newUp, desiredUp);
glm::quat targetOrientation = rot2 * rot1;
glm::mat4 M=glm::toMat4(targetOrientation);


Now the matrix M is the desired matrix. To use this matrix, you multiply the matrix M with the current modelview matrix. I do it as follows.


Method 2: Matrix based approach
Main Reference:
This method uses basic vector calculation as follows.

glm::vec3 delta = targetPosition-objectPosition;
glm::vec3 up;
glm::vec3 direction(glm::normalize(delta));
if(abs(direction.x)< 0.00001 && abs(direction.z) < 0.00001){
   if(direction.y > 0)
      up = glm::vec3(0.0, 0.0, -1.0); //if direction points in +y
   else
      up = glm::vec3(0.0, 0.0, 1.0);  //if direction points in -y
} else {
   up = glm::vec3(0.0, 1.0, 0.0);     //y-axis is the general up
}
glm::vec3 right = glm::normalize(glm::cross(up,direction));
up = glm::normalize(glm::cross(direction, right));
return glm::mat4(right.x, right.y, right.z, 0.0f,
        up.x, up.y, up.z, 0.0f,
        direction.x, direction.y, direction.z, 0.0f,
        objectPosition.x, objectPosition.y, objectPosition.z, 1.0f);
Now the matrix M is the desired matrix. To use this matrix, you multiply the matrix M with the current modelview matrix. I do it as follows.


Once again, rather than me explaining the theory, I would ask you to go to the original link given above for details.

Method 3: Using gluLookAt
Main Reference: None :( I found it myself

Now this is the method I was trying to find online, but none of the references seemed to provide the details. I wanted to use the gluLookAt function to obtain the look-at matrix, and here is how I implemented it. To get the matrix from OpenGL, I store the current matrix (glPushMatrix), clear it to identity (glLoadIdentity), and then call the gluLookAt function. Then I extract the current modelview matrix as follows.

glPushMatrix();      //store the current MV matrix
   glLoadIdentity(); //clear the current MV matrix
   gluLookAt(objectPosition.x, objectPosition.y, objectPosition.z,
             targetPosition.x, targetPosition.y, targetPosition.z,
             0, 1, 0);         //up vector assumed to be +Y
   GLfloat MV[16];
   glGetFloatv(GL_MODELVIEW_MATRIX, MV);
glPopMatrix();       //restore the stored MV matrix

The gluLookAt function calculates the orientation matrix I want, but it orients my object in eye space. To get the object-space matrix, I need the inverse of this matrix. I use the glm library to calculate the inverse, passing it my MV matrix as follows (please note how the MV matrix is passed to glm).

glm::mat4 M(MV[0], MV[1], MV[2], MV[3],
            MV[4], MV[5], MV[6], MV[7],
            MV[8], MV[9], MV[10], MV[11],
            MV[12], MV[13], MV[14], MV[15]);

M = glm::inverse(M);

Another thing we need to do is invert the X and Z axes, which point along the -X and -Z directions in object space after the inverse. Thus, I make the following calls.

M[0][0] = -M[0][0];
M[0][1] = -M[0][1];
M[0][2] = -M[0][2];

M[2][0] = -M[2][0];
M[2][1] = -M[2][1];
M[2][2] = -M[2][2];

Now the matrix M is the desired matrix. To use this matrix, you multiply the matrix M with the current modelview matrix. I do it as follows.


This produces the same result as the previous two methods.  For your convenience, here are the three methods in their separate functions.

glm::mat4 GetMatrixMethod1(const glm::vec3& object, const glm::vec3& target) {
    glm::vec3 delta =  (target-object);
    glm::vec3 desiredUp(0,1,0.00001);
    glm::quat rot1 = RotationBetweenVectors(glm::vec3(0,0,1), delta);
    glm::vec3 right = glm::cross(delta, desiredUp);
    desiredUp = glm::cross(right, delta);
    glm::vec3 newUp = rot1 * glm::vec3(0.0f, 1.0f, 0.0f);
    glm::quat rot2 = RotationBetweenVectors(newUp, desiredUp);
    glm::quat targetOrientation = rot2 * rot1;
    glm::mat4 M=glm::toMat4(targetOrientation);
    M[3][0] = object.x;
    M[3][1] = object.y;
    M[3][2] = object.z;
    return M;
}

glm::mat4 GetMatrixMethod2(const glm::vec3& object, const glm::vec3& target) {
    //second method
    glm::vec3 up;
    glm::vec3 direction(glm::normalize(target-object));
    if(abs(direction.x)< 0.00001 && abs(direction.z) < 0.00001){
        if(direction.y > 0)
            up = glm::vec3(0.0, 0.0, -1.0);
        else
            up = glm::vec3(0.0, 0.0, 1.0);
    } else {
        up = glm::vec3(0.0, 1.0, 0.0);
    }
    glm::vec3 right = glm::normalize(glm::cross(up,direction));
    up = glm::normalize(glm::cross(direction, right));
    return glm::mat4(right.x, right.y, right.z, 0.0f,
        up.x, up.y, up.z, 0.0f,
        direction.x, direction.y, direction.z, 0.0f,
        object.x, object.y, object.z, 1.0f);
}

glm::mat4 GetMatrixMethod3(const glm::vec3& object, const glm::vec3& target) {
    //assuming that the current matrix mode is the modelview matrix
    glPushMatrix();
        glLoadIdentity();
        gluLookAt(object.x, object.y, object.z,
                  target.x, target.y, target.z,
                  0, 1, 0); //up vector assumed to be +Y
        GLfloat MV[16];
        glGetFloatv(GL_MODELVIEW_MATRIX, MV);
    glPopMatrix();

    glm::mat4 T(MV[0], MV[1], MV[2], MV[3],
                MV[4], MV[5], MV[6], MV[7],
                MV[8], MV[9], MV[10], MV[11],
                MV[12], MV[13], MV[14], MV[15]);
    T = glm::inverse(T);
    T[2][0] = -T[2][0];
    T[2][1] = -T[2][1];
    T[2][2] = -T[2][2];

    T[0][0] = -T[0][0];
    T[0][1] = -T[0][1];
    T[0][2] = -T[0][2];

    return T;
}

And their usage is as follows.

glm::mat4 M = GetMatrixMethod1(boxPosition, spherePosition);
//glm::mat4 M = GetMatrixMethod2(boxPosition, spherePosition);
//glm::mat4 M = GetMatrixMethod3(boxPosition, spherePosition);

  glTranslatef(spherePosition[0], spherePosition[1], spherePosition[2]);




Copyright (C) 2011 - Movania Muhammad Mobeen. Awesome Inc. theme. Powered by Blogger.