
Transforming points in unit square with p-norm

Introduction
Continuing from the previous post about some little tricks I used when implementing the animation editor, this time I will talk about another maths problem from implementing the directional blend node in the animation tree editor. This node blends 5 different animations (i.e. forward, backward, left, right and center) according to the current direction:

Blending 5 different animations according to the current direction

And the UI of this node has a square input region like this, for artists to preview how those animations are blended together:


So, to compute a blended output, we need to transform the points in the square region to a rhombus, and then we can use barycentric coordinates to determine the blend weights for blending:

Transform the current direction for computing the blend weight
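To make the blend weight computation concrete: once the rhombus is split into triangles (e.g. center/right/forward), the weight of each of the 3 animations at a given direction is its barycentric coordinate in that triangle. A minimal sketch in C++ (the Vec2 type and function name are illustrative, not the editor's actual code):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// Barycentric coordinates of point p with respect to triangle (a, b, c):
// returns weights (wa, wb, wc) with wa + wb + wc = 1, each weight being
// the blend weight of the animation attached to that triangle corner.
void barycentric(Vec2 p, Vec2 a, Vec2 b, Vec2 c,
                 float& wa, float& wb, float& wc)
{
    float den = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
    wa = ((b.y - c.y) * (p.x - c.x) + (c.x - b.x) * (p.y - c.y)) / den;
    wb = ((c.y - a.y) * (p.x - c.x) + (a.x - c.x) * (p.y - c.y)) / den;
    wc = 1.0f - wa - wb;
}
```

For example, with the triangle (center, right, forward) = ((0,0), (1,0), (0,1)), the direction (0.25, 0.25) blends 50% center, 25% right and 25% forward.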

p-norm
After searching Google for a while, something called Lp-space showed up, with the following figures that look similar to what I need:

Showing "unit circle" with different p-norm

All of the above figures show a "unit circle" (i.e. the vectors are of unit length), but measured with different p-norms. For a 2D vector v = (x, y), the p-norm is defined as:

||v||_p = (|x|^p + |y|^p)^(1/p)

This equation describes the length of a vector. When p = 2, it is the Euclidean norm that we normally use. Our goal is to transform points inside the unit square to the rhombus; in other words, we want the transformed points to have the same length as before the transformation, but measured with a different p-norm: p = 1 after the transformation and p = infinity before it. In other words (dealing only with the 2D case, and considering only points in the first quadrant), we have:

x' + y' = lim(p -> infinity) (x^p + y^p)^(1/p)

, where v = (x, y) is the original point and v' = (x', y') is the transformed point.

Mapping the points
As our target is to find a function to transform the points, the above equation alone is not enough, as there are infinitely many ways to transform a point that satisfy it. So we need another equation to find a unique solution. The idea is to have the transformed point, the original point and the origin lying on the same straight line (i.e. having the same slope), which gives another equation:

y' / x' = y / x

Solving the above equations gives the transform functions (for the first quadrant):

x' = x * (x^p + y^p)^(1/p) / (x + y)
y' = y * (x^p + y^p)^(1/p) / (x + y)

Taking into account the points in the other quadrants and the special case at the origin gives the final mapping:

v' = v * (|x|^p + |y|^p)^(1/p) / (|x| + |y|),   with v' = 0 when v = 0
However, taking the limit as p tends to infinity and computing it with Wolfram Alpha does not give any useful result... So I just take p = 10 (taking too large a value results in floating point precision problems...). Here is a plot of how the points are transformed/squeezed into the rhombus:
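The final mapping with p = 10 fits in a few lines of C++ (an illustrative sketch; the function name is mine, not from the editor code):

```cpp
#include <cassert>
#include <cmath>

// Squeeze a point of the unit square toward the unit rhombus: scale v so
// that its 1-norm after the transform equals its p-norm before it, with
// p = 10 approximating p = infinity.
void squareToRhombus(float x, float y, float& ox, float& oy)
{
    const float p = 10.0f;
    float l1 = fabsf(x) + fabsf(y);
    if (l1 < 1e-6f) { ox = 0.0f; oy = 0.0f; return; } // special case: origin
    float lp = powf(powf(fabsf(x), p) + powf(fabsf(y), p), 1.0f / p);
    float s  = lp / l1; // transformed point keeps the slope of the original
    ox = x * s;
    oy = y * s;
}
```

For example, the square corner (1, 1) maps to roughly (0.536, 0.536), whose 1-norm is 2^(1/10) ≈ 1.072 rather than exactly 1 — the imperfection mentioned below.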

Graph showing how points are transformed from square to rhombus 

Although the function cannot transform the points into a perfect rhombus, it does not produce too many artifacts when used for animation blending.

Conclusion
In this post, I have described how to map points inside a unit square to a "rhombus", which is used for animation blending. The problem is not completely solved, as I am not taking the limit of p to infinity, and there are some power functions in the transformation which are not fast... But this is enough for calculating the blend weights of the animations.

Reference
[1] Lp space: http://en.wikipedia.org/wiki/Lp_space

Checking whether a point lies on a Bezier curve

Introduction
It has been a long time since my last post. I was busy in the past few months, and finally had time to finish dual-quaternion skinning and add some basic animation to my code base.

The animation editor
In the animation tree editor, the tree nodes can be connected/disconnected, and each connection is displayed as a cubic Bezier curve. I want the user to be able to right click a curve to disconnect the tree nodes, so checking whether a point lies on a Bezier curve is needed.

Sub-dividing the curve
A cubic Bezier curve is defined as:

B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3,   t in [0, 1]

Solving this analytically is quite complicated. So I just sub-divide the curve into several line segments, and stop the sub-division once the points are close to a straight line. After that, we can check whether the point lies on the line segments instead. First, we start with the 2 end points of the Bezier curve and check whether the mid-point (the red cross) needs to be sub-divided.
Check whether the red point needs to be sub-divided
We can't simply check whether these 3 points are close to a straight line, because there may be an inflection point on the curve, so we need to look ahead 1 step further to see whether the 2 line segments (the blue and green lines below) form a straight line. If not, the red point needs to be sub-divided.
look ahead one more point to avoid inflection point problem
Repeat these steps for each newly generated sub-divided curve segment until all the points are close to a straight line (within a threshold). The sub-divided curve may look like this:

Before sub-division
After sub-division
With this approach, when we zoom into the graph, more points will get sub-divided:

Zooming in will sub-divide more points
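The whole procedure can be sketched as follows. This is a simplified C++ sketch, not the editor's actual code: instead of the exact one-step look-ahead described above, it checks the deviation of several interior curve points from the chord, which serves the same purpose of catching inflection points that happen to lie on the chord.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Pt { float x, y; };

static Pt lerp(Pt a, Pt b, float t) { return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t }; }

// Evaluate a cubic Bezier at parameter t (de Casteljau).
static Pt bezier(Pt p0, Pt p1, Pt p2, Pt p3, float t)
{
    Pt a = lerp(p0, p1, t), b = lerp(p1, p2, t), c = lerp(p2, p3, t);
    Pt d = lerp(a, b, t),  e = lerp(b, c, t);
    return lerp(d, e, t);
}

// Distance from point q to line segment ab.
static float distToSegment(Pt q, Pt a, Pt b)
{
    float vx = b.x - a.x, vy = b.y - a.y;
    float len2 = vx * vx + vy * vy;
    float t = len2 > 0.0f ? ((q.x - a.x) * vx + (q.y - a.y) * vy) / len2 : 0.0f;
    t = fminf(1.0f, fmaxf(0.0f, t));
    float dx = a.x + vx * t - q.x, dy = a.y + vy * t - q.y;
    return sqrtf(dx * dx + dy * dy);
}

// Sub-divide [t0, t1]: if interior points deviate from the chord by more
// than `flatness`, recurse; otherwise emit the chord end point.
static void subdivide(Pt p0, Pt p1, Pt p2, Pt p3, float t0, float t1,
                      float flatness, std::vector<Pt>& out)
{
    Pt a = bezier(p0, p1, p2, p3, t0);
    Pt b = bezier(p0, p1, p2, p3, t1);
    // Several interior samples: a single mid-point test can miss an
    // inflection point lying exactly on the chord.
    bool flat = true;
    const float fracs[3] = { 0.25f, 0.5f, 0.75f };
    for (int i = 0; i < 3; ++i) {
        Pt m = bezier(p0, p1, p2, p3, t0 + fracs[i] * (t1 - t0));
        if (distToSegment(m, a, b) > flatness) { flat = false; break; }
    }
    if (!flat && t1 - t0 > 1e-3f) {
        float tm = 0.5f * (t0 + t1);
        subdivide(p0, p1, p2, p3, t0, tm, flatness, out);
        subdivide(p0, p1, p2, p3, tm, t1, flatness, out);
    } else {
        out.push_back(b); // the start point was already emitted by the caller
    }
}

// True if q lies within `tolerance` of the curve.
bool pointOnBezier(Pt q, Pt p0, Pt p1, Pt p2, Pt p3, float tolerance)
{
    std::vector<Pt> pts;
    pts.push_back(p0);
    subdivide(p0, p1, p2, p3, 0.0f, 1.0f, tolerance * 0.5f, pts);
    for (size_t i = 1; i < pts.size(); ++i)
        if (distToSegment(q, pts[i - 1], pts[i]) <= tolerance) return true;
    return false;
}
```

Making the flatness threshold depend on the zoom level gives the behaviour shown above: zooming in sub-divides more points.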

Conclusion
This short post described a way of checking whether a point lies on a cubic Bezier curve by using line sub-division. In the next post, I will talk about more fun stuff from writing the animation editor.

Implementing Client/Server Tools Architecture

Introduction
In the past few weeks, I was rewriting my toy engine. During the rewrite of the tools, I wanted to try out the client/server tools architecture used at Insomniac Games, because this architecture is crash proof and gives "free" undo/redo. My tools are built using C# instead of as web apps like Insomniac Games' because, as they said, there are not many resources about that approach on the internet. I don't want to spend lots of time solving tricky problems in my hobby project, and would rather focus on graphics programming after the tools are done. Currently only the material editor, model editor and asset browser are implemented.

Screen shot showing all the editors

Overview
In the client/server architecture for tools, all the editors are clients and need to connect to the server to perform editing. All the editing states of a file are maintained in the server. The client and server communicate by sending/receiving JSON messages (which are easy and fast to parse), such as:

{
    "message type" : "load file",
    "file path" : "package/default.texture"
}

The client sends requests to the server, such as loading a file (all assets are in JSON format, such as meshes exported from Maya), and the server replies with the appropriate response, such as the loaded file, or whether the requested operation succeeded. If there is no client request, the server does nothing; the clients keep polling for changes to the editing state. My tools are designed so that both the client and server run on the same machine and can run without an internet connection.

The Server
The server is responsible for handling a set of client requests, such as creating a new JSON file, loading a file, saving a file, changing a file... The server maintains the opened JSON files that were loaded/created by the clients. An undo/redo queue is associated with each opened JSON file in the server. When a client sends a request to modify a JSON file, the inverse change operation is computed and stored in the undo/redo queue. So if the client program crashes, all the editing states, including the undo/redo queue, are not lost, as they are stored in the server.

The editor launcher, which runs the server in the background
The above picture shows my server program, which is a simple C# program. The server is presented as an editor launcher; users can open the other tools (e.g. asset browser, material editor) through it. The launcher has only a few functions, as shown in the UI, which fire up the .exe of the appropriate editor, while in the background it is a JSON file server listening for client requests.

The Clients
All the editors are implemented as clients which need to connect to the server to perform editing. Each editor is a small application performing only a specific task, and each keeps polling changes from the server. Below is the asset browser of the editor, which is also a client program. The asset browser is used to import assets such as meshes, textures and surface shaders. It can also change the loaded asset package directory and the current working directory by modifying the corresponding JSON file in the server, so that the other editors know about the change. Besides, assets can be dragged and dropped onto other editors, which passes the relative path of the asset inside a package so that the other editors can get the asset from the server.

The asset browser, importing a texture
For the tools that have a 3D viewport, such as the material editor, the viewport is implemented as an unmanaged C++ .dll (which uses the same code as the engine) that can be called from the C# program. During initialization of the tool, the C# program passes the window handle of a Control (e.g. a picture box) to the C++ .dll to create the D3D swap chain. The C++ .dll also provides several hook-up functions, such as mouse up/down, viewport resize and timer update callbacks, to the C# program. After start up, the C++ .dll polls changes from the server inside the update function and logically acts like a separate program, independent of the C# program.
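The boundary between the C# UI and the native viewport could look like the following extern "C" interface. This is a hypothetical sketch with stub bodies standing in for the real D3D/engine code; these names are not the actual exported functions of my .dll.

```cpp
#include <cassert>

// Hypothetical C interface the C# editor could P/Invoke to drive the
// native viewport .dll (names are illustrative only).
extern "C" {
    // Create the D3D swap chain for the given window handle.
    bool Viewport_Init(void* windowHandle, int width, int height);
    void Viewport_Resize(int width, int height);
    void Viewport_OnMouseDown(int x, int y, int button);
    void Viewport_Update(float dt); // also polls the server for changes
    void Viewport_Shutdown();
}

// Stub state standing in for the real renderer:
static bool  g_initialized = false;
static int   g_width = 0, g_height = 0;
static float g_time = 0.0f;

bool Viewport_Init(void* /*windowHandle*/, int width, int height)
{
    g_initialized = true;
    g_width = width;
    g_height = height;
    return true;
}
void Viewport_Resize(int width, int height) { g_width = width; g_height = height; }
void Viewport_OnMouseDown(int, int, int) {}
void Viewport_Update(float dt) { g_time += dt; /* poll server, render */ }
void Viewport_Shutdown() { g_initialized = false; }
```

Keeping this surface a handful of plain C functions is what lets the viewport act like an independent program behind the C# UI.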

The 3D viewport is logically separate from the user interface
As mentioned before, the client/server architecture is crash proof: if the client program crashes, no data is lost, because all the file changes are maintained in the server. But during the development of the tools, I encountered a situation where, after the client program crashed, it could not restart. This was because the server was maintaining the file that caused the crash, so every time the client program restarted, it crashed again... To avoid this problem, I added an application exception handler in the C# form, so that every time it crashes, a dialog box appears asking whether to undo the last operation before the crash. This feature also helps debugging, because every time the client crashes, I can undo the changes and then reproduce the same crash easily by repeating the last editing step.

User is offered a last chance to undo the operation before the crash
However, not all crashes caused in the unmanaged C++ .dll can be caught by the application exception handler; only some of them can be handled in C# by using [HandleProcessCorruptedStateExceptions].

Delta JSON
For the client to communicate with the server about file changes (polling changes/making changes), delta JSON is used. The delta JSON I use is similar to the one described in A Client/Server Tools Architecture [1]. For example, suppose we have a material file "package/default.material" like:
{
    "diffuse color" : "red",
    "specular color" : "white",
    "glossiness" : 0.5
}
To change the diffuse color to blue in the above file, a delta JSON message will be sent to the server:

{
    "message type" : "delta changes",
    "file" : "package/default.material",
    "delta changes" :
    [
        {
            "key" : "diffuse color",
            "value" : "blue"
        }
    ]
}
When the server receives the above message, it changes the "package/default.material" file, and at the same time computes the inverse of the delta change, i.e.
{
    "key" : "diffuse color",
    "value" : "red"
}
so that undo can be performed. Note that the delta changes are contained in an array in the JSON message, since more than 1 change operation can be performed, and all the changes in a single message are considered an "atomic operation". The server will roll back the previous changes if the current change is invalid (such as changing a value of an array at an index greater than the length of the array).
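The server-side handling of a delta change message can be sketched as follows. This is a simplified C++ sketch using flat string key/value pairs instead of full JSON documents; the real server operates on nested JSON, but the atomic apply/roll-back/inverse logic is the same idea.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// One delta change: set `key` to `value`.
struct DeltaChange { std::string key, value; };

using Document = std::map<std::string, std::string>;

// Apply all changes atomically and produce the inverse delta that undoes
// them. If any change is invalid (unknown key here), roll back the whole
// message and report failure.
bool applyDelta(Document& doc, const std::vector<DeltaChange>& changes,
                std::vector<DeltaChange>& inverse)
{
    Document backup = doc;
    inverse.clear();
    for (const DeltaChange& c : changes) {
        auto it = doc.find(c.key);
        if (it == doc.end()) { doc = backup; inverse.clear(); return false; }
        // Record the old value first, so the undo replays in reverse order.
        inverse.insert(inverse.begin(), { c.key, it->second });
        it->second = c.value;
    }
    return true;
}
```

Pushing the returned inverse onto the file's undo/redo queue is what makes the undo "free": undoing is just applying the inverse delta as a regular change.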

After the above update, there may be other clients polling for the file change. The clients need to tell the server which version of the file they hold, so that the server can send them the correct delta changes. But instead of sending the whole JSON file to the server to compute the delta changes, an 'editing step' counter is stored in the server for each opened file. This counter is increased for each delta change (and decreased after undo). So a client only needs to send its 'editing step' counter to the server, and the server knows how many delta changes need to be sent back to that client. This 'editing step' counter is also used to ensure that if 2 clients make delta change requests to the server at the same time, only 1 client succeeds while the other fails. However, there was one bug: there is a chance of having the same 'editing step' but a different file state, if an undo operation has been performed on the file and the document is changed afterwards. So to uniquely identify the 'editing step', a GUID is generated with each editing step. If the server finds that the GUID does not match, it cannot compute the delta changes for the client, and the client may simply need to reload the whole document.

Conclusion
The client/server tools architecture is really a smart idea that provides a crash proof editor, taking advantage of the fact that the server software matures much earlier than the client editors. I really like this approach and am glad that I gave implementing it a try. It gives my tools a "free" undo/redo function, multiple viewports and easier debugging, and splitting the editors into smaller client applications makes the code easier to maintain. I guess this approach could enable something interesting, such as doing the rigs/animation in Maya live-synced to the editors, just like this CryEngine trailer.

References
[1] A Client/Server Tools Architecture http://www.itshouldjustworktm.com/?p=875
[2] Developing Imperfect Software: How to prepare for development failure : http://www.gdcvault.com/play/1015319/
[3] Developing Imperfect Software: The Movie http://www.itshouldjustworktm.com/?p=652
[4] New generation of @insomniacgames tools as webapp: http://www.insomniacgames.com/new-generation-of-insomniacgames-tools-as-webapp/
[5] bitsquid Our Tool Architecture: http://bitsquid.blogspot.hk/2010/04/our-tool-architecture.html
[6] Assets are exported from UDK


Shader Generator

Introduction
In the last few weeks, I was busy rewriting my iPhone engine so that it can also run on the Windows platform (so that I can use Visual Studio instead of Xcode~) and, most importantly, so that I can play around with D3D11. During the rewrite, I wanted to improve the process of writing shaders so that I don't need to write similar shaders multiple times for each shader permutation (say, for each surface, I had to write a shader for static mesh, skinned mesh, instanced static mesh... multiplied by the number of render passes), and can instead focus on coding how the surface looks. So I decided to write a shader generator that generates those shaders, similar to the surface shaders in Unity. I chose the surface shader approach instead of a graph based approach like Unreal Engine's because, being a programmer, I feel more comfortable (and faster) writing code than dragging tree nodes in a GUI. In its current implementation, the shader generator can only generate vertex and pixel shaders for the light pre-pass renderer, which is the lighting model I used before.

Defining the surface
To generate the target vertex and pixel shaders with the shader generator, we need to define how the surface looks by writing a surface shader. In my version of surface shaders, I need to define 3 functions: a vertex function, a surface function and a lighting function. The vertex function defines the vertex properties, like position and texture coordinates.
VTX_FUNC_OUTPUT vtxFunc(VTX_FUNC_INPUT input)
{
    VTX_FUNC_OUTPUT output;
    output.position = mul( float4(input.position, 1), worldViewProj );
    output.normal = mul( worldInv, float4(input.normal, 0) ).xyz;
    output.uv0 = input.uv0;
    return output;
}
The surface function describes how the surface looks by defining the diffuse color, glossiness and normal of the surface.
SUF_FUNC_OUTPUT sufFunc(SUF_FUNC_INPUT input)
{
    SUF_FUNC_OUTPUT output;
    output.normal = input.normal;
    output.diffuse = diffuseTex.Sample( samplerLinear, input.uv0 ).rgb;
    output.glossiness = glossiness;
    return output;
}
Finally, the lighting function decides which lighting model is used to calculate the reflected color of the surface.
LIGHT_FUNC_OUTPUT lightFuncLPP(LIGHT_FUNC_INPUT input)
{
    LIGHT_FUNC_OUTPUT output;
    float4 lightColor = lightBuffer.Sample(samplerLinear, input.pxPos.xy * renderTargetSizeInv.xy );
    output.color = float4(input.diffuse * lightColor.rgb, 1);
    return output;
}
By defining the above functions, the writer of the surface shader only needs to fill in the output structures of the functions using the input structures, with some auxiliary functions and shader constants provided by the engine.

Generating the shaders
As you can see in the above code snippets, my surface shader just defines normal HLSL functions with fixed input and output structures. So to generate the vertex and pixel shaders, we just need to copy these functions into the target shader code, which invokes the functions defined in the surface shader. Taking the above vertex function as an example, the generated vertex shader would look like:
#include "include.h"
struct VS_INPUT
{
    float3 position : POSITION0;
    float3 normal : NORMAL0;
    float2 uv0 : UV0;
};
struct VS_OUTPUT
{
    float4 position : SV_POSITION0;
    float3 normal : NORMAL0;
    float2 uv0 : UV0;
};
typedef VS_INPUT VTX_FUNC_INPUT;
typedef VS_OUTPUT VTX_FUNC_OUTPUT;
/********************* User Defined Content ********************/
VTX_FUNC_OUTPUT vtxFunc(VTX_FUNC_INPUT input)
{
    VTX_FUNC_OUTPUT output;
    output.position = mul( float4(input.position, 1), worldViewProj );
    output.normal = mul( worldInv, float4(input.normal, 0) ).xyz;
    output.uv0 = input.uv0;
    return output;
}
/******************** End User Defined Content *****************/
VS_OUTPUT main(VS_INPUT input)
{
    return vtxFunc(input);
}
During code generation, the shader generator needs to figure out which input and output structures are needed to feed the user defined functions. This task is simple and can be accomplished with some string functions.

Simplifying the shader
As I mentioned before, my shader generator is used for generating shaders for the light pre-pass renderer. There are 2 passes in a light pre-pass renderer, which need different shader inputs and outputs. For example, in the G-buffer pass the shaders are only interested in the surface normal data and not the diffuse color, while the data needed by the second geometry pass is the opposite. However, all the surface information (surface normal and diffuse color) is defined in the surface function inside the surface shader. If we simply generate shaders as in the last section, we will generate some redundant code that cannot be optimized by the shader compiler. For example, the pixel shader in the G-buffer pass may sample the diffuse texture, which requires the texture coordinate input from the vertex shader, but the diffuse color is actually not needed in this pass, and the compiler may not be able to figure out that we don't need the texture coordinate output in the vertex shader. Of course, we could force the writer to add some #if preprocessor directives inside the surface function for a particular render pass to eliminate the useless outputs, but this would complicate the surface shader authoring process: writing a surface shader is about describing how the surface looks and, ideally, one shouldn't need to worry about the outputs of a render pass.

So the problem is to figure out which output data are actually needed in a given pass, and eliminate the outputs that are not. For example, say we are generating shaders for the G-buffer pass and have a surface function:

SUF_FUNC_OUTPUT sufFunc(SUF_FUNC_INPUT input)
{
    SUF_FUNC_OUTPUT output;
    output.normal = input.normal;
    output.diffuse = diffuseTex.Sample( samplerLinear, input.uv0 ).rgb;
    output.glossiness = glossiness;
    return output;
}
We only want to keep the variables output.normal and output.glossiness. The variable output.diffuse, and the other variables referenced only by output.diffuse (diffuseTex, samplerLinear, input.uv0), are to be eliminated. To find such variable dependencies, we need to teach the shader generator to understand the HLSL grammar and find all the assignment statements and branching conditions to derive the variable dependencies.
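The dependency walk itself is just a graph reachability problem, which can be sketched independently of the parser. In this illustrative C++ sketch (separate from the actual lex & yacc based implementation), each assignment contributes edges from the assigned variable to the variables its right-hand side references; everything unreachable from the outputs a pass keeps can be eliminated:

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>
#include <vector>

// variable -> the variables its right-hand side references
using DepGraph = std::map<std::string, std::set<std::string>>;

// Depth-first walk from the outputs a pass keeps; the result is the set of
// live variables. Assignments to anything else can be taken out.
std::set<std::string> liveVariables(const DepGraph& deps,
                                    const std::vector<std::string>& keptOutputs)
{
    std::set<std::string> live;
    std::vector<std::string> stack(keptOutputs.begin(), keptOutputs.end());
    while (!stack.empty()) {
        std::string v = stack.back();
        stack.pop_back();
        if (!live.insert(v).second) continue; // already visited
        auto it = deps.find(v);
        if (it != deps.end())
            for (const std::string& d : it->second) stack.push_back(d);
    }
    return live;
}
```

For the surface function above, keeping output.normal and output.glossiness leaves diffuseTex, samplerLinear and input.uv0 dead, which is exactly what the G-buffer pass wants.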

To do this, we need to generate an abstract syntax tree from the shader source code. Of course, we could write our own LALR parser to achieve this goal, but I chose to use lex & yacc (or flex & bison) to generate the parse tree. Luckily, we are working with a subset of the HLSL syntax (we only need to define functions and don't need pointers), and HLSL syntax is similar to the C language, so modifying the ANSI C grammar rules for lex & yacc does the job. Here is my modified grammar rule used to generate the parse tree. By traversing the parse tree, the variable dependencies can be obtained; hence we know which variables need to be eliminated, eliminate them by taking out the assignment statements, and the compiler does the rest. Below is the simplified pixel shader generated for the previous example:
#include "include.h"
cbuffer _materialParam : register( MATERIAL_CONSTANT_BUFFER_SLOT_0 )
{
    float glossiness;
};
Texture2D diffuseTex : register( MATERIAL_SHADER_RESOURCE_SLOT_0 );
struct PS_INPUT
{
    float4 position : SV_POSITION0;
    float3 normal : NORMAL0;
};
struct PS_OUTPUT
{
    float4 gBuffer : SV_Target0;
};
struct SUF_FUNC_OUTPUT
{
    float3 normal;
    float glossiness;
};
typedef PS_INPUT SUF_FUNC_INPUT;
/********************* User Defined Content ********************/
SUF_FUNC_OUTPUT sufFunc(SUF_FUNC_INPUT input)
{
    SUF_FUNC_OUTPUT output;
    output.normal = input.normal;
    ;
    output.glossiness = glossiness;
    return output;
}
/******************** End User Defined Content *****************/
PS_OUTPUT main(PS_INPUT input)
{
    SUF_FUNC_OUTPUT sufOut = sufFunc(input);
    PS_OUTPUT output;
    output.gBuffer = normalToGBuffer(sufOut.normal, sufOut.glossiness);
    return output;
}
Extending the surface shader syntax
As I use lex & yacc to parse the surface shader, I can extend the surface shader syntax by adding more grammar rules, so that the writer of the surface shader can declare which shader constants and textures are needed in their surface function, in order to generate the constant buffers and shader resources in the source code. My surface shader syntax also permits users to define their own structs and functions besides the 3 main functions (vertex, surface and lighting function), which are also copied into the generated source code. Here is a sample of what my surface shader looks like:

RenderType
{
    opaque;
};
ShaderConstant
{
    float glossiness : ui_slider_0_255_Glossiness;
};
TextureResource
{
    Texture2D diffuseTex;
};
VTX_FUNC_OUTPUT vtxFunc(VTX_FUNC_INPUT input)
{
    VTX_FUNC_OUTPUT output;
    output.position = mul( float4(input.position, 1), worldViewProj );
    output.normal = mul( worldInv, float4(input.normal, 0) ).xyz;
    output.uv0 = input.uv0;
    return output;
}
SUF_FUNC_OUTPUT sufFunc(SUF_FUNC_INPUT input)
{
    SUF_FUNC_OUTPUT output;
    output.normal = input.normal;
    output.diffuse = diffuseTex.Sample( samplerLinear, input.uv0 ).rgb;
    output.glossiness = glossiness;
    return output;
}
LIGHT_FUNC_OUTPUT lightFuncLPP(LIGHT_FUNC_INPUT input)
{
    LIGHT_FUNC_OUTPUT output;
    float4 lightColor = lightBuffer.Sample(samplerLinear, input.pxPos.xy * renderTargetSizeInv.xy );
    output.color = float4(input.diffuse * lightColor.rgb, 1);
    return output;
}
Conclusions
This post described how I generate vertex and pixel shader source code for different render passes from a surface shader definition, which saves me from writing similar shaders multiple times and from worrying about the particular shader inputs and outputs of each render pass. Currently, the shader generator can only generate vertex and pixel shaders in HLSL for static meshes in the light pre-pass renderer. The shader generator is still a work in progress: generating shader source code for the forward pass is not done yet. Besides, domain, hull and geometry shaders are not implemented. GLSL support is also missing, but it could be added (in theory...) by building a more sophisticated abstract syntax tree while parsing the surface shader grammar, or by defining some new grammar rules in the surface shader (using lex & yacc) to make generating both HLSL and GLSL source code easier. But these are left for the future, as I still need to rewrite my engine and get it running again...

References
[1] Unity - Surface Shader Examples http://docs.unity3d.com/Documentation/Components/SL-SurfaceShaderExamples.html
[2] Lex & Yacc Tutorial http://epaperpress.com/lexandyacc/
[3] ANSI C grammar, Lex specification http://www.lysator.liu.se/c/ANSI-C-grammar-l.html
[4] ANSI C Yacc grammar http://www.lysator.liu.se/c/ANSI-C-grammar-y.html
[5] http://www.ibm.com/developerworks/opensource/library/l-flexbison/index.html
[6] http://www.gamedev.net/topic/200275-yaccbison-locations/


Writing an iPhone Game Engine (Part 2- Maya Tools)

Tools are very important in game production, especially when you are working with someone who cannot write code. In my project, I worked with 2 artists, so I needed to write some tools to export their models to my engine. There are different ways to export the models: you can parse the .obj file format (for static models only), read .fbx files using the FBX SDK, read COLLADA files... But I chose to extract the data directly from the modeling package that the artists use, by writing a Maya plugin to extract the model data.

To write a Maya plugin for exporting models, we should first know how data is stored in Maya. Basically, Maya stores most of its data (e.g. meshes, transformations...) in a Directed Acyclic Graph (DAG). In my case, I just need to locate the DAG nodes that store the mesh data. We can traverse the DAG using the iterator MItDag like this:

MStatus status;
MItDag dagIter( MItDag::kDepthFirst, MFn::kInvalid, &status );
MDagPathArray meshPath; // store the DAG nodes that contain meshes
for ( ; !dagIter.isDone(); dagIter.next())
{
    MDagPath dagPath;
    status = dagIter.getPath( dagPath );
    if ( status )
    {
        MFnDagNode dagNode( dagPath, &status );

        // Filter out the DAG nodes that do not contain a mesh
        if ( dagNode.isIntermediateObject()) continue;
        if ( !dagPath.hasFn( MFn::kMesh )) continue;
        if ( dagPath.hasFn( MFn::kTransform )) continue;
        meshPath.append(dagPath);
    }
}
Then we can get the mesh data from the DAG nodes using MFnMesh like this:

for(unsigned int i = 0; i < meshPath.length(); ++i)
{
    MDagPath dagPath = meshPath[i];

    MFnMesh fnMesh( dagPath );
    MPointArray meshPoints; // store the positions of the vertices
    fnMesh.getPoints( meshPoints, MSpace::kWorld );

    // get more mesh data such as normals, UVs...
}
For the details of getting the mesh data, you may refer to MAYA API How-To and the Maya Exporter Factfile. After getting the mesh data, you can export it by creating a sub-class of MPxFileTranslator and overriding the writer() function. You can find some useful sample code provided with Maya inside the Maya directory (/Applications/Autodesk/maya2010/devkit/plug-ins/ on the Mac platform), such as maTranslator.cpp and objExport.cpp.

Another reason I chose to write a plugin instead of parsing .fbx/COLLADA is extracting the animation data. In my project, I just need to export some simple animations that linearly interpolate between key frames, and I would like to get the key frames defined by the artists in Maya. I tried using the FBX SDK, but when exporting animation data, it bakes all the animation frames as key frames... Using COLLADA was even worse, because I could not find a good exporter for Maya on the Mac platform... Writing a Maya plugin gets rid of all these problems and gets exactly the data I want. I can also write a script for artists to set the animation clip data:

After exporting the mesh data, I thought it would be nice to edit the collision geometry inside Maya too, so I wrote another plugin to define the collision shapes of the models:

This plugin works similarly to the Dynamica plugin (in fact, I learnt a lot from it), except mine can only define simple shapes: spheres, boxes and capsules. My plugin also cannot do physics simulation inside Maya; it is just for defining the collision shapes. The collision shapes (sphere/box/capsule) are just sub-classes of MPxLocatorNode, overriding the draw() method with some OpenGL calls to render the corresponding shapes.

In conclusion, extracting mesh data directly from Maya is not that hard. We can get all the data, such as vertex normals, UV sets and key frame data, from Maya, and do not need to worry about data loss during export through other formats, especially for animation data. Maya also provides a convenient API to get this data, and it is easy to learn. After getting familiar with the Maya API, I could also write another plugin to define the collision shapes. Next time you need to export mesh data, you may consider extracting it directly from the modeling package rather than parsing a file format.

Reference:
[1] MAYA API How-To: http://ewertb.soundlinker.com/api/api.018.php
[2] Maya Exporter Factfile: http://nccastaff.bournemouth.ac.uk/jmacey/RobTheBloke/www/research/index.htm
[3] Rob The Bloke: http://nccastaff.bournemouth.ac.uk/jmacey/RobTheBloke/www/
[4] http://www.vfxoverflow.com/questions/add-remove-framelayouts-in-a-window-using-mel
[5] http://bulletphysics.org/mediawiki-1.5.8/index.php/Maya_Dynamica_Plugin