Mobile Java 3D programming

Introduction
To begin with I'd like you to know of a few links on the net that will be very helpful on your journey towards M3G land.

First of all, and probably most importantly, is the dedicated Mobile Java 3D web section on Sony Ericsson Developer World. Second, if you ever get stuck, visit the Sony Ericsson Mobile Java 3D forum. For everything else, use the Sony Ericsson Developer World portal, where you will find the answers to your questions and more.

Now that you know where to go if you get in trouble, let's proceed with the tutorial. The goal of this tutorial is to teach you how to set up your own 3D Canvas and make it render stuff on screen. To render models, I'll first show you how to load them and tell you about the tools that are available to create M3G models. Then we'll finish by manipulating the camera a bit so that we can walk around in our scene. I just want you to get warm in your seat and see how fast one can develop a 3D application with M3G, so this tutorial will be pretty fast and straightforward with little in-depth explanation. The other parts of this series will explore the various M3G topics in detail.

Since the code is meant for educational purposes it isn't optimal, nor does it cover all the errors that might occur. These are more advanced topics that will be addressed later on.

What you should know
Before you start reading this, you should know the basics of a MIDlet class and a Canvas class. This isn't a hard topic and if you feel lost, consult the source code (distributed with this tutorial) and check out the M3GMIDlet and M3GCanvas classes. It's also very good if you have some background in 3D programming/math, but it's not required.
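If you want a rough picture of what that MIDlet looks like, here is a hedged skeleton; the real M3GMIDlet class distributed with the tutorial may differ in naming and details (it also assumes M3GCanvas has a no-argument constructor):

import javax.microedition.lcdui.Display;
import javax.microedition.midlet.MIDlet;

public class M3GMidlet extends MIDlet
{
    private M3GCanvas canvas;

    public void startApp()
    {
        // Create the canvas, show it and start its game-loop thread
        canvas = new M3GCanvas();
        Display.getDisplay(this).setCurrent(canvas);
        new Thread(canvas).start();
    }

    public void pauseApp()
    {
    }

    public void destroyApp(boolean unconditional)
    {
        notifyDestroyed();
    }
}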

The Canvas
When we develop in JSR 184 we will be using the MIDP 2.0 profile, which means we get a few great functions for free. Let's begin by setting up our Canvas. It is the same procedure as with a normal 2D Java game: you set up your MIDlet class, you start your Canvas and you draw in your paint method. This is a fairly easy process and since you should already know it I'll just quickly skim through it. Let's first take a look at the header of the Canvas class, the imports and variable declarations.

import javax.microedition.lcdui.Graphics;
import javax.microedition.lcdui.game.GameCanvas;
import javax.microedition.m3g.Camera;
import javax.microedition.m3g.Graphics3D;
import javax.microedition.m3g.Light;
import javax.microedition.m3g.Loader;
import javax.microedition.m3g.Object3D;
import javax.microedition.m3g.Transform;
import javax.microedition.m3g.World;
/**
*
* @author Biovenger
* @version
*/
public class M3GCanvas
    extends GameCanvas
    implements Runnable {

    // Thread control
    boolean running = false;
    boolean done = true;

    // If the game should end
    public static boolean gameOver = false;

    // Rendering hints
    public static final int STRONG_RENDERING_HINTS = Graphics3D.ANTIALIAS | Graphics3D.TRUE_COLOR | Graphics3D.DITHER;
    public static final int WEAK_RENDERING_HINTS = 0;
    public static int RENDERING_HINTS = STRONG_RENDERING_HINTS;

    // Key array
    boolean[] key = new boolean[5];

    // Key constants
    public static final int FIRE = 0;
    public static final int UP = FIRE + 1;
    public static final int DOWN = UP + 1;
    public static final int LEFT = DOWN + 1;
    public static final int RIGHT = LEFT + 1;


That was pretty basic stuff, but let's quickly see what's going on. First of all we have a lot of imports; we're simply importing all the classes that we're going to use in this tutorial, and you can find their documentation in the general JSR 184 API javadoc. We also have some thread variables, such as running and done, but those should be pretty self-explanatory.

Now, let's check out the rendering hints. These "hints" are ways of telling the mobile device what kind of quality you want while rendering. However, since they are only hints, it's not guaranteed that the mobile device will act upon them. Here I define two different sets of hints, weak and strong. As you can see, the strong rendering hints request anti-aliasing, true color and dithering, while the weak set holds no hints at all, which is basically the ugliest and fastest rendering you can get. As the code shows, hints can be combined with a simple bitwise OR. I'll talk more about hints in a later part of this tutorial series.

Next we have the key array, a very simple array that keeps track of which keys are pressed. If you are curious as to how keys are processed, check out the source code of this example. Suffice it to say: by querying if(key[UP]) you can find out whether the UP key is currently pressed.
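For reference, here is one hedged way the key array could be filled each frame using the GameCanvas key-state mechanism; the actual source distributed with this tutorial may handle input differently:

// Poll the GameCanvas key states and update our boolean key array
private void updateKeys()
{
    int keyStates = getKeyStates();

    key[UP]    = (keyStates & UP_PRESSED)    != 0;
    key[DOWN]  = (keyStates & DOWN_PRESSED)  != 0;
    key[LEFT]  = (keyStates & LEFT_PRESSED)  != 0;
    key[RIGHT] = (keyStates & RIGHT_PRESSED) != 0;
    key[FIRE]  = (keyStates & FIRE_PRESSED)  != 0;
}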

The M3G File Format
The JSR 184 standard has its own format, called M3G. This very versatile 3D format can hold tons of data such as models, lights, cameras, textures and even animation. Nifty! Not only is the format very capable, it's also really easy to load into your application; more on that later, though. Anyhow, I bet you're thinking "M3G? Never heard of it. How do I create an M3G file?" and even if you weren't thinking it, I'll explain. There are numerous ways to create M3G files:

1. First of all, the latest iteration of Discreet's 3D Studio Max has a built-in M3G exporter. Just hit the Export button and you can export your entire scene, animation, bones, materials and all, into an M3G file. However, many find Discreet's exporter a bit cumbersome and somewhat buggy, so for best results, use method 2.

2. HI Corporation, who also provide Sony Ericsson's JSR 184 implementation, have created very powerful exporters for the three most popular 3D modeling programs: 3D Studio Max, LightWave and Maya. You can find them here>>

3. Blender, a powerful and free 3D modeling tool, also has an M3G exporter available. However, it's still early in development and a bit buggy. Check out Blender here>>


So, how do we load these very powerful files into our program? Very easily. JSR 184 contains a class called Loader, and it does exactly that: it loads files. With one simple method call you can load everything an M3G file contains. The method is called Loader.load and it comes in two forms: one takes a named resource or URL as a String, and the other takes a raw byte array with an offset. Here's an example of how it is used.

Object3D[] objects = Loader.load("file.m3g");
Object3D[] objects2 = Loader.load(byteArray, offset);


The load method always returns an array of Object3Ds, and there's a good reason for that: the Loader class can handle more than M3G files, since it can basically deserialize any class that inherits from Object3D. However, you'll mostly use it for loading M3G files.
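As a small hedged example of that flexibility, the Loader can also read PNG images and return them as Image2D objects (the resource name below is made up):

// Element 0 of the returned array is an Image2D we can use as a texture image
Object3D[] loaded = Loader.load("/res/texture.png");
Image2D textureImage = (Image2D)loaded[0];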

Now, I've created a simple M3G file, called map.m3g, and I want to render it. To load this file we'll use the Loader.load method; however, as you've just seen, it returns an array of Object3D. We can't use the Object3D array for rendering as it is, so we need to convert it into something that we can render to screen later on. In this tutorial we'll load the World node. The World node is the top node of the JSR 184 scene graph. It holds all kinds of information such as the camera, lighting, background and all the meshes. I'll go through scene graphs and JSR 184's implementation of scene graphs in a later part of this series; for now you just need to know that the World class can hold a whole scene, and that's exactly what we want! Check out this method, which loads the World node from an M3G file.

/** Loads our world */
private void loadWorld()
{
    try
    {
        // Loading the world is very simple. Note that I like to use a
        // res-folder that I keep all files in. If you normally just put your
        // resources in the project root, then load it from the root.
        Object3D[] buffer = Loader.load("/res/map.m3g");

        // Find the world node, best to do it the "safe" way
        for(int i = 0; i < buffer.length; i++)
        {
            if(buffer[i] instanceof World)
            {
                world = (World)buffer[i];
                break;
            }
        }

        // Clean objects
        buffer = null;
    }
    catch(Exception e)
    {
        // ERROR!
        System.out.println("Loading error!");
        reportException(e);
    }
}


As you can see, after we load the Object3D array with the Loader class, we simply go through the entire array and look for the World node. This is the safest way of finding a World node. Once we find it we break out of the loop and clear our buffer (not that it's strictly needed, as it would be garbage collected once we leave the method anyway; it's good practice, though).

OK, now we have loaded our World node, which I've already told you is the top node of a scene graph and holds all scene information. Before I show you how easy it is to render, let's first extract a camera so that we can move around the world we just loaded.

Camera handling
We have our World node ready for rendering, and now all we need is a camera that we can move around in the world. If you remember, I've already told you that the World node holds Camera information, so we should be able to extract the camera from the World and manipulate it.

A camera in JSR 184 is described by the Camera class. This class makes it very easy to manipulate the camera in our 3D application with simple translation and orientation methods. The only two methods we'll be using in this example are the translate(float, float, float) and the setOrientation(float, float, float, float). The first one simply moves the camera in 3D space by an offset in x, y and z. So, for instance, if you wanted to move the camera 3 units on the X axis and 3 units on the Z axis, you'd do this:

Camera cam = new Camera(); // This is our camera
//Move camera X Y Z
cam.translate(3.0f, 0.0f, 3.0f);

Piece of cake! Each method call translates the camera further, so two of the above calls would actually translate the camera 6 units along the X axis and 6 units along the Z axis. Rotating is just as easy, but I have to explain the method first. It works like almost all 3D API rotation methods: you have four parameters, where the first one is the rotation angle in degrees and the last three compose the axis vector (xAxis, yAxis, zAxis) to rotate around. Orientation and orientation vectors will come later in the series; for now, just know this:

//Rotate camera 30 degrees around the X axis
cam.setOrientation(30.0f, 1.0f, 0.0f, 0.0f);
//Rotate camera 30 degrees around the Y axis
cam.setOrientation(30.0f, 0.0f, 1.0f, 0.0f);
//Rotate camera 30 degrees around the Z axis
cam.setOrientation(30.0f, 0.0f, 0.0f, 1.0f);

Note that the method is named setOrientation, which means that it actually clears any previous rotation you might have done. I'll assume that you already know what rotation around an axis means and won't go into detail on that topic here.
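So if you want a rotation that accumulates over time, keep the running angle in a variable of your own; a minimal sketch (the angle field is hypothetical):

// setOrientation replaces the previous rotation, so track the total angle yourself
angle += 5.0f;                               // add five degrees this frame
cam.setOrientation(angle, 0.0f, 1.0f, 0.0f); // absolute rotation around the Y axis

This is exactly the pattern the camera rotation code will use in a moment.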

Now that you know everything to make the camera move and rotate as you wish I'll show you how to extract a Camera from a World.

/** Loads our camera */
private void loadCamera()
{
    // BAD!
    if(world == null)
        return;

    // Get the active camera from the world
    cam = world.getActiveCamera();

    // Create a light
    Light l = new Light();

    // Make sure it's AMBIENT
    l.setMode(Light.AMBIENT);

    // We want a little higher intensity
    l.setIntensity(3.0f);

    // Add it to our world
    world.addChild(l);
}


It's that simple? Yes, it's that simple. We just extract the active camera from the World by using the getActiveCamera method. This gives us the camera that the world was exported with, and with it we have a camera that we can move around as much as we want. However, the method is doing something else as well: it adds a light! We will delve deeper into lights in later parts, but here you can see how easy it is to add a light to your world. I create an ambient light (for those who don't know, an ambient light lights all surfaces from all directions) and add it to the world. This way we get a well-lit world. As I told you earlier, the World node can hold all kinds of information, including lights, so we only need to add the light once to our world and JSR 184 will do the rest for us. Isn't that handy?

Before we get to the last part, rendering, let's make the camera move. I've already told you that the array of booleans, key, holds our key information, so all we have to do is query the array and make our camera behave. First of all, we'll need some variables to make our camera obey.
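By the way, the loadCamera method above assumes the world was exported with an active camera. If getActiveCamera ever returns null, a hedged workaround is to create and register a camera yourself; the field of view and clipping planes below are illustrative values, not something from the tutorial source:

// Fallback sketch: give the world a camera if the file didn't include one
if(cam == null)
{
    cam = new Camera();

    // Assumed settings: 60-degree field of view, the canvas aspect ratio,
    // and near/far clipping planes at 0.1 and 1000 units
    cam.setPerspective(60.0f, (float)getWidth() / (float)getHeight(), 0.1f, 1000.0f);

    // Add the camera to the scene graph and make it the active one
    world.addChild(cam);
    world.setActiveCamera(cam);
}

With that covered, here are the variables: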

// Camera rotation
float camRot = 0.0f;
double camSine = 0.0f;
double camCosine = 0.0f;

// Head bobbing
float headDeg = 0.0f;


The variables above will help us keep track of the camera's rotation, trigonometry and the head bobbing. The trigonometry is used for movement later on. Head bobbing is really quite simple, it's just a cheap effect we'll insert to make the camera bob up and down as we walk around the world, for a more natural feeling. All right, all we need to do is move the camera. This is done in the following method:

private void moveCamera() {
    // Check controls
    if(key[LEFT])
    {
        camRot += 5.0f;
    }
    else if(key[RIGHT])
    {
        camRot -= 5.0f;
    }

    // Set rotation
    cam.setOrientation(camRot, 0.0f, 1.0f, 0.0f);

    // Calculate trigonometry for camera movement
    double rads = Math.toRadians(camRot);
    camSine = Math.sin(rads);
    camCosine = Math.cos(rads);

As you can see, this half of the method is pretty simple. First we check whether the user has pressed the LEFT or RIGHT joystick key, and if so we simply increase or decrease the rotation of the camera. That's simple enough. The next few lines are interesting, though. We want the head to turn as the user presses right or left, so we rotate around the Y axis, which means an orientation vector of 0.0f, 1.0f, 0.0f. After we rotate the camera we calculate the new sine and cosine of the angle, which are used later for the movement calculation. Now on to the second half of the method:

    if(key[UP])
    {
        // Move forward
        cam.translate(-0.1f * (float)camSine, 0.0f, -0.1f * (float)camCosine);

        // Bob head
        headDeg += 0.5f;

        // A simple way to "bob" the camera as the user moves
        cam.translate(0.0f, (float)Math.sin(headDeg) / 40.0f, 0.0f);
    }
    else if(key[DOWN])
    {
        // Move backward
        cam.translate(0.1f * (float)camSine, 0.0f, 0.1f * (float)camCosine);

        // Bob head
        headDeg -= 0.5f;

        // A simple way to "bob" the camera as the user moves
        cam.translate(0.0f, (float)Math.sin(headDeg) / 40.0f, 0.0f);
    }

    // If the user presses the FIRE key, let's quit
    if(key[FIRE])
        M3GMidlet.die();
}

Here we check for UP or DOWN. UP will move the camera forward and DOWN will move it backwards. These are really simple translations, but I'll explain them quickly. The camera always starts out looking down the negative Z axis, so to move an unrotated camera forward we only have to move it in the negative Z direction. However, once we rotate the camera we can't just move it along the Z axis anymore; it would look wrong. We have to move the camera along the X axis as well to get the movement we desire. This is where the trigonometry comes in: a camera rotated camRot degrees around the Y axis looks along the direction (-sin(camRot), 0, -cos(camRot)), which is exactly why the translation above multiplies the step size by the sine and cosine of the angle. Since this isn't a tutorial on 3D math I won't go into more detail; you should already know this, and if you find it hard, look up a good 3D math tutorial on the internet.

After every translation we also move the head with my simple head-bobbing hack. I just supply the translate method with a sine function along the Y axis, so that it looks like the head is going up and down; that's why we either increase or decrease the headDeg variable each time the camera is moved. We also check for the FIRE key at the end, so that the user can quit the application whenever he wants. (He can also use the invisible EXIT command that I added when the canvas was created.)
That's it! That's all our advanced camera movement, now all that's left is to render the World node!


Rendering
Before we jump into code, I have to tell you about immediate and retained mode rendering. Retained mode is what we are using in this tutorial; it is basically the mode you use when you render an entire World node with all its cameras, lights and meshes. This is the easiest mode to render in, but also the mode in which you have the least control over your world. Immediate mode is when you render a group of meshes, a single mesh or vertex data directly. This gives you much more control, as with each rendering call you supply a transform matrix that transforms the object before it's rendered. You can render a World node in immediate mode by supplying a transform matrix to the render method call, but then you'd be ignoring all the nifty extras a World node has, such as its camera and background. I'll go into more detail about the two rendering modes in later parts of this series. For now, let's see how we can render a world.
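To make the difference a bit more concrete, here is a hedged sketch of what an immediate-mode call could look like. Don't worry about the Graphics3D details yet; they are covered in the next section, and someCamera, someMesh and the transform values are made up purely for illustration:

// Immediate mode: we render a single node and supply the transform ourselves
Graphics3D g3d = Graphics3D.getInstance();

Transform transform = new Transform();
transform.postTranslate(0.0f, 0.0f, -5.0f); // push the mesh away from the viewer

g3d.bindTarget(getGraphics());
try
{
    // In immediate mode the camera is set on the Graphics3D object itself
    g3d.setCamera(someCamera, null);

    // Render one Mesh with our own transform
    g3d.render(someMesh, transform);
}
finally
{
    g3d.releaseTarget();
}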


Graphics3D
All rendering in JSR 184 is done with the Graphics3D object. It can even hold camera and light information, if you are rendering in immediate mode. Let's not worry about that now though since that's an issue I'll address later on.

To render with a Graphics3D object, you first must bind it to a graphics context. A graphics context basically means any Graphics object that draws into something. It could be the Graphics object of an Image, if you want to render into an Image, or it could be the main Graphics object obtained from the getGraphics() method. By using the main Graphics object you render directly to the screen, which is what we want to do here. Getting a Graphics3D object is simple: you just call the Graphics3D.getInstance() method. There is only one Graphics3D instance per application, which is why you obtain it through getInstance rather than a constructor. Binding is done with the bindTarget method, and there are a few ways of using it. Here are a few examples.


// Here is our Graphics3D object
Graphics3D g3d = Graphics3D.getInstance();

// Bind to an image. The image must be mutable (created with a width and
// height); an image loaded from a file is immutable and has no Graphics.
Image img = Image.createImage(128, 128);
Graphics g = img.getGraphics();
g3d.bindTarget(g);

// Or bind to the main Graphics object to render directly to the screen.
// (Only one target can be bound at a time, so in a real program you would
// pick one of these, not both.)
g3d.bindTarget(getGraphics());

// We can also supply rendering hints. Remember those? I talked about them at
// the beginning. This is done by using the other form of the bindTarget
// method. It takes a Graphics object to begin with, as always, and then it
// needs a boolean and an integer mask of hints. The boolean simply tells the
// Graphics3D object if it should use a depth buffer, and you'll probably
// always set it to 'true'. Here is how we'll use it to bind with our hints:
g3d.bindTarget(getGraphics(), true, RENDERING_HINTS);


Now that you know how to bind your target, you also need to know that the target must be released every game loop. That means that when you've rendered everything, you must release the target. Binding and releasing can sometimes cause problems, so most people keep the whole game loop within a try/catch block and put the releaseTarget call in a finally clause. That's how we'll do it in this example. Now, let's take a look at the rendering method. To render something you can use a variety of rendering methods, but we'll only be interested in one today: the render(World) method. Simple, huh? Yeah, you only need to supply your World node and it'll render it for you. Let's take a look at what our game loop will look like:

/** Draws to screen */
private void draw(Graphics g)
{
    // Envelop all in a try/catch block just in case
    try
    {
        // Move the camera around
        moveCamera();

        // Get the Graphics3D context
        g3d = Graphics3D.getInstance();

        // First bind the graphics object. We use our pre-defined rendering hints.
        g3d.bindTarget(g, true, RENDERING_HINTS);

        // Now, just render the world. Simple as pie!
        g3d.render(world);
    }
    catch(Exception e)
    {
        reportException(e);
    }
    finally
    {
        // Always remember to release!
        g3d.releaseTarget();
    }
}

Wow, that's a really short game loop; let's see what it does! First it calls the moveCamera method, which moves and rotates our camera; we've seen it before. Then it gets the Graphics3D instance and binds it to the Graphics object supplied to the draw method. (Note: the draw method is called by the thread's run method, which passes the canvas's Graphics object to it.)

It also adds the rendering hints we defined at the start of our Canvas. After all this is done, it just calls the g3d.render(world) method which does all the work for us. It renders our entire scene, meshes, materials, lights and camera.
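To tie it all together, here is a hedged sketch of the thread's run method that could drive this draw call. The timing and the updateKeys helper are assumptions; the real M3GCanvas source may organize its loop differently:

/** The game loop that drives input, logic and rendering */
public void run()
{
    // The GameCanvas off-screen buffer's graphics context
    Graphics g = getGraphics();

    while(running && !gameOver)
    {
        updateKeys();    // poll the key states (hypothetical helper)
        draw(g);         // move the camera and render the world
        flushGraphics(); // copy the off-screen buffer to the display

        // Crude frame limiter
        try { Thread.sleep(30); }
        catch(InterruptedException ie) {}
    }

    done = true;
}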

JAVA 3D

1. What is Java 3D?

Java3D is a low-level, scene-graph-based 3D graphics programming API for the Java language. It does not form part of the core APIs required by the Java specification. The class libraries live under the javax.media.j3d top-level package, with supporting math classes provided in javax.vecmath.

As a low-level API it provides routines for creating 3D geometries in a scene-graph structure that is independent of the underlying hardware implementation, and it is aimed at real-time programming. The API provides scene-graph compilation and other optimisation techniques. It is heavily optimised towards the requirements of real-time 3D rendering and hence does not contain capabilities for the photo-realistic rendering effects used to produce movie-quality images (i.e. ray-tracing or radiosity-based rendering algorithms).

The Java3D API consists of two parts: the API specification and the implementation. Java3D is primarily defined by the specification, and anyone may implement it. Sun also provides an implementation of the specification, but is encouraging third-party developers to implement J3D directly on top of the hardware.
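To give you a flavour of what scene-graph programming in Java3D looks like, here is a minimal hedged sketch in the style of the classic HelloUniverse example; it assumes Sun's Java3D runtime and utility classes are installed:

import javax.media.j3d.BranchGroup;
import com.sun.j3d.utils.geometry.ColorCube;
import com.sun.j3d.utils.universe.SimpleUniverse;

public class HelloJava3D
{
    public static void main(String[] args)
    {
        // Canvas, view and viewing platform with sensible defaults
        SimpleUniverse universe = new SimpleUniverse();

        // Build a tiny content branch: just a ready-made coloured cube
        BranchGroup scene = new BranchGroup();
        scene.addChild(new ColorCube(0.3));

        // Let Java3D optimise the branch before it goes live
        scene.compile();

        // Back the viewer up so the cube is visible, then attach the branch
        universe.getViewingPlatform().setNominalViewingTransform();
        universe.addBranchGraph(scene);
    }
}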


2. What alternative APIs are available?

Depending on how close to the metal you want to go, there are a number of alternate Java-based scenegraphs and APIs.

Starting at the bottom, there are the OpenGL bindings JOGL and LWJGL. Next up are scene graphs built on these. The best known are Xith3D, jME and our own Aviatrix3D.


3. What is the future of Java 3D?

Whatever you make of it. Sun has now released the source code to Java3D as a project on the java.net site. The license is split between BSD for the utility code and a SCSL-like license for the core runtime and vecmath parts. The current version of all the code is available in CVS, details of which can be found on the various subproject pages.

Java3D is now being developed as an open project (but not open source) through the java.net development group. 1.4.0 has been released and work is progressing on 1.5.0. The most recent information can be found at the j3d-core project homepage.


4. What's the difference between Java 3D and OpenGL/Direct3D/PHIGS/, etc?

Java3D is another 3D programming API that exists on a similar level to OpenGL, Direct3D, PHIGS and similar systems. It is designed to use hardware acceleration wherever possible, based on the underlying graphics architecture of the OS. That is, J3D provides a 3D rendering API for the Java language, but at the same time it may use OpenGL to interface with the hardware. Unlike the other APIs, J3D does not require direct hardware device driver support, because it can rely on those APIs to build its functionality.

For Unix users, Sun's Java3D is implemented on top of OpenGL. For Win32 users, Java3D is available in both OpenGL and Direct3D versions.

There are Java bindings to the other APIs. JOGL is a Sun-driven open-source OpenGL binding that is being actively developed. There is also JSR 231, which will provide the formal bindings to OpenGL; it is expected to reach a final release in mid to late 2006. A JSR is also in the process of being approved that will add Java bindings to the OpenGL ES specification. Finally, there is JSR 184, which defines a Java scenegraph API targeted at mobile devices. It's not Java3D, but it looks somewhat like it, though with many structural differences.


5. Isn't using Java to do 3D graphics going to be slow?

Java3D is capable of taking advantage of graphics hardware in your system. The speed you see will depend on the quality of the graphics hardware on your machine.

You can also run Java3D on machines without special graphics hardware, but it will require software graphics libraries. Be aware that it won't run nearly as quickly in software alone as it will with dedicated graphics hardware.


6. Where can I get Java3D, and where is the Java 3D home page?

The current release version of Java 3D is 1.4.0. It can be downloaded here for the Solaris, Win32 and Linux platforms. Version 1.5.0 is the next release, but it has not yet reached alpha quality.

J3D.org maintains a comprehensive page for downloading J3D for all known platforms.

The Java 3D Computing home page is located at: Java 3D Computing


7. Can you run Java 3D under JDK 1.1?

No. Java 3D requires a number of Java 2 specific features in order to run, and these cannot be removed.

More specifically, Java 3D uses the GraphicsConfiguration classes to get screen information and decide how to do the hardware/software rendering. Java 1.1 does not provide these classes, therefore no implementation of Java 3D will ever run on Java 1.1.


8. Who Uses Java 3D?

j3d.org maintains a list of sites that use Java 3D as well as a number of other interesting Java 3D related links. Please visit it!

Java3D is being used in many different application environments, ranging from a highly publicised use of J3D in a CAVE in Canada (see Sun's Java3D homepage for a link to the whitepaper) down to the PC. We're not aware of J3D being used on small-footprint devices like PDAs and mobile phones.


9. What Platforms Does Java 3D Run on?

It runs on Linux, Win32, most Unices, and Mac OS X 10.3 (Panther). Earlier versions of Mac OS are not supported. Mac support is heavily dependent on Apple rather than the Java3D team, and it follows its own release cycle.

j3d.org maintains a complete list of Java 3D implementations that you can check out to download the software for your platform.

Overview of the Mobile 3D Graphics API for J2ME

JSR 184 is the first Java-specific standard for three-dimensional graphics on mobile devices. The JSR's 26-member expert group includes all the major players in the mobile arena, among them Sun Microsystems, Sony Ericsson, Symbian, Motorola, ARM, Cingular Wireless, and specification lead Nokia. The API takes the form of an optional package expected to be used with MIDP and version 1.1 of the Connected Limited Device Configuration (CLDC). It defines low- and high-level programming interfaces that bring efficient, interactive 3D graphics to devices with little memory and processing power, and with no hardware support for 3D graphics or floating-point operations. As new phones with diverse functionality appear, however, the API can scale up to higher-end devices that have color displays, 3D graphics hardware, and support for floating-point operations.

Application areas that will benefit from a 3D graphics API include games, map visualization, user interfaces, animated messages, and screen savers. Each of these areas requires simple content creation, some require high polygon throughput, and others require high-quality still images with special effects. To meet this wide spectrum of needs, the API supports both high-level and low-level graphics features, with a footprint of only 150 KB. In the high-level implementation (called retained mode), the developer works with scene graphs, and the world renders itself based on the positions of virtual cameras and lights. The low-level access (immediate mode) allows applications to draw objects directly. You can use either mode, or both at the same time, depending on the task at hand. 

The features of immediate mode are aligned with OpenGL ES standardization by Khronos. OpenGL ES (from "OpenGL for Embedded Systems") is a low-level, lightweight API for advanced embedded graphics using well-defined subset profiles of OpenGL. It provides a low-level interface between applications and hardware or software graphics engines. This standard makes it easy and inexpensive to offer a variety of advanced 3D graphics and games across all major mobile and embedded platforms. Because OpenGL ES is based on OpenGL, no new technologies are needed, which ensures synergy with, and a migration path to, the most widely adopted cross-platform graphics API, full OpenGL.
 
Requirements

The JSR 184 expert group has agreed on a set of capabilities that the API must support:

The API must support both retained-mode access (scene graphs) and immediate-mode access (the OpenGL ES subset or similar), and allow mixing and matching of the two modes in a unified way. 
The API must have no optional parts; all methods must be implemented. 
To reduce the amount of programming required, the API must include importers for certain key data types, including meshes, textures, and scene graphs. 
Data must be encoded in a binary format for compact storage and transmission. 
It must be possible to implement the API efficiently on top of OpenGL ES, without floating-point hardware. 
The API must use the float data type of the Java programming language, not introduce a custom type. 
Because using integer arithmetic is difficult and error-prone, floating-point values should be used wherever feasible. 
The ROM and RAM footprint must be small; the API should be implementable within 150KB on a real mobile terminal. 
The API must provide minimal garbage collection. 
The API must interoperate properly with other Java APIs, especially MIDP. 


JSR 184 requires version 1.1 of CLDC for its floating-point capability. Because most mobile devices do not actually have floating-point hardware, the API designers struck a balance between the speed obtained through integer operations and the ease of programming provided by floating-point operations. Calculations that require the fastest processing accept 8- or 16-bit integer parameters, and other calculations are performed with floating-point math for easier application programming.
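As a small hedged illustration of that balance, vertex positions are typically stored as 16-bit integers, while the scale (and optional bias) that maps them into world units is a plain float; the triangle data below is made up:

// Three vertices of a triangle, stored compactly as 16-bit shorts
short[] positions = {
      0,  10, 0,  // top
    -10, -10, 0,  // bottom left
     10, -10, 0   // bottom right
};

// 3 vertices, 3 components each, 2 bytes per component
VertexArray posArray = new VertexArray(3, 3, 2);
posArray.set(0, 3, positions);

// The float scale maps the integer coordinates into world units
VertexBuffer vertices = new VertexBuffer();
vertices.setPositions(posArray, 0.1f, null);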

3D graphics for Java mobile devices: Create 3D scenes with JSR 184

Playing games on mobile devices is a fun pastime. Up until now, hardware performance has favored classic game concepts that use addictive game play, but simple graphics. Today, Tetris and Pac-Man are increasingly complemented by two-dimensional action games with extensive graphics. Consequently, the next step is to move toward 3D graphics. Sony's PlayStation Portable shows the graphics power you can put into a mobile device. Although the average mobile phone is technologically behind this specialized game machine, you can see where the market is heading. The Mobile 3D Graphics API (M3G for short), defined in Java Specification Request (JSR) 184, is an industry effort to create a standard 3D API for mobile devices that support Java programming.

M3G's API can be divided roughly into two parts: immediate and retained mode. In immediate mode, you render individual 3D objects. In retained mode, you define and display an entire world of 3D objects, including information on their appearance. You can imagine immediate mode as the low-level access to 3D functions, and retained mode as a more abstract, but also more comfortable, way of displaying 3D graphics. In this article, I'll explain the immediate mode APIs. The second part of this series shows how to use retained mode.

Alternatives to M3G
M3G is not alone. HI Corporation's Mascot Capsule API is popular both in Japan, where all three major operators use it in different incarnations, and abroad. Sony Ericsson, for example, ships phones with both M3G and HI Corporation's proprietary API. Application developers report on Sony Ericsson's Web site that Mascot Capsule is a stable and fast 3D environment.

JSR 239, Java Bindings for OpenGL ES, targets similar devices to M3G. OpenGL ES is a subset of the widely known OpenGL 3D library and is becoming the de facto standard for native 3D implementations on constrained devices. JSR 239 defines a Java API that resembles OpenGL ES's C interface as closely as possible, making it easy to port existing OpenGL content. As of September 2005, JSR 239 is still in early draft status, and I can only speculate whether it will make any impact on mobile phones. However, while M3G is not compatible with its API, OpenGL ES did influence M3G's definition: JSR 184's expert group stipulated that it be possible to implement M3G efficiently on top of OpenGL ES. If you know OpenGL, you will recognize many M3G features.

Despite the alternatives, M3G has the support of all the major phone manufacturers and operators. Although I've mentioned games as the main attraction, M3G is a general-purpose API that you can use to create all kinds of 3D content. It will be the 3D API to use for mobile phones for years to come.