
Holodeck Sample

In this chapter’s introduction, we promised to present a poor man’s augmented reality app. As a starting point, we’ll create a 3D environment that includes the aforementioned geodesic dome with antialiased borders. We’ll also render a mossy ground plane and some moving clouds in the background. Later we’ll replace the clouds with a live camera image. Another interesting aspect to this sample is that it’s designed for landscape mode; see Figure 6-12.

Figure 6-12. The Holodeck sample

For rendering the AA lines in the dome, let’s use a different trick than the one presented in the previous section. Rather than filling a texture with a circle, let’s fill it with a triangle, as shown in Figure 6-13. By choosing texture coordinates in the right places (see the hollow circles in the figure), we’ll create a thick border at every triangle.

Figure 6-13. Antialiased triangle with transparency

For controlling the camera, the app should use the compass and accelerometer APIs to truly qualify as an augmented reality app. However, initially let’s just show four buttons in a HUD: touching any button will cause the environment to “scroll.” Horizontal buttons control azimuth (angle from north); vertical buttons control altitude (angle above horizon). These terms may be familiar to you if you’re an astronomy buff.

Later we’ll replace the azimuth/altitude buttons with the compass and accelerometer APIs. The benefit of this approach is that we can easily provide a fallback option if the app discovers that the compass or accelerometer APIs are not available. This allows us to gracefully handle three scenarios:

iPhone Simulator

Show buttons for both azimuth and altitude.

First- and second-generation iPhones

Show buttons for azimuth; use the accelerometer for altitude.

Third-generation iPhones

Hide all buttons; use the accelerometer for altitude and the compass for azimuth.

In honor of my favorite TV show, the name of this sample is Holodeck. Without further ado, let’s begin!

Application Skeleton

The basic skeleton for the Holodeck sample is much like every other sample we’ve presented since Chapter 3. The main difference is that we forgo the creation of an IApplicationEngine interface and instead place the application logic directly within the GLView class. There’s very little logic required for this app anyway; most of the heavy footwork is done in the rendering engine. Skipping the application layer makes life easier when we add support for the accelerometer, compass, and camera APIs.

Another difference lies in how we handle the dome geometry. Rather than loading in the vertices from an OBJ file or generating them at runtime, a Python script generates a C++ header file with the dome data, as shown in Example 6-20; you can download the full listing, along with the Holodeck project, from this book’s website. This is perhaps the simplest possible way to load geometry into an OpenGL application, and some modeling tools can actually export their data as a C/C++ header file!

Example 6-20. GeodesicDome.h

const int DomeFaceCount = 2782;
const int DomeVertexCount = DomeFaceCount * 3;
const float DomeVertices[DomeVertexCount * 5] = {
    -0.819207, 0.040640, 0.572056,
    0.000000, 1.000000,

    ...

    0.859848, -0.065758, 0.506298,
    1.000000, 1.000000,
};

Figure 6-14 shows the overall structure of the Holodeck project.

Note that this app has quite a few textures compared to our previous samples: six PNG files and two compressed PVRTC files. You can also see from the screenshot that we’ve added a new property to Info.plist called UIInterfaceOrientation. Recall that this is a landscape-only app; if you don’t set this property, you’ll have to manually rotate the virtual iPhone every time you test it in the simulator.
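The corresponding Info.plist entry looks something like this (I’m assuming a landscape-right orientation here; use UIInterfaceOrientationLandscapeLeft if you prefer the other direction):

<key>UIInterfaceOrientation</key>
<string>UIInterfaceOrientationLandscapeRight</string>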

Interfaces.hpp is much the same as in our other sample apps, except that the rendering engine interface is somewhat unique; see Example 6-21.

Example 6-21. Interfaces.hpp for Holodeck

...

enum ButtonFlags {
    ButtonFlagsShowHorizontal = 1 << 0,
    ButtonFlagsShowVertical = 1 << 1,
    ButtonFlagsPressingUp = 1 << 2,
    ButtonFlagsPressingDown = 1 << 3,
    ButtonFlagsPressingLeft = 1 << 4,
    ButtonFlagsPressingRight = 1 << 5,
};

typedef unsigned char ButtonMask;

struct IRenderingEngine {
    virtual void Initialize() = 0;
    virtual void Render(float theta, float phi, 
                        ButtonMask buttons) const = 0;
    virtual ~IRenderingEngine() {}
};

...

The new Render method takes three parameters:

float theta

Azimuth in degrees. This is the horizontal angle off east.

float phi

Altitude in degrees. This is the vertical angle off the horizon.

ButtonMask buttons

Bit mask of flags for the HUD.

Figure 6-14. Xcode screenshot of the Holodeck project

The idea behind the buttons mask is that the Objective-C code (GLView.mm) can determine the capabilities of the device and whether a button is being pressed, so it sends this information to the rendering engine as a set of flags.

Rendering the Dome, Clouds, and Text

For now let’s ignore the buttons and focus on rendering the basic elements of the 3D scene. See Example 6-22 for the rendering engine declaration and related types. Utility methods that carry over from previous samples, such as CreateTexture, are replaced with ellipses for brevity.

Example 6-22. RenderingEngine declaration for Holodeck

struct Drawable {
    GLuint VertexBuffer;
    GLuint IndexBuffer;
    int IndexCount;
    int VertexCount;
};

struct Drawables {
    Drawable GeodesicDome;
    Drawable SkySphere;
    Drawable Quad;
};

struct Textures {
    GLuint Sky;
    GLuint Floor;
    GLuint Button;
    GLuint Triangle;
    GLuint North;
    GLuint South;
    GLuint East;
    GLuint West;
};

struct Renderbuffers {
    GLuint Color;
    GLuint Depth;
};

class RenderingEngine : public IRenderingEngine {
public:
    RenderingEngine(IResourceManager* resourceManager);
    void Initialize();
    void Render(float theta, float phi, ButtonMask buttonFlags) const;
private:
    void RenderText(GLuint texture, float theta, float scale) const;
    Drawable CreateDrawable(const float* vertices, int vertexCount);
    // ...
    Drawables m_drawables;
    Textures m_textures;
    Renderbuffers m_renderbuffers;
    IResourceManager* m_resourceManager;
};

Note that Example 6-22 declares two new private methods: RenderText for drawing compass direction labels and a new CreateDrawable method for creating the geodesic dome. Even though it declares eight different texture objects (which could be combined into a texture atlas; see Chapter 7), it declares only three VBOs. The Quad VBO is re-used for the buttons, the floor, and the floating text.

Example 6-23 is fairly straightforward. It first creates the VBOs and texture objects and then initializes various OpenGL state.

Example 6-23. RenderingEngine initialization for Holodeck

#include "../Models/GeodesicDome.h"

...

void RenderingEngine::Initialize()
{
    // Create vertex buffer objects.
    m_drawables.GeodesicDome = 
      CreateDrawable(DomeVertices, DomeVertexCount);
    m_drawables.SkySphere = CreateDrawable(Sphere(1));
    m_drawables.Quad = CreateDrawable(Quad(64));
    
    // Load up some textures.
    m_textures.Floor = CreateTexture("Moss.pvr");
    m_textures.Sky = CreateTexture("Sky.pvr");
    m_textures.Button = CreateTexture("Button.png");
    m_textures.Triangle = CreateTexture("Triangle.png");
    m_textures.North = CreateTexture("North.png");
    m_textures.South = CreateTexture("South.png");
    m_textures.East = CreateTexture("East.png");
    m_textures.West = CreateTexture("West.png");

    // Extract width and height from the color buffer.
    int width, height;
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES,
                                    GL_RENDERBUFFER_WIDTH_OES, &width);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES,
                                    GL_RENDERBUFFER_HEIGHT_OES, &height);
    glViewport(0, 0, width, height);

    // Create a depth buffer that has the same size as the color buffer.
    glGenRenderbuffersOES(1, &m_renderbuffers.Depth);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_renderbuffers.Depth);
    glRenderbufferStorageOES(GL_RENDERBUFFER_OES, 
                             GL_DEPTH_COMPONENT16_OES, width, height);
        
    // Create the framebuffer object.
    GLuint framebuffer;
    glGenFramebuffersOES(1, &framebuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, 
                                 GL_COLOR_ATTACHMENT0_OES,
                                 GL_RENDERBUFFER_OES, 
                                 m_renderbuffers.Color);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, 
                                 GL_DEPTH_ATTACHMENT_OES,
                                 GL_RENDERBUFFER_OES, 
                                 m_renderbuffers.Depth);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_renderbuffers.Color);
    
    // Set up various GL state.
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_DEPTH_TEST);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    // Set the model-view transform.
    glMatrixMode(GL_MODELVIEW);
    glRotatef(90, 0, 0, 1);
    
    // Set the projection transform.
    float h = 4.0f * height / width;
    glMatrixMode(GL_PROJECTION);
    glFrustumf(-2, 2, -h / 2, h / 2, 5, 200);
    glMatrixMode(GL_MODELVIEW);
}

Drawable RenderingEngine::CreateDrawable(const float* vertices, 
                                         int vertexCount)
{
    // Each vertex has XYZ and ST, for a total of five floats.
    const int FloatsPerVertex = 5;
    
    // Create the VBO for the vertices.
    GLuint vertexBuffer;
    glGenBuffers(1, &vertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
    glBufferData(GL_ARRAY_BUFFER,
                 vertexCount * FloatsPerVertex * sizeof(float),
                 vertices,
                 GL_STATIC_DRAW);
    
    // Fill in the description structure and return it.
    Drawable drawable = {0};
    drawable.VertexBuffer = vertexBuffer;
    drawable.VertexCount = vertexCount;
    return drawable;
}
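
The RenderDrawable helper called by the Render method is tucked behind the ellipses in Example 6-22. Here’s a minimal sketch of what it might look like, assuming the interleaved XYZ/ST layout that CreateDrawable sets up (the actual implementation in the downloadable project may differ slightly):

void RenderingEngine::RenderDrawable(const Drawable& drawable) const
{
    // Each vertex holds five floats: position (XYZ) followed by texture coordinates (ST).
    const GLsizei stride = 5 * sizeof(float);
    const GLvoid* texCoordOffset = (const GLvoid*) (3 * sizeof(float));

    glBindBuffer(GL_ARRAY_BUFFER, drawable.VertexBuffer);
    glVertexPointer(3, GL_FLOAT, stride, 0);
    glTexCoordPointer(2, GL_FLOAT, stride, texCoordOffset);

    if (drawable.IndexCount > 0) {
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, drawable.IndexBuffer);
        glDrawElements(GL_TRIANGLES, drawable.IndexCount, GL_UNSIGNED_SHORT, 0);
    } else {
        glDrawArrays(GL_TRIANGLES, 0, drawable.VertexCount);
    }
}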

Let’s finally take a look at the all-important Render method; see Example 6-24.

Example 6-24. Render method for Holodeck

void RenderingEngine::Render(float theta, float phi, 
                             ButtonMask buttons) const
{
    static float frameCounter = 0;1
    frameCounter++;

    glPushMatrix();

    glRotatef(phi, 1, 0, 0);2
    glRotatef(theta, 0, 1, 0);
    
    glClear(GL_DEPTH_BUFFER_BIT);3

    glPushMatrix();
    glScalef(100, 100, 100);
    glRotatef(frameCounter * 2, 0, 1, 0);
    glBindTexture(GL_TEXTURE_2D, m_textures.Sky);
    RenderDrawable(m_drawables.SkySphere);4
    glPopMatrix();

    glEnable(GL_BLEND);
    glBindTexture(GL_TEXTURE_2D, m_textures.Triangle);
    glPushMatrix();
    glTranslatef(0, 10, 0);
    glScalef(90, 90, 90);
    glColor4f(1, 1, 1, 0.75f);
    RenderDrawable(m_drawables.GeodesicDome);5
    glColor4f(1, 1, 1, 1);
    glPopMatrix();

    float textScale = 1.0 / 10.0 + sin(frameCounter / 10.0f) / 150.0;6
    
    RenderText(m_textures.East, 0, textScale);
    RenderText(m_textures.West, 180, textScale);
    RenderText(m_textures.South, 90, textScale);
    RenderText(m_textures.North, -90, textScale);
    glDisable(GL_BLEND);

    glTranslatef(0, 10, -10);
    glRotatef(90, 1, 0, 0);
    glScalef(4, 4, 4);
    glMatrixMode(GL_TEXTURE);
    glScalef(4, 4, 1);
    glBindTexture(GL_TEXTURE_2D, m_textures.Floor);
    RenderDrawable(m_drawables.Quad);7
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();

    if (buttons) {8
        ...
    }
}
1

Use a static variable to keep a frame count for animation. I don’t recommend this approach in production code (normally you’d use a delta-time value), but this is fine for an example.

2

Rotate theta degrees (azimuth) around the y-axis and phi degrees (altitude) around the x-axis.

3

We’re clearing depth only; there’s no need to clear color since we’re drawing a sky sphere.

4

Render the sky sphere.

5

Render the geodesic dome with blending enabled.

6

Create an animated variable called textScale for the pulse effect, and then pass it in to the RenderText method.

7

Draw the mossy ground plane.

8

Render the buttons only if the buttons mask is nonzero. We’ll cover button rendering shortly.

The RenderText method is fairly straightforward; see Example 6-25. Some glScalef trickery is used to stretch out the quad and flip it around.

Example 6-25. RenderText method for Holodeck

void RenderingEngine::RenderText(GLuint texture, float theta, 
                                 float scale) const
{
    glBindTexture(GL_TEXTURE_2D, texture);
    glPushMatrix();
    glRotatef(theta, 0, 1, 0);
    glTranslatef(0, -2, -30);
    glScalef(-2 * scale, -scale, scale);
    RenderDrawable(m_drawables.Quad);
    glPopMatrix();
}

Handling the Heads-Up Display

Most applications that need to render a HUD take the following approach when rendering a single frame of animation:

  1. Issue a glClear.

  2. Set up the model-view and projection matrices for the 3D scene.

  3. Render the 3D scene.

  4. Disable depth testing, and enable blending.

  5. Set up the model-view and projection matrices for 2D rendering.

  6. Render the HUD.

Warning

Always remember to completely reset your transforms at the beginning of the render routine; otherwise, you’ll apply transformations that are left over from the previous frame. For example, calling glFrustum alone simply multiplies the current matrix, so you might need to issue a glLoadIdentity immediately before calling glFrustum.
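
For instance, if the projection setup from Example 6-23 were moved into the render routine, it would need an explicit reset. (This is just an illustration; the Holodeck sample sets its projection once in Initialize.)

glMatrixMode(GL_PROJECTION);
glLoadIdentity();  // wipe out whatever the previous frame left behind
glFrustumf(-2, 2, -h / 2, h / 2, 5, 200);
glMatrixMode(GL_MODELVIEW);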

Let’s go ahead and modify the Render method to render buttons; replace the ellipses in Example 6-24 with the code in Example 6-26.

Example 6-26. Adding buttons to Holodeck

glEnable(GL_BLEND);
glDisable(GL_DEPTH_TEST);
glBindTexture(GL_TEXTURE_2D, m_textures.Button);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrthof(-160, 160, -240, 240, 0, 1);

if (buttons & ButtonFlagsShowHorizontal) {
    glMatrixMode(GL_MODELVIEW);
    glTranslatef(200, 0, 0);
    SetButtonAlpha(buttons, ButtonFlagsPressingLeft);
    RenderDrawable(m_drawables.Quad);
    glTranslatef(-400, 0, 0);
    glMatrixMode(GL_TEXTURE);
    glRotatef(180, 0, 0, 1);
    SetButtonAlpha(buttons, ButtonFlagsPressingRight);
    RenderDrawable(m_drawables.Quad);
    glRotatef(-180, 0, 0, 1);
    glMatrixMode(GL_MODELVIEW); 
    glTranslatef(200, 0, 0);
}

if (buttons & ButtonFlagsShowVertical) {
    glMatrixMode(GL_MODELVIEW);
    glTranslatef(0, 125, 0);
    glMatrixMode(GL_TEXTURE);
    glRotatef(90, 0, 0, 1);
    SetButtonAlpha(buttons, ButtonFlagsPressingUp);
    RenderDrawable(m_drawables.Quad);
    glMatrixMode(GL_MODELVIEW);
    glTranslatef(0, -250, 0);
    glMatrixMode(GL_TEXTURE);
    glRotatef(180, 0, 0, 1);
    SetButtonAlpha(buttons, ButtonFlagsPressingDown);
    RenderDrawable(m_drawables.Quad);
    glRotatef(90, 0, 0, 1);
    glMatrixMode(GL_MODELVIEW);
    glTranslatef(0, 125, 0);
}


glColor4f(1, 1, 1, 1);
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glEnable(GL_DEPTH_TEST);
glDisable(GL_BLEND);

Note that Example 6-26 contains quite a few transform operations; while this is fine for teaching purposes, in a production environment I recommend including all four buttons in a single VBO. You’d still need four separate draw calls, however, since the currently pressed button has a unique alpha value.

In fact, making this optimization would be an interesting project: create a single VBO that contains all four pretransformed buttons, and then render it with four separate draw calls. Don’t forget that the second argument to glDrawArrays can be nonzero!
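
Here’s a rough sketch of how that could look, assuming a hypothetical m_buttonVertexBuffer that holds all four buttons as pretransformed quads (six vertices apiece, with texture coordinates baked in, laid out left, right, up, down):

const int VertsPerButton = 6;
const ButtonFlags pressFlags[] = { ButtonFlagsPressingLeft, ButtonFlagsPressingRight,
                                   ButtonFlagsPressingUp, ButtonFlagsPressingDown };

glBindBuffer(GL_ARRAY_BUFFER, m_buttonVertexBuffer);
glVertexPointer(3, GL_FLOAT, 5 * sizeof(float), 0);
glTexCoordPointer(2, GL_FLOAT, 5 * sizeof(float), (const GLvoid*) (3 * sizeof(float)));

for (int i = 0; i < 4; ++i) {
    SetButtonAlpha(buttons, pressFlags[i]);
    // The second argument to glDrawArrays selects which button's vertices to draw.
    glDrawArrays(GL_TRIANGLES, i * VertsPerButton, VertsPerButton);
}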

The SetButtonAlpha method sets alpha to one if the button is being pressed; otherwise, it makes the button semitransparent:

void RenderingEngine::SetButtonAlpha(ButtonMask buttonFlags, 
                                     ButtonFlags flag) const
{
    float alpha = (buttonFlags & flag) ? 1.0 : 0.75;
    glColor4f(1, 1, 1, alpha);
}

Next let’s go over the code in GLView.mm that detects button presses and maintains the azimuth/altitude angles. See Example 6-27 for the GLView class declaration and Example 6-28 for the interesting potions of the class implementation.

Example 6-27. GLView.h for Holodeck

#import "Interfaces.hpp"
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>
#import <CoreLocation/CoreLocation.h>

@interface GLView : UIView {
@private
    IRenderingEngine* m_renderingEngine;
    IResourceManager* m_resourceManager;
    EAGLContext* m_context;
    bool m_paused;
    float m_theta;
    float m_phi;
    vec2 m_velocity;
    ButtonMask m_visibleButtons;
    float m_timestamp;
}

- (void) drawView: (CADisplayLink*) displayLink;

@end

Example 6-28. GLView.mm for Holodeck

...


- (id) initWithFrame: (CGRect) frame
{
    m_paused = false;
    m_theta = 0;
    m_phi = 0;
    m_velocity = vec2(0, 0);
    m_visibleButtons = ButtonFlagsShowHorizontal | ButtonFlagsShowVertical; 1
    
    if (self = [super initWithFrame:frame]) {
        CAEAGLLayer* eaglLayer = (CAEAGLLayer*) self.layer;
        eaglLayer.opaque = YES;

        EAGLRenderingAPI api = kEAGLRenderingAPIOpenGLES1;
        m_context = [[EAGLContext alloc] initWithAPI:api];
        
        if (!m_context || ![EAGLContext setCurrentContext:m_context]) {
            [self release];
            return nil;
        }
        
        m_resourceManager = CreateResourceManager();

        NSLog(@"Using OpenGL ES 1.1");
        m_renderingEngine = CreateRenderingEngine(m_resourceManager);

        [m_context
            renderbufferStorage:GL_RENDERBUFFER
            fromDrawable: eaglLayer];
        
        m_timestamp = CACurrentMediaTime();

        m_renderingEngine->Initialize();
        [self drawView:nil];
        
        CADisplayLink* displayLink;
        displayLink = [CADisplayLink displayLinkWithTarget:self
                                     selector:@selector(drawView:)];
        
        [displayLink addToRunLoop:[NSRunLoop currentRunLoop]
                     forMode:NSDefaultRunLoopMode];
    }
    return self;
}

- (void) drawView: (CADisplayLink*) displayLink
{
    if (m_paused)
        return;
    
    if (displayLink != nil) {
        const float speed = 30;
        float elapsedSeconds = displayLink.timestamp - m_timestamp;
        m_timestamp = displayLink.timestamp;
        m_theta -= speed * elapsedSeconds * m_velocity.x;2
        m_phi += speed * elapsedSeconds * m_velocity.y;
    }

    ButtonMask buttonFlags = m_visibleButtons;3
    if (m_velocity.x < 0) buttonFlags |= ButtonFlagsPressingLeft;
    if (m_velocity.x > 0) buttonFlags |= ButtonFlagsPressingRight;
    if (m_velocity.y < 0) buttonFlags |= ButtonFlagsPressingUp;
    if (m_velocity.y > 0) buttonFlags |= ButtonFlagsPressingDown;
    
    m_renderingEngine->Render(m_theta, m_phi, buttonFlags);
    [m_context presentRenderbuffer:GL_RENDERBUFFER];
}

bool buttonHit(CGPoint location, int x, int y)4
{
    float extent = 32;
    return (location.x > x - extent && location.x < x + extent &&
            location.y > y - extent && location.y < y + extent);
}

- (void) touchesBegan: (NSSet*) touches withEvent: (UIEvent*) event5
{
    UITouch* touch = [touches anyObject];
    CGPoint location  = [touch locationInView: self];
    float delta = 1;

    if (m_visibleButtons & ButtonFlagsShowVertical) {
        if (buttonHit(location, 35, 240))
            m_velocity.y = -delta;
        else if (buttonHit(location, 285, 240))
            m_velocity.y = delta;
    }
    
    if (m_visibleButtons & ButtonFlagsShowHorizontal) {
        if (buttonHit(location, 160, 40))
            m_velocity.x = -delta;
        else if (buttonHit(location, 160, 440))
            m_velocity.x = delta;
    }
}

- (void) touchesEnded: (NSSet*) touches withEvent: (UIEvent*) event
{
    m_velocity = vec2(0, 0);
}
1

For now, we’re hardcoding both button visibility flags to true. We’ll make this dynamic after adding compass and accelerometer support.

2

The theta and phi angles are updated according to the current velocity vector and delta time.

3

Right before passing in the button mask to the Render method, take a look at the velocity vector to decide which buttons are being pressed.

4

Simple utility function to detect whether a given point (location) is within the bounds of a button centered at (x, y). Note that we’re allowing the intrusion of a vanilla C function into an Objective-C file.

5

To make things simple, the velocity vector is set up in response to a “finger down” event and reset to zero in response to a “finger up” event. Since we don’t need the ability for several buttons to be pressed simultaneously, this is good enough.

At this point, you now have a complete app that lets you look around inside a (rather boring) virtual world, but it’s still a far cry from augmented reality!

Replacing Buttons with Orientation Sensors

The next step is carefully integrating support for the compass and accelerometer APIs. I say “carefully” because we’d like to provide a graceful runtime fallback if the device (or simulator) does not have a magnetometer or accelerometer.

We’ll be using the accelerometer to obtain the gravity vector, which in turn enables us to compute the phi angle (that’s “altitude” for you astronomers) but not the theta angle (azimuth). Conversely, the compass API can be used to compute theta but not phi. You’ll see how this works in the following sections.

Adding accelerometer support

Using the low-level accelerometer API directly is ill advised; the signal includes quite a bit of noise, and unless your app is somehow related to The Blair Witch Project, you probably don’t want your camera shaking around like a shivering chihuahua.

Discussing a robust and adaptive low-pass filter implementation is beyond the scope of this book, but thankfully Apple includes some example code for this. Search for the AccelerometerGraph sample on the iPhone developer site (http://developer.apple.com/iphone) and download it. Look inside for two key files, and copy them to your project folder: AccelerometerFilter.h and AccelerometerFilter.m.

Note

You can also refer to Stabilizing the counter with a low-pass filter for an example implementation of a simple low-pass filter.
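
If you’d like a feel for what such a filter does, here’s a bare-bones (non-adaptive) sketch; Apple’s AccelerometerFilter classes are considerably more sophisticated, but the core idea is the same:

struct SimpleLowpass {
    float Value;   // current filtered value
    float Alpha;   // smoothing factor in (0, 1]; smaller values smooth more

    void AddSample(float sample)
    {
        // Blend the incoming sample with the previous filtered value.
        Value = Alpha * sample + (1 - Alpha) * Value;
    }
};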

After adding the filter code to your Xcode project, open up GLView.h, and add the three code snippets that are highlighted in bold in Example 6-29.

Example 6-29. Adding accelerometer support to GLView.h

#import "Interfaces.hpp"
#import "AccelerometerFilter.h"
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface GLView : UIView <UIAccelerometerDelegate> {
@private
    IRenderingEngine* m_renderingEngine;
    IResourceManager* m_resourceManager;
    EAGLContext* m_context;
    AccelerometerFilter* m_filter;
    ...
}

- (void) drawView: (CADisplayLink*) displayLink;

@end

Next, open GLView.mm, and add the lines shown in bold in Example 6-30. You might grimace at the sight of the #if block, but it’s a necessary evil because the iPhone Simulator pretends to support the accelerometer APIs by sending the application fictitious values (without giving the user much control over those values). Since the fake accelerometer won’t do us much good, we turn it off when building for the simulator.

Note

An Egyptian software company called vimov produces a compelling tool called iSimulate that can simulate the accelerometer and other device sensors. Check it out at http://www.vimov.com/isimulate.

Example 6-30. Adding accelerometer support to initWithFrame

- (id) initWithFrame: (CGRect) frame
{
    m_paused = false;
    m_theta = 0;
    m_phi = 0;
    m_velocity = vec2(0, 0);
    m_visibleButtons = 0;

    if (self = [super initWithFrame:frame]) {
        CAEAGLLayer* eaglLayer = (CAEAGLLayer*) self.layer;
        eaglLayer.opaque = YES;

        EAGLRenderingAPI api = kEAGLRenderingAPIOpenGLES1;
        m_context = [[EAGLContext alloc] initWithAPI:api];
        
        if (!m_context || ![EAGLContext setCurrentContext:m_context]) {
            [self release];
            return nil;
        }
        
        m_resourceManager = CreateResourceManager();

        NSLog(@"Using OpenGL ES 1.1");
        m_renderingEngine = CreateRenderingEngine(m_resourceManager);

#if TARGET_IPHONE_SIMULATOR
        BOOL compassSupported = NO;
        BOOL accelSupported = NO;
#else
        BOOL compassSupported = NO; // (We'll add compass support shortly.)
        BOOL accelSupported = YES;
#endif
        
        if (compassSupported) {
            NSLog(@"Compass is supported.");
        } else {
            NSLog(@"Compass is NOT supported.");
            m_visibleButtons |= ButtonFlagsShowHorizontal;
        }
        
        if (accelSupported) {
            NSLog(@"Accelerometer is supported.");
            float updateFrequency = 60.0f;
            m_filter = 
              [[LowpassFilter alloc] initWithSampleRate:updateFrequency
                                        cutoffFrequency:5.0];
            m_filter.adaptive = YES;

            [[UIAccelerometer sharedAccelerometer] 
              setUpdateInterval:1.0 / updateFrequency];
            [[UIAccelerometer sharedAccelerometer] setDelegate:self];
        } else {
            NSLog(@"Accelerometer is NOT supported.");
            m_visibleButtons |= ButtonFlagsShowVertical;
        }

        [m_context
            renderbufferStorage:GL_RENDERBUFFER
            fromDrawable: eaglLayer];
        
        m_timestamp = CACurrentMediaTime();

        m_renderingEngine->Initialize();
        [self drawView:nil];
        
        CADisplayLink* displayLink;
        displayLink = [CADisplayLink displayLinkWithTarget:self
                                     selector:@selector(drawView:)];
        
        [displayLink addToRunLoop:[NSRunLoop currentRunLoop]
                     forMode:NSDefaultRunLoopMode];
    }
    return self;
}

Since GLView sets itself as the accelerometer delegate, it needs to implement a response handler. See Example 6-31.

Example 6-31. Accelerometer response handler

- (void) accelerometer: (UIAccelerometer*) accelerometer
         didAccelerate: (UIAcceleration*) acceleration
{
    [m_filter addAcceleration:acceleration];
    float x = m_filter.x;
    float z = m_filter.z;
    m_phi = atan2(z, -x) * 180.0f / Pi;
}

You might not be familiar with the atan2 function, which takes the arctangent of its first argument divided by its second argument (see Equation 6-1). Why not use the plain old single-argument atan function and do the division yourself? Because atan2 is smarter: it uses the signs of both arguments to determine which quadrant the angle is in, and it allows the second argument to be zero without causing a divide by zero.

Note

An even more rarely encountered math function is hypot. When used together, atan2 and hypot can convert any 2D Cartesian coordinate into a polar coordinate.
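
For instance, here’s a small standalone illustration (not part of the Holodeck sample) of converting a 2D Cartesian coordinate to a polar coordinate with those two functions:

#include <cmath>

struct PolarCoord {
    float Radius;        // distance from the origin
    float AngleDegrees;  // quadrant-aware angle
};

PolarCoord CartesianToPolar(float x, float y)
{
    PolarCoord p;
    p.Radius = hypotf(x, y);
    p.AngleDegrees = atan2f(y, x) * 180.0f / M_PI;
    return p;
}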

Equation 6-1. Phi as a function of acceleration

φ = atan2(az, −ax) · 180 / π, where ax and az are the filtered acceleration components along the x- and z-axes

Equation 6-1 shows how we compute phi from the accelerometer’s input values. To understand it, you first need to realize that we’re using the accelerometer as a way of measuring the direction of gravity. It’s a common misconception that the accelerometer measures speed, but you know better by now! The accelerometer API returns a 3D acceleration vector according to the axes depicted in Figure 6-15.

Figure 6-15. Accelerometer axes in landscape mode

When you hold the device in landscape mode, there’s no gravity along the y-axis (assuming you’re not slothfully lying on the sofa, turned to one side). So, the gravity vector is composed of X and Z only; see Figure 6-16.

Figure 6-16. Computing phi from acceleration

Adding compass support

The direction of gravity can’t tell you which direction you’re facing; that’s where the compass support in third-generation devices comes in. To begin, open GLView.h, and add the bold lines in Example 6-32.

Example 6-32. Adding compass support to GLView.h

#import "Interfaces.hpp"
#import "AccelerometerFilter.h"
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>
#import <CoreLocation/CoreLocation.h>

@interface GLView : UIView <CLLocationManagerDelegate,
                            UIAccelerometerDelegate> {
@private
    IRenderingEngine* m_renderingEngine;
    IResourceManager* m_resourceManager;
    EAGLContext* m_context;
    CLLocationManager* m_locationManager;
    AccelerometerFilter* m_filter;
    ...
}

- (void) drawView: (CADisplayLink*) displayLink;

@end

The Core Location API is an umbrella for both GPS and compass functionality, but we’ll be using only the compass functionality in our demo. Next we need to create an instance of CLLocationManager somewhere in GLView.mm; see Example 6-33.

Example 6-33. Adding compass support to initWithFrame

- (id) initWithFrame: (CGRect) frame
{
    ...

    if (self = [super initWithFrame:frame]) {

        ...

        m_locationManager = [[CLLocationManager alloc] init];

#if TARGET_IPHONE_SIMULATOR
        BOOL compassSupported = NO;
        BOOL accelSupported = NO;
#else
        BOOL compassSupported = m_locationManager.headingAvailable;
        BOOL accelSupported = YES;
#endif
        
        if (compassSupported) {
            NSLog(@"Compass is supported.");
            m_locationManager.headingFilter = kCLHeadingFilterNone;
            m_locationManager.delegate = self;
            [m_locationManager startUpdatingHeading];
        } else {
            NSLog(@"Compass is NOT supported.");
            m_visibleButtons |= ButtonFlagsShowHorizontal;
        }

        ...
    }
    return self;
}

Just as it does for the accelerometer (Example 6-31), GLView sets itself as the compass delegate, so it needs to implement a response handler. Unlike the accelerometer, the compass readings arrive with the noise already filtered out, so there’s no need to apply a low-pass filter yourself. The compass API is embarrassingly simple; it returns an angle in degrees, where 0 is north, 90 is east, and so on. See Example 6-34 for the compass response handler.

Example 6-34. Compass response handler

- (void) locationManager: (CLLocationManager*) manager
         didUpdateHeading: (CLHeading*) heading
{
    // Use magneticHeading rather than trueHeading to avoid usage of GPS:
    CLLocationDirection degrees = heading.magneticHeading;
    m_theta = (float) -degrees;
}

The only decision you have to make when writing a compass handler is whether to use magneticHeading or trueHeading. The former returns magnetic north, which isn’t quite the same as geographic north. To determine the true direction of the geographic north pole, the device needs to know where it’s located on the planet, which requires using the GPS. Since our app is just looking around a virtual world, it doesn’t matter which heading we use. I chose magneticHeading because it allows us to avoid enabling GPS updates in the location manager object, which simplifies the code and may even reduce power consumption.

Overlaying with a Live Camera Image

To make this a true augmented reality app, we need to bring the camera into play. If a camera isn’t available (as in the simulator), then the app can simply fall back to the “scrolling clouds” background.

The first step is adding another protocol to the GLView class—actually we need two new protocols! Add the bold lines in Example 6-35, noting the new data fields as well (m_viewController and m_cameraSupported).

Example 6-35. Adding camera support to GLView.h

#import "Interfaces.hpp"
#import "AccelerometerFilter.h"
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>
#import <CoreLocation/CoreLocation.h>

@interface GLView : UIView <UIImagePickerControllerDelegate,
                            UINavigationControllerDelegate,
                            CLLocationManagerDelegate,
                            UIAccelerometerDelegate> {
@private
    IRenderingEngine* m_renderingEngine;
    IResourceManager* m_resourceManager;
    EAGLContext* m_context;
    CLLocationManager* m_locationManager;
    AccelerometerFilter* m_filter;
    UIViewController* m_viewController;
    bool m_cameraSupported;
    ...
}

- (void) drawView: (CADisplayLink*) displayLink;

@end

Next we need to enhance the initWithFrame and drawView methods; see Example 6-36. Until now, every sample in this book has set the opaque property of the EAGL layer to YES. In this sample, we decide its value at runtime: if a camera is available, we leave the surface non-opaque so that the image “underlay” can show through.

Example 6-36. Adding camera support to GLView.mm

- (id) initWithFrame: (CGRect) frame
{
    ...

    if (self = [super initWithFrame:frame]) {

        m_cameraSupported = [UIImagePickerController isSourceTypeAvailable:
                             UIImagePickerControllerSourceTypeCamera];

        CAEAGLLayer* eaglLayer = (CAEAGLLayer*) self.layer;
        eaglLayer.opaque = !m_cameraSupported;
        if (m_cameraSupported)
            NSLog(@"Camera is supported.");
        else
            NSLog(@"Camera is NOT supported.");

        ...

#if TARGET_IPHONE_SIMULATOR
        BOOL compassSupported = NO;
        BOOL accelSupported = NO;
#else
        BOOL compassSupported = m_locationManager.headingAvailable;
        BOOL accelSupported = YES;
#endif

        m_viewController = 0;

        ...

        m_timestamp = CACurrentMediaTime();

        bool opaqueBackground = !m_cameraSupported;
        m_renderingEngine->Initialize(opaqueBackground);

        // Delete the line [self drawView:nil];
        
        CADisplayLink* displayLink;
        displayLink = [CADisplayLink displayLinkWithTarget:self
                                     selector:@selector(drawView:)];

        ...
    }
    return self;
}

- (void) drawView: (CADisplayLink*) displayLink
{
    if (m_cameraSupported && m_viewController == 0)
        [self createCameraController];

    if (m_paused)
        return;
    
    ...
    
    m_renderingEngine->Render(m_theta, m_phi, buttonFlags);
    [m_context presentRenderbuffer:GL_RENDERBUFFER];
}

Next we need to implement the createCameraController method that was called from drawView. This is an example of lazy instantiation; we don’t create the camera controller until we actually need it. Example 6-37 shows the method, and a detailed explanation follows the listing. (The createCameraController method needs to be defined before the drawView method to avoid a compiler warning.)

Example 6-37. Creating the camera view controller

- (void) createCameraController
{
    UIImagePickerController* imagePicker = 
      [[UIImagePickerController alloc] init];
    imagePicker.delegate = self;1
    imagePicker.navigationBarHidden = YES;2
    imagePicker.toolbarHidden = YES;3
    imagePicker.sourceType = UIImagePickerControllerSourceTypeCamera;4
    imagePicker.showsCameraControls = NO;5
    imagePicker.cameraOverlayView = self;6
    
    // The 54 pixel wide empty spot is filled in by scaling the image.
    // The camera view's height gets stretched from 426 pixels to 480.
    
    float bandWidth = 54;
    float screenHeight = 480;
    float zoomFactor = screenHeight / (screenHeight - bandWidth);
    
    CGAffineTransform pickerTransform = 
      CGAffineTransformMakeScale(zoomFactor, zoomFactor);
    imagePicker.cameraViewTransform = pickerTransform;7
    
    m_viewController = [[UIViewController alloc] init];
    m_viewController.view = self;
    [m_viewController presentModalViewController:imagePicker animated:NO];8
}
1

Set the image picker’s delegate to the GLView class. Since we aren’t using the camera to capture still images, this isn’t strictly necessary, but it’s still a good practice.

2

Hide the navigation bar. Again, we aren’t using the camera for image capture, so there’s no need for this UI to get in the way.

3

Ditto with the toolbar.

4

Set the source type of the image picker to the camera. You might recall this step from the camera texture sample in the previous chapter.

5

Hide the camera control UI. Again, we’re using the camera only as a backdrop, so any UI would just get in the way.

6

Set the camera overlay view to the GLView class to allow the OpenGL content to be rendered.

7

The UI that we’re hiding would normally leave an annoying gap on the bottom of the screen. By applying a scale transform, we can fill in the gap. Maintaining the correct aspect ratio causes a portion of the image to be cropped, but it’s not noticeable in the final app.

8

Finally, present the view controller to make the camera image show up.

Since we’re using the camera API in a way that’s quite different from how Apple intended, we had to jump through a few hoops: hiding the UI, stretching the image, and implementing a protocol that never really gets used. This may seem a bit hacky, but ideally Apple will improve the camera API in the future to simplify the development of augmented reality applications.

You may’ve noticed in Example 6-36 that the view class is now passing in a boolean to the rendering engine’s Initialize method; this tells it whether the background should contain clouds as before or whether it should be cleared to allow the camera underlay to show through. You must modify the declaration of Initialize in Interfaces.cpp accordingly. Next, the only remaining changes are shown in Example 6-38.

Example 6-38. RenderingEngine modifications to support the camera “underlay”

...

class RenderingEngine : public IRenderingEngine {
public:
    RenderingEngine(IResourceManager* resourceManager);
    void Initialize(bool opaqueBackground);
    void Render(float theta, float phi, ButtonMask buttons) const;
private:
    ...
    bool m_opaqueBackground;
};
    
void RenderingEngine::Initialize(bool opaqueBackground)
{
    m_opaqueBackground = opaqueBackground;

    ...
}

void RenderingEngine::Render(float theta, float phi, ButtonMask buttons) const
{
    static float frameCounter = 0;
    frameCounter++;
    
    glPushMatrix();

    glRotatef(phi, 1, 0, 0);
    glRotatef(theta, 0, 1, 0);

    if (m_opaqueBackground) {
        glClear(GL_DEPTH_BUFFER_BIT);

        glPushMatrix();
        glScalef(100, 100, 100);
        glRotatef(frameCounter * 2, 0, 1, 0);
        glBindTexture(GL_TEXTURE_2D, m_textures.Sky);
        RenderDrawable(m_drawables.SkySphere);
        glPopMatrix();
    } else {
        glClearColor(0, 0, 0, 0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    }

    ...
}

Note that the alpha value of the clear color is zero; this allows the underlay to show through. Also note that the color buffer is cleared only if there’s no sky sphere. Experienced OpenGL programmers make little optimizations like this as a matter of habit.

That’s it for the Holodeck sample! See Figure 6-17 for a depiction of the app as it now stands.

Figure 6-17. Holodeck with camera underlay
