iPhone 3D Programming by Philip Rideout

Creating a Wireframe Viewer

Let’s use vertex buffer objects and the touchscreen to create a fun new app. Instead of relying on triangles like we’ve been doing so far, we’ll use GL_LINES topology to create a simple wireframe viewer, as shown in Figure 3-3. The rotation in Touch Cone was restricted to the plane, but this app will let you spin the geometry around to any orientation; behind the scenes, we’ll use quaternions to achieve a trackball-like effect. Additionally, we’ll include a row of buttons along the bottom of the screen to allow the user to switch between different shapes. They won’t be true buttons in the UIKit sense; remember, for best performance, you should let OpenGL do all the rendering. This application will provide a good foundation upon which to learn many OpenGL concepts, and we’ll continue to evolve it in the coming chapters.

Figure 3-3. Wireframe viewer

If you’re planning on following along with the code, you’ll first need to start with the WireframeSkeleton project from this book’s example code (available at http://oreilly.com/catalog/9780596804831). In the Finder, make a copy of the directory that contains this project, and name the new directory SimpleWireframe. Next, open the project (it will still be named WireframeSkeleton), and then choose Project→Rename. Rename it to SimpleWireframe.

This skeleton project includes all the building blocks you saw (the vector library from the appendix, the GLView class, and the application delegate). There are a few differences between this and the previous examples, so be sure to look over the classes in the project before you proceed:

  1. The application delegate has been renamed to have a very generic name, AppDelegate.

  2. The GLView class uses an application engine rather than a rendering engine. This is because we’ll be taking a new approach to how we factor the ES 1.1– and ES 2.0–specific code from the rest of the project; more on this shortly.

Parametric Surfaces for Fun

You might have been put off by all the work required for tessellating the cone shape in the previous samples. It would be painful if you had to figure out a clever tessellation for every shape that pops into your head! Thankfully, most 3D modeling software can export to a format that has post-tessellated content; the popular .obj file format is one example of this. Moreover, the cone shape happens to be a mathematically defined shape called a parametric surface; all parametric surfaces are relatively easy to tessellate in a generic manner. A parametric surface is defined with a function that takes a 2D vector for input and produces a 3D vector as output. This turns out to be especially convenient because the input vectors can also be used as texture coordinates, as we’ll learn in a future chapter.

The input to a parametric function is said to be in its domain, while the output is said to be in its range. Since all parametric surfaces can be used to generate OpenGL vertices in a consistent manner, it makes sense to create a simple class hierarchy for them. Example 3-10 shows two subclasses: a cone and a sphere. This has been included in the WireframeSkeleton project for your convenience, so there is no need for you to add it here.

Example 3-10. ParametricEquations.hpp

#include "ParametricSurface.hpp"

class Cone : public ParametricSurface {
public:
    Cone(float height, float radius) : m_height(height), m_radius(radius)
    {
        ParametricInterval interval = { ivec2(20, 20), vec2(TwoPi, 1) };
        SetInterval(interval);
    }
    vec3 Evaluate(const vec2& domain) const
    {
        float u = domain.x, v = domain.y;
        float x = m_radius * (1 - v) * cos(u);
        float y = m_height * (v - 0.5f);
        float z = m_radius * (1 - v) * -sin(u);
        return vec3(x, y, z);
    }
private:
    float m_height;
    float m_radius;
};

class Sphere : public ParametricSurface {
public:
    Sphere(float radius) : m_radius(radius)
    {
        ParametricInterval interval = { ivec2(20, 20), vec2(Pi, TwoPi) };
        SetInterval(interval);
    }
    vec3 Evaluate(const vec2& domain) const
    {
        float u = domain.x, v = domain.y;
        float x = m_radius * sin(u) * cos(v);
        float y = m_radius * cos(u);
        float z = m_radius * -sin(u) * sin(v);
        return vec3(x, y, z);
    }
private:
    float m_radius;
};

// ...

The classes in Example 3-10 request their desired tessellation granularity and domain bound by calling SetInterval from their constructors. More importantly, these classes implement the pure virtual Evaluate method, which simply applies Equation 3-1 or 3-2.

Equation 3-1. Cone parameterization

    x = r(1 - v) cos u
    y = h(v - 0.5)
    z = -r(1 - v) sin u

Equation 3-2. Sphere parameterization

    x = r sin u cos v
    y = r cos u
    z = -r sin u sin v

Each of the previous equations is only one of several possible parameterizations for their respective shapes. For example, the z equation for the sphere could be negated, and it would still describe a sphere.

In addition to the cone and sphere, the wireframe viewer allows the user to see four other interesting parametric surfaces: a torus, a knot, a Möbius strip,[3] and a Klein bottle (see Figure 3-4). I’ve already shown you the classes for the sphere and cone; you can find code for the other shapes at this book’s website. They basically do nothing more than evaluate various well-known parametric equations. Perhaps more interesting is their common base class, shown in Example 3-11. To add this file to Xcode, right-click the Classes folder, choose Add→New File, select the C and C++ category, and choose Header File. Call it ParametricSurface.hpp, and replace everything in it with the code shown here.

Figure 3-4. Parametric gallery

Example 3-11. ParametricSurface.hpp

#include "Interfaces.hpp"

struct ParametricInterval {
    ivec2 Divisions;   // (1)
    vec2 UpperBound;   // (2)
};

class ParametricSurface : public ISurface {
public:
    int GetVertexCount() const;
    int GetLineIndexCount() const;
    void GenerateVertices(vector<float>& vertices) const;
    void GenerateLineIndices(vector<unsigned short>& indices) const;
protected:
    void SetInterval(const ParametricInterval& interval);  // (3)
    virtual vec3 Evaluate(const vec2& domain) const = 0;   // (4)
private:
    vec2 ComputeDomain(float i, float j) const;
    vec2 m_upperBound;
    ivec2 m_slices;
    ivec2 m_divisions;
};

I’ll explain the ISurface interface later; first let’s take a look at various elements that are controlled by subclasses:

(1) The number of divisions that the surface is sliced into. The higher the number, the more lines, and the greater the level of detail. Note that it’s an ivec2; in some cases (like the knot shape), it’s desirable to have more slices along one axis than the other.

(2) The domain’s upper bound. The lower bound is always (0, 0).

(3) Called from the subclass to describe the domain interval.

(4) Abstract method for evaluating the parametric equation.

Example 3-12 shows the implementation of the ParametricSurface class. Add a new C++ file to your Xcode project called ParametricSurface.cpp (but deselect the option to create the associated header file). Replace everything in it with the code shown.

Example 3-12. ParametricSurface.cpp

#include "ParametricSurface.hpp"

void ParametricSurface::SetInterval(const ParametricInterval& interval)
{
    m_upperBound = interval.UpperBound;
    m_divisions = interval.Divisions;
    m_slices = m_divisions - ivec2(1, 1);
}

int ParametricSurface::GetVertexCount() const
{
    return m_divisions.x * m_divisions.y;
}

int ParametricSurface::GetLineIndexCount() const
{
    return 4 * m_slices.x * m_slices.y;
}

vec2 ParametricSurface::ComputeDomain(float x, float y) const
{
    return vec2(x * m_upperBound.x / m_slices.x, 
                y * m_upperBound.y / m_slices.y);
}

void ParametricSurface::GenerateVertices(vector<float>& vertices) const
{
    vertices.resize(GetVertexCount() * 3);
    vec3* position = (vec3*) &vertices[0];
    for (int j = 0; j < m_divisions.y; j++) {
        for (int i = 0; i < m_divisions.x; i++) {
            vec2 domain = ComputeDomain(i, j);
            vec3 range = Evaluate(domain);
            *position++ = range;
        }
    }
}

void ParametricSurface::GenerateLineIndices(vector<unsigned short>& indices) const
{
    indices.resize(GetLineIndexCount());
    vector<unsigned short>::iterator index = indices.begin();
    for (int j = 0, vertex = 0; j < m_slices.y; j++) {
        for (int i = 0; i < m_slices.x; i++) {
            int next = (i + 1) % m_divisions.x;
            *index++ = vertex + i;
            *index++ = vertex + next;
            *index++ = vertex + i;
            *index++ = vertex + i + m_divisions.x;
        }
        vertex += m_divisions.x;
    }
}

The GenerateLineIndices method deserves a bit of an explanation. Picture a globe of the earth and how it has lines for latitude and longitude. The first two indices in the loop correspond to a latitudinal line segment; the latter two correspond to a longitudinal line segment (see Figure 3-5). Also note some sneaky usage of the modulo operator for wrapping back to zero when closing a loop.

Figure 3-5. Generating line indices for a parametric surface

Designing the Interfaces

In the HelloCone and HelloArrow samples, you might have noticed some duplication of logic between the ES 1.1 and ES 2.0 backends. With the wireframe viewer sample, we’re raising the bar on complexity, so we’ll avoid duplicated code by introducing a new C++ component called ApplicationEngine (this was mentioned in Chapter 1; see Figure 1-5). The application engine will contain all the logic that isn’t coupled to a particular graphics API.

Example 3-13 shows the contents of Interfaces.hpp, which defines three component interfaces and some related types. Add a new C and C++ header file to your Xcode project called Interfaces.hpp. Replace everything in it with the code shown.

Example 3-13. Interfaces.hpp

#pragma once
#include "Vector.hpp"
#include "Quaternion.hpp"
#include <vector>

using std::vector;

struct IApplicationEngine {  // (1)
    virtual void Initialize(int width, int height) = 0;
    virtual void Render() const = 0;
    virtual void UpdateAnimation(float timeStep) = 0;
    virtual void OnFingerUp(ivec2 location) = 0;
    virtual void OnFingerDown(ivec2 location) = 0;
    virtual void OnFingerMove(ivec2 oldLocation, ivec2 newLocation) = 0;
    virtual ~IApplicationEngine() {}
};

struct ISurface {  // (2)
    virtual int GetVertexCount() const = 0;
    virtual int GetLineIndexCount() const = 0;
    virtual void GenerateVertices(vector<float>& vertices) const = 0;
    virtual void GenerateLineIndices(vector<unsigned short>& indices) const = 0;
    virtual ~ISurface() {}
};

struct Visual {  // (3)
    vec3 Color;
    ivec2 LowerLeft;
    ivec2 ViewportSize;
    Quaternion Orientation;
};

struct IRenderingEngine {  // (4)
    virtual void Initialize(const vector<ISurface*>& surfaces) = 0;
    virtual void Render(const vector<Visual>& visuals) const = 0;
    virtual ~IRenderingEngine() {}
};

IApplicationEngine* CreateApplicationEngine(IRenderingEngine* renderingEngine);  // (5)
namespace ES1 { IRenderingEngine* CreateRenderingEngine(); }  // (6)
namespace ES2 { IRenderingEngine* CreateRenderingEngine(); }

(1) Consumed by GLView; contains logic common to both rendering backends.

(2) Consumed by the rendering engines when they generate VBOs for the parametric surfaces.

(3) Describes the dynamic visual properties of a surface; gets passed from the application engine to the rendering engine at every frame.

(4) Common abstraction of the two OpenGL ES backends.

(5) Factory method for the application engine; the caller determines OpenGL capabilities and passes in the appropriate rendering engine.

(6) Namespace-qualified factory methods for the two rendering engines.

In an effort to move as much logic into the application engine as possible, IRenderingEngine has only two methods: Initialize and Render. We’ll describe them in detail later.

Handling Trackball Rotation

To ensure high portability of the application logic, we avoid making any OpenGL calls whatsoever from within the ApplicationEngine class. Example 3-14 is the complete listing of its initial implementation. Add a new C++ file to your Xcode project called ApplicationEngine.cpp (deselect the option to create the associated .h file). Replace everything in it with the code shown.

Example 3-14. ApplicationEngine.cpp

#include "Interfaces.hpp"
#include "ParametricEquations.hpp"

using namespace std;

static const int SurfaceCount = 6;

class ApplicationEngine : public IApplicationEngine {
public:
    ApplicationEngine(IRenderingEngine* renderingEngine);
    ~ApplicationEngine();
    void Initialize(int width, int height);
    void OnFingerUp(ivec2 location);
    void OnFingerDown(ivec2 location);
    void OnFingerMove(ivec2 oldLocation, ivec2 newLocation);
    void Render() const;
    void UpdateAnimation(float dt);
private:
    vec3 MapToSphere(ivec2 touchpoint) const;
    float m_trackballRadius;
    ivec2 m_screenSize;
    ivec2 m_centerPoint;
    ivec2 m_fingerStart;
    bool m_spinning;
    Quaternion m_orientation;
    Quaternion m_previousOrientation;
    IRenderingEngine* m_renderingEngine;
};
    
IApplicationEngine* CreateApplicationEngine(IRenderingEngine* renderingEngine)
{
    return new ApplicationEngine(renderingEngine);
}

ApplicationEngine::ApplicationEngine(IRenderingEngine* renderingEngine) :
    m_spinning(false),
    m_renderingEngine(renderingEngine)
{
}

ApplicationEngine::~ApplicationEngine()
{
    delete m_renderingEngine;
}

void ApplicationEngine::Initialize(int width, int height)
{
    m_trackballRadius = width / 3;
    m_screenSize = ivec2(width, height);
    m_centerPoint = m_screenSize / 2;

    vector<ISurface*> surfaces(SurfaceCount);
    surfaces[0] = new Cone(3, 1);
    surfaces[1] = new Sphere(1.4f);
    surfaces[2] = new Torus(1.4f, 0.3f);
    surfaces[3] = new TrefoilKnot(1.8f);
    surfaces[4] = new KleinBottle(0.2f);
    surfaces[5] = new MobiusStrip(1);
    m_renderingEngine->Initialize(surfaces);
    for (int i = 0; i < SurfaceCount; i++)
        delete surfaces[i];
}

void ApplicationEngine::Render() const
{
    vector<Visual> visuals(1);
    visuals[0].Color = m_spinning ? vec3(1, 1, 1) : vec3(0, 1, 1);
    visuals[0].LowerLeft = ivec2(0, 48);
    visuals[0].ViewportSize = ivec2(320, 432);
    visuals[0].Orientation = m_orientation;
    m_renderingEngine->Render(visuals);
}

void ApplicationEngine::UpdateAnimation(float dt)
{
}

void ApplicationEngine::OnFingerUp(ivec2 location)
{
    m_spinning = false;
}

void ApplicationEngine::OnFingerDown(ivec2 location)
{
    m_fingerStart = location;
    m_previousOrientation = m_orientation;
    m_spinning = true;
}

void ApplicationEngine::OnFingerMove(ivec2 oldLocation, ivec2 location)
{
    if (m_spinning) {
        vec3 start = MapToSphere(m_fingerStart);
        vec3 end = MapToSphere(location);
        Quaternion delta = Quaternion::CreateFromVectors(start, end);
        m_orientation = delta.Rotated(m_previousOrientation);
    }
}

vec3 ApplicationEngine::MapToSphere(ivec2 touchpoint) const
{
    vec2 p = touchpoint - m_centerPoint;
    
    // Flip the y-axis because pixel coords increase toward the bottom.
    p.y = -p.y;
    
    const float radius = m_trackballRadius;
    const float safeRadius = radius - 1;
    
    if (p.Length() > safeRadius) {
        float theta = atan2(p.y, p.x);
        p.x = safeRadius * cos(theta);
        p.y = safeRadius * sin(theta);
    }
    
    float z = sqrt(radius * radius - p.LengthSquared());
    vec3 mapped = vec3(p.x, p.y, z);
    return mapped / radius;
}

The bulk of Example 3-14 is dedicated to handling the trackball-like behavior with quaternions. I find the CreateFromVectors method to be the most natural way of constructing a quaternion. Recall that it takes two unit vectors at the origin and computes the quaternion that moves the first vector onto the second. To achieve a trackball effect, these two vectors are generated by projecting touch points onto the surface of the virtual trackball (see the MapToSphere method). Note that if a touch point is outside the circumference of the trackball (or directly on it), then MapToSphere snaps the touch point to just inside the circumference. This allows the user to perform a constrained rotation around the z-axis by sliding his finger horizontally or vertically near the edge of the screen.

Implementing the Rendering Engine

So far we’ve managed to exhibit most of the wireframe viewer code without any OpenGL whatsoever! It’s time to remedy that by showing the ES 1.1 backend class in Example 3-15. Add a new C++ file to your Xcode project called RenderingEngine.ES1.cpp (deselect the option to create the associated .h file). Replace everything in it with the code shown. You can download the ES 2.0 version from this book’s companion website (and it is included with the skeleton project mentioned early in this section).

Example 3-15. RenderingEngine.ES1.cpp

#include <OpenGLES/ES1/gl.h>
#include <OpenGLES/ES1/glext.h>
#include "Interfaces.hpp"
#include "Matrix.hpp"

namespace ES1 {

struct Drawable {
    GLuint VertexBuffer;
    GLuint IndexBuffer;
    int IndexCount;
};

class RenderingEngine : public IRenderingEngine {
public:
    RenderingEngine();
    void Initialize(const vector<ISurface*>& surfaces);
    void Render(const vector<Visual>& visuals) const;
private:
    vector<Drawable> m_drawables;
    GLuint m_colorRenderbuffer;
    mat4 m_translation;
};
    
IRenderingEngine* CreateRenderingEngine()
{
    return new RenderingEngine();
}

RenderingEngine::RenderingEngine()
{
    glGenRenderbuffersOES(1, &m_colorRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_colorRenderbuffer);
}

void RenderingEngine::Initialize(const vector<ISurface*>& surfaces)
{
    vector<ISurface*>::const_iterator surface;
    for (surface = surfaces.begin(); 
         surface != surfaces.end(); ++surface) {
        
        // Create the VBO for the vertices.
        vector<float> vertices;
        (*surface)->GenerateVertices(vertices);
        GLuint vertexBuffer;
        glGenBuffers(1, &vertexBuffer);
        glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
        glBufferData(GL_ARRAY_BUFFER,
                     vertices.size() * sizeof(vertices[0]),
                     &vertices[0],
                     GL_STATIC_DRAW);
        
        // Create a new VBO for the indices if needed.
        int indexCount = (*surface)->GetLineIndexCount();
        GLuint indexBuffer;
        if (!m_drawables.empty() && 
            indexCount == m_drawables[0].IndexCount) {
            indexBuffer = m_drawables[0].IndexBuffer;
        } else {
            vector<GLushort> indices(indexCount);
            (*surface)->GenerateLineIndices(indices);
            glGenBuffers(1, &indexBuffer);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER,
                         indexCount * sizeof(GLushort),
                         &indices[0],
                         GL_STATIC_DRAW);
        }
        
        Drawable drawable = { vertexBuffer, indexBuffer, indexCount};
        m_drawables.push_back(drawable);
    }
    
    // Create the framebuffer object.
    GLuint framebuffer;
    glGenFramebuffersOES(1, &framebuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, 
                                 GL_COLOR_ATTACHMENT0_OES,
                                 GL_RENDERBUFFER_OES, 
                                 m_colorRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_colorRenderbuffer);

    glEnableClientState(GL_VERTEX_ARRAY);
    m_translation = mat4::Translate(0, 0, -7);
}

void RenderingEngine::Render(const vector<Visual>& visuals) const
{
    glClear(GL_COLOR_BUFFER_BIT);
    
    vector<Visual>::const_iterator visual = visuals.begin();
    for (int visualIndex = 0; 
         visual != visuals.end(); 
         ++visual, ++visualIndex) 
    {
        // Set the viewport transform.
        ivec2 size = visual->ViewportSize;
        ivec2 lowerLeft = visual->LowerLeft;
        glViewport(lowerLeft.x, lowerLeft.y, size.x, size.y);
        
        // Set the model-view transform.
        mat4 rotation = visual->Orientation.ToMatrix();
        mat4 modelview = rotation * m_translation;
        glMatrixMode(GL_MODELVIEW);
        glLoadMatrixf(modelview.Pointer());
        
        // Set the projection transform.
        float h = 4.0f * size.y / size.x;
        mat4 projection = mat4::Frustum(-2, 2, -h / 2, h / 2, 5, 10);
        glMatrixMode(GL_PROJECTION);
        glLoadMatrixf(projection.Pointer());
        
        // Set the color.
        vec3 color = visual->Color;
        glColor4f(color.x, color.y, color.z, 1);
        
        // Draw the wireframe.
        int stride = sizeof(vec3);
        const Drawable& drawable = m_drawables[visualIndex];
        glBindBuffer(GL_ARRAY_BUFFER, drawable.VertexBuffer);
        glVertexPointer(3, GL_FLOAT, stride, 0);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, drawable.IndexBuffer);
        glDrawElements(GL_LINES, drawable.IndexCount, GL_UNSIGNED_SHORT, 0);
    }
}
    
}

There are no new OpenGL concepts here; you should be able to follow the code in Example 3-15. We now have all the big pieces in place for the wireframe viewer. At this point, it shows only a single wireframe; this is improved in the coming sections.

Poor Man’s Tab Bar

Apple provides the UITabBar widget as part of the UIKit framework. This is the familiar list of gray icons that many applications have along the bottom of the screen, as shown in Figure 3-6.

Figure 3-6. UITabBar

Since UIKit widgets are outside the scope of this book, you’ll be using OpenGL to create a poor man’s tab bar for switching between the various parametric surfaces, as in Figure 3-7.

Figure 3-7. Poor man’s tab bar

In many situations like this, a standard UITabBar is preferable since it creates a more consistent look with other iPhone applications. But in our case, we’ll create a fun transition effect: pushing a button will cause it to “slide out” of the tab bar and into the main viewport. For this level of control over rendering, UIKit doesn’t suffice.

The wireframe viewer has a total of six parametric surfaces, but the button bar has only five. When the user touches a button, we’ll swap its contents with the surface being displayed in the main viewport. This allows the application to support six surfaces with only five buttons.

The state for the five buttons and the button-detection code lives in the application engine. Example 3-16 shows the updated class declaration from ApplicationEngine.cpp; the additions are the PopulateVisuals and MapToButton methods and the m_currentSurface, m_buttonSize, m_pressedButton, and m_buttonSurfaces fields. No modifications to the two rendering engines are required.

Example 3-16. ApplicationEngine declaration with tab bar

#include "Interfaces.hpp"
#include "ParametricEquations.hpp"
#include <algorithm>

using namespace std;

static const int SurfaceCount = 6;
static const int ButtonCount = SurfaceCount - 1;

class ApplicationEngine : public IApplicationEngine {
public:
    ApplicationEngine(IRenderingEngine* renderingEngine);
    ~ApplicationEngine();
    void Initialize(int width, int height);
    void OnFingerUp(ivec2 location);
    void OnFingerDown(ivec2 location);
    void OnFingerMove(ivec2 oldLocation, ivec2 newLocation);
    void Render() const;
    void UpdateAnimation(float dt);
private:
    void PopulateVisuals(Visual* visuals) const;
    int MapToButton(ivec2 touchpoint) const;
    vec3 MapToSphere(ivec2 touchpoint) const;
    float m_trackballRadius;
    ivec2 m_screenSize;
    ivec2 m_centerPoint;
    ivec2 m_fingerStart;
    bool m_spinning;
    Quaternion m_orientation;
    Quaternion m_previousOrientation;
    IRenderingEngine* m_renderingEngine;
    int m_currentSurface;
    ivec2 m_buttonSize;
    int m_pressedButton;
    int m_buttonSurfaces[ButtonCount];
};

Example 3-17 shows the implementation. Methods left unchanged (such as MapToSphere) are omitted for brevity. You’ll be replacing the following methods: ApplicationEngine::ApplicationEngine, Initialize, Render, OnFingerUp, OnFingerDown, and OnFingerMove. There are two new methods you’ll be adding: ApplicationEngine::PopulateVisuals and MapToButton.

Example 3-17. ApplicationEngine implementation with tab bar

ApplicationEngine::ApplicationEngine(IRenderingEngine* renderingEngine) :
    m_spinning(false),
    m_renderingEngine(renderingEngine),
    m_pressedButton(-1)
{
    m_buttonSurfaces[0] = 0;
    m_buttonSurfaces[1] = 1;
    m_buttonSurfaces[2] = 2;
    m_buttonSurfaces[3] = 4;
    m_buttonSurfaces[4] = 5;
    m_currentSurface = 3;
}

void ApplicationEngine::Initialize(int width, int height)
{
    m_trackballRadius = width / 3;
    m_buttonSize.y = height / 10;
    m_buttonSize.x = 4 * m_buttonSize.y / 3;
    m_screenSize = ivec2(width, height - m_buttonSize.y);
    m_centerPoint = m_screenSize / 2;

    vector<ISurface*> surfaces(SurfaceCount);
    surfaces[0] = new Cone(3, 1);
    surfaces[1] = new Sphere(1.4f);
    surfaces[2] = new Torus(1.4f, 0.3f);
    surfaces[3] = new TrefoilKnot(1.8f);
    surfaces[4] = new KleinBottle(0.2f);
    surfaces[5] = new MobiusStrip(1);
    m_renderingEngine->Initialize(surfaces);
    for (int i = 0; i < SurfaceCount; i++)
        delete surfaces[i];
}

void ApplicationEngine::PopulateVisuals(Visual* visuals) const
{
    for (int buttonIndex = 0; buttonIndex < ButtonCount; buttonIndex++) {
        int visualIndex = m_buttonSurfaces[buttonIndex];
        visuals[visualIndex].Color = vec3(0.75f, 0.75f, 0.75f);
        if (m_pressedButton == buttonIndex)
            visuals[visualIndex].Color = vec3(1, 1, 1);
        
        visuals[visualIndex].ViewportSize = m_buttonSize;
        visuals[visualIndex].LowerLeft.x = buttonIndex * m_buttonSize.x;
        visuals[visualIndex].LowerLeft.y = 0;
        visuals[visualIndex].Orientation = Quaternion();
    }
    
    visuals[m_currentSurface].Color = m_spinning ? vec3(1, 1, 1) : vec3(0, 1, 1);
    visuals[m_currentSurface].LowerLeft = ivec2(0, 48);
    visuals[m_currentSurface].ViewportSize = ivec2(320, 432);
    visuals[m_currentSurface].Orientation = m_orientation;
}

void ApplicationEngine::Render() const
{
    vector<Visual> visuals(SurfaceCount);
    PopulateVisuals(&visuals[0]);
    m_renderingEngine->Render(visuals);
}

void ApplicationEngine::OnFingerUp(ivec2 location)
{
    m_spinning = false;
    
    if (m_pressedButton != -1 && m_pressedButton == MapToButton(location))
        swap(m_buttonSurfaces[m_pressedButton], m_currentSurface);
    
    m_pressedButton = -1;
}

void ApplicationEngine::OnFingerDown(ivec2 location)
{
    m_fingerStart = location;
    m_previousOrientation = m_orientation;
    m_pressedButton = MapToButton(location);
    if (m_pressedButton == -1)
        m_spinning = true;
}

void ApplicationEngine::OnFingerMove(ivec2 oldLocation, ivec2 location)
{
    if (m_spinning) {
        vec3 start = MapToSphere(m_fingerStart);
        vec3 end = MapToSphere(location);
        Quaternion delta = Quaternion::CreateFromVectors(start, end);
        m_orientation = delta.Rotated(m_previousOrientation);
    }
    
    if (m_pressedButton != -1 && m_pressedButton != MapToButton(location))
        m_pressedButton = -1;
}

int ApplicationEngine::MapToButton(ivec2 touchpoint) const
{
    if (touchpoint.y  < m_screenSize.y - m_buttonSize.y)
        return -1;
    
    int buttonIndex = touchpoint.x / m_buttonSize.x;
    if (buttonIndex >= ButtonCount)
        return -1;
    
    return buttonIndex;
}

Go ahead and try it—at this point, the wireframe viewer is starting to feel like a real application!

Animating the Transition

The button-swapping strategy is clever but possibly jarring to users; after playing with the app for a while, the user might start to notice that his tab bar is slowly being re-arranged. To make the swap effect more obvious and to give the app more of a fun Apple feel, let’s create a transition animation that actually shows the button being swapped with the main viewport. Figure 3-8 depicts this animation.

Figure 3-8. Transition animation in wireframe viewer

Again, no changes to the two rendering engines are required, because all the logic can be constrained to ApplicationEngine. In addition to animating the viewport, we’ll also animate the color (the tab bar wireframes are drab gray) and the orientation (the tab bar wireframes are all in the “home” position). We can reuse the existing Visual class for this; we need two sets of Visual objects for the start and end of the animation. While the animation is active, we’ll tween the values between the starting and ending visuals. Let’s also create an Animation structure to bundle the visuals with a few other animation parameters, as shown in Example 3-18.

Example 3-18. ApplicationEngine declaration with transition animation

struct Animation {
    bool Active;
    float Elapsed;
    float Duration;
    Visual StartingVisuals[SurfaceCount];
    Visual EndingVisuals[SurfaceCount];
};

class ApplicationEngine : public IApplicationEngine {
public:
    // ...
private:
    // ...
    Animation m_animation;
};

Example 3-19 shows the new implementation of ApplicationEngine. Unchanged methods are omitted for brevity. Remember, animation is all about interpolation! The Render method leverages the Lerp and Slerp methods from our vector class library to achieve the animation in a surprisingly straightforward manner.

Example 3-19. ApplicationEngine implementation with transition animation

ApplicationEngine::ApplicationEngine(IRenderingEngine* renderingEngine) :
    m_spinning(false),
    m_renderingEngine(renderingEngine),
    m_pressedButton(-1)
{
    m_animation.Active = false;

    // Same as in Example 3-17
    ....
}

void ApplicationEngine::Render() const
{
    vector<Visual> visuals(SurfaceCount);
    
    if (!m_animation.Active) {
        PopulateVisuals(&visuals[0]);
    } else {
        float t = m_animation.Elapsed / m_animation.Duration;
        for (int i = 0; i < SurfaceCount; i++) {
            const Visual& start = m_animation.StartingVisuals[i];
            const Visual& end = m_animation.EndingVisuals[i];
            Visual& tweened = visuals[i];
            
            tweened.Color = start.Color.Lerp(t, end.Color);
            tweened.LowerLeft = start.LowerLeft.Lerp(t, end.LowerLeft);
            tweened.ViewportSize = start.ViewportSize.Lerp(t, end.ViewportSize);
            tweened.Orientation = start.Orientation.Slerp(t, end.Orientation);
        }
    }
    
    m_renderingEngine->Render(visuals);
}

void ApplicationEngine::UpdateAnimation(float dt)
{
    if (m_animation.Active) {
        m_animation.Elapsed += dt;
        if (m_animation.Elapsed > m_animation.Duration)
            m_animation.Active = false;
    }
}

void ApplicationEngine::OnFingerUp(ivec2 location)
{
    m_spinning = false;
    
    if (m_pressedButton != -1 && m_pressedButton == MapToButton(location) &&
        !m_animation.Active)
    {
        m_animation.Active = true;
        m_animation.Elapsed = 0;
        m_animation.Duration = 0.25f;
        
        PopulateVisuals(&m_animation.StartingVisuals[0]);
        swap(m_buttonSurfaces[m_pressedButton], m_currentSurface);
        PopulateVisuals(&m_animation.EndingVisuals[0]);
    }
    
    m_pressedButton = -1;
}

That completes the wireframe viewer! As you can see, animation isn’t difficult, and it can give your application that special Apple touch.



[3] True Möbius strips are one-sided surfaces and can cause complications with the lighting algorithms presented in the next chapter. The wireframe viewer actually renders a somewhat flattened Möbius “tube.”
