
Chapter 5. Textures and Image Capture

Not everybody trusts paintings, but people believe photographs.

Ansel Adams

Shading algorithms are required for effects that need to respond to a changing condition in real time, such as the movement of a light source. But procedural methods can go only so far on their own; they can never replace the creativity of a professional artist. That’s where textures come to the rescue. Texturing allows any predefined image, such as a photograph, to be projected onto a 3D surface.

Simply put, textures are images; yet somehow, an entire vocabulary is built around them. Pixels that make up a texture are known as texels. When the hardware reads a texel color, it’s said to be sampling. When OpenGL scales a texture, it’s also filtering it. Don’t let the vocabulary intimidate you; the concepts are simple. In this chapter, we’ll focus on presenting the basics, saving some advanced techniques for later in this book.

We’ll begin by modifying ModelViewer to support image loading and simple texturing. Afterward we’ll take a closer look at some of the OpenGL features involved, such as mipmapping and filtering. Toward the end of the chapter we’ll use texturing to perform a fun trick with the iPhone camera.

Adding Textures to ModelViewer

Our final enhancement to ModelViewer wraps a simple grid texture around each of the surfaces in the parametric gallery, as shown in Figure 5-1. We need to store only one cell of the grid in an image file; OpenGL can repeat the source pattern as many times as desired.


Figure 5-1. Textured ModelViewer

The image file can be in a number of different file formats, but we’ll go with PNG for now. It’s popular because it supports an optional alpha channel, lossless compression, and variable color precision. (Another common format on the iPhone platform is PVR, which we’ll cover later in the chapter.)

You can either download the image file for the grid cell from this book’s companion site or create one using your favorite paint or graphics program. I used a tiny 16×16 white square with a 1-pixel black border. Save the file as Grid16.png.
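
If you don’t feel like firing up a paint program, the following standalone sketch (not part of ModelViewer’s code; MakeGridCell is just an illustrative name) shows what the decoded texels will look like in memory. It fills a buffer with the same cell as 8-bit RGBA data, which is the layout we’ll eventually hand to OpenGL:

#include <vector>

// Sketch only: builds the grid cell that Grid16.png contains (a 16x16
// white square with a 1-pixel black border) as 8-bit RGBA texels.
std::vector<unsigned char> MakeGridCell()
{
    std::vector<unsigned char> pixels(16 * 16 * 4);
    for (int y = 0; y < 16; ++y) {
        for (int x = 0; x < 16; ++x) {
            bool border = (x == 0 || x == 15 || y == 0 || y == 15);
            unsigned char* texel = &pixels[(y * 16 + x) * 4];
            texel[0] = texel[1] = texel[2] = border ? 0 : 255; // RGB
            texel[3] = 255;                                    // opaque alpha
        }
    }
    return pixels;
}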

Since we’ll be loading the texture from an external file, let’s use the OBJ-loading sample from the previous chapter as the starting point. To keep things simple, the first step is rewinding a bit and using parametric surfaces exclusively. Simply revert the ApplicationEngine::Initialize method so that it uses the sphere and cone shapes. To do this, find the following code:

string path = m_resourceManager->GetResourcePath();
surfaces[0] = new ObjSurface(path + "/micronapalmv2.obj");
surfaces[1] = new ObjSurface(path + "/Ninja.obj");

And replace it with the following:

surfaces[0] = new Cone(3, 1);
surfaces[1] = new Sphere(1.4f);

Keep everything else the same. We’ll enhance the IResourceManager interface later to support image loading.

Next you need to add the actual image file to your Xcode project. As with the OBJ files in the previous chapter, Xcode automatically deploys these resources to your iPhone. Even though this example has only a single image file, I recommend creating a dedicated group anyway. Right-click the ModelViewer root in the Overview pane, choose Add→New Group, and call it Textures. Right-click the new group, and choose Get Info. To the right of the Path label on the General tab, click Choose, and create a new folder called Textures. Click Choose, and close the group info window.

Right-click the new group, and choose Add→Existing Files. Select the PNG file, click Add, and in the next dialog box, make sure the “Copy items” checkbox is checked; then click Add.

Enhancing IResourceManager

Unlike OBJ files, PNG files are nontrivial to decode by hand because the format uses lossless compression. Rather than decoding the PNG file manually in the application code, it makes more sense to leverage the infrastructure that Apple provides for reading image files. Recall that the ResourceManager implementation is a mix of Objective-C and C++, so it’s an ideal place for calling Apple-specific APIs. Let’s make the resource manager responsible for decoding the PNG file. The first step is to open Interfaces.hpp and make the changes shown in Example 5-1. New lines are marked with numbered callouts (the changes to the last two lines, which you’ll find at the end of the file, are needed to support a change we’ll be making to the rendering engine).

Example 5-1. Enhanced IResourceManager

struct IResourceManager {
    virtual string GetResourcePath() const = 0;
    virtual void LoadPngImage(const string& filename) = 0; // 1
    virtual void* GetImageData() = 0;                      // 2
    virtual ivec2 GetImageSize() = 0;                      // 3
    virtual void UnloadImage() = 0;                        // 4
    virtual ~IResourceManager() {}
};

// ...
namespace ES1 { IRenderingEngine* 
                CreateRenderingEngine(IResourceManager* resourceManager); }
namespace ES2 { IRenderingEngine* 
                CreateRenderingEngine(IResourceManager* resourceManager); }

1. Load the given PNG file into the resource manager.

2. Return a pointer to the decoded color buffer. For now, this is always 8-bit-per-component RGBA data.

3. Return the width and height of the loaded image using the ivec2 class from our vector library.

4. Free memory used to store the decoded data.

Now let’s open ResourceManager.mm and update the actual implementation class (don’t delete the #imports, the using statement, or the CreateResourceManager definition). It needs two additional items to maintain state: the decoded memory buffer (m_imageData) and the size of the image (m_imageSize). Apple provides several ways of loading PNG files; Example 5-2 shows the simplest.

Example 5-2. ResourceManager with PNG loading

class ResourceManager : public IResourceManager {
public:
    string GetResourcePath() const
    {
        NSString* bundlePath = [[NSBundle mainBundle] resourcePath];
        return [bundlePath UTF8String];
    }
    void LoadPngImage(const string& name)
    {
        NSString* basePath = [NSString stringWithUTF8String:name.c_str()]; // 1
        NSString* resourcePath = [[NSBundle mainBundle] resourcePath];
        NSString* fullPath = 
            [resourcePath stringByAppendingPathComponent:basePath]; // 2
        UIImage* uiImage = [UIImage imageWithContentsOfFile:fullPath]; // 3
        CGImageRef cgImage = uiImage.CGImage; // 4
        m_imageSize.x = CGImageGetWidth(cgImage); // 5
        m_imageSize.y = CGImageGetHeight(cgImage);
        m_imageData = 
            CGDataProviderCopyData(CGImageGetDataProvider(cgImage)); // 6
    }
    void* GetImageData()
    {
        return (void*) CFDataGetBytePtr(m_imageData);
    }
    ivec2 GetImageSize()
    {
        return m_imageSize;
    }
    void UnloadImage()
    {
        CFRelease(m_imageData);
    }
private:
    CFDataRef m_imageData;
    ivec2 m_imageSize;
};

Most of Example 5-2 is straightforward, but LoadPngImage deserves some extra explanation:

1. Convert the C++ string into an Objective-C string object.

2. Obtain the fully qualified path to the PNG file.

3. Create an instance of a UIImage object using the imageWithContentsOfFile method, which slurps up the contents of the entire file.

4. Extract the inner CGImage object from UIImage. UIImage is essentially a convenience wrapper for CGImage. Note that CGImage is a Core Graphics class, which is shared across the Mac OS X and iPhone OS platforms; UIImage comes from UIKit and is therefore iPhone-specific.

5. Extract the image size from the inner CGImage object.

6. Generate a CFData object from the CGImage object. CFData is a Core Foundation object that represents a swath of memory.

Warning

The image loader in Example 5-2 is simple but not robust. For production code, I recommend using one of the enhanced methods presented later in the chapter.
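
For example, imageWithContentsOfFile returns nil when the file can’t be found, and the subsequent Core Graphics calls would then operate on a null image. A minimal sketch of the kind of checks you could add to LoadPngImage (these assertions are not part of the sample code):

// Sketch: basic sanity checks for LoadPngImage. NSCAssert is used
// rather than NSAssert because this is a C++ method, not Objective-C.
UIImage* uiImage = [UIImage imageWithContentsOfFile:fullPath];
NSCAssert(uiImage != nil, @"Unable to load %@", fullPath);

CGImageRef cgImage = uiImage.CGImage;
NSCAssert(CGImageGetBitsPerPixel(cgImage) == 32, @"Expected 32-bpp RGBA data.");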

Next we need to make sure that the resource manager can be accessed from the right places. It’s already instanced in the GLView class and passed to the application engine; now we need to pass it to the rendering engine as well. The OpenGL code needs it to retrieve the raw image data.

Go ahead and change GLView.mm so that the resource manager gets passed to the rendering engine during construction. Example 5-3 shows the relevant section of code; the addition is the m_resourceManager argument passed to each CreateRenderingEngine call.

Example 5-3. Creation of ResourceManager, RenderingEngine, and ApplicationEngine

m_resourceManager = CreateResourceManager();

if (api == kEAGLRenderingAPIOpenGLES1) {
    NSLog(@"Using OpenGL ES 1.1");
    m_renderingEngine = ES1::CreateRenderingEngine(m_resourceManager);
} else {
    NSLog(@"Using OpenGL ES 2.0");
    m_renderingEngine = ES2::CreateRenderingEngine(m_resourceManager);
}

m_applicationEngine = CreateApplicationEngine(m_renderingEngine, 
                                              m_resourceManager);

Generating Texture Coordinates

To control how the texture gets applied to the models, we need to add a 2D texture coordinate attribute to the vertices. The natural place to generate texture coordinates is in the ParametricSurface class. Each subclass should specify how many repetitions of the texture get tiled across each axis of the domain. Consider the torus: its outer circumference is much longer than the circumference of its cross section. Since the x-axis in the domain follows the outer circumference and the y-axis circumscribes the cross section, it follows that more copies of the texture need to be tiled across the x-axis than the y-axis. This prevents the grid pattern from being stretched along one axis.

Recall that each parametric surface describes its domain using the ParametricInterval structure, so that’s a natural place to store the number of repetitions; see Example 5-4. Note that the repetition counts for each axis are stored in a vec2.

Example 5-4. Texture support in ParametricSurface.hpp

#include "Interfaces.hpp"

struct ParametricInterval {
    ivec2 Divisions;
    vec2 UpperBound;
    vec2 TextureCount;
};

class ParametricSurface : public ISurface {
...
private:
    vec2 ComputeDomain(float i, float j) const;
    ivec2 m_slices;
    ivec2 m_divisions;
    vec2 m_upperBound;
    vec2 m_textureCount;
};

The texture counts that I chose for the cone and sphere are the vec2 values at the end of each ParametricInterval in Example 5-5. For brevity’s sake, I’ve omitted the other parametric surfaces; as always, you can download the code from this book’s website (http://oreilly.com/catalog/9780596804831).

Example 5-5. Texture support in ParametricEquations.hpp

#include "ParametricSurface.hpp"

class Cone : public ParametricSurface {
public:
    Cone(float height, float radius) : m_height(height), m_radius(radius)
    {
        ParametricInterval interval = { ivec2(20, 20), vec2(TwoPi, 1), vec2(30,20) };
        SetInterval(interval);
    }
    ...
};

class Sphere : public ParametricSurface {
public:
    Sphere(float radius) : m_radius(radius)
    {
        ParametricInterval interval = { ivec2(20, 20), vec2(Pi, TwoPi), vec2(20, 35) };
        SetInterval(interval);
    }
    ...
};

Next we need to flesh out a couple of methods in ParametricSurface (Example 5-6). Recall that we’re passing in a set of flags to GenerateVertices to request a set of vertex attributes; until now, we’ve been ignoring the VertexFlagsTexCoords flag.

Example 5-6. Texture support in ParametricSurface.cpp

void ParametricSurface::SetInterval(const ParametricInterval& interval)
{
    m_divisions = interval.Divisions;
    m_slices = m_divisions - ivec2(1, 1);
    m_upperBound = interval.UpperBound;
    m_textureCount = interval.TextureCount;
}

...

void ParametricSurface::GenerateVertices(vector<float>& vertices,
                                         unsigned char flags) const
{
    int floatsPerVertex = 3;
    if (flags & VertexFlagsNormals)
        floatsPerVertex += 3;
    if (flags & VertexFlagsTexCoords)
        floatsPerVertex += 2;

    vertices.resize(GetVertexCount() * floatsPerVertex);
    float* attribute = &vertices[0];

    for (int j = 0; j < m_divisions.y; j++) {
        for (int i = 0; i < m_divisions.x; i++) {

            // Compute Position
            vec2 domain = ComputeDomain(i, j);
            vec3 range = Evaluate(domain);
            attribute = range.Write(attribute);

            // Compute Normal
            if (flags & VertexFlagsNormals) {
                ...
            }
            
            // Compute Texture Coordinates
            if (flags & VertexFlagsTexCoords) {
                float s = m_textureCount.x * i / m_slices.x;
                float t = m_textureCount.y * j / m_slices.y;
                attribute = vec2(s, t).Write(attribute);
            }
        }
    }
}

In OpenGL, texture coordinates are normalized such that (0, 0) maps to one corner of the image and (1, 1) maps to the opposite corner, regardless of the image’s size. The inner loop in Example 5-6 computes the texture coordinates like this:

float s = m_textureCount.x * i / m_slices.x;
float t = m_textureCount.y * j / m_slices.y;

Since the s coordinate ranges from zero up to m_textureCount.x (inclusive), OpenGL horizontally tiles m_textureCount.x repetitions of the texture across the surface. We’ll take a deeper look at how texture coordinates work later in the chapter.
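
As a quick sanity check, here’s a tiny standalone program (not part of ModelViewer) that runs the same arithmetic with the sphere’s values from Example 5-5; s climbs from 0 to 20 as i walks across the 20 divisions:

#include <cstdio>

int main()
{
    // Values from the Sphere in Example 5-5:
    // Divisions = ivec2(20, 20), TextureCount = vec2(20, 35).
    const int divisionsX = 20;
    const int slicesX = divisionsX - 1;    // 19, computed as in SetInterval
    const float textureCountX = 20;

    for (int i = 0; i < divisionsX; ++i) {
        float s = textureCountX * i / slicesX; // formula from GenerateVertices
        std::printf("i = %2d  ->  s = %g\n", i, s);
    }
    return 0;
}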

Note that if you were loading the model data from an OBJ file or other 3D format, you’d probably obtain the texture coordinates directly from the model file rather than computing them like we’re doing here.

Enabling Textures with ES1::RenderingEngine

As always, let’s start with the ES 1.1 rendering engine since the 2.0 variant is more complex. The first step is adding a pointer to the resource manager as shown in Example 5-7. Note we’re also adding a GLuint for the grid texture. Much like framebuffer objects and vertex buffer objects, OpenGL textures have integer names.

Example 5-7. RenderingEngine.ES1.cpp

class RenderingEngine : public IRenderingEngine {
public:
    RenderingEngine(IResourceManager* resourceManager);
    void Initialize(const vector<ISurface*>& surfaces);
    void Render(const vector<Visual>& visuals) const;
private:
    vector<Drawable> m_drawables;
    GLuint m_colorRenderbuffer;
    GLuint m_depthRenderbuffer;
    mat4 m_translation;
    GLuint m_gridTexture;
    IResourceManager* m_resourceManager;
};
    
IRenderingEngine* CreateRenderingEngine(IResourceManager* resourceManager)
{
    return new RenderingEngine(resourceManager);
}

RenderingEngine::RenderingEngine(IResourceManager* resourceManager)
{
    m_resourceManager = resourceManager;
    glGenRenderbuffersOES(1, &m_colorRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_colorRenderbuffer);
}

Example 5-8 shows the code for loading the texture, followed by a detailed explanation.

Example 5-8. Creating the OpenGL texture

void RenderingEngine::Initialize(const vector<ISurface*>& surfaces)
{
    vector<ISurface*>::const_iterator surface;
    for (surface = surfaces.begin(); surface != surfaces.end(); ++surface) {

        // Create the VBO for the vertices.
        vector<float> vertices;
        (*surface)->GenerateVertices(vertices, VertexFlagsNormals
                                     |VertexFlagsTexCoords);

    // ...

    // Load the texture.
    glGenTextures(1, &m_gridTexture); // 1
    glBindTexture(GL_TEXTURE_2D, m_gridTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // 2
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    m_resourceManager->LoadPngImage("Grid16.png"); // 3
    void* pixels = m_resourceManager->GetImageData();
    ivec2 size = m_resourceManager->GetImageSize();
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, size.x, 
                 size.y, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels); // 4
    m_resourceManager->UnloadImage();

    // Set up various GL state.
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY); // 5
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D); // 6

    ...
}
1. Generate the integer identifier for the object, and then bind it to the pipeline. This follows the Gen/Bind pattern used by FBOs and VBOs.

2. Set the minification filter and magnification filter of the texture object. The texture filter is the algorithm that OpenGL uses to shrink or enlarge a texture; we’ll cover filtering in detail later.

3. Tell the resource manager to load and decode the Grid16.png file.

4. Upload the raw texture data to OpenGL using glTexImage2D; more on this later.

5. Tell OpenGL to enable the texture coordinate vertex attribute.

6. Tell OpenGL to enable texturing.
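
One detail that Example 5-8 relies on implicitly: the repetition you see in Figure 5-1 works because the default wrap mode in OpenGL ES is GL_REPEAT. If you’d rather make this explicit, or clamp instead of tile, the calls look like the following sketch (optional; our sample code leaves the defaults alone):

// Optional: GL_REPEAT is already the default, which is what tiling needs.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

// To stretch the edge texels instead of tiling, use GL_CLAMP_TO_EDGE.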

Example 5-8 introduces the glTexImage2D function, which unfortunately has more parameters than it needs, for historical reasons. Don’t be intimidated by the eight parameters; the function is much easier to use than it appears. Here’s the formal declaration:

void glTexImage2D(GLenum target, GLint level, GLint internalformat,
                  GLsizei width, GLsizei height, GLint border,
                  GLenum format, GLenum type, const GLvoid* pixels); 
target

This specifies which binding point to upload the texture to. For ES 1.1, this must be GL_TEXTURE_2D.

level

This specifies the mipmap level. We’ll learn more about mipmaps soon. For now, use zero for this.

internalFormat

This specifies the format of the texture. We’re using GL_RGBA for now, and other formats will be covered shortly. It’s declared as a GLint rather than a GLenum for historical reasons.

width, height

These specify the size of the image being uploaded.

border

Set this to zero; texture borders are not supported in OpenGL ES. Be happy, because that’s one less thing you have to remember!

format

In OpenGL ES, this has to match internalFormat. The argument may seem redundant, but it’s yet another carryover from desktop OpenGL, which supports format conversion. Again, be happy; this is a simpler API.

type

This describes the type of each color component. This is commonly GL_UNSIGNED_BYTE, but we’ll learn about some other types later.

pixels

This is the pointer to the raw data that gets uploaded.
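
To tie these parameters back to our usage, here’s the call from Example 5-8 again, with each argument annotated:

glTexImage2D(GL_TEXTURE_2D,    // target: the 2D texture binding point
             0,                // level: base mipmap level
             GL_RGBA,          // internalFormat: 8-bit-per-component RGBA
             size.x, size.y,   // width, height: 16x16 for Grid16.png
             0,                // border: always zero in OpenGL ES
             GL_RGBA,          // format: must match internalFormat in ES
             GL_UNSIGNED_BYTE, // type: one byte per color component
             pixels);          // pixels: pointer to the decoded image data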

Next let’s go over the Render method. The only difference is that the vertex stride is larger, and we need to call glTexCoordPointer to give OpenGL the correct offset into the VBO. See Example 5-9.

Example 5-9. ES1::RenderingEngine::Render with texture

void RenderingEngine::Render(const vector<Visual>& visuals) const
{
    glClearColor(0.5f, 0.5f, 0.5f, 1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    
    vector<Visual>::const_iterator visual = visuals.begin();
    for (int visualIndex = 0; 
         visual != visuals.end(); 
         ++visual, ++visualIndex) 
    {

        // ...

        // Draw the surface.
        int stride = sizeof(vec3) + sizeof(vec3) + sizeof(vec2);
        const GLvoid* texCoordOffset = (const GLvoid*) (2 * sizeof(vec3));
        const Drawable& drawable = m_drawables[visualIndex];
        glBindBuffer(GL_ARRAY_BUFFER, drawable.VertexBuffer);
        glVertexPointer(3, GL_FLOAT, stride, 0);
        const GLvoid* normalOffset = (const GLvoid*) sizeof(vec3);
        glNormalPointer(GL_FLOAT, stride, normalOffset);
        glTexCoordPointer(2, GL_FLOAT, stride, texCoordOffset);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, drawable.IndexBuffer);
        glDrawElements(GL_TRIANGLES, drawable.IndexCount, GL_UNSIGNED_SHORT, 0);
    }
}

That’s it for the ES 1.1 backend! Incidentally, in more complex applications you should take care to delete your textures after you’re done with them; textures can be one of the biggest resource hogs in OpenGL. Deleting a texture is done like so:

glDeleteTextures(1, &m_gridTexture);

This function is similar to glGenTextures in that it takes a count and a list of names. Vertex buffer objects are deleted in a similar manner using glDeleteBuffers.
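
ModelViewer keeps its one texture alive for the life of the app, so it never makes these calls, but a cleanup hook could look like the following sketch (hypothetical, since our RenderingEngine doesn’t currently declare a destructor):

// Hypothetical destructor; not in the sample code, where resources
// live as long as the app itself.
RenderingEngine::~RenderingEngine()
{
    glDeleteTextures(1, &m_gridTexture);
    vector<Drawable>::iterator drawable;
    for (drawable = m_drawables.begin(); drawable != m_drawables.end(); ++drawable) {
        glDeleteBuffers(1, &drawable->VertexBuffer);
        glDeleteBuffers(1, &drawable->IndexBuffer);
    }
}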

Enabling Textures with ES2::RenderingEngine

The ES 2.0 backend requires some changes to both the vertex shader (to pass along the texture coordinate) and the fragment shader (to apply the texel color). You do not call glEnable(GL_TEXTURE_2D) with ES 2.0; it simply depends on what your fragment shader does.

Let’s start with the vertex shader, shown in Example 5-10. This is a modification of the simple lighting shader presented in the previous chapter (Example 4-15). Only three new lines are required: the TextureCoord attribute, the TextureCoordOut varying, and the assignment of one to the other at the end of main.

Note

If you tried some of the other vertex shaders in that chapter, such as the pixel shader or toon shader, the current shader in your project may look different from Example 4-15.

Example 5-10. SimpleLighting.vert with texture

attribute vec4 Position;
attribute vec3 Normal;
attribute vec3 DiffuseMaterial;
attribute vec2 TextureCoord;

uniform mat4 Projection;
uniform mat4 Modelview;
uniform mat3 NormalMatrix;
uniform vec3 LightPosition;
uniform vec3 AmbientMaterial;
uniform vec3 SpecularMaterial;
uniform float Shininess;

varying vec4 DestinationColor;
varying vec2 TextureCoordOut;

void main(void)
{
    vec3 N = NormalMatrix * Normal;
    vec3 L = normalize(LightPosition);
    vec3 E = vec3(0, 0, 1);
    vec3 H = normalize(L + E);

    float df = max(0.0, dot(N, L));
    float sf = max(0.0, dot(N, H));
    sf = pow(sf, Shininess);

    vec3 color = AmbientMaterial + df * DiffuseMaterial + sf * SpecularMaterial;
    
    DestinationColor = vec4(color, 1);
    gl_Position = Projection * Modelview * Position;
    TextureCoordOut = TextureCoord;
}

Note

To try these, you can replace the contents of your existing .vert and .frag files. Just be sure not to delete the first line with STRINGIFY or the last line with the closing parenthesis and semicolon.

Example 5-10 simply passes the texture coordinates through, but you can achieve many interesting effects by manipulating the texture coordinates, or even generating them from scratch. For example, to achieve a “movie projector” effect, simply replace the last line in Example 5-10 with this:

TextureCoordOut = gl_Position.xy * 2.0;

For now, let’s stick with the boring pass-through shader because it better emulates the behavior of ES 1.1. The new fragment shader is a bit more interesting; see Example 5-11.

Example 5-11. Simple.frag with texture

varying lowp vec4 DestinationColor;   // 1
varying mediump vec2 TextureCoordOut; // 2

uniform sampler2D Sampler;            // 3

void main(void)
{
    gl_FragColor = texture2D(Sampler, TextureCoordOut) * DestinationColor; // 4
}
1. As before, declare a low-precision varying to receive the color produced by lighting.

2. Declare a medium-precision varying to receive the texture coordinates.

3. Declare a uniform sampler, which represents the texture stage from which we’ll retrieve the texel color.

4. Use the texture2D function to look up a texel color from the sampler, then multiply it by the lighting color to produce the final color. This is component-wise multiplication, not a dot product or a cross product.

When setting a uniform sampler from within your application, a common mistake is to set it to the handle of the texture object you’d like to sample:

glBindTexture(GL_TEXTURE_2D, textureHandle);
GLint location = glGetUniformLocation(programHandle, "Sampler");

glUniform1i(location, textureHandle); // Incorrect
glUniform1i(location, 0); // Correct

The correct value of the sampler is the stage index that you’d like to sample from, not the handle. Since all uniforms default to zero, it’s fine to not bother setting sampler values if you’re not using multitexturing (we’ll cover multitexturing later in this book).

Warning

Uniform samplers should be set to the stage index, not the texture handle.

Newly introduced in Example 5-11 is the texture2D function call. For input, it takes a uniform sampler and a vec2 texture coordinate. Its return value is always a vec4, regardless of the texture format.

Warning

The OpenGL ES specification stipulates that texture2D can be called from vertex shaders as well, but on many platforms, including the iPhone, it’s actually limited to fragment shaders only.

Note that Example 5-11 uses multiplication to combine the lighting color and texture color; this often produces good results. Multiplying two colors in this way is called modulation, and it’s the default method used in ES 1.1.
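
Incidentally, the ES 1.1 renderer didn’t have to ask for this behavior because modulation is the default texture environment. If you ever need to select it explicitly in ES 1.1, or swap in a different combiner, the glTexEnvi function controls it; a minimal sketch:

// ES 1.1 only. GL_MODULATE is the default, so this call is redundant in
// our renderer; GL_REPLACE would ignore the lighting color entirely.
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);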

Now let’s make the necessary changes to the C++ code. First we need to add new class members to store the texture ID and resource manager pointer, but that’s the same as ES 1.1, so I won’t repeat it here. I also won’t repeat the texture-loading code because it’s the same with both APIs.

One new thing we need for the ES 2.0 backend is an attribute ID for texture coordinates. See Example 5-12. Note the lack of a glEnable for texturing; remember, there’s no need for it in ES 2.0.

Example 5-12. RenderingEngine.ES2.cpp

struct AttributeHandles {
    GLint Position;
    GLint Normal;
    GLint Ambient;
    GLint Diffuse;
    GLint Specular;
    GLint Shininess;
    GLint TextureCoord;
};

...

void RenderingEngine::Initialize(const vector<ISurface*>& surfaces)
{

    vector<ISurface*>::const_iterator surface;
    for (surface = surfaces.begin(); surface != surfaces.end(); ++surface) {

        // Create the VBO for the vertices.
        vector<float> vertices;
        (*surface)->GenerateVertices(vertices, VertexFlagsNormals|VertexFlagsTexCoords);

    // ...
    
    m_attributes.TextureCoord = glGetAttribLocation(program, "TextureCoord");

    // Load the texture.
    glGenTextures(1, &m_gridTexture);
    glBindTexture(GL_TEXTURE_2D, m_gridTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    m_resourceManager->LoadPngImage("Grid16.png");
    void* pixels = m_resourceManager->GetImageData();
    ivec2 size = m_resourceManager->GetImageSize();
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, size.x, 
                 size.y, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    m_resourceManager->UnloadImage();

    // Initialize various state.
    glEnableVertexAttribArray(m_attributes.Position);
    glEnableVertexAttribArray(m_attributes.Normal);
    glEnableVertexAttribArray(m_attributes.TextureCoord);
    glEnable(GL_DEPTH_TEST);

    ...
}

You may have noticed that the fragment shader declared a sampler uniform, but we’re not setting it to anything in our C++ code. There’s actually no need to set it; all uniforms default to zero, which is what we want for the sampler’s value anyway. You don’t need a nonzero sampler unless you’re using multitexturing, which is a feature that we’ll cover in Chapter 8.

Next up is the Render method, which is pretty straightforward (Example 5-13). The only way it differs from its ES 1.1 counterpart is that it makes three calls to glVertexAttribPointer rather than glVertexPointer, glColorPointer, and glTexCoordPointer. (Replace everything from // Draw the surface to the end of the method with the corresponding code.)

Note

You must also make the same changes to the ES 2.0 renderer that were shown earlier in Example 5-7.

Example 5-13. ES2::RenderingEngine::Render with texture

void RenderingEngine::Render(const vector<Visual>& visuals) const
{
    glClearColor(0.5f, 0.5f, 0.5f, 1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    vector<Visual>::const_iterator visual = visuals.begin();
    for (int visualIndex = 0; 
         visual != visuals.end(); 
         ++visual, ++visualIndex) 
    {

        // ...

        // Draw the surface.
        int stride = sizeof(vec3) + sizeof(vec3) + sizeof(vec2);
        const GLvoid* normalOffset = (const GLvoid*) sizeof(vec3);
        const GLvoid* texCoordOffset = (const GLvoid*) (2 * sizeof(vec3));
        GLint position = m_attributes.Position;
        GLint normal = m_attributes.Normal;
        GLint texCoord = m_attributes.TextureCoord;
        const Drawable& drawable = m_drawables[visualIndex];
        glBindBuffer(GL_ARRAY_BUFFER, drawable.VertexBuffer);
        glVertexAttribPointer(position, 3, GL_FLOAT, GL_FALSE, stride, 0);
        glVertexAttribPointer(normal, 3, GL_FLOAT, GL_FALSE, stride, normalOffset);
        glVertexAttribPointer(texCoord, 2, GL_FLOAT, GL_FALSE, stride, texCoordOffset);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, drawable.IndexBuffer);
        glDrawElements(GL_TRIANGLES, drawable.IndexCount, GL_UNSIGNED_SHORT, 0);
    }
}

That’s it! You now have a textured model viewer. Before you build and run it, select Build→Clean All Targets (we’ve made a lot of changes to various parts of this app, and this will help avoid any surprises by building the app from a clean slate). I’ll explain some of the details in the sections to come.
