Chapter 1. Shader 101

It seems an obvious question to ask at the beginning of an HLSL and shader book: what exactly is a shader? A shader is a small program or algorithm written explicitly to run on a Graphics Processing Unit (GPU). It provides a way for developers to extend the rendering capabilities of the GPU. Any program that works closely with graphics will benefit from using shaders. The video game industry spins off custom shaders by the thousands; they are as vital to game projects as business entity classes are to line-of-business (LOB) applications. Nothing prohibits business programmers from experimenting with shaders in their LOB applications; in fact, recent trends in user interface (UI) design and information visualization cry out for shader use.

Because shaders run directly on the GPU, they are automatically parallelized across its many processing cores and are extremely fast at manipulating graphic output. Typically, the GPU can process shaders several orders of magnitude faster than a CPU running the same code.

Why Should XAML Developers Learn HLSL?

If you are an XAML developer, I’ll wager you’ve heard about pixel shaders. In fact, you may be using some of these effects in your application already. WPF introduced the DropShadowEffect and BlurEffect in .NET 3.5 SP1 and both of these classes take advantage of pixel shaders. Silverlight added pixel shaders in Silverlight 3. The Windows Phone team disappointed developers by dropping support for shaders before the final release of their hardware. Microsoft had good reason to ditch phone shaders, as they caused a significant drag on performance, but their loss is still lamentable. To make up for that setback, the Silverlight 5 release includes support for XNA models and shaders.

This is awesome news, as it means that you can mix XNA and Silverlight 5 together in the same application, which gives you access to another essential shader type: the vertex shader.

Note

XNA is a Microsoft framework that facilitates game development on the PC, the Xbox 360, and Windows Phone 7. It gives you access to the power of DirectX without having to leave the comfort of your favorite .NET programming languages. To learn more about XNA, get a copy of Learning XNA 4.0 by Aaron Reed from http://shop.oreilly.com/product/0636920013709.do.

As an XAML developer, do you need to write your own shaders? No, not really; you may spend your entire career without ever using a shader. Even if you use a shader, you may never need to write your own, as there are free shader effects included in Microsoft Expression Blend and in the .NET Framework. While it's nice to have these prebuilt effects, they represent only a fraction of the possibilities discovered by graphics programmers. Microsoft is not in the shader business, at least not directly. A core part of its business is building flexible programming languages and frameworks. The DirectX team follows this path and provides several shader programming languages for custom development. So if you have an idea for an interesting effect, or want to modify an existing effect, you'll need to write a custom shader. When you cross that threshold and decide to build a custom shader, you have some learning ahead of you. You need to learn a new programming language called HLSL.

Note

I’ve started using the term XAML development in the last year. Extensible Application Markup Language (XAML) is the core markup for Windows Presentation Foundation, Microsoft Surface, Silverlight, and Windows Phone applications. There are differences among these technologies, but they all share a common markup in XAML. Even the new Metro application framework for Windows 8 uses XAML as its primary markup implementation. I find that WPF and Silverlight developers have more in common with one another than they have differences. Since there is so much overlap in skills between these XAML-based systems, I think XAML developer is a suitable umbrella term that symbolizes this commonality.

The Tale of the Shader

To understand the history behind shaders, we need to go back a few decades and look inside the mind of George Lucas. Thirty years ago, George had produced the first movies in his highly successful Star Wars series. Those early movies relied on miniaturized models and special camera rigs to generate the futuristic effects. Lucas could already see the limitations of this camera-based system, and he figured that generating his models in software would be a better approach. Therefore, he established a software division at Lucasfilm and hired a team of smart people to build a graphics rendering system. Eventually the software division he created was sold off and became Pixar.

The engineers hired by Lucas took their responsibilities seriously and were soon generating 3D objects within software. But these computer-generated models failed when spliced into the live action, as they suffered from a lack of realism. The problem is that a raw 3D object looks stark and unnatural to the movie viewer and won't blend with the rest of the scene; in other words, it is painfully obvious that there is a computer-generated item up on the big screen. In the quest to solve this problem, an engineer named Rob Cook decided to write a 'shading' processor to help make the items look more realistic. His idea was to have software analyze the 3D object and the surrounding scene, determine where shadows fell and where light reflected onto the model, and then modify the film output to imitate the real-world placement of the artificial artifact. To be fair, there were existing shading tools available, but they were primitive and inflexible. Rob's breakthrough idea was to make a scriptable pipeline of graphic operations, each customizable and easy to string together to create a complex effect. These "shaders" eventually became part of a famous graphics program called Renderman, which remains the primary rendering tool used for every Pixar movie. While you may not be familiar with the Renderman name, you have certainly seen the output of this phenomenal program in movies like Toy Story 3.

Pixar has an informative section devoted to Renderman on their website at http://renderman.pixar.com/products/index/renderman.html.

This shader revolution began in the early 1980s and ran on specialized hardware. But the computer industry is never idle. By the late nineties, 3D graphics accelerator cards started to show up in high-end PCs, and it wasn't long before card manufacturers figured out how to combine 2D and 3D circuits into a single chip; thus the modern Graphics Processing Unit (GPU) was born. Around the same time, the GPU manufacturers came up with their own innovative idea, real-time rendering, which allows 3D scenes to be processed while the application is running. Prior to this breakthrough, rendering was performed offline. The burgeoning game development industry embraced this graphics advance with enthusiasm, and soon 3D frameworks like OpenGL and Microsoft Direct3D were attracting followers. This is the point in the story where HLSL enters the picture.

HLSL and DirectX

In the early days of GPUs, the 3D features were implemented as embedded code within the video card chips. These fixed functions, as they were known, were not very customizable, so chipmakers added new features by retooling the chips and throwing hardware at the problem. At some point, Microsoft decided this was solvable with software and devised an assembly language approach to the problem. This worked and made custom shaders possible, but it required developers who could work in assembly language, which is notoriously complex and hard to read. For example, here is a small sample of shader assembly code for your reading pleasure (Example 1-1).

Example 1-1. Shader written in Assembly Language

; A simple pixel shader
; Use the ps 2.0 instruction set and registers
ps_2_0
;
; Declare a sampler for the s0 register
dcl_2d s0
; Declare t0 to use 2D texture coordinates
dcl t0.xy
; sample the texture into the r1 register
texld r1, t0, s0
; move r1 to the output register
mov oC0, r1

Note

DirectX 8.0, released in 2000, was the first version to include programmable shaders, exposed through assembly-level APIs.

Working in assembly takes a special breed of programmer, and they are in short supply. NVidia and Microsoft saw this as an opportunity to bolster the PC graphics market and collaborated to create a more accessible shader language. NVidia named their language Cg, while Microsoft chose the name High Level Shader Language (HLSL) for their version. Cg and HLSL have virtually identical syntax; they are simply branded differently by each company. Both languages compile shaders for DirectX, and Cg has the additional benefit of compiling shaders for the OpenGL framework.

Note

The Open Graphics Library, a.k.a. OpenGL, is an open-standard, cross-platform 2D/3D graphics API.

These higher-level languages are based on the C language (in fact, the name Cg stands for C for Graphics) and use curly brackets, semicolons, and other familiar C-style syntax. HLSL also brings high-level concepts like functions, expressions, named variables, and statements to the shader programmer. HLSL debuted in DirectX 9.0 in 2002 and has seen steady updates since its release.

Let’s contrast the assembly language shown in Example 1-1 with the equivalent code in HLSL (Example 1-2).

Example 1-2. Shader written in HLSL

sampler2D ourImage;

float4 main(float2 locationInSource : TEXCOORD) : COLOR
{
  return tex2D(ourImage, locationInSource.xy);
}

Here, the first line declares a variable named ourImage, which is the input into the shader. The next line defines a function called main that takes a single parameter and returns a value. That return value is vital, as it is the output of the pixel shader: the float4 represents the RGBA values assigned to the pixel shown on the screen.

This is about the simplest pixel shader imaginable, and trust me, there are more details ahead. Consider it a preliminary look at shader code; there are detailed discussions of HLSL syntax throughout the remainder of this book.

Note

This is the first HLSL example in the book but it should be obvious to anyone with a modicum of programming experience that the HLSL version is easier to read and understand than the assembly language version.
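
Before moving on, here is one more tiny example showing how little extra HLSL it takes to turn that pass-through shader into a real effect. The following sketch converts the sampled color to grayscale; the register(s0) binding and the luminance weights are my illustrative additions, not part of Example 1-2.

sampler2D ourImage : register(s0);

float4 main(float2 locationInSource : TEXCOORD) : COLOR
{
  // Sample the source pixel, then weight its RGB channels by
  // approximate luminance coefficients to produce a gray value.
  float4 color = tex2D(ourImage, locationInSource);
  float gray = dot(color.rgb, float3(0.30, 0.59, 0.11));
  return float4(gray, gray, gray, color.a);
}

The only new ideas here are the dot intrinsic and the float3 literal; everything else has the same shape as Example 1-2.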

Understanding the Graphics Pipeline

HLSL is the shader programming language for Direct3D, which is a part of Microsoft’s DirectX API. Appendix A contains a detailed account of Direct3D and the graphics-programming pipeline. What follows is a simplified account of the important parts of the shader workflow.

Understanding pixel shaders in the XAML world requires a quick look at how they work in their original Direct3D world. Building a 3D object starts with defining a model. In DirectX, a model (a.k.a. mesh) is a mathematical representation of a 3D object. These meshes are defined as arrays of vertices, and this vertex map becomes the initial input into the rendering pipeline.

Note

If you studied geometry, you've seen the term vertex. In solid geometry, a vertex is a point where three or more faces meet, such as the corner of a cube.

In the DirectX realm, however, a vertex is more than a 3D point. It represents a location in 3D space, so it must have x, y, and z coordinate information, and it may also be defined with color, texture, and lighting characteristics.
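
To make that concrete, here is a hypothetical vertex layout written as an HLSL input structure; the structure name, fields, and semantics are illustrative choices rather than a required format.

struct VertexInput
{
  float3 position : POSITION;   // x, y, z location in space
  float3 normal   : NORMAL;     // surface direction, used for lighting
  float4 color    : COLOR0;     // per-vertex color
  float2 texCoord : TEXCOORD0;  // texture coordinates
};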

The 3D model is not viewable on screen without conversion. Currently, the two most popular conversion techniques are ray tracing and rasterization. Rasterization is widespread on modern GPUs because it is fast, enabling the high frame rates that computer games demand.

As I mentioned before, the DirectX graphics pipeline is complex, but for illustration purposes, I'll whittle it down to these few components (Figure 1-1).

Figure 1-1. Three DirectX pipeline components

DirectX injects two other important components into this pipeline. Between the model and the rasterizer lives the vertex shader (Figure 1-2). Vertex shaders are algorithms that transform the vertex information stored in the model before handoff to the rasterizer.

Figure 1-2. The vertex shader in the pipeline

Vertex shaders get the first opportunity to change the initial model. A vertex shader simply changes the values of the data, so that a vertex emerges with a different texture, a different color, or a different position in space. Vertex shaders are a popular way to distort the original shape and are used to apply the first lighting pass to the object. The output of this stage is passed to the rasterizer. At this point in the pipeline, the rasterized data is ready for the computer display, and this is where the pixel shader, if there is one, goes to work.
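
As a minimal sketch of what such a transformation looks like in HLSL, the following vertex shader moves each vertex into screen space and passes its color through unchanged. The worldViewProjection matrix is an assumed application-supplied constant, not something DirectX provides automatically.

float4x4 worldViewProjection;

struct VertexOutput
{
  float4 position : POSITION;   // vertex position in screen space
  float4 color    : COLOR0;     // color handed on to the rasterizer
};

VertexOutput main(float4 position : POSITION, float4 color : COLOR0)
{
  VertexOutput output;
  // mul applies the combined world, view, and projection transform.
  output.position = mul(position, worldViewProjection);
  output.color = color;
  return output;
}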

The pixel shader examines each rasterized pixel (Figure 1-3), applies the shader algorithm, and outputs the final color value. Pixel shaders are frequently used to blend additional textures with the original raster image, and they excel at color modification and image distortion. If you want to apply a subtle leather texture to an image, the pixel shader is your tool.

Figure 1-3. The pixel shader in the pipeline
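
Here is a hedged sketch of that texture-blending idea in HLSL. The overlayImage sampler and both register bindings are my assumptions about how a host application might wire things up; they are not prescribed by DirectX.

sampler2D ourImage : register(s0);
sampler2D overlayImage : register(s1);

float4 main(float2 locationInSource : TEXCOORD) : COLOR
{
  float4 base = tex2D(ourImage, locationInSource);
  float4 overlay = tex2D(overlayImage, locationInSource);
  // lerp mixes the two colors; a weight of 0.25 keeps the overlay
  // subtle, as you would want for a faint leather texture.
  return lerp(base, overlay, 0.25);
}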

XAML and Shaders

Now that you've learned the fundamentals of the DirectX pipeline, you're ready to take a closer look at how Silverlight and WPF use shaders. Let's examine what happens in the WPF world first. In WPF, the underlying graphics engine is DirectX. That means that even on a simple business form consisting of a few text controls, the screen output travels through the DirectX pipeline (the very same pipeline described above). WPF takes your XAML UI tree and works its magic on it, instantiating the elements, configuring bindings, and performing other essential tasks. Once it has the UI ready, it passes it off to DirectX, which rasterizes the image as described earlier. Here's what the process looked like in the first release of WPF (Figure 1-4).

Figure 1-4. WPF 3.0 render process

As you can see, there were no vertex or pixel shaders available. It took another couple of years for Microsoft to add shaders to the mix. Pixel shaders appeared in .NET 3.5 SP1 in 2008, and now the process looks like this (Figure 1-5).

Figure 1-5. WPF 3.5 adds pixel shaders to the render process

Notice how the end of this pipeline is identical to the 3D model pipeline cited earlier. As you can see, the input data for a pixel shader is the output from the rasterizer. It really doesn't matter to the shader whether that information is a rasterized version of a complex 3D shape or the output of a XAML visual tree; the shader works the same way for both, since it is only 2D information at this point.

You might notice that there are no vertex shaders in the WPF pipeline. That's not an omission on my part: vertex shaders are not available to WPF, and there are no plans to add them. The likely reason for this gap was the release of XNA, Microsoft's managed game development platform. XNA has a tight relationship with DirectX/Direct3D and treats 3D models nearly the same as native DirectX.

Don’t be too sad over the loss of the vertex shader—pixel shaders are still a powerful technique and can create a variety of useful effects. In fact, since current PC hardware is so powerful, game developers often prefer using pixel shaders for lighting calculations, a job that used to be handled by vertex shaders.

Silverlight is similar to WPF in many respects when it comes to shaders: it supports pixel shaders, but it doesn't support vertex shaders directly. Instead, it uses XNA integration for 3D rendering. Rather than write their own 3D engine, the Silverlight team chose to embrace the XNA framework and integrate it into their specifications. If you are an experienced XNA developer, you should have no problem adapting to the Silverlight version.

In Silverlight, pixel shaders are always executed on the CPU. In fact, the rasterizer also runs on the CPU.

WPF, on the other hand, runs its shaders on the GPU, falling back to the CPU only in rare cases. Because Silverlight uses the CPU, you might worry about performance. You may suspect that Silverlight is slower when processing shaders, and you'd be correct. Silverlight mitigates some of the performance woes by running shaders on multiple cores (when available) and by using the CPU's fast SSE instruction set. Yes, Silverlight shaders are slower than their WPF counterparts. When it comes to pixel manipulation, though, Silverlight shaders are still the fastest option, beating alternatives like WriteableBitmap by a substantial margin. If you want to see the performance ramifications for yourself, René Schulte has an illuminating Silverlight performance demo that you should check out when you have the time: http://kodierer.blogspot.com/2009/08/silverlight-3-writeablebitmap.html

Summary

Pixel shaders have revolutionized the computer graphics industry. The powerful special effects and digital landscapes shown in modern movies and games would not be possible without them. Adobe Photoshop and other designer applications are jammed with effects that are implemented with pixel shaders. They are a magnificent way to create your own custom effects. Granted, the HLSL syntax is a bit cumbersome and difficult to understand at first, but it’s worth learning. Once you master HLSL, you can create shaders for DirectX, XNA, WPF, Silverlight, and Windows 8 Metro. In the next chapter, I’ll show you how to create your first XAML shader project. By the end of this book, you’ll be able to add the title “HLSL ninja” to your resume.
