The second assignment starts working towards the single-interface, multiple-implementation model that we will be following for the features we add to the engine. I had to take existing code out of the “Graphics” class and refactor it into two classes: “Effect” and “Sprite”. The Effect class handles all the shader-related operations, like loading shaders, binding them, and any cleanup that might be required on exit. The Sprite class is similar to Effect, but instead of shaders it deals with vertex-related data.

The general approach I took was to first set up the interface (header file) for each class. The header files contain Direct3D (D3D) and OpenGL (OGL) specific code which I conditionally compile using preprocessor ‘#if’ blocks; this allows any code that accesses the interface to do so without having to know which version (D3D or OGL) will be used. I didn’t want to use the same approach for the CPP files, because large blocks of code inside #if blocks can get unruly. Instead, I created multiple CPP files containing the platform-specific implementations. The code that went into the platform-specific files for the Effect class was mostly the same on both platforms, so I ended up creating a common CPP file for that code. There was no such luck for the Sprite class, however.
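To give an idea of what that looks like, here is a minimal sketch of a header set up this way; the PLATFORM_D3D / PLATFORM_GL macros and the member names are my own illustrations, not the engine's exact code:

```cpp
// Sprite.h -- a minimal sketch of the single-interface approach.
// The PLATFORM_D3D / PLATFORM_GL macros and member names are illustrative.

#if defined( PLATFORM_D3D )
    struct ID3D11Buffer;    // forward declaration keeps the header light
#endif

namespace Graphics
{
    class Sprite
    {
    public:
        // The interface is identical on every platform
        bool Initialize();
        void Draw() const;
        void CleanUp();

    private:
        // Only the data members differ per platform
#if defined( PLATFORM_D3D )
        ID3D11Buffer* m_vertexBuffer = nullptr;
#elif defined( PLATFORM_GL )
        unsigned int m_vertexArrayId = 0;    // a GLuint
        unsigned int m_vertexBufferId = 0;   // a GLuint
#endif
        unsigned int m_vertexCount = 0;
    };
}
```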
Below is a sketch of the bind and draw calls (the s_deviceContext variable and VertexFormat struct are illustrative names, not necessarily the engine's exact ones).
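```cpp
#if defined( PLATFORM_D3D )
void Graphics::Sprite::Draw() const
{
    // Bind the vertex buffer and issue the draw call (D3D11)
    const unsigned int stride = sizeof( VertexFormat );  // hypothetical vertex struct
    const unsigned int offset = 0;
    s_deviceContext->IASetVertexBuffers( 0, 1, &m_vertexBuffer, &stride, &offset );
    s_deviceContext->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST );
    s_deviceContext->Draw( m_vertexCount, 0 );
}
#elif defined( PLATFORM_GL )
void Graphics::Sprite::Draw() const
{
    // Bind the vertex array object and issue the draw call (OpenGL)
    glBindVertexArray( m_vertexArrayId );
    glDrawArrays( GL_TRIANGLES, 0, m_vertexCount );
}
#endif
```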
The second part of the assignment involved adding a second triangle to create a square. This part was pretty straightforward: I just had to add more vertices in the correct winding order (left-handed for D3D and right-handed for OGL), as in the sketch below.
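To make the winding difference concrete, here is a sketch of the square's six vertices in the two orders (the positions are just an example):

```cpp
// Six vertices forming a square out of two triangles; note the
// mirrored winding order between the two platforms.
#if defined( PLATFORM_D3D )
    // Left-handed: clockwise vertices are front-facing
    const float vertices[][2] = {
        { 0.0f, 0.0f }, { 0.0f, 1.0f }, { 1.0f, 1.0f },  // triangle 1
        { 0.0f, 0.0f }, { 1.0f, 1.0f }, { 1.0f, 0.0f },  // triangle 2
    };
#elif defined( PLATFORM_GL )
    // Right-handed: counter-clockwise vertices are front-facing
    const float vertices[][2] = {
        { 0.0f, 0.0f }, { 1.0f, 1.0f }, { 0.0f, 1.0f },  // triangle 1
        { 0.0f, 0.0f }, { 1.0f, 0.0f }, { 1.0f, 1.0f },  // triangle 2
    };
#endif
```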
Here is the resulting square in all its majesty.
I also learnt how to perform GPU captures for D3D and OGL. D3D captures can be done from within Visual Studio, but I had to use a third-party tool called RenderDoc for OGL. GPU captures allow you to see what the GPU sees, so you can make sure that the data you are sending is being interpreted correctly after it passes through the shaders.
The first two pictures are screenshots of the D3D capture using the VS debugger. The first one shows the output of the ClearRenderTargetView function call, which clears the screen, and the second shows the result of the Draw call; I’ve changed the view so that the wireframe (the output of the vertex shader) is shown instead of the whole sprite.
The next three are from RenderDoc, showing the OGL versions of the same function calls.
As part of the optional challenges, I changed the timescale used in the sprite fragment shader from simulation time to system time. Since we primarily use sprites in the UI, using simulation time directly links the UI's animation cycle to the one used in game, which can cause problems when the simulation time is manipulated for gameplay purposes. For example, when the game is paused the simulation time is set to 0, which would also pause all the fragment shader animations in the UI, such as the main menu; that is not what we want. There are times when we do want this effect, though; a simple example would be 2D sprites used in world space, like diegetic UI placed in the world (a pick-up icon hovering above an item, etc.).
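A sketch of the idea on the C++ side: the engine submits both clocks to the shader constants, and the sprite fragment shader reads the system-time value. The struct and field names here are illustrative, not the engine's actual ones.

```cpp
// Illustrative per-frame constant data: both clocks are submitted,
// and each shader picks whichever is appropriate.
struct sPerFrameConstants
{
    float g_elapsedSecondCount_systemTime = 0.0f;      // keeps ticking while paused
    float g_elapsedSecondCount_simulationTime = 0.0f;  // frozen/scaled with gameplay
};

void SubmitFrameTimes( sPerFrameConstants& io_constants,
                       const float i_systemTime, const float i_simulationTime )
{
    // UI sprite shaders animate off system time so pausing the game
    // (simulation time scale = 0) doesn't freeze the menus;
    // world-space sprites can keep using simulation time.
    io_constants.g_elapsedSecondCount_systemTime = i_systemTime;
    io_constants.g_elapsedSecondCount_simulationTime = i_simulationTime;
}
```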
I decided to have some fun and use my newfound knowledge to build a house using triangles.
Looking at what is left in the Graphics class implementations (D3D and OGL) after removing the Effect and Sprite code, the main differences are the render target view, which is specific to D3D, and some platform-specific code related to clearing the screen and the swap chain. I can see these also being abstracted behind a platform-independent interface called “RenderTools” or something like that, so that the Graphics class can have a single implementation for both D3D and OGL that calls into the common interface to perform these tasks.
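A rough sketch of what that “RenderTools” interface could look like; everything beyond the name is speculative on my part:

```cpp
// RenderTools.h -- a speculative sketch of a platform-independent interface
// for the code remaining in Graphics; the function names are illustrative.
namespace Graphics
{
    namespace RenderTools
    {
        // Create the platform's render target / back buffer views
        bool InitializeViews( const unsigned int i_width, const unsigned int i_height );

        // Clear the screen to a solid color before rendering a frame
        void ClearImageBuffer( const float i_r, const float i_g, const float i_b );

        // Show the rendered frame (swap chain Present() on D3D, SwapBuffers() on OGL)
        void Present();

        // Release any platform-specific resources on exit
        void CleanUp();
    }
}
```

With something like this in place, Graphics could have a single shared CPP file that clears the screen, draws the sprites, and presents the frame through the common interface.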