Final Project Write Up
Game in default state:
Game with clear color feedback:
Game with end state message:
Press Escape to quit the application.
Press the right arrow key to move the player paddle right.
Press the left arrow key to move the player paddle left.
Press the space bar to release the ball.
The main objective of the Final Project was to build a simple game, or a new feature, using the engine code developed throughout the term. I chose the classic arcade game "Breakout" because I was excited to implement this kind of game given the features our engine has. I wanted to implement simple collisions between game objects and display feedback cues to the player. I chose these features because previous assignments required changing the clear background color and swapping textures, and I wanted to put that functionality to use in this project. I also wanted to focus on much more user interactivity: in this project the user can move the game paddle and release the ball by pressing the space bar.
For the most part, the game project fit very well into my existing engine. I use vectors of tuples in which each entry holds four elements: the mesh, the effect, the texture, and the position of an object rendered by the Graphics project. For both the paddle and the ball, adding the four required elements was straightforward. For the collection of bricks that the ball collides with, I did the same but also created a vector of Math::sVector elements to store the position of each breakable brick. This vector is declared in the cExampleGame.cpp file, and an associated function called FillBrickPositions() adds the positions to the vector. To check the distance between the ball and a brick, I implemented a Distance() function in the Math::sVector class that calculates the distance between two vectors; this was necessary to tell whether the ball and a brick had to be updated due to a collision. For each element in the vector of positions, the cExampleGame::Initialize() function creates the brick's mesh and the cExampleGame::SubmitDataToBeRendered() function renders a brick at that position. I thought for a while about the simplest way to show and hide bricks the ball had collided with, and I believe this solution fit well with the structure of the engine; using vectors of tuples for the objects made the process very easy. However, displaying the end state message when the player clears all the bricks, or lets the ball hit the bottom of the screen, was not as easy. This was because I used separate vectors to represent sprites; I should have used the same vector-of-tuples form I used for my meshes. I did spend time on a previous assignment making effects alpha transparent, which allowed me to show end state messages to the player the way I intended.
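The distance check described above can be sketched as follows. This is a minimal sketch, not the engine's actual code: the write-up names Math::sVector and Distance(), but the exact signatures, the collision radius, and the CheckBrickCollisions() helper are assumptions for illustration (the real Distance() may be a member function rather than a free function).

```cpp
// Sketch of the distance-based brick collision described in the write-up.
// Math::sVector and Distance() are modeled on the names in the text; the
// visibility vector and CheckBrickCollisions() are hypothetical helpers.
#include <cmath>
#include <vector>

namespace Math {
    struct sVector {
        float x, y, z;
    };
    // Euclidean distance between two positions
    float Distance(const sVector& a, const sVector& b) {
        const float dx = a.x - b.x;
        const float dy = a.y - b.y;
        const float dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }
}

// Hide any brick whose center is within a collision radius of the ball,
// so SubmitDataToBeRendered() can skip it. Returns the number of bricks
// hit this frame.
size_t CheckBrickCollisions(const Math::sVector& ballPosition,
                            const std::vector<Math::sVector>& brickPositions,
                            std::vector<bool>& brickIsVisible,
                            const float collisionRadius) {
    size_t hits = 0;
    for (size_t i = 0; i < brickPositions.size(); ++i) {
        if (brickIsVisible[i]
            && Math::Distance(ballPosition, brickPositions[i]) < collisionRadius) {
            brickIsVisible[i] = false; // stop submitting this brick for rendering
            ++hits;
        }
    }
    return hits;
}
```

A center-to-center distance test like this treats bricks as circles, which is a simplification; an axis-aligned bounding-box test would match rectangular bricks more closely.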
As mentioned before, I would change how sprite objects are represented in my engine by adding a vector of tuples in which each entry holds the sprite, the effect, and the texture of an object. To change the end state message of the game, I created a variable of type size_t to store the index values of the end state textures. This is not ideal: if I or a designer were to add more textures before the end state textures in the vector of textures, undesired textures would be displayed.
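The proposed change could look something like the sketch below. All names here (cSprite, cEffect, cTexture, the entry alias, the enum) are hypothetical stand-ins for the engine's actual classes; the point is the shape of the data, not the real API.

```cpp
// Sketch of the proposed sprite representation: one tuple entry bundles a
// sprite with its effect and texture, and named indices replace the bare
// size_t offset for the end-state textures. All type names are assumptions.
#include <cstddef>
#include <memory>
#include <tuple>
#include <vector>

struct cSprite {};
struct cEffect {};
struct cTexture {};

// One entry holds everything needed to submit a sprite for rendering,
// mirroring the vector-of-tuples form already used for meshes.
using sSpriteEntry = std::tuple<std::shared_ptr<cSprite>,
                                std::shared_ptr<cEffect>,
                                std::shared_ptr<cTexture>>;

// Named indices survive a designer inserting textures elsewhere in the
// list, unlike a hard-coded numeric offset.
enum class eEndStateTexture : std::size_t {
    Win = 0,   // all bricks cleared
    Lose = 1,  // ball hit the bottom of the screen
};
```

Keeping the effect and texture inside the same entry as the sprite means the end-state message is selected with a single index instead of three that must be kept in sync by hand.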
During the semester I learned many things from completing the assignments. I came to understand the immediate advantages of implementing platform independent interfaces with platform dependent implementations, and I experienced the benefits of this approach by using it to create the interfaces for sprites, effects, textures, and meshes for both the OpenGL and Direct3D platforms. It has also changed the way I plan and structure classes: I now see why it is a wise strategy to split the functionality of classes by platform and keep separate function implementations per platform, because this keeps code readable and easy to maintain in the long run. Extending this software architecture pattern to the shader files made shader programming much easier to debug and left the files less bloated and much more readable. This showed that platform independent interface programming can be extended to many different types of asset programming.
The way in which traditional graphics subjects were used to teach the rendering pipeline really helped me understand the differences between an effect, a sprite, a texture, and a mesh, and why they need to be rendered in a certain order. For example, sprites should always be rendered last because they serve as UI elements, as illustrated in my "Breakout" implementation. I really enjoyed the threading assignment, as it illustrated the precautions a developer has to take when handling the graphical data of objects on separate threads; in this case, the graphical elements were used by an application thread and a render thread. Understanding why clean-up was necessary on both threads was very enlightening: it would never have occurred to me that when a simulation is shut down, data may still be held by one of the threads and has to be properly released.
Learning to use Lua for creating and reading custom vertex and index data really illustrated the convenience of human readable files for tweaking and changing mesh data. I understood how powerful this implementation is because it lets users debug the mesh files and even allows non-developers to change the data.
I was extremely excited to do the exercise of creating binary files from the human readable mesh files, as I learned to decipher binary files to a greater extent than I usually could when looking at one. Going through the process of creating binary files at build time, instead of processing the Lua data at runtime, showed me just how formidable this approach is: it greatly reduced the size of the meshes in memory and significantly decreased the time taken to read the mesh data at runtime. While working on this assignment I learned an invaluable tip that I would never have come up with otherwise: to draw a mesh in Direct3D, reversing the order of the entire array of indices was all that was needed, whereas I had been swapping indices within each triangle in a "for" loop.
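The winding-order tip above can be shown in a few lines. This is a sketch under stated assumptions: a triangle-list index buffer of 16-bit indices, with the function name and signature invented for illustration.

```cpp
// Sketch of the index-reversal tip: for a triangle list, reversing the
// whole index array flips every triangle's winding in one call, replacing
// a per-triangle swap loop. Index type and function name are assumptions.
#include <algorithm>
#include <cstdint>
#include <vector>

// Convert a triangle list authored for OpenGL's counter-clockwise winding
// to Direct3D's clockwise convention (or vice versa).
void ReverseWindingOrder(std::vector<uint16_t>& indices) {
    // Reversing {a,b,c, d,e,f} yields {f,e,d, c,b,a}: every triangle's
    // vertex order is flipped. The triangles are also emitted in reverse
    // draw order, which is harmless for an opaque triangle list.
    std::reverse(indices.begin(), indices.end());
}
```

The equivalence only holds for triangle lists; strips or fans would need different handling.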
Working on both OpenGL and Direct3D made me understand the considerations that have to be taken into account when submitting graphical data to each of them. I now see why they are different, but also how we as developers can plan and design interfaces to account for their differences. This approach is not restricted to these two platforms; platform independent interfaces can be maintained for any number of additional platforms. I chose to provide the necessary function implementations for each platform even when, at times, the implementations were empty. In the long run this kept my engine code readable and maintainable by keeping implementations separate per platform file.
In summary, I believe the assignments were designed to illustrate the rendering pipeline of a game engine and to show how we as developers can make strategic design decisions: creating platform independent interfaces, and making content creation and manipulation more performant by optimizing asset building and runtime rendering.
In terms of software architecture and design, I tend to prefer designing and implementing things as I go, and implementing only what is needed. However, after experiencing the Game Engineering II class, I am more wary of sticking to this approach; I think it is wiser to spend a couple of hours thinking through and planning the requirements a piece of software will face in the future. I would not, however, spend a great deal of time on up-front design, because a feature needs to be tested at some point and it is difficult to plan ahead for every detail the software has to account for. In my opinion, an iterative process, in which a feature can reach a testable state quickly and is flexible enough to accommodate change, is preferable to either over-engineering a feature or spending no time on design beforehand. In summary, I believe there should be a mix of both: a fair amount of time spent designing the feature, and then an implementation focused on what is needed at the moment. I don't think features should be implemented if they are not needed, because the time spent on them is wasted if they go unused. That time would be better spent fixing potential implementation issues in the high priority features, or redesigning the software to fit the developer's needs.
I don't think there is necessarily "good" architecture in an absolute sense. I believe it is more a question of how well the implemented architecture allows for flexible software design and implementation. Another important consideration is how performant the architecture is at what the software is intended to do. If the architecture causes additional processing work that is not required, and a way to optimize the process exists, then the design would be considered bad. Likewise, if the work required to alter assets or re-implement features is cumbersome and risks causing bugs, I would consider that bad architectural design. While implementing the sprite representation for the ExampleGame project, I had to remember the indices into the separate arrays of sprites, effects, and textures; if I misplaced an index I would get undesired outcomes. This design is bad because it led to bugs and decreased readability. It would have been preferable to implement a structure that could be iterated over to assign the effect and texture for each sprite.
A good architecture is one that allows the developer to design and implement code or features that are easy to re-use and iterate upon. If the software is easy to read and each primary part is properly abstracted, adding more features and assets should be seamless. Additionally, a good architecture takes advantage of most, if not all, opportunities to optimize a process, whether that is saving memory or parsing data more effectively. In the engine developed in Game Engineering II, the Graphics project and many others were separated and abstracted to improve readability and let developers focus on the part of the engine a feature required. I also learned in class a much more effective way to re-calculate mesh index data for different platforms, which simplified my code.
The time it took me to complete the final project was 10 hours.