Today’s the day! Umbrella Mondays launches tonight at 6pm as part of the Stout Game Expo! If you’re in the area, come down to the Memorial Student Center and check us out!
The game will be coming online very soon! Keep an eye on our social media channels for when it will be available for download.
Casey here today to talk about additive scene loading. Today’s post will be a bit longer than most, both because of the high level of interest in this topic and because of its complexity.
Additive scene loading in Unity is much like the layers feature in Photoshop. Usually in Unity, everything is contained within one scene, but with an additive scene structure, you can have many individual scenes for your level, even separate scenes for your menu, your character, and your lighting. To make your game, these separate scenes are all combined and layered on top of one another, instead of having to include everything in one scene.
Each room is contained in its own scene, but all of the scenes share lighting data and master scripts
Additive scenes are powerful tools, mainly used in open-world games to create a seamless experience without loading screens while the player explores a larger level. The benefits sound attractive, but to do this structure well, you need to put in a lot of planning and forethought. You will also need to communicate with your team about how to test and continue developing the game; when you change the entire loading structure, testing can become difficult.
Before you start your adventure into additive scene loading, you need to ask yourself some design questions:
Is the idea of a seamless experience important to the core design?
Is my team willing to learn how to work with this new structure?
Is most of my development team’s code modular, getting references only through Unity’s ‘GameObject.FindGameObjectWithTag’ method (or is my team willing to change our code to do this)?
What scene will be the active scene? / How will I trigger a level segment to begin to load/unload?
What type of lighting are we using and where will the lighting data be stored?
What is the processing power of the minimum required specs to play our game?
Am I okay with small amounts of game stutter if I don’t heavily optimize the code and the scene?
If you said yes to everything, or have a plan for how to answer everything, then I recommend you start playing around with the idea of additive scene loading. I’m not gonna teach you everything here, but Unity’s SceneManager documentation is a good place to start.
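To make the idea a bit more concrete, here’s a minimal sketch of trigger-based additive loading in C#. The scene name, tag, and trigger setup are placeholders for illustration, not our actual setup:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch: additively load a room when the player walks into a trigger volume,
// and unload it again when they leave. Scene name and tag are hypothetical.
public class RoomLoader : MonoBehaviour
{
    [SerializeField] private string roomSceneName = "Room_Library";

    private void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player") && !SceneManager.GetSceneByName(roomSceneName).isLoaded)
            StartCoroutine(LoadRoom());
    }

    private void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player") && SceneManager.GetSceneByName(roomSceneName).isLoaded)
            SceneManager.UnloadSceneAsync(roomSceneName);
    }

    private IEnumerator LoadRoom()
    {
        // LoadSceneMode.Additive layers the room on top of whatever is already open.
        AsyncOperation op = SceneManager.LoadSceneAsync(roomSceneName, LoadSceneMode.Additive);
        while (!op.isDone)
            yield return null;
    }
}
```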
We just recently implemented additive scenes in Umbrella Mondays with moderate success. It added some temporary confusion and slowed our pipeline while everyone acclimated, but additive scenes have given us fewer merge conflicts because everyone can work in a different scene.
Overall our game’s scene flow structure is as follows:
*This is still a work in progress, structure subject to change!
*Note that the Build Index of splash should always be 0. Unity tries to load index 0 differently than other scene indexes because it assumes index 0 is the splash scene.
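One note on the “active scene” question above: in Unity, the active scene supplies the lighting settings and is where newly created objects land by default, so a multi-scene setup usually pins a persistent scene as active once it has loaded. A small sketch (the scene name is hypothetical):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch: once the persistent "Master" scene is loaded, mark it as the active
// scene so its lighting data is used and new GameObjects spawn into it.
public class ActiveSceneSetter : MonoBehaviour
{
    private void Start()
    {
        Scene master = SceneManager.GetSceneByName("Master"); // hypothetical name
        if (master.isLoaded)
            SceneManager.SetActiveScene(master);
    }
}
```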
Now that I’ve sold you on how awesome this structure is, I’d like to provide a couple reasons why you maybe shouldn’t build your game this way. If you’re going to use this structure, you should know some of the difficulties that we’ve run into while working with it and understand that there is a large amount of learning and work needed to implement additive scene loading functionality.
Regarding Cross Scene References:
You cannot directly access game objects in other scenes. The workaround for this is using ‘GameObject.FindGameObjectWithTag(“Tag”)’. Alternatively, you can move an object (including a freshly instantiated copy) into another scene via SceneManager’s “MoveGameObjectToScene()” method, found here. You could also purchase the Advanced Multi-Scene plugin (we don’t use this for Umbrella Mondays).
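A quick sketch of both workarounds (the tag and scene name are placeholders):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch of the two cross-scene workarounds described above.
public class CrossSceneExample : MonoBehaviour
{
    private void Start()
    {
        // Workaround 1: find an object living in another loaded scene by its tag.
        GameObject player = GameObject.FindGameObjectWithTag("Player");
        Debug.Log($"Found {player.name} across scenes");

        // Workaround 2: create (or instantiate) an object, then move it into
        // another loaded scene. Note: only root GameObjects can be moved.
        GameObject marker = new GameObject("Marker");
        SceneManager.MoveGameObjectToScene(marker, SceneManager.GetSceneByName("Master"));
    }
}
```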
Regarding Loading and Optimization:
Unity’s “LoadSceneAsync()” command doesn’t always load smoothly and can cause stuttering. Furthermore, loading times vary depending on hardware, which makes debugging load-time issues difficult and sometimes seemingly random. We found that debugging went easiest when we BUILT our game, ran it on a machine that could barely run it, and saw what broke. Everything needs to be highly optimized for additive scenes to work. In addition, after additive scene loading is implemented, problems can be harder to diagnose in Unity’s Profiler.
You can see how additive scene loading dynamically adds and removes scenes from the game as it runs.
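One technique worth knowing here (a sketch, not a cure-all for stutter): AsyncOperation.allowSceneActivation lets you stream a scene in the background and defer the activation hitch, which runs on the main thread, to a quieter moment:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch: hold a scene at ~90% loaded and activate it only when we choose,
// since activation is usually where the visible hitch happens.
public class DeferredSceneLoad : MonoBehaviour
{
    public IEnumerator Preload(string sceneName)
    {
        AsyncOperation op = SceneManager.LoadSceneAsync(sceneName, LoadSceneMode.Additive);
        op.allowSceneActivation = false;

        // While activation is deferred, Unity reports progress only up to 0.9.
        while (op.progress < 0.9f)
            yield return null;

        // ...pick a quiet moment (player idle, camera fade, etc.), then:
        op.allowSceneActivation = true;
    }
}
```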
It should also be noted that additive scenes are a relatively new and underdeveloped feature in Unity, yet they’re quickly becoming something of an industry standard. Some game companies work with Unity to modify and customize the Unity source code to make loading work better, so the high-end tools remain proprietary to the companies that created them, putting this solution out of reach of most small developers. Each loading scheme must be tailored and highly optimized for its project, and some optimization can only be done by higher-end studios, because getting a better loading structure requires access to Unity’s source code.
In summary, choosing to incorporate additive scene loading can have a very large impact on a project, as it affects how the game performs and fundamentally functions. Adding this loading functionality requires a lot of work, care, and forethought to reach an acceptable level, and it can cause plenty of strange issues (such as objects being duplicated into scenes, or scenes registering a change upon loading when nothing has changed).
If you’re willing to tackle the challenges, additive scene loading gives a professional feel to a game, but oftentimes isn’t necessary and might wind up being more trouble than it’s worth. Consider the needs of your project, as well as your own interests and ability to tackle all of the problems that may arise.
That being said, I’d like to share some useful videos I found that aid in the concepting and implementation of these ideas and structures, if you’re interested in learning more!
We’re releasing Umbrella Mondays in EXACTLY ONE WEEK! (Well, six days and nineteen hours, but who’s counting?). Come check us out at Stout Game Expo (SGX) on April 30th, from 6-9pm in the Memorial Student Center Great Hall at UW-Stout!
Much like Fella, this model used concept art as the basis for its design and for blocking out shapes in ZBrush. After refinement and tweaking, the base sculpt is exported as an ‘.obj’ file and brought into Maya.
In Maya, the next step is redefining the topology using the Quad Draw tool. Retopology at this stage leads to a significantly lower polygon count and better flow of geometry. Once the new topology is ready, UV mapping starts immediately.
Each one of the white lines is the edge of a UV
Hand-placed polygons produce user-friendly UV-mapping, making it easier for texture artists in the long run.
Once the UVs are finished, the model can proceed into texturing and rigging. The rig is the key to making Cornelius look and feel alive, otherwise he’s just a pile of static vertices and polygons.
Final poly count: 7137
With all the pieces (model, textures, and animation) coming together in the Unity engine, Cornelius can be right at home.
An idle animation in-engine
Now Fella can approach Cornelius and talk to him about life or whatever dialogue we need. This was a big step towards a living town.
INCOMING INFORMATION ABOUT SGX 2018
Our team is in the final stretch of releasing Umbrella Mondays! We will be showcasing our game this month, on April 30th, in the Great Hall at the University of Wisconsin-Stout for SGX 2018! The convention hosts a multitude of games, from board games to video games, and runs from 6:00 PM to 9:00 PM. This event is free! We’ll keep updating our social media channels as we approach that time!
Happy Monday! My name is Maria, and I’m one of the programmers of Turnip Town. Today, I’m writing about the game’s audio design and implementation.
Back in September, when our game design team was still considering two different game ideas, we had to prepare extensive presentations to pitch them. As part of the Umbrella Mondays pitch, I created a list of musical characteristics to guide the creation of our audio assets, including both music and sound effects.
After we chose Umbrella Mondays, I further developed this document and called it the “Audio Style Guide.” The core ideas driving the audio in Umbrella Mondays are that it should be therapeutic, relaxing, melodic, and encouraging to the player. As a musician myself, I was able to define detailed characteristics for the melody, texture, harmony, rhythm, and timbre of the intended music design.
In January, our professor contacted Berklee College of Music in Boston, MA, with the hope of starting a collaboration between our class and their composition students. Long story short, two fantastic video game music composition students joined our team! Their names are Courtney and Lisa.
As self-proclaimed Audio Lead for Umbrella Mondays, I created other documents to help our collaboration this semester: an asset list of all sound effects (SFX) and music tracks, and, most importantly, a detailed spreadsheet of all the asset descriptions, organized into categories. This spreadsheet also serves as our progress tracker and a way to figure out which assets have priority over others.
The Audio Asset Descriptions & Progress Tracker Spreadsheet! (Yes, there’s a lot going on…)
During weekly Skype calls, the two Berklee composers, our Design Lead, and I discuss what various SFX and music tracks should sound like, give feedback to Lisa and Courtney on the assets they create, and talk about how to implement audio in the game. So far, every meeting has gone well, and I am continually impressed with their enthusiasm and quick turnaround on asset creation.
A couple weeks ago, I attended the Game Developers Conference and had the privilege of meeting a few other audio programmers and composers. It was great to talk with them about implementing music and sound effects in games, which is what I will be working on until the end of the semester!
The little floating speaker is what audio sources look like in the Unity game engine
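To give a small taste of the implementation work ahead, here’s a minimal sketch of one way a sound effect might be hooked up in Unity (the component and field names are made up for illustration):

```csharp
using UnityEngine;

// Sketch: an AudioSource on a character plays a one-shot footstep clip,
// e.g. triggered by an animation event on the walk cycle.
[RequireComponent(typeof(AudioSource))]
public class FootstepAudio : MonoBehaviour
{
    [SerializeField] private AudioClip footstepClip; // assigned in the Inspector

    private AudioSource source;

    private void Awake()
    {
        source = GetComponent<AudioSource>();
    }

    public void PlayFootstep()
    {
        // A little random pitch variation keeps repeated steps from sounding robotic.
        source.pitch = Random.Range(0.95f, 1.05f);
        source.PlayOneShot(footstepClip);
    }
}
```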
For all those fellow college students reading our blog nearing the end of another semester, keep going! We can do it! Enjoy the rest of your week!
Hey everyone! This is Spencer! We’re diving into the technical side once again; today we’re talking a little bit about shaders. I mentioned them briefly the last time I wrote, so this is a good chance to go into more detail on what shaders are! Though they’re not my personal area of expertise, I have been working on a custom shader for our puddles that sends out small, colored waves when Fella steps into them, and that also creates rain splashes.
The shader randomly generates points for ripples
At their most basic, shaders are powerful graphics programs that calculate how an object is rendered (i.e. how light interacts with it). Shader techniques have expanded beyond shading into other purposes, but today we will focus on rendering. Shaders are especially powerful because of their speed: they are made to run on the computer’s GPU (Graphics Processing Unit) in parallel, meaning many instances of the shader run at once, each producing a piece of the whole image or object, which is much faster than going one by one. In their ‘simplest’ form, shaders come in two stages: the vertex shader and the fragment shader. The vertices of a mesh are the mathematical points that connect to form faces, and all of those faces combine to make the object.
Each yellow dot is a vertex
The orange square is a face
Fragments are the rendered areas in between the vertices. The vertex shader comes first, and it primarily calculates data such as positions, rotations, and normals. This data is then passed on to the fragment shader, which takes each position in between the vertices and interpolates data from the closest surrounding vertices to determine the value that fragment should have. Fragment shaders set the final colors based on where the interpolated position falls on the texture map, calculate final lighting based on the normal data passed in, and handle other final details. Thanks to the power of interpolation, the fragment shader lets the programmer create fine-detail effects on the mesh where no vertices exist.
In the puddle shader, most of the processing happens in the fragment shader. The vertex shader passes the vertices’ position and normal data to the fragment shader, which also takes in variable data: we pass the shader an impact position (like Fella’s foot colliding with a puddle) and a distance. After calculating a distance from the impact point using the fragment’s interpolated position, the shader colors the fragment bright if that distance falls between a minimum and maximum. The normals are also changed to a random direction, then slowly eased back to the actual normal of the plane, giving a nice lighting effect where the wave disturbs the reflected light for a moment.
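For the curious, here’s a hedged sketch of what the C# side feeding such a shader could look like. The shader property names (“_ImpactPos”, “_RippleDistance”) and the trigger setup are placeholders for illustration, not our actual implementation:

```csharp
using UnityEngine;

// Sketch: when Fella's foot enters the puddle's trigger, hand the impact point
// to the material and grow the ripple distance over time; the fragment shader
// brightens fragments near that distance.
public class PuddleRipple : MonoBehaviour
{
    [SerializeField] private float rippleSpeed = 1.5f;

    private Material puddleMaterial;
    private float rippleDistance;

    private void Awake()
    {
        // .material gives this puddle its own material instance to animate.
        puddleMaterial = GetComponent<Renderer>().material;
    }

    private void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Player")) return;

        rippleDistance = 0f; // restart the wave at the new impact point
        puddleMaterial.SetVector("_ImpactPos", other.transform.position);
    }

    private void Update()
    {
        rippleDistance += rippleSpeed * Time.deltaTime;
        puddleMaterial.SetFloat("_RippleDistance", rippleDistance);
    }
}
```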
The vertex shader could also have been used to raise the vertex positions based on the wave location, making an actual physical wave in 3D; however, we decided against that for our game’s aesthetic and to keep the vertex count down (it would take a lot of vertices to get a smooth effect).
Fella’s footsteps create ripples as she runs
Shaders are running in every game with amazing effects, and are used in creating water, reflections, refractions, and more. Nearly every modern game relies on their power to define its look and impress audiences. This is just a small example! Many shaders have several layers of effects and calculations!
Thanks for reading, and we look forward to our last month of development!