Game Launch!

[Image: SGX poster]

Today’s the day! Umbrella Mondays is going to launch tonight, at 6pm, as part of the Stout Game Expo! If you’re in the area, come down to the Memorial Student Center and check us out!

The game will be coming online very soon! Keep an eye on our social media channels for when it will be available for download.

In the meantime, check out our launch trailer!


Thanks so much for all of your support!

Stay dry!



Additive Scenes Crash Course

Happy Monday everyone!

Casey here today to talk about additive scene loading. Today's post will be a bit longer than most, both because of the high level of interest in this topic and because of its complexity.

Additive scene loading in Unity is much like the layers feature in Photoshop. Usually in Unity, everything is contained within one scene, but with an additive scene structure, you can have many individual scenes for your level, even separate scenes for your menu, your character, and your lighting. To make your game, these separate scenes are combined and layered on top of one another instead of everything having to live in a single scene.

[Image: each room is contained in its own scene, but all of the scenes share lighting data and master scripts]

Additive scenes are powerful tools, mainly used in open world games to create a seamless experience without loading screens as the player explores a larger level. The benefits sound attractive, but doing this structure well takes a lot of planning and forethought. You will also need to communicate with your team about how to test and continue developing the game, because changing the entire loading structure can make testing difficult.

Before you start your adventure into additive scene loading, you need to ask yourself some design questions:

  1. Is the idea of a seamless experience important to the core design?
  2. Is my team willing to learn how to work with this new structure?
  3. Is most of my development team's code modular, getting references only through Unity's 'GameObject.FindGameObjectWithTag' method (or is my team willing to change our code to do this)?
  4. Which scene will be the active scene, and how will I trigger a level segment to begin loading or unloading?
  5. What type of lighting are we using and where will the lighting data be stored?
  6. What is the processing power of the minimum required specs to play our game?
  7. Am I okay with small amounts of game stutter if I don’t heavily optimize the code and the scene?

If you said yes to everything, or have a plan for how you'll answer everything, then I recommend you start playing around with additive scene loading. I won't give a full tutorial here, but I recommend you start by checking out this documentation (a minimal code sketch follows the links):

Core to Additive Scenes: LoadSceneAsync            

General Scene Managing: SceneManager
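To make the idea concrete, here's a minimal sketch of what an additive load can look like in C#. The scene name and component are hypothetical placeholders; treat this as a starting point, not our exact production code:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

// Hypothetical example: stream a room in on top of the scenes already loaded.
public class RoomStreamer : MonoBehaviour
{
    [SerializeField] private string roomSceneName = "Room_TownSquare"; // hypothetical scene name

    public void LoadRoom()
    {
        StartCoroutine(LoadRoomAsync());
    }

    private IEnumerator LoadRoomAsync()
    {
        // LoadSceneMode.Additive layers the new scene instead of replacing everything
        yield return SceneManager.LoadSceneAsync(roomSceneName, LoadSceneMode.Additive);

        // Optionally make the new scene the "active" scene (see question 4 above)
        SceneManager.SetActiveScene(SceneManager.GetSceneByName(roomSceneName));

        // Later, unload with SceneManager.UnloadSceneAsync(roomSceneName)
    }
}
```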

We just recently implemented additive scenes in Umbrella Mondays with moderate success. It added some temporary confusion and slowed down our pipeline while everyone acclimated, but additive scenes have given us fewer merge conflicts because everyone can work in a different scene.

Overall our game’s scene flow structure is as follows:

[Image: our game's scene flow structure]
*This is still a work in progress, structure subject to change!

*Note that the build index of the splash scene should always be 0. Unity loads index 0 differently from other scene indexes because it assumes index 0 is the splash scene.

Now that I've sold you on how awesome this structure is, I'd like to offer a couple of reasons why you maybe shouldn't build your game this way. If you're going to use this structure, you should know some of the difficulties we've run into while working with it, and understand that implementing additive scene loading takes a large amount of learning and work.

Regarding Cross Scene References:

You cannot directly access game objects that live in other scenes. The workaround for this is Unity's 'GameObject.FindGameObjectWithTag("Tag")' method. Alternatively, you can move an object into another loaded scene via SceneManager's 'MoveGameObjectToScene()' method. You could also purchase the Advanced Multi-Scene plugin (we don't use this for Umbrella Mondays).
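As a rough illustration of both workarounds (the tag and scene name here are hypothetical):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

public class CrossSceneReferences : MonoBehaviour
{
    private void Start()
    {
        // Workaround 1: grab an object from any loaded scene by its tag
        GameObject player = GameObject.FindGameObjectWithTag("Player"); // hypothetical tag

        // Workaround 2: move this object into another loaded scene
        // (MoveGameObjectToScene only works on root objects)
        Scene target = SceneManager.GetSceneByName("Room_TownSquare"); // hypothetical scene name
        if (target.IsValid() && target.isLoaded)
        {
            SceneManager.MoveGameObjectToScene(transform.root.gameObject, target);
        }
    }
}
```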

Regarding Loading and Optimization:

Unity's 'LoadSceneAsync()' call doesn't load perfectly smoothly and can cause stuttering, since scene activation still does work on the main thread. Furthermore, loading times vary depending on hardware, which makes load-time issues difficult to debug and sometimes seemingly random. We found the debugging process went easiest when we BUILT our game, ran it on a machine that could barely handle it, and saw what broke. Everything needs to be highly optimized in order for additive scenes to work. In addition, after additive scene loading is implemented, problems can be harder to diagnose with Unity's Profiler.
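One common Unity mitigation worth sketching (we're not claiming this is exactly what Umbrella Mondays ships) is to load in the background but hold off activation, the stutter-prone step, until a quiet moment:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

public class DeferredActivationLoader : MonoBehaviour
{
    private IEnumerator LoadDeferred(string sceneName)
    {
        AsyncOperation op = SceneManager.LoadSceneAsync(sceneName, LoadSceneMode.Additive);
        op.allowSceneActivation = false; // keep loading data, but don't activate yet

        // With activation held back, progress stalls at 0.9 once loading is done
        while (op.progress < 0.9f)
        {
            yield return null;
        }

        // ...wait for a good moment: a fade, a doorway, a pause menu...

        op.allowSceneActivation = true; // the main-thread activation work happens here
        yield return op;
    }
}
```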

[Image: additive scene loading dynamically adds and removes scenes from the game as it runs]

It should also be noted that additive scenes are a relatively new and underdeveloped feature in Unity, yet they're quickly becoming something of an industry standard. Some game companies work with Unity to modify and customize the engine source code to make loading work better, so the highest-end tools remain proprietary to the companies that created them, putting this kind of solution out of reach for most small developers. Each loading scheme must be tailored and highly optimized for its project, and some of that optimization can only be done by higher-end studios, since a better loading structure can require access to Unity's source code.

In summary, choosing to incorporate additive scene loading can have a very large impact on a project, as it affects how the game performs and how it fundamentally functions. Adding this loading functionality requires a lot of work, care, and forethought to reach an acceptable level, and it can cause lots of strange issues (such as objects being duplicated into scenes, or scenes registering a change on load when nothing has actually changed).

If you’re willing to tackle the challenges, additive scene loading gives a professional feel to a game, but oftentimes isn’t necessary and might wind up being more trouble than it’s worth. Consider the needs of your project, as well as your own interests and ability to tackle all of the problems that may arise.

That being said, if you're interested in learning more, I'd like to share some useful videos I found that help with planning and implementing these ideas and structures!

Playdead's talk about achieving stutter-free 60 FPS in their game 'Inside': https://youtu.be/mQ2KTRn4BMI

Unity Live Training – Additive Level Loading: https://www.youtube.com/watch?v=Drb61_C8-SQ

Multi-Scene editing in a professional setting; great ideas for how to set up your editor for your team: https://www.youtube.com/watch?v=KRmqy22z0SM

Best of luck!

-Casey


We’re releasing Umbrella Mondays in EXACTLY ONE WEEK! (Well, six days and nineteen hours, but who’s counting?). Come check us out at Stout Game Expo (SGX) on April 30th, from 6-9pm in the Memorial Student Center Great Hall at UW-Stout!

The Enhanced Pipeline of Character Design: Cornelius

Happy Monday!

April here to talk about the pipeline of character development once again! Last time, Margaret and I talked about the process we took to develop our protagonist character, Fella. Today will be about the pipeline used for Cornelius.

[Image: 2D concept art of Cornelius in a T-pose, used to inform modeling]

As with Fella, this model used concept art as the basis for its design and for blocking out shapes in ZBrush. After refinement and tweaking, the base sculpt is exported as an '.obj' file and brought into Maya.

In Maya, the next step is redefining the topology using the Quad Draw tool. Retopology at this stage yields a significantly lower polygon count and better geometry flow. Once the new topology is ready, UV mapping begins.

[Image: Cornelius's UVs; each one of the white lines is the edge of a UV]

Hand-placed polygons produce clean, artist-friendly UV maps, which makes texture artists' work easier in the long run.

[Image: Cornelius's UV layout]

Once the UVs are finished, the model can proceed into texturing and rigging. The rig is the key to making Cornelius look and feel alive, otherwise he’s just a pile of static vertices and polygons.

[Image: the rigged Cornelius model; final poly count: 7137]

With all the pieces (model, textures, and animation) coming together in the Unity engine, Cornelius can be right at home.

[Image: an idle animation in-engine]

Now Fella can approach Cornelius and talk to him about life or whatever dialogue we need. This was a big step towards a living town.


INCOMING INFORMATION ABOUT SGX 2018

Our team is in the final stretch of releasing Umbrella Mondays! We will be showcasing our game this month, on April 30th, in the Great Hall at the University of Wisconsin-Stout for SGX 2018! The convention hosts a multitude of games, from board games to video games, runs from 6:00 PM to 9:00 PM, and is free to attend! We'll keep updating our social media channels as we approach that time!

Until next week!

Cheers,

-April

A Harmonious Collaboration

Happy Monday! My name is Maria, and I’m one of the programmers of Turnip Town. Today, I’m writing about the game’s audio design and implementation.

Back in September, when our game design team was still considering two different game ideas, we had to prepare extensive presentations to pitch them. As part of the Umbrella Mondays pitch, I created a list of musical characteristics to guide the creation of audio assets, including both music and sound effects.

After we chose Umbrella Mondays, I further developed this document and called it the "Audio Style Guide." Some of the core ideas driving the audio in Umbrella Mondays are that it should be therapeutic, relaxing, melodic, and encouraging to the player. As a musician myself, I was able to define detailed characteristics for the melody, texture, harmony, rhythm, and timbre of the intended music design.

In January, our professor contacted Berklee College of Music in Boston, MA, with the hope of starting a collaboration between our class and their composition students. Long story short, two fantastic video game music composition students joined our team! Their names are Courtney and Lisa.

As the self-proclaimed Audio Lead for Umbrella Mondays, I created other documents to help our collaboration this semester: an asset list of all sound effects (SFX) and music tracks, and, most importantly, a detailed spreadsheet of all the asset descriptions, organized into categories. This spreadsheet also serves as our progress tracker and a way to figure out which assets have higher priority than others.

[Image: the Audio Asset Descriptions & Progress Tracker spreadsheet (yes, there's a lot going on…)]

During our weekly Skype calls, the two Berklee composers, our Design Lead, and I discuss what various SFX and music tracks should sound like, give Lisa and Courtney feedback on the assets they create, and talk about how to implement the audio in the game. So far, every meeting has gone well, and I am continually impressed with their enthusiasm and quick turnaround on asset creation.

A couple weeks ago, I attended the Game Developers Conference and had the privilege of meeting a few other audio programmers and composers. It was great to talk with them about implementing music and sound effects in games, which is what I will be working on until the end of the semester!

[Image: the little floating speaker is how audio sources appear in the Unity game engine]
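For a small taste of what implementation can look like, here's a minimal, hypothetical sketch of playing a one-shot sound effect in Unity (the clip, method name, and volume are placeholders, not our actual audio code):

```csharp
using UnityEngine;

// Hypothetical example: play a splash when Fella's foot hits a puddle.
public class FootstepSfx : MonoBehaviour
{
    [SerializeField] private AudioClip splashClip; // assigned in the Inspector

    public void PlaySplash()
    {
        // Creates a temporary AudioSource at this position and
        // cleans it up automatically when the clip finishes
        AudioSource.PlayClipAtPoint(splashClip, transform.position, 0.8f);
    }
}
```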

For all those fellow college students reading our blog nearing the end of another semester, keep going! We can do it! Enjoy the rest of your week!

-Maria


Shaders and Puddles

Hey everyone! This is Spencer! We're diving into the technical side once again; today we're talking a little bit about shaders. I mentioned them briefly the last time I wrote, so this is a good chance to go into more detail on what shaders are! Though they're not my personal area of expertise, I've been working on a custom shader for our puddles that sends out small, colored waves when Fella steps into them and also creates the rain splashes.

[Image: the shader randomly generates points for ripples]

At their most basic, shaders are powerful graphics programs that calculate how an object is rendered (i.e. how light interacts with it). More complicated shaders have expanded into non-shading purposes, but today we'll focus on rendering. Shaders are especially powerful because of their speed: they typically run on the computer's GPU (Graphics Processing Unit) in parallel, meaning many instances of the shader run at once, each producing a piece of the whole image or object, which is much faster than working one element at a time. In its simplest form, a shader pipeline has two stages: the vertex shader and the fragment shader. The vertices of a mesh are the mathematical points that connect to form faces, and all of those faces combine to make the object.

The fragments are the rendered areas in between the vertices. The vertex shader runs first, and it primarily calculates data such as positions, rotations, and normals. This data is then passed on to the fragment stage. The fragment shader takes each position in between vertices and interpolates data from the closest surrounding vertices to determine the fragment's value. Fragment shaders set the final colors based on where the interpolated position lands on the texture map, calculate final lighting effects from the normal data passed in, and handle other finishing details. Thanks to interpolation, the fragment shader lets the programmer create fine-detail effects on the mesh even where no vertices exist.

[Image from medium.com. For an even deeper programming explanation of shaders, check out this article: https://medium.com/@nithstong/2d-colored-triangle-in-elm-with-webgl-2a9b2734ce77]

In the puddle shader, most of the processing is done in the fragment shader. The vertex shader passes the vertices' position and normal data to the fragment shader, which also takes in variable data: we pass the shader an impact position (like Fella's foot colliding with a puddle) and a wave distance. After calculating a distance from the fragment's interpolated position to the impact point, the shader colors the fragment bright if that distance falls between a minimum and a maximum. The normals are also shifted in a random direction, then slowly eased back to match the actual normal of the plane, giving a nice lighting effect where the wave disturbs the reflected light for a moment.
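On the C# side, feeding the shader that impact data could look something like this sketch; the property names '_ImpactPos' and '_WaveDistance' are hypothetical stand-ins for whatever the real shader exposes:

```csharp
using UnityEngine;

// Hypothetical example: tell the puddle material where an impact happened,
// then grow the ripple's distance over time.
public class PuddleRipple : MonoBehaviour
{
    [SerializeField] private Material puddleMaterial;
    [SerializeField] private float waveSpeed = 1.5f;

    private float waveDistance;

    public void OnImpact(Vector3 worldPosition)
    {
        waveDistance = 0f;
        puddleMaterial.SetVector("_ImpactPos", worldPosition); // hypothetical property
    }

    private void Update()
    {
        // Expand the ripple each frame; the fragment shader brightens fragments
        // whose distance from _ImpactPos is near this value.
        waveDistance += waveSpeed * Time.deltaTime;
        puddleMaterial.SetFloat("_WaveDistance", waveDistance); // hypothetical property
    }
}
```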

The vertex shader could also have been used to raise the vertex positions based on the wave location, making an actual physical wave in 3D; however, we avoided this choice for our game's aesthetic and to minimize vertex count (it would take a lot of vertices to get a smooth effect).

Shaders run in every game with amazing effects, and are used to create water, reflections, refractions, and more. Nearly every modern game relies on their power to define its look and impress audiences. This is just a small example! Many shaders have several layers of effects and calculations!

[Image: Fella's footsteps create ripples as she runs]


Thanks for reading, and we look forward to our last month of development!

Spencer


Teaser Trailer

Hi all! This week, we’d like to present our teaser trailer. Enjoy!

(Or watch it on YouTube)

The music in our teaser is a result of our collaboration with Berklee College of Music in Massachusetts. We’ve been working with Lisa Jeong and Courtney Clark, and we’re looking forward to implementing more of their wonderful sound design as we keep working.

Everyone else in Turnip Town is still hard at work! We’re all leaving for spring break and GDC in the next couple of weeks, so we’ll be taking a short break from blog posts, but we’ll be back very soon with more updates.

Stay dry!


Making a Texture in Substance Designer

Happy Monday! I'm Evan, and I've been working a lot on texturing the 3D models our other artists have been making. Texturing is the process of adding color, material, and detail to what is often a very flat or simple 3D model. Most of our textures are painted in a program called Substance Painter, but sometimes we need a texture that would be very difficult to make in Substance Painter, or one that has to tile very well with itself (meaning it repeats without any visible seams). These more complicated textures can be made in another program called Substance Designer!

[Image: the Substance Designer interface]

Substance Designer works by connecting nodes, each with a different function; the graphs made in Designer tend to look very complicated, but they can be simpler than they seem. Each node controls one aspect of the texture.

[Image: an example of a Substance Designer node tree for a single material]

A lot of the nodes go into getting the proper shapes that will be used to make the needed textures.

This graph is a pretty simple one; most of the complexity of the final product comes from getting the simple shape to repeat randomly.

[Image: the FX map; if it doesn't repeat randomly, the grass texture won't feel organic]

This was a very complicated part to put together; I had to get some help on it, since the setup felt closer to coding (this particular random generator was created by Vincent Gault). Once it was done, I could use it to turn my single grass blade into a field of grass that tiles with itself, and other artists can then paint the grass texture onto the surfaces of 3D models however they like.

[Image: even though it's completely flat, there's some height information in the texture that makes the grass look 3D]

After the texture was made, all that was left was color, which was just a few additional nodes on the graph.

[Image: the grass texture with color applied]

And that’s a simple grass texture! Then other artists can take this texture and apply it in-game however they like!

[Image: the grass texture painted onto the level]

Have a good one,

Evan
