Matt Heiniger is one of our Tech Artists, and he’s here today to walk us through some really kick-ass tech we’re using right now. If you’ve followed us on Instagram, you’ve seen some of the images. Next week we’ll be posting a contest where people in Seattle for PAX West can enter and win the chance to be scanned into our world. Meanwhile, check out this fascinating technology. — Sanya
One of the biggest compliments we get on State of Decay is the diversity of the cast. It features characters with a wide variety of ages, ethnicities, abilities, and sexual orientations. This is no accident. State of Decay is first and foremost designed to simulate the survival fantasy for everyday people. Sure, the game features lots of Zeds that are terrifying, gooey, exciting, and just plain fun. But it’s really a game about people.
We want you to live out your own personal survival fantasy. Part of that is giving you a character who is relatable to you. But by “you”, we mean all of you, not just those of you who are muscular, dark-haired white dudes who look like they were ripped from a Calvin Klein advertisement.
But in order to pull this off, we need to make a lot of characters. More specifically, a lot of faces. But here’s the thing: making human faces is hard. Like, really hard. As social creatures, our brains are hard-wired to analyze tiny details of human faces in microseconds in order to assess another human’s intent. Because of this, any unrealistic feature of a digital face just looks wrong. The viewer may not even be aware of why it looks wrong, just that it does. This phenomenon is known as the uncanny valley. Historically, it has been a concern within the film industry, but as video games continue to increase in fidelity, we too are entering the uncanny valley. As such, we must put an increasing focus on getting facial features just right. Either that, or go highly stylized. But that’s a whole different direction.
Along with having to contend with the uncanny valley, increased fidelity means an increase in production time. Games are getting more and more detailed, but that detail doesn’t come free. Sculpting individual wrinkles on skin is painstaking and time-consuming.
Photogrammetry is an emerging technology that allows us to generate a 3D model of a real object from a series of photographs. Kinda like 3D scanning, but without the fancy and expensive hardware. So how does this wizardry work? I’m glad you asked.
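In essence, the software matches the same surface point across overlapping photos and triangulates its 3D position from the differing camera angles. As a purely illustrative sketch (the toy cameras and the point below are invented for the demo, not anything from our actual pipeline), two-view triangulation looks like this in Python:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from two pixel
    observations x1, x2 and their 3x4 camera projection matrices."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                    # null vector of A = homogeneous 3D point
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point through a 3x4 camera matrix to pixel coords."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: same intrinsics, second camera shifted one unit sideways.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 4.0])   # an invented point "on the face"
X_rec = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
# With noise-free synthetic projections, the point is recovered exactly.
```

With real photos the matches are noisy, so solvers triangulate thousands of points at once and refine the camera poses jointly, which is why more photos from more angles give a cleaner scan.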
We begin by having our subject (in this case Mike Estes from our QA department) sit in a chair surrounded by even, neutral-colored lighting. We take a series of photographs of his face from different angles, usually somewhere around 30 to 40 shots.
From there, we feed the photos into our photogrammetry software of choice, Agisoft PhotoScan, and mask out the portions of the photographs that do not include the subject.
Then the software analyzes each photo for high contrast reference points.
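As a simplified stand-in for what the detector does (the scoring below is a toy gradient test on invented data, not PhotoScan's actual algorithm), you can think of it as keeping the strongest-contrast pixels inside the subject mask:

```python
import numpy as np

def high_contrast_points(img, mask, top_k=100):
    """Toy stand-in for the real detector: score every pixel by local
    gradient magnitude and keep the top_k strongest inside the mask."""
    gy, gx = np.gradient(img.astype(float))
    score = np.hypot(gx, gy) * mask        # masked-out pixels score zero
    flat = np.argsort(score, axis=None)[::-1][:top_k]
    return np.column_stack(np.unravel_index(flat, score.shape))

# Synthetic "photo": flat gray background with one bright square,
# so all the contrast lives on the square's edges.
img = np.full((64, 64), 100, dtype=np.uint8)
img[20:40, 20:40] = 220
mask = np.ones_like(img)                 # keep the whole frame for the demo

pts = high_contrast_points(img, mask, top_k=50)  # (row, col) pairs on edges
```

This is also why the even, neutral lighting matters: harsh shadows create high-contrast points that belong to the lighting, not the face, and they confuse the matcher.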
Which gives us this:
The final results are pretty cool, but we aren’t done just yet. The generated mesh is somewhere around 1 million polygons, which is roughly 50 times more polygons than we have in an entire character. The mesh also contains a fair amount of noise from the scanning process that needs to be cleaned up. We treat the scanned mesh as a starting point for our final game model and process it the same way we would any other high poly source. We take the model into ZBrush to do smoothing and detailing, paint out any uneven lighting and texture artifacts, and scale the head to proper proportions. We also blend the skin to match our neutral base skin so that we can apply a range of different skin tones to the head.
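To get a feel for how a million-polygon scan becomes a game-budget mesh, here is a toy vertex-clustering decimator in Python (the demo data is invented, and real decimation tools use much smarter error metrics than this):

```python
import numpy as np

def cluster_decimate(vertices, cell=0.1):
    """Toy vertex-clustering decimation: bin vertices into a uniform grid
    of cell-sized voxels and replace each cluster by its average position.
    The goal is the same as in production: trade a million scan polygons
    for something a game can afford to render."""
    keys = np.floor(vertices / cell).astype(int)
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, vertices)   # sum vertices per occupied voxel
    return sums / counts[:, None]

# 100k points scattered on a unit sphere stand in for the raw scan.
rng = np.random.default_rng(0)
pts = rng.normal(size=(100_000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

simplified = cluster_decimate(pts, cell=0.1)  # far fewer points, same shape
```

The `cell` size is the quality knob: bigger voxels mean fewer vertices but blobbier features, which is why faces get hand-tuned cleanup instead of a single automatic pass.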
Next we project the results onto a clean, rigged, low poly game mesh, separate the eyeballs, and split the mouth open. Additional tweaks are made to the textures to make them match our art style and lighting conditions.
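The projection step can be pictured as a nearest-surface lookup from the low-poly mesh onto the scan. This sketch transfers a per-vertex attribute that way (the data is invented; real bakers sample along surface normals into texture maps, but the core lookup is the same idea):

```python
import numpy as np

def project_attribute(high_verts, high_attr, low_verts):
    """Toy detail transfer: each low-poly vertex inherits the attribute
    of its nearest high-poly vertex."""
    # Brute-force nearest neighbour; use a KD-tree for real mesh sizes.
    d = np.linalg.norm(low_verts[:, None, :] - high_verts[None, :, :], axis=2)
    return high_attr[np.argmin(d, axis=1)]

# Invented demo: a dense 101-point "scan" along a line, attribute = x position.
high_verts = np.zeros((101, 3))
high_verts[:, 0] = np.linspace(0.0, 1.0, 101)
high_attr = high_verts[:, 0].copy()

low_verts = np.array([[0.333, 0.0, 0.0], [0.78, 0.0, 0.0]])
low_attr = project_attribute(high_verts, high_attr, low_verts)
# Each low-poly vertex picks up the nearest scan sample's value.
```

Baking the transferred detail into normal and texture maps is what lets the low-poly mesh keep the scan's wrinkles and pores without the scan's polygon count.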
Voilà. A highly detailed, game-ready character in 1/5 the time it would take to sculpt one from scratch.