
Aug 16 2016

Get Your Head In The Game

Matt Heiniger is one of our Tech Artists, and he’s here today to walk us through some really kick-ass tech we’re using right now. If you’ve followed us on Instagram, you’ve seen some of the images. Next week we’ll be posting a contest where people in Seattle for PAX West can enter and win the chance to be scanned into our world. Meanwhile, check out this fascinating technology. — Sanya

One of the biggest compliments we get on State of Decay is the diversity of the cast. It features characters with a wide variety of ages, ethnicities, abilities, and sexual preferences. This is no accident. State of Decay is first and foremost designed to simulate the survival fantasy for everyday people. Sure, the game features lots of Zeds that are terrifying, gooey, exciting, and just plain fun. But it’s really a game about people.

We want you to live out your own personal survival fantasy. Part of that is giving you a character that is relatable to you. But by “you”, we mean all of you, not just those of you who are muscular, dark-haired white dudes who look like they were ripped from a Calvin Klein advertisement.

But in order to pull this off, we need to make a lot of characters. More specifically, a lot of faces. But here’s the thing: making human faces is hard. Like, really hard. As social creatures, our brains are hard-wired to analyze tiny details of human faces in milliseconds in order to assess another human’s intent. Because of this, any unrealistic feature of a digital face just looks wrong. The viewer may not even be aware of why it looks wrong, just that it does. This phenomenon is known as the uncanny valley. Historically, this has been a concern within the film industry, but as video games continue to increase in fidelity, we too are entering the uncanny valley. As such, we must put an increasing focus on getting facial features just right. Either that, or go highly stylized. But that’s a whole different direction.

It’s almost Jeff Bridges…

Along with having to contend with the uncanny valley, increased fidelity means an increase in production time. Games are getting more and more detailed, but that detail doesn’t come free. Sculpting individual wrinkles on skin is painstaking and time-consuming.

Video games have come a long way in the last 16 years.

Enter photogrammetry.

Photogrammetry is an emerging technology that allows us to generate a 3D model of a real object from a series of photographs. Kinda like 3D scanning, but without the fancy and expensive hardware. So how does this wizardry work? I’m glad you asked.

We begin by having our subject (in this case Mike Estes from our QA department) sit in a chair surrounded by even, neutral-colored lighting. We take a series of photographs of his face from different angles, usually somewhere around 30 to 40 shots.

[Image: the set of photos of Mike’s head from multiple angles]

From there, we feed the photos into our photogrammetry software of choice, Agisoft PhotoScan, and mask out the portions of the photographs that do not include the subject.
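If you’re curious what that masking looks like in code, here’s a minimal sketch of one way to automate it with OpenCV’s GrabCut. To be clear, this isn’t our production tooling (masks can also be painted by hand or made right inside PhotoScan), and the helper name and the assumption that the subject fills the middle of the frame are just for illustration.

```python
# Hypothetical helper: cut a rough subject mask out of one photo.
# Assumes an even, neutral backdrop and a roughly centered subject.
import cv2
import numpy as np

def mask_subject(image_path, out_path):
    img = cv2.imread(image_path)
    h, w = img.shape[:2]
    # Seed GrabCut with a rectangle that should contain the subject.
    rect = (w // 8, h // 8, w * 3 // 4, h * 3 // 4)
    mask = np.zeros((h, w), np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    # Definite/probable foreground becomes white; everything else black.
    binary = np.where(
        (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0
    ).astype(np.uint8)
    cv2.imwrite(out_path, binary)
```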

Then the software analyzes each photo for high-contrast reference points…

[Image: reference points detected in the photos]
…matches these points across multiple images in order to determine their location in 3D space…

[Image: the resulting point cloud]
…connects the points to form a surface…

[Image: the reconstructed surface mesh]
…and projects the photos onto the surface to build the texture.

[Image: the textured scan]

Which gives us this:

[Embedded Sketchfab model: “Mike Estes” by mattimus]
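For the curious, here’s a toy version of those reconstruction stages, written against OpenCV on a single pair of photos. PhotoScan’s real solver is proprietary and vastly more robust; the file names and the intrinsic matrix K below are made-up placeholders, not values from our pipeline.

```python
# Toy two-view reconstruction: detect, match, and triangulate points.
import cv2
import numpy as np

img1 = cv2.imread("mike_01.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder files
img2 = cv2.imread("mike_02.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Find high-contrast reference points in each photo.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# 2. Match the points between the two images; Lowe's ratio test
#    throws out ambiguous matches.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.7 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# 3. Recover the relative camera pose and triangulate each match into
#    3D space. K is an assumed (placeholder) camera intrinsic matrix.
K = np.array([[2000.0, 0.0, 960.0],
              [0.0, 2000.0, 540.0],
              [0.0, 0.0, 1.0]])
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
points4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (points4d[:3] / points4d[3]).T  # the sparse point cloud

# 4. A real pipeline would densify this cloud, fit a surface to it,
#    and project the photos onto that surface to build the texture.
```

Multiply that across all 30 to 40 photos, plus dense matching and surface fitting, and you get the point cloud, mesh, and texture shown above.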

The final results are pretty cool, but we aren’t done just yet. The generated mesh is somewhere around 1 million polygons, roughly 50 times more than we have in an entire character. The mesh also contains a fair amount of noise from the scanning process that needs to be cleaned up. We treat the scanned mesh as a starting point for our final game model and process it the same way we would any other high-poly source. We take the model into ZBrush to do smoothing and detailing, paint out any uneven lighting and texture artifacts, and scale the head to proper proportions. We also blend the skin to match our neutral base skin so that we can apply a range of different skin tones to the head.

[Image: the head in progress in ZBrush]
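All of our actual reduction and cleanup happens in ZBrush, but if you want a feel for the polygon-reduction step in code, here’s a minimal sketch using Open3D’s quadric decimation. The file names and the 20,000-triangle target are placeholders rather than our real budget.

```python
# Minimal sketch: knock a ~1,000,000-triangle scan down toward a
# game-friendly triangle count with quadric decimation.
import open3d as o3d

scan = o3d.io.read_triangle_mesh("mike_scan.ply")  # placeholder file
scan.remove_duplicated_vertices()                  # basic scan cleanup
low = scan.simplify_quadric_decimation(target_number_of_triangles=20_000)
low.compute_vertex_normals()
o3d.io.write_triangle_mesh("mike_low.ply", low)
```

Automatic decimation alone won’t give you the clean topology a rigged, animatable face needs, which is why the next step projects everything onto a purpose-built game mesh.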

Next we project the results onto a clean, rigged, low-poly game mesh, separate the eyeballs, and split the mouth open. Additional tweaks are made to the texture to match our art style and lighting conditions.
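Conceptually, that projection works like a normal-map bake: for each point on the clean low-poly surface, find the nearest spot on the high-poly scan and record its detail. Here’s a rough sketch of the idea using the trimesh library; it isn’t our production baker, and the file names are placeholders.

```python
# Rough sketch: transfer high-poly surface normals onto samples taken
# from the low-poly game mesh (a real baker walks texels of a UV map).
import trimesh

high = trimesh.load("mike_scan.ply")  # placeholder files
low = trimesh.load("mike_game.obj")

# Sample positions spread across the low-poly surface.
samples, _ = trimesh.sample.sample_surface(low, 10_000)

# For each sample, find the closest point on the high-poly scan...
query = trimesh.proximity.ProximityQuery(high)
closest, distance, tri_id = query.on_surface(samples)

# ...and record the high-poly face normal there as the baked detail.
baked_normals = high.face_normals[tri_id]
```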

[Image: Mike’s head on an in-game character (blurred)]

Voilà. A highly detailed, game-ready character in 1/5 the time it would take to sculpt one from scratch.

Mike is ready to kick some zombie ass
Matt Heiniger
Tech Artist

Written by Matt Heiniger · Categorized: Dev Blog, News, Research, State of Decay
