Most people would view my childhood neighborhood in Jacksonville, Florida as a stereotypical element in the suburban America story. Not true! In the mind's eye of my 12-year-old self, it was anything but plain. Instead of trees, I saw rocket ships waiting to blast off. My neighbor mowing his lawn appeared to me as a robotic sentry patrolling the spaceport. Best of all, and long before the movie, my toys seemed to come to life as living companions that accompanied me and my friends as we embarked on our epic adventures. I lived in the most exciting neighborhood in the United States!

Today, as a 44-year-old adult, I find the realm of imagination even more exciting. We are approaching a turning point in technology that will allow the characters I imagined as a child to step outside of my head and into the real world, ready to join me on my next adventure. How is this possible? Several sophisticated technologies that amplify each other are all simultaneously reaching maturity. Tech developers will soon deliver augmented reality ubiquitously through see-through glasses that use light field displays and computer vision for tracking, localization, and 3D reconstruction. New human-computer interfaces will allow natural creation of digital content. This content, rendered in the cloud, will be delivered over high-bandwidth, low-latency 5G networks. Machine learning methods will process sensor data to provide semantic scene understanding, while deep neural networks will empower speech recognition for natural language interfaces. Maturing artificial intelligence will allow interaction with intelligent digital characters as if they were real beings. Finally, intelligent story technology will support narrative-driven stories, blended into real-world surroundings, that adapt to a user's actions.

Together with my team at the ETH Game Technology Center, we are doing our part to contribute to this vision. Our work on "Sketch Abstractions" allows a novice to create 3D animated characters from simple sketches. After drawing stick-figure legs, a rounded body, a pointy tail, triangular wings, and a rough head, our system understands and interprets the crude drawing and transforms it into an animated, flying dragon. Draw the legs extra-long, add two heads, and perhaps five triangles, and the system will immediately deliver a two-headed, extra tall, five-winged beast! Artists can create new character templates by authoring the 3D models and defining the associated sketching language.
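To make the idea concrete, the interpretation step can be imagined as a mapping from counted sketch primitives to the parts of a character template. The shape names and rules below are hypothetical illustrations, not the actual Sketch Abstractions pipeline:

```python
# Illustrative sketch: map counts of recognized drawing primitives to the
# parts of an assumed "dragon" template. All names here are hypothetical.
def interpret_dragon_sketch(shapes: dict) -> dict:
    """shapes: counts of recognized primitives, e.g. {"triangle": 2}."""
    return {
        "wings": shapes.get("triangle", 0),          # each triangle becomes a wing
        "heads": max(1, shapes.get("rough_head", 0)),  # at least one head
        "legs": shapes.get("stick_leg", 0),
        "has_tail": shapes.get("pointy_tail", 0) > 0,
    }
```

Under this toy rule set, drawing five triangles and two rough heads yields a five-winged, two-headed beast, mirroring the example above.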

With "PuppetPhone," we focus on novel interactions in augmented reality by transforming digital characters into animated puppets that users can manipulate naturally through the physical movement of a smartphone. We embed a digital character into the real world using augmented reality on a smartphone. With our new puppet-interaction metaphor, when the user taps and holds on the screen, the digital character virtually attaches to the phone with an invisible rigid bar. The character directly mimics any physical movement of the phone. However, rather than simply moving like a rag doll, we interpret the user's gestures and enhance the character's movement with semantically relevant animation. Thus, the character walks or runs across the table as the phone is moved. Tilting the phone forward makes the character crouch, and lifting it upward makes the character jump. We even created a snowy scene where the user can encourage the puppet character to roll large balls of snow and assemble them into a snowman—all with simple, natural gestures. Of course, the snowman also comes to life to join the user on their next adventure.
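The gesture interpretation described above can be sketched as a mapping from the phone's pose to an animation state. The thresholds and field names below are illustrative assumptions, not values from the PuppetPhone system:

```python
from dataclasses import dataclass

@dataclass
class PhonePose:
    height: float            # meters lifted above the attachment point (assumed)
    pitch: float             # forward tilt in degrees (assumed)
    horizontal_speed: float  # meters per second across the surface (assumed)

# Hypothetical thresholds for choosing a semantically relevant animation.
WALK_SPEED = 0.05
RUN_SPEED = 0.30
CROUCH_PITCH = 30.0
JUMP_HEIGHT = 0.10

def select_animation(pose: PhonePose) -> str:
    """Map a phone pose to a character animation, per the puppet metaphor."""
    if pose.height > JUMP_HEIGHT:
        return "jump"
    if pose.pitch > CROUCH_PITCH:
        return "crouch"
    if pose.horizontal_speed > RUN_SPEED:
        return "run"
    if pose.horizontal_speed > WALK_SPEED:
        return "walk"
    return "idle"
```

In a real system this decision would run every frame against tracked pose data, blending between animations rather than switching discretely.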

Our work on "Emergent Play" goes even further to explore character intelligence and the interaction between real-world play and intelligent virtual characters. We created a digital character visualized in the real world using augmented reality. Our system uses machine learning to recognize and semantically identify objects in the world. A natural language interface allows the user to speak to the character, and speech synthesis allows the character to speak back. We show the potential of novel play patterns enabled by this technology with a simple safari animal scene. Our augmented reality system visualizes a 3D animated character standing amongst a collection of plastic safari animal toys. A child can speak to the digital character and ask him to point to the giraffe. Because of the machine learning and artificial intelligence of the system, the character will immediately comply. The child can also ask the character to walk over to the elephant. The character walks there. However, when asked to walk to the lion, the character refuses, saying, "No, I'm afraid of the lion!" Of course he is afraid; what intelligent digital character wouldn't be!
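The safari interaction above can be pictured as a simple command handler sitting between speech recognition and speech synthesis. The object list, the fear rule, and the phrasing below are assumptions for illustration, not the Emergent Play implementation:

```python
# Hypothetical sketch of the command logic in the safari scene.
RECOGNIZED_TOYS = {"giraffe", "elephant", "lion"}  # from object recognition
FEARED_TOYS = {"lion"}                             # character's "personality"

def respond(command: str, target: str) -> str:
    """Return the character's spoken reply to a recognized voice command."""
    if target not in RECOGNIZED_TOYS:
        return f"I don't see a {target} here."
    if command == "point":
        return f"There is the {target}!"
    if command == "walk":
        if target in FEARED_TOYS:
            return f"No, I'm afraid of the {target}!"
        return f"Walking over to the {target}."
    return "I don't understand."
```

The interesting research questions live on either side of this stub: robustly recognizing the toys and the speech, and giving the character enough personality that its refusals feel believable.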

These examples are just a few of the ways we are moving toward an era of intelligent augmented reality characters. Our group is not alone. Academic research labs as well as the world's biggest tech giants are all working at a staggering pace to build the tools, algorithms, and applications that will contribute to this vision. In short, we should all brace ourselves for an amazing future where our imaginations create the reality around us.

About Robert Sumner:
Robert Sumner is the Associate Director of Disney Research Zurich and Scientific Director of the Game Technology Center at ETH Zurich, where he teaches and conducts research in game technology, computer animation, and augmented reality for science and education.
The views presented in this article are the author’s own and do not necessarily represent the views of any other organization.