Interaction Design and Emerging Technology: Sparking Empathy Across Undefined Territories

Eric Arenson
Published in With Intent · Oct 16, 2018

The field of interaction design has its roots in Human-Computer Interaction, the discipline focused on how people interact with computers: the keyboard, the mouse, the stylus. Interaction design, equally user-centered, looks at what a computer, application, or system does to help a user accomplish their goals.

As mainframe computers gave way to desktops, desktops gave way to laptops, and laptops gave way to mobile and tablet devices, interaction designers have had to broaden the metaphors and mental models they use to help users build understanding quickly and intuitively.

Augmented reality, virtual reality, and machine learning give us new tools to use in designing experiences to meet human needs. We’re past the point of proving their utility as technologies; now we’re trying to understand their value as human tools.

These emerging technologies each have a unique impact on the work of the interaction designer. At Uncorked, we consider their similarities as well as their differences.

Our goal is to help the broader design community avoid solving challenges that don’t really exist, and deliver human meaning with these technologies more efficiently.

Augmented Reality

At its core, augmented reality allows us to add a digital layer on top of the real world.

Magic Leap’s AR headset with controller.

Today most people experience AR through mobile and tablet devices, which is limiting. Holding a device at arm’s length to explore an augmented space is cramped and awkward, and with such a narrow field of view, content is difficult to navigate.

As more hands-free wearable AR devices, like Magic Leap and Microsoft’s HoloLens, come to market, more natural and fluid experiences will come along with them.

The most popular applications for AR have been in gaming, but there are a number of other examples that put it to practical use. IKEA Place allows you to virtually test products from the IKEA catalog in your home. The Smithsonian National Museum of Natural History’s Skin & Bones app brings skeletons in the Bone Hall exhibition to life for visitors, adding a richness not possible before.

At Uncorked, we partnered with Google to create Just a Line, enabling anyone to draw a line in augmented reality. For many, this was not only their first time creating in AR, but their first experience with AR at all.

Through prototyping and observation, we learned a great deal about how people think about and navigate an augmented landscape, and we continue to evaluate what they create.

Drawing in Just A Line, designed and developed by Google & Uncorked.

There are two key conceptual challenges with AR we’ve found at Uncorked that we think designers should consider:

Beyond introducing people to the fundamentals of augmented reality, how might we blend the digital and physical worlds to push past novelty, and truly enhance an experience?

The Google Translate app is a great example. Previously, to translate some text from a sign or menu, you’d have to type it in manually. If you were trying to quickly figure out which train to take before it left, this could take time you don’t have. With AR, you can simply point your camera at some text, and see the translation appear as if it were right on the same surface as the original, saving you potentially precious time.

How might we use AR to add context to data and other information that might be difficult to understand on its own as a chart or graph?

The night sky, over Portland, Oregon’s day sky.

iCandi’s NightSky app unlocks the secrets of the planets, stars, constellations, and more in ways that a 2D paper planisphere can’t. Being able to explore the sky in context offers a much more seamless experience when stargazing with children, who are full of questions. Point the camera at a celestial object in question, and the label appears right on top of it. Here, AR helps you focus on the time together, instead of trying to decipher tiny text in the dark.

The real power in AR is in illuminating that which can’t be seen, and the experiences we design should apply this as meaningfully as possible. At Uncorked, we’re most excited about helping pull these novel experiences into spaces of true utility and everyday application.

Virtual Reality

While AR allows us to layer data onto reality, virtual reality allows us to create an immersive experience entirely apart from it. From the Oculus Quest to the low-tech Google Cardboard, we have a growing number of ways to experience VR.

Oculus Quest multi-player demo (from Reddit)

The home of VR has always been in escapism and entertainment. Dave & Buster’s Jurassic World VR is in locations around the globe. The Void’s Star Wars: Secrets of the Empire is at Downtown Disney in Anaheim, CA. And IMAX’s VR arcades are opening worldwide. In 2017, Facebook announced the goal of getting “a billion people in virtual reality”, but, beyond souped-up movies and games, what will await them when they get there?

For interaction designers hoping to build useful experiences, VR introduces new possibilities, pushing us towards a blend of skills similar to the field of environmental design.

We’ve learned from environmental design that by engaging sight, sound, and in some cases even smell and touch, we can help people navigate unfamiliar spaces, know where to find information, or give them cues to understand how to react in different situations. We can leverage this way of thinking to craft meaningful, effective, and valuable experiences in VR.

Walmart recently announced a partnership with VR startup STRIVR to deploy VR-based training across almost 5,000 locations, training associates in difficult-to-recreate situations like Black Friday, and on new-to-market features like customer “pickup towers” before they’re even installed in stores.

Taking a trip to Jupiter in VR with NASA’s Juno Spacecraft, via Google Expeditions

In education, Google Expeditions hosts content from a number of creators, enabling virtual field trips to Jupiter with NASA’s Juno spacecraft or to the Galápagos Islands with the XL Catlin Seaview Survey.

As virtual reality hardware proliferates into the classroom and beyond, understanding how to craft experiences like these will be an essential skill for interaction designers in the near future.

Here are some of the key conceptual challenges with VR we’ve found.

How can we design VR experiences that people value for what they help them achieve?

NASA explored visualization, creation, and testing within NVIDIA’s Holodeck

GPU-maker NVIDIA recently announced its efforts to bring science fiction to life with its aptly-named Holodeck, an immersive VR space for teams to collaborate in from anywhere in the world. Designers from NASA, Toyota, and architecture firm KPF have praised Holodeck’s ability to strengthen the collaborative process.

In mobile apps, dark UX patterns are used to increase the amount of time a person spends in an application. How can we create immersive, magical experiences that still usher people gently back into reality when their task is done?

In mobile and web design, there are a number of techniques used to “increase user engagement”. In-app rewards, social scores, and completeness meters can all be used to lengthen the amount of time someone spends in an app or system. This is fine when used to gently nudge people to accomplish goals, but these techniques are also put to use to trick our brains into constantly scrolling, refreshing, or coming back.

By engaging more of the senses in VR, there’s great potential for these same techniques to be abused. As designers, we should be encouraging users to accomplish their goals and then return to the real world as efficiently as possible.

Machine Learning

For designers, machine learning may be the least understood of these three emerging technologies, mostly because it is the least visible.

Data scientists and engineers have advanced ML to where it’s more accessible than ever to a broad audience, and interaction designers need to consider how to use it to create richer, smarter experiences.

Like AR, machine learning has an opportunity to deliver something magical. But instead of adding more, the solutions we design with machine learning can remove extraneous information, leaving only the right answers: smart, predictive, and contextual.

Machine intelligence drives this example of trilingual input on Google’s Gboard

We already interact with machine learning over and over again, every single day. Search engines, predictive text, recommendations on social media, and streaming video services all use machine learning to help narrow our options to the most likely choices.
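Gboard’s actual models are neural and far more sophisticated, but the core idea behind predictive text, narrowing all possible words down to the most likely next ones, can be sketched with a simple bigram model. Everything below (the corpus, the function names) is illustrative, not how any shipping keyboard works:

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for each word, which words tend to follow it."""
    following = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word, k=3):
    """Return the k most likely next words after `word`."""
    return [w for w, _ in model[word.lower()].most_common(k)]

corpus = (
    "i am going to the store i am going home "
    "i am late going to the office to the store"
)
model = train_bigram_model(corpus)
print(predict_next(model, "to"))      # ['the']
print(predict_next(model, "going"))   # ['to', 'home']
```

A real keyboard conditions on much longer context and personalizes over time, but the interaction pattern is the same: the system quietly ranks options so the person types less.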

As Android and iOS make machine learning frameworks available to developers, we should see mobile apps begin to take advantage of this directly on a person’s device.

Applications today can understand images, language, faces, and objects in the real world without an internet connection. What could you build to help solve problems in places with little-to-no connectivity? How could you create solutions to help enable quick decision-making where it can have the most impact?

The Stanford Artificial Intelligence Laboratory worked with the Stanford Cancer Institute to create a diagnosis algorithm for skin cancers with performance on par with board-certified dermatologists; it’s likely that technology will soon be available on mobile devices.

In SpaceX’s reusable Falcon 9 rockets, machine learning algorithms evaluate the environments the rocket passes through and make flight adjustments, from launch to re-entry, making impossible-seeming vertical landings possible:

Machine learning algorithms are used to guide SpaceX rockets safely back to Earth.

Designers should be looking for ways to leverage personal and predictive data in their work to provide people with quicker, smarter ways to achieve their goals. The key conceptual challenges with ML we’ve found? There are two:

How might we design dynamic interfaces that adapt to a particular time, place, or mood?

In 2017, two researchers published “Instagram photos reveal predictive markers of depression,” a study where machine learning was used to “successfully identify the markers of depression” by analyzing people’s Instagram posts. Their approach examined a number of aspects of people’s photographs: frequency of posts, presence of people, time of day, filters used, and more. The results showed surprising correlations between depression and the filters used, how many comments a post received, and the likelihood of faces being present in the photos. Information like this could be used to assist someone through rough emotional times, or to notify others to reach out to them.
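The study’s actual pipeline used color analysis, face detection, and Bayesian models; as a rough sketch of the shape of this approach, here is a toy logistic regression trained on entirely synthetic photo-metadata features. The feature names, values, and separability are invented for illustration and exaggerate the study’s reported tendencies:

```python
import math
import random

random.seed(0)

def make_account(depressed):
    """Synthetic (posts_per_day, avg_faces, used_filter, label) tuple.

    The tendencies (more frequent posts, fewer faces, fewer filters)
    loosely echo the study's findings but the numbers are made up.
    """
    if depressed:
        return (random.gauss(2.0, 0.5), random.gauss(0.5, 0.3), 0.0, 1)
    return (random.gauss(1.0, 0.5), random.gauss(1.5, 0.3), 1.0, 0)

data = [make_account(i % 2 == 0) for i in range(200)]

# Plain logistic regression trained with stochastic gradient descent.
w = [0.0, 0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(300):
    for *x, y in data:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1 / (1 + math.exp(-z))   # predicted probability of "depressed"
        err = p - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z)) > 0.5

accuracy = sum(predict(x) == bool(y) for *x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

The real research question is harder in every dimension (noisy labels, confounds, ethics of acting on a prediction), but the mechanics are this simple at heart: features in, probability out.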

How can we think differently about information available to us through mobile and worn devices that can put ML to use in new ways?

In September, Apple announced a new Apple Watch that included a new feature: fall detection. To determine what characterized a fall, they carried out a study with over 2,500 participants gathering what amounted to 250,000 days of data. Through analyzing the real-world sensor data over this time period, they were able to build a profile of what constituted a fall, and built the feature into the watch to help give seniors and their family members greater peace of mind.
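Apple’s actual fall-detection model is proprietary and far richer than any toy, but the intuition their sensor study points at, a hard impact followed by relative stillness, can be sketched in a few lines. The thresholds and window size below are illustrative assumptions, not Apple’s values:

```python
import math

def magnitude(sample):
    """Overall acceleration in g from an (x, y, z) reading."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def detect_fall(samples, impact_g=2.5, still_g=1.2, still_window=5):
    """Flag a fall when a hard impact is followed by relative stillness.

    samples: accelerometer readings in g, e.g. [(x, y, z), ...].
    Thresholds here are illustrative, not tuned against real data.
    """
    mags = [magnitude(s) for s in samples]
    for i, m in enumerate(mags):
        if m >= impact_g:
            after = mags[i + 1 : i + 1 + still_window]
            if len(after) == still_window and all(a <= still_g for a in after):
                return True
    return False

# Normal standing/walking: magnitudes hover around 1 g (gravity alone).
walking = [(0.0, 0.0, 1.0)] * 10
# A fall: a sharp spike, then the wearer lies still.
fall = [(0.0, 0.0, 1.0)] * 3 + [(2.0, 1.5, 1.5)] + [(0.0, 0.0, 1.0)] * 6

print(detect_fall(walking))  # False
print(detect_fall(fall))     # True
```

The value of Apple’s 250,000 days of data is precisely that real falls don’t follow neat rules like this; the learned profile separates falls from tennis swings and jumping children, which fixed thresholds cannot.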

At Uncorked, we hope to use ML in ways people don’t even notice. Credit card fraud detection is a perfect example: we never think much about how it works, and even after we get a phone call, we still don’t wonder. We simply enjoy the peace of mind of knowing our accounts are being guarded, and developing solutions with that level of invisibility is what intrigues us most.

It’s Still All About Solving Needs

From the crafting of the first stone tool that went on to change early human diets to learning about space travel by observing the far reaches of our solar system in an entirely immersive virtual world, technology has consistently given us new ways of engineering change.

It doesn’t have to be like this, and we’re the ones who can make it so. (Art by https://www.simonstalenhag.se/)

As a studio, we’re always working to advance our understanding of these three emerging technologies, but also keeping an eye on what’s a bit further over the horizon.

As interaction designers, we must remember that while new technologies will always provide new ways to interact, the user always comes first. It’s our responsibility to play and prototype, understand the characteristics of new tech, and use them to create meaningful solutions that solve real human needs.

We’re looking for technical, design, and production talent in our Portland office, if you’re interested in helping build the future of interaction around emerging technology.
