Why you should care
Because these trends are redefining how we interact with gadgets.
OZY and Giant Spoon are excited to partner on special live coverage from CES 2017 — where the most forward-looking technology and media come together. Rather than cover just the latest gadgets, though, we’re taking you deeper with key takeaways, little-known rising stars, unconventional trends and, yes, the coolest sh*t from the convention. Tune in to our special Facebook Live tonight at 8pm EST / 5pm PST for fresh perspectives on the future of artificial intelligence and new ways of interacting with tech. On Friday, we’ll explore the future of data and robotics.
Erin Hauswirth and Nathalie Con are associate directors at Giant Spoon, an innovation-driven marketing agency.
Ask someone to do something and they’re likely to respond with physical cues: maybe pat you on the back and tell you they’re on it, make eye contact, nod their head or, at the very least, turn their body to face you. Now, as we ask machines to do more and more for us, tech companies are using our everyday social tics to ensure robo-helpers are properly communicating with people. Because, let’s face it, people don’t feel like action is actually being taken unless they sense a response.
Over the years, as Giant Spoon has woven through the annual Consumer Electronics Show, we’ve seen consumers evolve from typing into a keyboard to talking to (alright, shouting at) their devices. But with only 7 percent of communication being verbal, tech companies are increasingly experimenting with other forms of feedback — gestural, biometric and even neural — to better understand how we interact with our gadgets and to make those moments feel more and more familiar. Here’s where our future is headed in this area:
Sound Gets Physical
Our smartphones have featured haptic feedback for some time now, whether they buzz in our back pocket to let us know that we’re receiving a call, or rumble in our palms when we type in the wrong password. But now we’re starting to see a new form of haptic feedback take shape in vehicles, where car enthusiasts can feel the invisible as well as the physical. Look no further than Bristol-based Ultrahaptics — which lets users physically feel virtual objects — or at Bosch’s new feedback system, which uses ultrasound waves to transform the air into a tangible dash, where users press buttons or turn dials made purely of sound. Then there’s BMW’s HoloActive Touch tech, which projects controls into thin air and uses ultrasound to make them pulse when pressed.
These next-gen interactions not only keep drivers from getting distracted (we hope, anyway), but also allow automakers to update the physical form factor of their car’s dash just like software on a screen. In other industries, using sound to simulate the feeling of a product could make it possible for retailers to allow shoppers to touch clothes that aren’t there or transform the push-button faces of laptops, refrigerators and appliances into blank sonic slates that product designers update like they would a phone’s operating system.
Move Over AI — and Welcome E.I.
Artificial intelligence (AI) has evolved past tracking key terms and tags or simulating social interaction to demonstrating emotional intelligence (EI) and acuity. Your once-detached machines can now read whether you’re frustrated, happy or melancholy — and respond accordingly. Emotech’s revamped Olly personal assistant responds to a person’s mood and uses color and movement to express its own personality and emotion. Meanwhile, with Amazon Alexa, researchers have explored the ability to gauge sentiment in someone’s voice, so your speaker might soon be able to figure out when you’re, say, hangry. And Honda’s concept vehicle NeuV features an AI emotion engine, which can pick up on your road rage and suggest songs or driving tips to help calm you down. (Cue Leonard Cohen.)
With all of these objects in our environment better able to gauge our feelings, brands will have to start manufacturing entire personalities above and beyond their brand voice. After all, consumers might begin to buy products based on disposition as much as aesthetics, allowing us to choose whether our car is more like a surrogate therapist, a sassy friend, a wise mentor or that snarky sidekick who might come in handy during rush hour. Imagine the moment when brands are positioned to dynamically serve up experiences based on our moods, even offering products or services to regulate our emotions.
Robots Need Space, Too
Knowing the right amount of space to give a person is an important human trait (ask any New Yorker). So when it comes to the adoption of new tech, from robots to mixed reality, spatial recognition could play a key role in building trust. That’s why Kuri, Mayfield Robotics’ home robot that debuted at CES this week, uses movements and expressions to communicate instead of words. Its spins and wiggles, crafted by a longtime Pixar animator, were carefully designed to feel approachable and, most importantly, never creep out its owners.