In a sponsored project with Carnegie Mellon Design, General Motors asked us to research and design future interaction scenarios with semi-autonomous vehicles. I worked with Renee Yang to do a brief research phase, develop preliminary ideas, and refine a final concept.
We began by researching the existing driving experience and creating models of it to inform our concepts. We used a school-wide survey, contextual inquiries, and make tools to diagram relevant parts of the driving experience. From our research we drew conclusions about attention demands, social interactions both inside the car and between cars, rituals and habitual behavior, and dangerous situations. We also familiarized ourselves with the state of autonomy, AI, and display technology.
The driver is no longer confined to the "driving position" and is free to engage with passengers in a communal space.
The computer could delegate high-level direction to the human driver while managing lower-level driving tasks itself.
The control device no longer needs a physical connection to the car's movement.
Advancements in display technology and computer vision allow for a heads-up display experience.
Our final concept pulled the ideas above together into a cohesive interaction scenario. A tablet would serve as a high-level steering device when mounted in the pilot position, and detach to become a monitoring and control device so the driver could move about the vehicle. When autonomous mode is engaged, the tablet helps the driver monitor the car through a camera feed of the vehicle's surroundings, offers controls and diagnostics, and provides entertainment via a web browser.