Available Interfaces

Adam Di Angelo
2 min read · Jan 15, 2019

Using a touchscreen carries with it a complete dependency on visual interaction. As these devices become increasingly ubiquitous, we find them in places where the user’s eyes may be otherwise engaged. The result is a compromised interaction and a potentially dangerous division of attention. The solution lies in designing interfaces that can be acted upon with (while providing feedback to) the senses the user actually has available.

Perhaps the most significant misapplication of touchscreens is in the automotive industry. It is impossible to use a touchscreen effectively while safely driving a car. In 2017, researchers at the University of Utah studied drivers using 30 automobile touchscreen interfaces and determined that all of the interfaces placed high visual, cognitive, and manual demands on drivers. The odds of an accident double when a driver’s eyes are off the road for two seconds; drivers in the study averaged 24 seconds with their eyes off the road to complete each of a series of directed tasks (reference: https://unews.utah.edu/caution-ahead-the-growing-challenge-for-drivers-attention/).

There is a clear division of attention between the primary and secondary interfaces in automobiles (i.e. the steering wheel and pedals vs. the interior controls, respectively). Hands, eyes, and feet are busy with the primary activity. Controls for secondary activities should recognize this and work with what the user has available.

BMW 3-series, circa 1989. Different kinds of controls for different functions: a rotary dial for temperature, sliders for intensity, buttons of various shapes and sizes for everything else. Good for drivers, because each control can be found and operated by touch alone, eyes on the road.
Tesla Model S, 2019. A single monolithic pane of glass, with various controls often occupying the same physical space. Not good for drivers. Very good for insurance companies.

In other scenarios this may be inverted. It is the rare individual who takes a telephone call with rapt attention — head back, eyes closed, hands folded in front of her (presuming speakerphone or hands-free here) — just listening. Mouth and ears are engaged in the primary activity of taking the call, eliminating voice control and minimizing the availability of aural cues. Secondary activities and their feedback, from adjusting the volume to taking notes, go to the hands and eyes.

In multi-tasking scenarios there is a balance between the primary and secondary activity. From this we can determine which user resources (cognitive, visual, manual, etc.) are free and which can be diverted from one activity to another. While the scale is movable, efforts should be made to understand which interfaces are appropriate for each particular circumstance.