
September 2nd, 2019
Human-Computer Partnerships

I am interested in moving beyond the traditional 'human-in-the-loop' perspective, which focuses on using human input to improve algorithms. Instead, I argue that we need the 'computer-in-the-loop', where intelligent algorithms serve to enhance human capabilities. My goal is to create interactive systems that are discoverable, appropriable, and expressive, always under human control.

My thesis at MIT introduced the concept of co-adaptive systems, supported by empirical data from a five-month study of software customization at MIT and a two-year study at Xerox PARC of the Information Lens, an electronic mail filter. Projects included Argus, a generalized mail filtering and annotation system, and Pygmalion, a multimedia message system that manages the trade-off between sending and receiving public and private multimedia messages. Over the years I have developed these concepts in many different contexts: CPN2000, developed at U. Aarhus, and A-book, PageLinker and Paperoles, developed at Inria, all provide users with feedback about their previous actions that helps them co-adapt their future use of these systems, as does Prism, a hybrid paper-electronic notebook from the Inria-MSR Reactivity project, field-tested with biologists over seven months.

Octopocus uses a progressive algorithm that combines feedforward and feedback to help users learn gestures, while Musink and Knotty Gestures offer users ways to define their own forms of interaction with the computer. Scotty allows end users to customize existing applications at run-time. A series of papers adds gesture-based interaction to mobile devices by leveraging gesture typing: Expressive Keyboard, Fieldward and Pathword, and CommandBoard let users discover how to issue commands as they make gestures, create their own personal gestures associated with parameterizable commands, execute commands from a gesture-typing keyboard, and produce expressive output based on the detailed characteristics of their gestures.
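To give a flavor of what such a progressive guide involves, here is a minimal, illustrative sketch in Python of the general dynamic-guide idea: match the user's partial stroke against a same-length prefix of each gesture template, and fade each candidate's remaining path according to its mismatch. The template names, data, and scoring here are invented for illustration; this is not the published Octopocus algorithm.

```python
import math

# Hypothetical command gestures as (x, y) polylines; purely illustrative,
# not taken from the Octopocus paper.
TEMPLATES = {
    "copy":  [(0, 0), (1, 0), (2, 0), (3, 0)],   # straight right
    "paste": [(0, 0), (1, 1), (2, 2), (3, 3)],   # diagonal
    "undo":  [(0, 0), (1, 1), (2, 0), (3, 1)],   # zig-zag
}

def path_length(pts):
    """Total arc length of a polyline."""
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

def point_at(pts, s):
    """Point at arc length s along the polyline pts."""
    for a, b in zip(pts, pts[1:]):
        seg = math.dist(a, b)
        if s <= seg:
            t = s / seg if seg else 0.0
            return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
        s -= seg
    return pts[-1]

def resample(pts, n=16):
    """n evenly spaced points along the polyline."""
    total = path_length(pts)
    return [point_at(pts, total * i / (n - 1)) for i in range(n)]

def score_candidates(stroke):
    """Mean distance between the partial stroke and each template's
    prefix of the same arc length; lower = better match, i.e. a more
    opaque feedforward path on screen."""
    drawn = path_length(stroke)
    scores = {}
    for name, tpl in TEMPLATES.items():
        tlen = path_length(tpl)
        frac = min(drawn / tlen, 1.0) if tlen else 0.0
        prefix = [point_at(tpl, tlen * frac * i / 15) for i in range(16)]
        pairs = zip(resample(stroke, 16), prefix)
        scores[name] = sum(math.dist(p, q) for p, q in pairs) / 16
    return scores

# The user has drawn a mostly rightward stroke: "copy" should score best.
partial = [(0, 0), (0.9, 0.1), (1.8, 0.15)]
for name, s in sorted(score_candidates(partial).items(), key=lambda kv: kv[1]):
    print(f"{name}: mismatch={s:.2f}")
```

In an actual system, the candidates' remaining paths would be redrawn on every input event, so users can abort, correct, or discover gestures mid-stroke; the ink drawn so far serves as feedback while the faded continuations serve as feedforward.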
More recently, I have addressed co-adaptive systems in the context of interaction with AI. My perspective is that AI should empower users rather than deskill them, by revealing what the system is doing and giving users appropriate means to control it.