Keynote Speakers

Living Better with Robots
Cynthia Breazeal

The emerging field of Human-Robot Interaction is undergoing rapid growth, motivated by important societal challenges and new applications of personal robotic technologies for the general public. In this talk, I highlight several projects from my research group that illustrate recent trends in developing socially interactive robots that work and learn with people as partners. An important goal of this work is to use interactive robots as a scientific tool for understanding human behavior, to explore the role of physical embodiment in interactive technology, and to apply these insights to design robotic technologies that enhance human performance and quality of life. Throughout the talk I will highlight synergies with HCI and connect HRI research goals to specific applications in healthcare, education, and communication.

Dr. Cynthia Breazeal is an Associate Professor of Media Arts and Sciences at the Massachusetts Institute of Technology, where she founded and directs the Personal Robots Group at the Media Lab and directs the Center for Future Storytelling. She is a pioneer of Social Robotics and Human-Robot Interaction (HRI). Her research program focuses on developing personal robots that interact with humans in human-centric terms, work with humans as partners, and learn from people via tutelage. More recent work investigates the impact of long-term HRI applied to communication, quality of life, health, and educational goals. She is the author of the book "Designing Sociable Robots" and has published over 100 peer-reviewed articles in autonomous robotics, artificial intelligence, HRI, and robot learning. She has received an ONR Young Investigator Award, been a finalist in the National Design Awards in Communication, and been recognized as a prominent young innovator through the National Academy of Engineering's Gilbreth Lecture Award. She received her Sc.D. in Electrical Engineering and Computer Science from MIT in 2000.


Head-up interaction: Can we break our addiction to the screen and keyboard?
Stephen Brewster

Mobile user interfaces are commonly based on techniques developed for desktop computers in the 1970s, often including buttons, sliders, windows and progress bars. These can be hard to use on the move, which limits the way we use our devices and the applications on them. This talk will look at the possibility of moving away from these kinds of interactions towards ones better suited to mobile devices and their dynamic contexts of use, where users need to be able to look where they are going, carry shopping bags and hold on to children. Multimodal (gestural, audio and haptic) interactions provide new ways to use our devices that can be eyes- and hands-free, allowing users to interact in a 'head-up' way. These new interactions will facilitate new services, applications and devices that fit better into our daily lives and allow us to do a whole host of new things.

Brewster will discuss work on input using gestures made with the fingers, wrist and head, along with work on output using non-speech audio, 3D sound and tactile displays, in mobile applications such as text entry, camera phone user interfaces and navigation. He will also discuss some of the issues of social acceptability raised by these new interfaces.

Stephen Brewster has been a professor of human-computer interaction in the Department of Computing Science at the University of Glasgow since 2001. He was an EPSRC Advanced Research Fellow from 2003 to 2008. His research focuses on multimodal human-computer interaction: using multiple sensory modalities (particularly hearing, touch and smell) to create richer interactions between human and computer. His work has a strong experimental focus, applying perceptual research to practical situations. He has shown that novel use of multimodality can significantly improve usability in a wide range of situations: for mobile users, visually impaired people, older users and in medical applications.


Are Gesture-based Interfaces the Future of Human-Computer Interaction?
Frédéric Kaplan

The historical evolution of human-machine interfaces shows a continuous tendency towards more physical interaction with computers. Nevertheless, the mouse-and-keyboard paradigm is still dominant, and it is not yet clear whether any of the recent innovative interaction techniques is a real challenger to its supremacy. To discuss the future of gesture-based interfaces, I shall build on my own experience in conceiving and launching QB1, probably the first computer delivered with no mouse or keyboard but equipped with a depth-perceiving camera that enables interaction through gestures. The ambition of this talk is to define more precisely how gestures change the way we can interact with computers, to discuss how to design robust interfaces adapted to this new medium, and to review which kinds of applications benefit most from this type of interaction.

Through a series of examples, we will see that it is important to consider gestures not as a way of emulating a mouse pointer at a distance, or as elements of a "vocabulary" of commands, but as a new interaction paradigm in which the interface components are organized in the user's physical space. This is a shift of reference frame, from a metaphorical virtual space (e.g. the desktop) where the user controls a representation of himself (e.g. the mouse pointer) to a truly user-centered augmented reality interface where the user directly touches and manipulates interface components positioned around his body. To achieve this kind of interactivity, depth-perceiving cameras can be combined with robotic techniques and machine vision algorithms to create a "halo" of interactivity that literally follows the user as he moves around a room. In return, this new kind of intimacy with a computer interface paves the way for innovative machine learning approaches to context understanding: a computer like QB1 knows more about its user than any other personal computer so far. Gesture-based interaction is not a means of replacing the mouse with cooler or more intuitive ways of interacting; it leads to a fundamentally different approach to the design of human-computer interfaces.

Frédéric Kaplan worked for ten years as a researcher at the Sony Computer Science Laboratory in Paris. Since 2006, he has led a research group at the CRAFT laboratory at EPFL in Switzerland, focusing on interactive furniture, robotic objects and novel interfaces. He also founded OZWE, a spin-off of this laboratory that commercializes new kinds of computers. He is the author of about 100 scientific articles and several popular science books. His robots and interfaces have been exhibited in several museums, including the Centre Pompidou in Paris and the Museum of Modern Art in New York. His website is http://www.fkaplan.com