Technical Program

Sunday, 1 November 2009
19:00-21:00 Reception
 
Monday, 2 November 2009
9:00-10:00 Living Better with Robots, Cynthia Breazeal, MIT
Session Chair: Yuri Ivanov, MERL
 
10:00-10:30 Coffee
 
10:30-12:30 Multimodal Communication Analysis (Oral)
Session Chair: Steve Renals, University of Edinburgh
 
  • Discovering group nonverbal conversational patterns with topics (DSS)
     Dinesh Babu Jayagopi, Daniel Gatica-Perez
  • Agreement Detection in Multiparty Conversation
     Sebastian Germesin, Theresa Wilson
  • Multimodal Floor Control Shift Detection
     Lei Chen, Mary Harper
  • Static vs. Dynamic Modeling for Human Nonverbal Behavior Analysis from Multiple Cues and Modalities
     Stavros Petridis, Sebastian Kaltwang, Hatice Gunes, Maja Pantic
 
12:30-14:00 Lunch provided for conference attendees
 
14:00-15:30 Multimodal Dialog (Oral)
Session Chair: Alexandros Potamianos, Technical University of Crete
 
  • Dialog in the Open World: Platform and Applications
     Dan Bohus, Eric Horvitz
  • Towards Adapting Fantasy, Curiosity and Challenge in Multimodal Dialog Systems for Preschoolers
     Theofanis Kannetis, Alexandros Potamianos
  • Building Multimodal Applications with EMMA
     Michael Johnston
 
15:30-16:00 Coffee
 
16:00-17:30 Multimodal Communication Analysis and Dialog (Poster)
Session Chair: Kenji Mase, Nagoya University
 
  • A Speaker Diarization Method based on the Probabilistic Fusion of Audio-Visual Location Information
     Kentaro Ishizuka, Shoko Araki, Kazuhiro Otsuka, Tomohiro Nakatani, Masakiyo Fujimoto
  • Dynamic Robot Autonomy: Investigating the Effects of Robot Decision-Making in a Human-Robot Team Task
     Paul Schermerhorn, Matthias Scheutz
  • A speech mashup framework for multimodal mobile services
     Giuseppe Di Fabbrizio, Jay Wilpon, Thomas Okken
  • Detecting, tracking and interacting with people in a public space
     Yoav Freund, Evan Ettinger, Sunsern Cheamanunkul, Matt Jacobsen, Patrick Lai
  • Cache-based Language Model Adaptation using Visual Attention for ASR in Meeting Scenarios
     Neil Cooke, Martin Russell
  • Multimodal End-of-Turn Prediction in Multiparty Meetings
     Iwan de Kok, Dirk Heylen
  • Recognizing Communicative Facial Expressions for Discovering Interpersonal Emotions in Group Meetings
     Shiro Kumano, Kazuhiro Otsuka, Dan Mikami, Junji Yamato
  • Classification of Patient Case Discussions Through Analysis of Vocalisation Graphs
     Saturnino Luz, Bridget Kane
  • Learning from Emotional Preferences and Selected Multimodal Features of Players
     Georgios Yannakakis
  • Detecting User Engagement with a Robot Companion Using Task and Social Interaction-based Features
     Ginevra Castellano, André Pereira, Iolanda Leite, Ana Paiva, Peter W. McOwan
  • Multi-Modal Features for Real-Time Detection of Human-Robot Interaction Categories
     Ian Fasel, Masahiro Shiomi, Philippe-Emmanuel Chadutaud, Takayuki Kanda, Hiroshi Ishiguro, Norihiro Hagita
  • Modeling Culturally Authentic Style Shifting with Virtual Peers
     Justine Cassell, Kathleen Geraghty, Berto Gonzalez, John Borland
  • Between Linguistic Attention and Gaze Fixations in Multimodal Conversational Interfaces
     Rui Fang, Joyce Chai, Fernanda Ferreira
 
18:30-22:30 Banquet: MIT Museum
 
Tuesday, 3 November 2009
9:00-10:00 PASCAL Keynote Address: Head-up interaction: Can we break our addiction to the screen and keyboard? Stephen Brewster, University of Glasgow
Session Chair: Chris Wren, Google
 
10:00-10:30 Coffee
 
10:30-12:30 Fusion Engines for Multimodal Interfaces (Special Session)
Session Chair: Philippe Palanque, University of Toulouse
 
  • Fusion Engines for Input Multimodal Interfaces: a Survey
     Denis Lalanne, Laurence Nigay, Philippe Palanque, Peter Robinson, Jean Vanderdonckt, Jean-François Ladry
  • A Fusion Framework for Multimodal Interactive Applications
     Hildeberto Mendonça, Lionel Lawson, Olga Vybornova, Benoit Macq, Jean Vanderdonckt
  • Benchmarking Fusion Engines of Multimodal Interactive Systems
     Bruno Dumas, Rolf Ingold, Denis Lalanne
  • Temporal Aspects of CARE-based Multimodal Fusion: From a Fusion Mechanism to Composition Components and WoZ Components
     Marcos Serrano, Laurence Nigay
  • Formal Description Techniques to Support the Design, Construction and Evaluation of Fusion Engines for SURE (Safe, Usable, Reliable and Evolvable) Multimodal Interfaces
     Jean-François Ladry, David Navarre, Philippe Palanque
  • Multimodal Inference for Driver-Vehicle Interaction
     Tevfik Metin Sezgin, Ian Davis, Peter Robinson
 
12:30-14:00 Lunch provided for conference attendees
 
14:00-15:30 Gaze, Gesture, and Reference (Oral)
Session Chair: Louis-Philippe Morency, University of Southern California
 
  • Multimodal integration of natural gaze behavior for intention recognition during object manipulation
     Thomas Bader, Matthias Vogelgesang, Edmund Klaus
  • Salience in the Generation of Multimodal Referring Acts
     Paul Piwek
  • The Role of Communicative Gestures in Reference Resolution in Multiparty Meetings
     Tyler Baldwin, Joyce Chai, Katrin Kirchhoff
 
15:30-16:00 Coffee
 
16:00-17:30 Demonstration Session
Demo Chairs: Denis Lalanne, University of Fribourg
             Enrique Vidal, Polytechnic University of Valencia
 
  • Realtime Meeting Analysis and 3D Meeting Viewer Based on Omnidirectional Multimodal Sensors
     Kazuhiro Otsuka, Shoko Araki, Dan Mikami, Kentaro Ishizuka, Masakiyo Fujimoto, Junji Yamato
  • Guiding Hand: A Teaching Tool for Handwriting
     Nalini Vishnoi, Cody Narber, Naomi Lynn Gerber, Zoran Duric
  • A Multimedia Retrieval System Using Speech Input
     Andrei Popescu-Belis, Peter Poller, Jonathan Kilgour, Erik Boertjes, Jean Carletta, Sandro Castronovo, Michal Fapso, Mike Flynn, Alexandre Nanchen, Theresa Wilson
  • Navigation With a Passive Brain Based Interface
     Jan B.F. van Erp, Peter J. Werkhoven, Marieke E. Thurlings, Anne-Marie M. Brouwer
  • A Multimodal Predictive-Interactive Application for Computer Assisted Transcription and Translation
     Vicent Alabau, Daniel Ortiz, Veronica Romero, Jorge Ocampo
  • Multi-Modal Communication
     Victor Finomore, Dianne Popik, Douglas Brungart, Brian Simpson
  • HephaisTK: A Toolkit for Rapid Prototyping of Multimodal Interfaces
     Bruno Dumas, Denis Lalanne, Rolf Ingold
  • State, an Assisted Document Transcription System
     David Llorens, Andreas Marzal, Federico Prat, Juan Miguel Vilar
  • First steps in emotional expression of the humanoid robot Nao
     Jerome Monceaux, Joffrey Becker, Caroline Boudier, Alexandre Mazel
  • WiiNote: Multimodal Application Facilitating Multi-User Photo Annotation Activity
     Elena Mugellini, Maria Sokhn, Stefano Carrino, Omar Abou Khaled
 
Wednesday, 4 November 2009
9:00-10:00 PASCAL Keynote Address: Are Gesture-based Interfaces The Future of Human Computer Interaction? Frédéric Kaplan, EPFL
Session Chair: James Crowley, INRIA Grenoble Rhône-Alpes Research Centre
 
10:00-10:30 Coffee
 
10:30-12:30 Doctoral Spotlight Oral Session
Session Chair: Michael Johnston, AT&T Labs
 
  • Providing Expressive Eye Movement to Virtual Agents (DSS)
     Zheng Li, Xia Mao, Lei Liu
  • Mediated Attention with Multimodal Augmented Reality (DSS)
     Angelika Dierker, Christian Mertes, Thomas Hermann, Marc Hanheide, Gerhard Sagerer
  • Grounding Spatial Prepositions for Video Search (DSS)
     Stefanie Tellex, Deb Roy
  • Multi-Modal and Multi-Camera Attention in Smart Environments (DSS)
     Boris Schauerte, Jan Richarz, Thomas Plötz, Christian Thurau, Gernot Fink
 
12:30-14:00 Lunch provided for conference attendees
 
14:00-15:30 Multimodal Devices and Sensors (Oral)
Session Chair: David Demirdjian, Toyota Research Institute
 
  • RVDT: A Design Space for Multiple Input Devices, Multiple Views and Multiple Display Surfaces
     Rami Ajaj, Christian Jacquemin, Frédéric Vernier
  • Learning and Predicting Multimodal Daily Life Patterns from Cell Phones (DSS)
     Katayoun Farrahi, Daniel Gatica-Perez
  • Visual Based Picking Supported By Context Awareness
     Hendrik Iben, Carmen Ruthenbeck, Tobias Klug
 
15:30-16:00 Coffee
 
16:00-17:30 Multimodal Applications and Techniques (Poster)
Session Chair: Rainer Stiefelhagen, Karlsruhe Institute of Technology & Fraunhofer IITB
 
  • Adaptation from Partially Supervised Handwritten Text Transcriptions
     Nicolás Serrano, Daniel Pérez, Albert Sanchis, Alfons Juan
  • Recognizing Events with Temporal Random Forests
     David Demirdjian, Chenna Varri
  • Activity-aware ECG-based patient authentication for remote health monitoring
     Janani Sriram, Minho Shin, Tanzeem Choudhury, David Kotz
  • GaZIR: Gaze-based Zooming Interface for Image Retrieval
     Laszlo Kozma, Arto Klami, Samuel Kaski
  • Computing for Next Billion: Multimodal Indic Text Input
     Prasenjit Dey, Ramachandrula Sitaram, Rahul Ajmera, Kalika Bali
  • Evaluating the Effect of Temporal Parameters for Vibrotactile Saltatory Patterns
     Jukka Raisamo, Roope Raisamo, Veikko Surakka
  • Mapping Information to Audio and Tactile Icons
     Eve Hoggan, Roope Raisamo, Stephen Brewster
  • Augmented Reality Target Finding Based on Tactile Cue
     Teemu Ahmaniemi, Vuokko Lantz
 
Doctoral Spotlight Posters
Session Chair: Daniel Gatica-Perez, Idiap Research Institute & Ecole Polytechnique Fédérale de Lausanne
 
  • Speaker Change Detection with Privacy-Preserving Audio Cues (DSS)
     Sree Hari Krishnan Parthasarathi, Mathew Magimai.-Doss, Daniel Gatica-Perez, Herve Bourlard
  • Providing Expressive Eye Movement to Virtual Agents (DSS)
     Zheng Li, Xia Mao, Lei Liu
  • Mediated Attention with Multimodal Augmented Reality (DSS)
     Angelika Dierker, Christian Mertes, Thomas Hermann, Marc Hanheide, Gerhard Sagerer
  • Grounding Spatial Prepositions for Video Search (DSS)
     Stefanie Tellex, Deb Roy
  • Multi-Modal and Multi-Camera Attention in Smart Environments (DSS)
     Boris Schauerte, Jan Richarz, Thomas Plötz, Christian Thurau, Gernot Fink
  • A Framework for Continuous Multimodal Sign Language Recognition (DSS)
     Daniel Kelly, Jane Reilly Delannoy, John Mc Donald, Charles Markham
  • MirrorTrack - Tracking with Reflection - Comparison with Top-Down Approach (DSS)
     Yannick Verdie, Bing Fang, Francis Quek
  • Discovering group nonverbal conversational patterns with topics (DSS)
     Dinesh Babu Jayagopi, Daniel Gatica-Perez
  • Learning and Predicting Multimodal Daily Life Patterns from Cell Phones (DSS)
     Katayoun Farrahi, Daniel Gatica-Perez
 
17:30-18:30 Town Hall Meeting