VACE Multimodal Meeting Corpus


This video was recorded at the 2nd Joint Workshop on Multimodal Interaction and Related Machine Learning Algorithms, Edinburgh, 2005. In this paper, we report on the infrastructure we have developed to support our research on multimodal cues for understanding meetings. With our focus on multimodality, we investigate the interaction among speech, gesture, posture, and gaze in meetings. For this purpose, a high-quality multimodal corpus is being produced.


