
Multimodal QE SIG


The Multimodal QE SIG focuses on advancing research in the application of QE to understanding multimodal interactions within learning and teaching contexts. This encompasses both verbal and nonverbal interactions, including speech, gestures, body movements, actions, digital trace log data, physiological data, and more. Areas of interest include, but are not limited to, the development of methods for the operationalization, segmentation, and coding of multimodal data in QE; analytic approaches to modeling multimodal interactions; and multimodal discourse analysis.


Contact

Hanall Sung: hanallsung@utk.edu

Initial members

Hanall Sung

Yeyu Wang