Multimedia research focuses on technologies that enable the use and exchange of content integrating multiple digital modalities, including images, text, video, audio (speech, music, etc.), and other sensor data. ACM Multimedia Asia 2023 calls for short research papers presenting novel theoretical and algorithmic solutions to problems across the domain of multimedia and related applications. The conference also calls for short papers presenting novel ideas and promising (preliminary) results in realizing these ideas.
Short papers provide an opportunity to describe significant novel work in progress in multimedia research. Compared to full papers, their contributions may be narrower in scope, apply to a narrower set of application domains, or have weaker empirical support than that expected of a full paper. Submissions that are likely to generate discussion in new and emerging areas of multimedia are especially encouraged.
Submissions are encouraged in all areas related to multimedia, as described in the ACM MMAsia 2023 call for regular papers.
Short papers should be submitted through https://cmt3.research.microsoft.com/MMAsia2023.
Submissions will be peer-reviewed, and accepted short papers will be published in the conference proceedings. Short research paper submissions (PDF format) must use the ACM Article Template and be at most 4 pages (including figures, appendices, etc.) in length, plus unrestricted space for references. Please remember to add Concepts and Keywords.
Word users should use "interm-layout.docx", and LaTeX users should use "sample-sigconf.tex" to format their submissions.
Paper submissions must conform to the "double-blind" review policy. Please prepare your paper in a way that preserves the anonymity of the authors.
For submissions and questions, please email the Program Chairs:
>   Min-Chun Hu (email@example.com, National Tsing Hua University)
>   Jiaying Liu (firstname.lastname@example.org, Peking University)
>   Munchurl Kim (email@example.com, Korea Advanced Institute of Science and Technology)
>   Wei Zhang (firstname.lastname@example.org, JD AI Research)