May 17, 2022: Eight papers are accepted. Congratulations to the authors.
July 6, 2022: We are honored to invite Prof. Cewu Lu, Prof. Feng Xu and Prof. Minhyuk Sung to give keynotes.
July 7, 2022: The topic of Prof. Cewu Lu's talk is "3D Semantics in Points".
July 7, 2022: The topic of Prof. Feng Xu's talk is "Interaction Motion Reconstruction Based on Deep Learning".
July 19, 2022: The topic of Prof. Minhyuk Sung's talk is "Language-Driven Shape Analysis and Manipulation".
July 19, 2022: The details of the three keynotes can be found here.
Today, ubiquitous multimedia sensors and large-scale computing infrastructures are producing 3D multi-modality data at a rapid velocity, such as point clouds acquired by LiDAR sensors, RGB-D videos recorded by Kinect cameras, meshes of varying topology, and volumetric data. 3D multimedia combines content forms such as text, audio, images, and video with 3D information, enabling a richer perception of the world, which is three-dimensional rather than two-dimensional. For example, a robot can manipulate an object successfully by recognizing it via RGB frames and perceiving its size via point clouds. Researchers have strived to push the limits of 3D multimedia search and generation in various applications, such as autonomous driving, robotic visual navigation, smart industrial manufacturing, logistics distribution, and logistics picking. 3D multimedia (e.g., videos and point clouds) can also help agents grasp, move, and place packages automatically in logistics picking systems.
Therefore, 3D multimedia analytics is one of the fundamental problems in multimedia understanding. Unlike 3D vision, 3D multimedia analytics mainly concentrates on fusing 3D content with other media. It is a very challenging problem that involves multiple tasks, such as human 3D mesh recovery and analysis, 3D shape and scene generation from real-world data, 3D virtual talking heads, 3D multimedia classification and retrieval, 3D semantic segmentation, 3D object detection and tracking, and 3D multimedia scene understanding. The purpose of this workshop is therefore to: 1) bring together state-of-the-art research on 3D multimedia analysis; 2) call for a coordinated effort to understand the opportunities and challenges emerging in 3D multimedia analysis; 3) identify key tasks and evaluate the state-of-the-art methods; 4) showcase innovative methodologies and ideas; 5) introduce interesting real-world 3D multimedia analysis systems or applications; and 6) propose new real-world or simulated datasets and discuss future directions. We solicit original contributions in all fields of 3D multimedia analysis that explore multi-modality data to generate strong 3D data representations. We believe this workshop will offer a timely collection of research updates to benefit researchers and practitioners in the broad multimedia communities.
We invite submissions for the ICME 2022 Workshop on 3D Multimedia Analytics, Search and Generation (3DMM2022), which brings researchers together to discuss robust, interpretable, and responsible technologies for 3D multimedia analysis. We solicit original research and survey papers that must be no longer than 6 pages (including all text, figures, and references). Each submitted paper will be peer-reviewed by at least three reviewers. All accepted papers will be presented as either oral or poster presentations, and a best paper award will be given. Papers that violate anonymity or do not use the ICME submission template will be rejected without review. By submitting a manuscript to this workshop, the authors acknowledge that no paper substantially similar in content has been submitted to another workshop or conference during the review period. Authors should prepare their manuscripts according to the Guide for Authors of ICME, available at Author Guidelines. The paper submission website is available here. Please make sure your paper is submitted to the correct track. The LaTeX template is available here and the Word template is available here.
The scope of this workshop includes, but is not limited to, the following topics:
Fast Review for Rejected Regular Submissions of ICME 2022
We have set up a Fast Review mechanism for regular submissions rejected by the ICME main conference, and we strongly encourage rejected papers to be submitted to this workshop. To submit through Fast Review, authors must write a cover letter (1 page) clarifying the revisions made to the paper and attach all previous reviews. All papers submitted through Fast Review will be reviewed directly by meta-reviewers, who will make the decisions.
| Description | Date (UTC +8) |
|---|---|
| Paper Submission Deadline | March 20, 2022 |
| Notification of Acceptance | April 25, 2022 |
| Camera-Ready Due Date | May 2, 2022 |
| Workshop Date | July 22, 2022 |
| Time (UTC +8) | Description |
|---|---|
| 13:00 - 13:10 | Opening |
| 13:10 - 13:50 | Keynote 1: 3D Semantics in Points |
| 13:50 - 14:30 | Keynote 2: Interaction Motion Reconstruction Based on Deep Learning |
| 14:30 - 15:10 | Keynote 3: Language-Driven Shape Analysis and Manipulation |
| 15:10 - 15:15 | Tea Break |
| 15:15 - 16:20 | 8 Oral Presentations (~8 min each) |
| 16:20 - 16:30 | Discussion and Closing |
| Time | Paper Title |
|---|---|
| 15:15 - 15:23 | Dual-Neighborhood Deep Fusion Network for Point Cloud Analysis |
| 15:23 - 15:31 | FoldingNet-based Geometry Compression of Point Cloud with Multi Descriptions |
| 15:31 - 15:39 | Multi-attribute Joint Point Cloud Super-Resolution with Adversarial Feature Graph Networks |
| 15:39 - 15:47 | Pyramid-Context Guided Feature Fusion for RGB-D Semantic Segmentation |
| 15:47 - 15:55 | Local to Global Transformer for Video based 3D Human Pose Estimation |
| 15:55 - 16:03 | Unsupervised Severely Deformed Mesh Reconstruction (DMR) from a Single-View Image for Longline Fishing |
| 16:03 - 16:11 | 3DSTNet: Neural 3D Shape Style Transfer |
| 16:11 - 16:19 | 3D-DSPNet: Product Disassembly Sequence Planning |
If you have any questions, feel free to contact liukun167@jd.com.