2023-10-02, Full-day, Paris, France (Room E8 at ICCV)
YouTube Playback Interactive 360 View:
https://youtu.be/ukqTcxPt1zI
ICCV 2023 Workshop on AI for
Creative Video Editing and Understanding
This workshop is the 3rd installment of AI for Creative Video Editing and Understanding (CVEU), following its successful previous editions at ECCV 2022 and ICCV 2021.
The workshop brings together researchers, artists, and entrepreneurs working on computer vision, machine learning, computer graphics, human-computer interaction, and cognitive research.
It aims to bring awareness of recent advances in machine learning technologies to enable assisted creative-video creation and understanding.
We will discuss recent advances in creative video understanding, creation, and editing.
A special film session will showcase submissions to AI ShortFest, our exciting new creative film festival. Get a sneak peek of this year's trailer:
| Schedule | Paris Time (2023-10-02) |
|---|---|
| Warm-up Session | 08:30 AM - 09:00 AM |
| Opening Remarks and Organizers' Spotlight | 09:00 AM - 09:20 AM |
| Academia Keynote I by Marc Christie: Understanding Style in Movies | 09:20 AM - 09:45 AM |
| Academia Keynote II by Ivan Laptev: Video Understanding in the Era of Large Language Models | 09:45 AM - 10:10 AM |
| Academia Keynote III by Maneesh Agrawala: Unpredictable Black Boxes are Terrible Interfaces | 10:10 AM - 10:35 AM |
| Artistic Keynote I by Jorge Caballero Ramos and Anna Giralt Gris: Does AI Cinema Truly Exist? | 10:35 AM - 11:00 AM |
| Roundtable Discussion | 11:00 AM - 11:40 AM |
| Poster Session & Lunch Break | 11:40 AM - 01:45 PM |
| Film/Art Session | 01:45 PM - 02:50 PM |
| Oral Paper Presentations — 2: Is there progress in activity progress prediction?; 8: PAT: Position-Aware Transformer for Dense Multi-Label Action Detection; 9: Expressive Talking Head Video Encoding in StyleGAN2 Latent Space (Best Paper Award); 27: Enhancing Text-to-Video Editing with Motion Map Injection; 34: LUSE: Using LLMs for Unsupervised Step Extraction in Instructional Videos | 02:50 PM - 03:30 PM |
| Coffee Break | 03:30 PM - 04:00 PM |
| Industry Keynote I by Yogesh Balaji: Lights, Camera, Diffusion: Video Content Creation with Diffusion Models | 04:05 PM - 04:20 PM |
| Industry Keynote II by Kfir Aberman: Generating Personalized Content with Text-to-Image Diffusion Models | 04:20 PM - 04:35 PM |
| Artistic Keynote II by Hugo Caselles-Dupré: Obvious: Bridging Art and Research through Artificial Intelligence | 04:35 PM - 04:50 PM |
| Closing Remarks | 04:50 PM - 05:05 PM |
| In-proceeding Track | Resources |
|---|---|
| Is there progress in activity progress prediction? | Paper |
| Are current long-term video understanding datasets long-term? | Paper |
| VAST: Vivify Your Talking Avatar via Zero-Shot Expressive Facial Style Transfer | Paper |
| PAT: Position-Aware Transformer for Dense Multi-Label Action Detection | Paper |
| Expressive Talking Head Video Encoding in StyleGAN2 Latent Space | Paper |
| Benchmarking Data Efficiency and Computational Efficiency of Temporal Action Localization Models | Paper |
| InFusion: Inject and Attention Fusion for Multi Concept Zero Shot Text based Video Editing | Paper |
| LEMMS: Label Estimation of Multi-feature Movie Segments | Paper |
| Extended Abstract Track | Resources |
|---|---|
| Dubbing for Extras: High-Quality Neural Rendering for Data Sparse Visual Dubbing | Paper |
| Emotionally Enhanced Talking Face Generation | Paper |
| Learning and Verification of Task Structure in Instructional Videos | Paper |
| Enhancing Text-to-Video Editing with Motion Map Injection | Paper |
| Can we predict the Most Replayed data of video streaming platforms? | Paper |
| EVA-VOS: Efficient Video Annotation for Video Object Segmentation | Paper |
| Representation Learning of Next Shot Selection for Vlog Editing | Paper |
| Text-Based Video Generation With Human Motion and Controllable Camera | Paper |
| Knowledge-Guided Short-Context Action Anticipation in Human-Centric Videos | Paper |
| LUSE: Using LLMs for Unsupervised Step Extraction in Instructional Videos | Paper |
Dawit Mureja (KAIST), Jiaju Ma (Stanford University), Liming Jiang (NTU), Marc Christie (INRIA), Mattia Soldan (KAUST), Max Bain (Oxford), Sharon Zhang (Stanford University), Yixuan Li (CUHK), Yue Zhao (UT Austin), Yunzhi Zhang (Stanford University), Ziqi Huang (NTU)
Best Short Award: Kiss Crash, Adam Cole
Frontier Award: Idle Hands, Dr Formalyst & Irina Angles
Viewer’s Award: Ossature, Derek Bransombe
This workshop brings together researchers working on computer vision, machine learning, computer graphics, human-computer interaction, and cognitive research. It aims to bring awareness of recent advances in machine learning technologies to enable assisted creative-video creation and understanding. The workshop will include invited talks by experts in the area and give the community opportunities to share their work via oral and poster presentations. We encourage practitioners, designers, students, post-docs, and researchers to submit work describing new ideas, work-in-progress, and previously or concurrently published research. Topics of interest include but are not limited to:
As in previous years, we will recognize outstanding submissions and reviewers with awards!
The CVEU workshop welcomes works in three tracks: 1) [NEW] AI ShortFest, our brand-new creative film festival, 2) the In-proceeding Track, and 3) the Extended Abstract Track. All submissions will go through a double-blind peer-review process with no rebuttal or second review cycle. Please submit your work to our CVEU CMT Console under the appropriate track. Please use the ICCV template and follow the ICCV 2023 Author Instructions. Authors of all accepted submissions will be asked to present their work in a poster session. A few authors will be selected to give 10–15 minute oral presentations and receive awards.
For our first edition of CVEU in person, we are thrilled to announce the first edition of AI ShortFest, a film festival that celebrates the intersection of AI and filmmaking.
As technology continues to reshape the creative landscape, AI ShortFest aims to embrace the pioneering spirit of artists and filmmakers who push the boundaries of storytelling with the power of artificial intelligence.
We invite filmmakers, AI practitioners, and all enthusiasts from around the globe to submit their short films, where AI techniques play a significant role in the production process.
Selected films will be screened and creators will be invited to present their works at CVEU. In addition, category winners will receive one-year Adobe subscriptions.
Submit your AI-infused short films to the AI ShortFest on FilmFreeway
and learn more about the festival by following this link.
The papers are limited to 8 pages excluding references and will be included in the official ICCV workshop proceedings. Supplementary materials are allowed.
Articles submitted to this track will not be published in the ICCV workshop proceedings, so you may submit them to other conferences or journals. Papers are limited to 4 pages; additional pages containing only cited references are allowed. Supplementary materials are allowed.
The Invited Submission Track welcomes previously published work (e.g., papers accepted to ICCV 2023) related to the topics of the CVEU workshop summarized above. All accepted papers will have the opportunity to be presented during the workshop.
Please complete this Google Form to submit your paper.