The 7th IEEE Workshop on
Artificial Intelligence for Art Creation


Nantes, France
June 30 - July 04, 2025
Jointly with ICME 2025

Call for Papers


Recent advances in AI-Generated Content (AIGC) have become an innovative engine for digital content generation, drawing increasing attention from both academia and industry. Across creative fields, AI has sparked new genres and experimentation in painting, music, film, storytelling, fashion, and design. Researchers are exploring co-creation with AI systems as well as the ethical implications of AI-generated images and texts. AI has been applied to art-historical research and media studies, and the aesthetic value of AI-generated content and AI's impact on art appreciation have become contested subjects in recent scholarship. AI has not only exhibited creative potential but also stimulated research from diverse perspectives in neuroscience, cognitive science, psychology, literature, art history, and media and communication studies. Despite these promising developments, AI for art still faces many challenges, such as bias in AI models, the lack of transparency and explainability in algorithms, and copyright issues surrounding training data and AI artworks.

This is the 7th AIART workshop, held in conjunction with ICME 2025 in Nantes, France. It aims to bring forward cutting-edge technologies and the most recent advances in AI art, as well as perspectives from neuroscience, cognitive science, psychology, literature, art history, and media and communication studies.

The theme of AIART 2025 is AI and Human Co-creativity. We plan to invite five keynote speakers to present their perspectives on AI art.

We sincerely invite high-quality papers presenting or addressing issues related to AI art, including but not limited to the following topics:

  • Affective computing for AI Art
  • AI in media studies
  • AI, literature, and art
  • AI and social justice
  • Theory and practice of AI creativity
  • Neuroscience, cognitive science and psychology for AI Art
  • AI Art for metaverse
  • AI for painting generation
  • AI for 3D content generation
  • AI for cultural heritage
  • AI for sound synthesis, music composition, performance, and instrument design
  • AI for poem composing and synthesis
  • AI for typography and graphic design
  • AI for fashion, makeup, and virtual hosting
  • AI for multimodal and cross-modal art generation
  • AI for art style transfer
  • AI for aesthetics understanding, analysis, assessment and prediction
  • Authentication and copyright issues of AI artworks

Additionally, a Best Paper Award will be given.

AIART 2025 is also launching a demo track for artists to showcase their creative artworks in the form of an in-person art gallery. The demo track will provide a great opportunity for people to experience interactive artworks and exchange creative ideas. The submission guidelines for the demo track follow those of the main ICME conference: https://2025.ieeeicme.org/author-information-and-submission-instructions/.


Paper Submission

Authors should prepare their manuscripts according to the ICME Guide for Authors, available under Author Information and Submission Instructions: https://2025.ieeeicme.org/author-information-and-submission-instructions/

Submission address: https://cmt3.research.microsoft.com/ICMEW2025



Important Dates


Submissions due: March 25, 2025
Workshop date: TBD

Keynotes


Keynote 1: TBD
Keynote 2: TBD
Keynote 3: TBD
Keynote 4: TBD
Keynote 5: TBD

Conference Program


TBD

Technical Program Committee (Tentative)


  • Ajay Kapur, California Institute of the Arts, USA
  • Alan Chamberlain, University of Nottingham, UK
  • Alexander Lerch, Georgia Institute of Technology, USA
  • Alexander Pantelyat, Johns Hopkins University, USA
  • Bahareh Nakisa, Deakin University, Australia
  • Baoqiang Han, China Conservatory of Music, China
  • Baoyang Chen, Central Academy of Fine Arts, China
  • Bing Li, King Abdullah University of Science and Technology, Saudi Arabia
  • Björn W. Schuller, Imperial College London, UK
  • Bob Sturm, KTH Royal Institute of Technology, Sweden
  • Borou Yu, Harvard University, USA
  • Brian C. Lovell, The University of Queensland, Australia
  • Carlos Castellanos, Rochester Institute of Technology, USA
  • Changsheng Xu, Institute of Automation, Chinese Academy of Sciences, China
  • Chunning Guo, Renmin University, China
  • Cong Jin, China University of Communication, China
  • Dong Liu, University of Science and Technology of China, China
  • Dongmei Jiang, Northwestern Polytechnical University, China
  • Emma Young, BBC, UK
  • Gus Xia, New York University Shanghai, China & Mohamed bin Zayed University of Artificial Intelligence, United Arab Emirates
  • Haifeng Li, Harbin Institute of Technology, China
  • Haipeng Mi, Tsinghua University, China
  • Han Zhang, University of Chinese Academy of Sciences, China
  • Hanli Wang, Tongji University, China
  • Haonan Chen, China University of Communication, China
  • Honghai Liu, Harbin Institute of Technology, China
  • Hongxun Yao, Harbin Institute of Technology, China
  • Jesse Engel, Google, USA
  • Jiafeng Liu, Central Conservatory of Music, China
  • Jia Jia, Tsinghua University, China
  • Jiajian Min, Harvard University, USA
  • Jian Zhang, Peking University, China
  • Jian Zhao, China Telecom, China
  • Jianyu Fan, Microsoft, Canada
  • Jing Huo, Nanjing University, China
  • Jing Wang, Beijing Institute of Technology, China
  • Jingjing Chen, Fudan University, China
  • Jingting Li, Institute of Psychology of the Chinese Academy of Sciences, China
  • Jingyuan Yang, Shenzhen University, China
  • Jinshan Pan, Nanjing University of Science and Technology, China
  • Joanna Zylinska, King’s College London, UK
  • John See, Multimedia University, Malaysia
  • Juan Huang, Johns Hopkins University, USA
  • Jufeng Yang, Nankai University, China
  • Junping Zhang, Fudan University, China
  • Kang Zhang, Hong Kong University of Science and Technology (Guangzhou), China
  • Kate Crawford, University of Southern California, USA
  • Ke Lv, University of Chinese Academy of Sciences, China
  • Kenneth Fields, Central Conservatory of Music, China
  • Lai-Kuan Wong, Multimedia University, Malaysia
  • Lamberto Coccioli, Royal Birmingham Conservatoire, UK
  • Lamtharn Hanoi Hantrakul, ByteDance, USA
  • Lei Xie, Northwestern Polytechnical University, China
  • Leida Li, Xidian University, China
  • Li Liu, Hong Kong University of Science and Technology (Guangzhou), China
  • Li Song, Shanghai Jiao Tong University, China
  • Li Zhou, China University of Geosciences (Wuhan), China
  • Lianli Gao, University of Electronic Science and Technology of China, China
  • Lin Gan, Tianjin University, China
  • Long Ye, China University of Communication, China
  • Maosong Sun, Tsinghua University, China
  • Mei Han, Ping An Technology Art Institute, USA
  • Mengjie Qi, China Conservatory of Music, China
  • Mengshi Qi, Beijing University of Posts and Telecommunications, China
  • Mengyao Zhu, Huawei Technologies Co., Ltd, China
  • Ming Zhang, Nanjing Art College, China
  • Mohammad Naim Rastgoo, Queensland University of Technology, Australia
  • Na Qi, Beijing University of Technology, China
  • Nancy Katherine Hayles, University of California Los Angeles, USA
  • Nick Bryan-Kinns, Queen Mary University of London, UK
  • Nina Kraus, Northwestern University, USA
  • Pengtao Xie, University of California, San Diego, USA
  • Pengyun Li, Wuhan Conservatory of Music, China
  • Philippe Pasquier, Simon Fraser University, Canada
  • Qi Mao, China University of Communication, China
  • Qin Jin, Renmin University, China
  • Qiuqiang Kong, The Chinese University of Hong Kong, China
  • Rebecca Fiebrink, University of the Arts London, UK
  • Rick Taube, University of Illinois at Urbana-Champaign, USA
  • Roger Dannenberg, Carnegie Mellon University, USA
  • Rongfeng Li, Beijing University of Posts and Telecommunications, China
  • Rui Wang, Institute of Information Engineering, Chinese Academy of Sciences, China
  • Ruihua Song, Renmin University, China
  • Sarah Wolozin, Massachusetts Institute of Technology, USA
  • Shangfei Wang, University of Science and Technology of China, China
  • Shasha Mao, Xidian University, China
  • Shen Li, Henan University, China
  • Shiguang Shan, Institute of Computing Technology, Chinese Academy of Sciences, China
  • Shiqi Wang, City University of Hong Kong, China
  • Shiqing Zhang, Taizhou University, China
  • Shuai Yang, Peking University, China
  • Shun Kuremoto, Uchida Yoko Co., Ltd., Japan
  • Si Liu, Beihang University, China
  • Sicheng Zhao, Tsinghua University, China
  • Simon Colton, Queen Mary University of London, UK
  • Simon Lui, Huawei Technologies Co., Ltd, China
  • Siwei Ma, Peking University, China
  • Steve DiPaola, Simon Fraser University, Canada
  • Tiange Zhou, NetEase Cloud Music, China
  • Wei Chen, Zhejiang University, China
  • Weibei Dou, Tsinghua University, China
  • Weiming Dong, Institute of Automation, Chinese Academy of Sciences, China
  • Wei-Ta Chu, National Chung Cheng University, Taiwan, China
  • Wei Li, Fudan University, China
  • Weiwei Zhang, Dalian Maritime University, China
  • Wei Zhong, China University of Communication, China
  • Wen-Huang Cheng, National Chiao Tung University, Taiwan, China
  • Wenli Zhang, Beijing University of Technology, China
  • Wenming Zheng, Southeast University, China
  • Xi Shao, Nanjing University of Posts and Telecommunications, China
  • Xi Yang, Beijing Academy of Artificial Intelligence, China
  • Xiaohong Liu, Shanghai Jiao Tong University, China
  • Xiaohua Sun, Tongji University, China
  • Xiaolin Hu, Tsinghua University, China
  • Xiaojing Liang, NetEase Cloud Music, China
  • Xiaopeng Hong, Harbin Institute of Technology, China
  • Xiaoyan Sun, University of Science and Technology of China, China
  • Xiaoying Zhang, China Rehabilitation Research Center, China
  • Xihong Wu, Peking University, China
  • Xin Jin, Beijing Electronic Science and Technology Institute, China
  • Xinfeng Zhang, University of Chinese Academy of Sciences, China
  • Xinyuan Cai, Huazhong University of Science and Technology, China
  • Xu Tan, Microsoft Research Asia, China
  • Ya Li, Beijing University of Posts and Telecommunications, China
  • Yan Yan, Xiamen University, China
  • Yanchao Bi, Beijing Normal University, China
  • Yi Jin, Beijing Jiaotong University, China
  • Yi Qin, Shanghai Conservatory of Music, China
  • Ying-Qing Xu, Tsinghua University, China
  • Yirui Wu, Hohai University, China
  • Yuan Yao, Beijing Jiaotong University, China
  • Yuanchun Xu, Xiaoice, China
  • Yuanyuan Liu, China University of Geosciences (Wuhan), China
  • Yuanyuan Pu, Yunnan University, China
  • Yun Wang, Beihang University, China
  • Zhaoxin Yu, Shandong University of Arts, China
  • Zheng Lian, Institute of Automation of the Chinese Academy of Sciences, China
  • Zhi Jin, Sun Yat-Sen University, China
  • Zhiyao Duan, University of Rochester, USA
  • Zichun Guo, Beijing University of Chemical Technology, China
  • Zijin Li, Central Conservatory of Music, China

Organizing Team


Luntian Mou

Beijing University of Technology

Beijing, China

ltmou@bjut.edu.cn


Dr. Luntian Mou is an Associate Professor with the School of Information Science and Technology, Beijing University of Technology, and with the Beijing Institute of Artificial Intelligence (BIAI). He received his Ph.D. in computer science from the University of Chinese Academy of Sciences, China, in 2012, served as a Postdoctoral Fellow at Peking University from 2012 to 2014, and was a Visiting Scholar at the University of California, Irvine, from 2019 to 2020. He initiated the IEEE Workshop on Artificial Intelligence for Art Creation (AIART) in 2019 and published the book Artificial Intelligence for Art Creation and Understanding in 2024. His current research interests include artificial intelligence, machine learning, multimedia computing, affective computing, and brain-like computing. He is a recipient of the Beijing Municipal Science and Technology Advancement Award, the China Highway Society Technology Invention Award, the IEEE Outstanding Contribution to Standardization Award, and the AVS Outstanding Contribution on 15th Anniversary Award. He serves as a Guest Editor for Machine Intelligence Research and as a reviewer for many international journals and conferences, including TIP, TAFFC, TMM, TCSVT, TITS, CVPR, and AAAI. He also serves as Co-Chair of the System subgroup of the AVS workgroup. He is a Senior Member of IEEE, CCF, and CSIG, a Member of ACM and CAAI, and an Expert of MPEG China.

Feng Gao

Peking University

Beijing, China

gaof@pku.edu.cn


Dr. Feng Gao is an Assistant Professor with the School of Arts, Peking University. He has long conducted research at the intersection of AI and art, especially AI painting, and co-initiated the international AIART workshop. Currently, he is also enthusiastic about virtual humans. He has demonstrated his AI painting system, Daozi, at several workshops, where it has drawn much attention.

Kejun Zhang

Zhejiang University

Hangzhou, China

zhangkejun@zju.edu.cn


Dr. Kejun Zhang is a Professor at Zhejiang University, a joint PhD supervisor in Design and Computer Science, and Dean of the Department of Industrial Design at the College of Computer Science of Zhejiang University. He received his PhD from the College of Computer Science and Technology, Zhejiang University, in 2010. From 2008 to 2009, he was a visiting research scholar at the University of Illinois at Urbana-Champaign, USA. In June 2013, he joined the faculty of the College of Computer Science and Technology at Zhejiang University. His current research interests include affective computing, design science, artificial intelligence, multimedia computing, and the understanding, modelling, and innovative design of products and social management by computational means. He is currently PI of a National Science Foundation of China project, Co-PI of a National Key Research and Development Program of China project, and PI of more than ten other research programs. He has authored 4 books and more than 40 scientific papers.

Zeyu Wang

Hong Kong University of Science and
Technology (Guangzhou)

Guangzhou, China

zeyuwang@ust.hk


Dr. Zeyu Wang is an Assistant Professor of Computational Media and Arts (CMA) in the Information Hub at the Hong Kong University of Science and Technology (Guangzhou) and an Affiliate Assistant Professor in the Department of Computer Science and Engineering at the Hong Kong University of Science and Technology. He received a PhD from the Department of Computer Science at Yale University and a BS from the School of Artificial Intelligence at Peking University. He leads the Creative Intelligence and Synergy (CIS) Lab at HKUST(GZ) to study the intersection of Computer Graphics, Human-Computer Interaction, and Artificial Intelligence, with a focus on algorithms and systems for digital content creation. His current research topics include sketching, VR/AR/XR, and generative techniques, with applications in art, design, perception, and cultural heritage. His work has been recognized by an Adobe Research Fellowship, a Franke Interdisciplinary Research Fellowship, a Best Paper Award, and a Best Demo Honorable Mention Award.

Gerui Wang

Stanford University

California, USA

grwang@stanford.edu


Dr. Gerui Wang is a Lecturer at Stanford University Center for East Asian Studies, where she teaches classes on contemporary art, AI and posthumanism. Her research interests span arts, public policy, environment, and emerging technologies. She is a member of the Alan Turing Institute AI&Arts Research Group. With her background in art history, she has published in the Journal of Chinese History and Newsletter for International China Studies. Gerui's book Sustaining Landscapes: Governance and Ecology in Chinese Visual Culture is forthcoming in 2025. Her research briefs on AI, robotics, media, and society are frequently featured in public venues including Forbes, Alan Turing Institute's AI and Art Forum, Asia Times, and South China Morning Post. Gerui holds a doctorate in art history from the University of Michigan.

Ling Fan

Tezign.com

Tongji University Design Artificial Intelligence Lab

Shanghai, China

lfan@tongji.edu.cn


Dr. Ling Fan is a scholar and entrepreneur working to bridge machine intelligence and creativity. He is the founding chair and a professor of the Tongji University Design Artificial Intelligence Lab. Previously, he held teaching positions at the University of California, Berkeley, and the China Central Academy of Fine Arts. Dr. Fan co-founded Tezign.com, a leading technology start-up with the mission of building digital infrastructure for creative content. Tezign is backed by top VCs such as Sequoia Capital and Hearst Ventures. Dr. Fan is a World Economic Forum Young Global Leader, an Aspen Institute China Fellow, and a Youth Committee member of the Future Forum. He is also a member of the IEEE Global Council for Extended Intelligence. Dr. Fan received his doctoral degree from Harvard University and his master's degree from Princeton University. He recently published From Universality of Computation to the Universality of Imagination, a book on how machine intelligence may influence human creativity.

Nick Bryan-Kinns

University of the Arts London

London, UK

n.bryankinns@arts.ac.uk


Dr. Nick Bryan-Kinns is a Professor of Creative Computing at the Creative Computing Institute, University of the Arts London. His research explores new approaches to interactive technologies for the Arts and the Creative Industries through Creative Computing. His current focus is on Human-Centered AI and eXplainable AI for the Arts. His research has made audio engineering more accessible and inclusive, championed the design of sustainable and ethical IoT and wearables, and engaged rural and urban communities with physical computing through craft and cultural heritage. Products of his research have been exhibited internationally, including at Ars Electronica (Austria), the V&A, and the Science Museum (UK), made available online and as smartphone apps, used by artists and musicians in performances and art installations, and reported in public media outlets including the BBC and New Scientist. He is a Fellow of the Royal Society of Arts, a Fellow of the British Computer Society (BCS), and a Senior Member of the Association for Computing Machinery (ACM). He is a recipient of the ACM and BCS Recognition of Service Awards, and chaired the ACM Creativity and Cognition conference in 2009 and the BCS International HCI Conference in 2006.

Ambarish Natu

Australian Government

Australian Capital Territory, Australia

ambarish.natu@gmail.com


Dr. Ambarish Natu is with the Australian Government. After graduating from the University of New South Wales, Sydney, he held positions as a visiting researcher in Italy and Taiwan, worked in industry in the United Kingdom and the United States, and has spent the past ten years working in the Australian Government. For the past 17 years, he has led the development of five international standards under the auspices of the International Organization for Standardization (ISO), popularly known as JPEG (Joint Photographic Experts Group). He is a recipient of the ISO/IEC certificate for contributions to technology standards. He is highly active in international standardization and in voicing Australian concerns in the areas of JPEG and MPEG (Moving Picture Experts Group) standardization, and previously initiated standardization efforts on privacy and security in the multimedia context within both the JPEG and MPEG standards bodies. In 2015, he received the prestigious Neville Thiele Award and was named Canberra Professional Engineer of the Year by Engineers Australia. He currently works as an ICT Specialist for the Australian Government. He is a Fellow of the Australian Computer Society and of Engineers Australia, and serves on the IVMSP TC and the Autonomous Systems Initiative of the IEEE Signal Processing Society. He has previously been General Chair of DICTA 2018, ICME 2023, and TENSYMP 2023. He has a keen interest in next-generation data and analytics technologies that will change the way we interact with the world.

Partners


Partner 1: Association d'Intelligence Artificielle France-Chine
The Association d'Intelligence Artificielle France-Chine (AIFC), headquartered in Paris, is a professional organization dedicated to fostering in-depth collaboration between France and China in the field of artificial intelligence. By establishing a multidimensional platform that integrates research, investment, industry, and education, AIFC aims to create a comprehensive ecosystem that bridges academia and industry.





Partner 2: Machine Intelligence Research
Machine Intelligence Research (formerly the International Journal of Automation and Computing), published by Springer and sponsored by the Institute of Automation, Chinese Academy of Sciences, was formally launched in 2022. The journal publishes high-quality papers on original theoretical and experimental research, organizes special issues on emerging topics and specific subjects, and strives to bridge the gap between theoretical research and practical applications. The journal is indexed by ESCI, EI, Scopus, CSCD, etc.
Impact Factor: 6.4 (JCR Q1)

Sponsorship


TBD