IEEE PRDC 2025 Workshop on AI Dependability (AID’25)

AID’25 is a workshop that brings together researchers and industry practitioners in the field of AI and LLM-based systems. It will be held as an in-person event at PRDC 2025 on November 4, from 9:00am to 5:00pm KST.


About the workshop

The AID’25 workshop provides a forum for researchers and industry practitioners to present and discuss the latest advancements in ensuring the dependability of AI and LLM-based systems. As AI becomes integral to critical infrastructure, its reliability, security, and robustness are paramount.

This workshop will bring together experts from the dependable computing and AI communities to address new challenges in building trustworthy AI systems. We welcome submissions that explore innovative solutions, case studies, and fundamental research in this rapidly evolving field.

Topics of interest

We invite submissions on topics including, but not limited to, the following:

  • AI-based Software Vulnerability Detection and Secure Code Generation. Leveraging AI techniques (e.g., LLMs) to automatically identify vulnerabilities in software and to generate secure, robust code including patches.
  • AI-based Attack Detection. Using AI models for detecting attacks and anomalies, such as network intrusion, adversarial attacks, model stealing queries, and so on.
  • LLM-based Rust Translation. Exploring the use of LLMs to translate code into memory-safe languages like Rust to enhance system security and reliability.
  • AI-based Attack/Defense Simulation. (1) Graph-based attack/defense simulation with AI, including simulation methods based on deep learning, Q-learning, and graph-based learning. (2) Simulating adversarial attacks against AI systems and developing effective defense strategies.
  • AI-based Red Teaming. Methodologies and practices for systematically testing and breaking AI models to uncover hidden flaws, biases, and safety risks.
  • AI Model Attacks and Defenses. Techniques that exploit vulnerabilities in AI models (e.g., adversarial, model-stealing, and data-extraction attacks) and methods to mitigate them.
  • AI Dependability, Reliability, and Safety. Measures to ensure AI systems remain robust, fault-tolerant, and safe across diverse conditions.
  • Responsible and Trustworthy AI. Approaches to building AI that is transparent, fair, and accountable, addressing ethical concerns and fostering user and societal trust.
  • Explainable AI (XAI). Methods and frameworks that provide transparency and interpretability of AI models, enabling users to understand, trust, and effectively validate AI-driven decisions.

Submission Guidelines

All submissions must be original, unpublished, and not under review for another conference or journal. Papers should be formatted according to the IEEE conference template and must not exceed 6 pages, including figures, tables, and references.

All papers will undergo a single-blind peer-review process. Submissions should be made through the PRDC HotCRP submission system.

Submission link: https://prdc2025.hotcrp.com/

Important. Please ensure that your submission addresses the core theme of AI Dependability and its relevance to the PRDC community. We look forward to your contributions.

Important Dates

  • Abstract submission: September 30, 2025 (AoE)
  • Paper submission: October 7, 2025 (AoE)
  • Author notification: October 21, 2025 (AoE)
  • Camera-ready submission: October 25, 2025 (AoE)

Workshop Organizers

Organization Chairs

  • To be announced.

Program Chairs

  • Prof. Seonghoon Jeong (Sookmyung Women’s University, Republic of Korea)

Program Committee

  • Prof. Hyoungshick Kim (Sungkyunkwan University, Republic of Korea)
  • Prof. Taekyoung Kwon (Yonsei University, Republic of Korea)
  • Prof. Ah Reum Kang (Pai Chai University, Republic of Korea)
  • Prof. Byung Il Kwak (Korea University, Republic of Korea)
  • Prof. Hyun Min Song (Dankook University, Republic of Korea)
  • Prof. Mee Lan Han (Korea University, Republic of Korea)
  • Prof. Sanghoon Jeon (Kookmin University, Republic of Korea)
  • Dr. Janet Hyunjae Kang (University of Queensland, Australia)
  • Dr. Jeonghyun Lee (National Security Research Institute)
  • Dr. Seungjin Ryu (National Security Research Institute)
  • More members to be announced.