1st International Workshop on Patterns and Practices of Reliable AI Engineering and Governance (AI-Pattern'24)

October 28th, 2024, in Tsukuba, Japan, co-located with the 35th IEEE International Symposium on Software Reliability Engineering (ISSRE 2024)

The tentative program can be viewed below.

  

AI-Pattern'24 has successfully concluded!

Thanks to the guest speaker Dr. Qinghua Lu, the paper authors, and all organizers, supporters, and attendees, AI-Pattern'24 has successfully concluded! The workshop attracted around 25-30 attendees at its peak. The opening slides, including the voting results from the discussion, are available here. Thank you all, and see you next time!

Outline and Goal

The popularity of artificial intelligence (AI), including machine learning (ML) techniques, has increased in recent years. AI is used in many domains, including cybersecurity, the Internet of Things, and autonomous cars, and its impact is expanding into scientific research, consumer assistants, and enterprise services through advancements in Generative AI (GenAI). Many works have investigated the mathematics and algorithms on which AI techniques and models are built, but few have examined the engineering of AI systems or their governance, which ensures that AI systems are built, used, and managed to maximize benefits and prevent harm. AI engineering and governance need to bring together diverse stakeholders across AI algorithms, data science, software/system engineering, compliance, legal, and business teams.

In AI software engineering and governance, there is often a gap between high-level abstract principles and low-level concrete tools and rules. Patterns, which encapsulate recurrent problems and corresponding solutions in particular contexts, and pattern languages, which organize patterns into coherent collections, can fill such gaps, providing a common "language" for the various stakeholders involved in the often interdisciplinary development and governance of AI software systems. Researchers and practitioners study best practices for engineering and governing reliable AI/ML systems to address issues in AI and ML techniques, as well as processes, policies, and tools for trustworthy, responsible, and safe AI system development and management. Such practices are often formalized as patterns and pattern languages. Major examples are

While patterns for reliable AI engineering and governance have been documented, much of this landscape remains to be uncovered. This limited understanding hampers adoption and prevents the patterns from realizing their full potential. This workshop seeks to improve understanding of the theoretical, social, technological, and practical advances and issues related to patterns and practices in reliable AI engineering and governance. It will bring together researchers and practitioners to discuss the future prospects of this area. The workshop will feature presentations of accepted position papers showcasing the latest research and practices in the area, as well as invited talks, discussions and panels, and collaborative activities.

This workshop is supported by JST MIRAI engineerable AI (eAI) project's framework team.

Program (subject to change)

Each accepted paper presentation will take 20 minutes, followed by a 10-minute discussion.

9:00-10:30 Session I

Opening

Invited talk: A Pattern-Oriented Approach for Engineering Safe and Responsible AI Systems
Qinghua Lu (Data61, CSIRO)
Abstract: The rapid evolution and widespread adoption of AI, particularly generative AI, have led to significant advancements in productivity and efficiency across various domains. However, the fast-growing capabilities and autonomy of AI also bring growing concerns about AI safety and responsible AI. Recent global initiatives have introduced standards and regulations on AI safety and responsible AI to guide the development and use of AI systems. Despite these efforts, these standards and regulations often remain abstract, making them difficult for practitioners to implement in real-world scenarios. On the other hand, significant effort has been put into model-level solutions, which fail to capture system-level challenges, as AI models need to be integrated into software systems that are deployed and have real-world impact. To close the gap in operationalising responsible and safe AI, this talk presents a pattern-oriented approach that provides concrete guidance for engineering safe and responsible AI systems.
Biography: Dr Qinghua Lu is a principal research scientist and leads the Responsible AI science team at CSIRO’s Data61. She is the winner of the 2023 APAC Women in AI Trailblazer Award and is part of OECD.AI’s trustworthy AI metrics project team. She received her PhD from the University of New South Wales in 2013. Her current research interests include responsible AI, software engineering for AI, and software architecture. She has published 150+ papers in premier international journals and conferences. Her recent paper titled “Towards a Roadmap on Software Engineering for Responsible AI” received the ACM Distinguished Paper Award. Her new book, “Responsible AI: Best Practices for Creating Trustworthy AI Systems”, was published by Pearson Addison-Wesley in December 2023.

Toward Pattern-Oriented Machine Learning Reliability Argumentation
Takumi Ayukawa, Jati H. Husen, Nobukazu Yoshioka, Hironori Washizaki and Naoyasu Ubayashi (Waseda University)

11:00-12:30 Session II

A Process Pattern for Cybersecurity Assessment Automation: Experience and Futures
James Cusick (Ritsumeikan University)

Toward Extracting Learning Pattern: A Comparative Study of GPT-4o-mini and BERT Models in Predicting CVSS Base Vectors
Sho Isogai, Shinpei Ogata (Shinshu University), Yutaro Kashiwa (Nara Institute of Science and Technology), Satoshi Yazawa (Voice Research, Inc.), Kozo Okano (Shinshu University), Takao Okubo (Institute of Information Security), Hironori Washizaki (Waseda University)

Discussion

Closing Remarks


Call for Papers

We solicit contributions on patterns, practices, and related topics in the area of reliable AI engineering and governance. Topics of interest include, but are not limited to:

Important Dates

Paper Categories

Paper Formatting and Submission

All submissions must adhere to the IEEE Computer Society Format Guidelines as implemented by the following LaTeX/Word templates:

Paper submission will be done electronically through EasyChair; select the International Workshop on Patterns and Practices of Reliable AI Engineering and Governance track. Every submission will be peer-reviewed. Emphasis will be placed on originality, usefulness, practicality, and/or new problems to be tackled. Papers must be of high overall quality and must not have been previously published or be under submission elsewhere. Accepted papers will be published in a supplemental volume of the ISSRE conference proceedings by the IEEE Computer Society and will appear on IEEE Xplore. At least one author of each accepted paper must register for the ISSRE conference and present the paper in person at the workshop.

Organizing Committee

Contact us at: aipattern2024 [at] easychair.org

Program Committee

References