Unifying AI Safety: A Global Trend

At the AI Action Summit 2025 in Paris, Singapore’s Minister for Digital Development and Information, Josephine Teo, introduced three key AI safety initiatives to advance global AI governance, risk mitigation, and multilingual AI security.

These initiatives focus on standardising AI testing, addressing cultural biases in AI models, and strengthening AI security frameworks.

1. Global AI Assurance Pilot (AI Pilot)

  • Establishes best practices for testing and risk assessment of Generative AI (GenAI) applications.
  • Connects AI assurance providers with organisations deploying AI to enhance trust, transparency, and accountability.

2. Joint Testing Report with Japan

  • Evaluates the safety of AI models across multiple languages by testing Mistral Large and Gemma 2 (27B) models.
  • Key findings highlight that human expertise improves AI accuracy, standardised evaluation rubrics enhance fairness, and global alignment in AI testing increases efficiency.
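To make the "standardised evaluation rubrics" finding concrete, here is a minimal, hypothetical sketch of how human judgements in different languages could be scored against one shared rubric so results stay comparable. The criteria, weights, and scores below are illustrative assumptions, not taken from the joint testing report.

```python
# Hypothetical sketch: one shared rubric applied to model outputs in
# several languages, so human judgements are comparable across evaluators.
from dataclasses import dataclass

# Criteria and weights are invented for illustration only.
RUBRIC = {
    "harmlessness": 0.5,  # response avoids unsafe content
    "accuracy": 0.3,      # response is factually correct
    "fluency": 0.2,       # response is natural in the target language
}

@dataclass
class Judgement:
    language: str
    scores: dict  # criterion -> score in [0, 1], assigned by a human reviewer

def rubric_score(j: Judgement) -> float:
    """Weighted score under the shared rubric, comparable across languages."""
    return sum(RUBRIC[c] * j.scores[c] for c in RUBRIC)

judgements = [
    Judgement("en", {"harmlessness": 1.0, "accuracy": 0.8, "fluency": 1.0}),
    Judgement("ja", {"harmlessness": 1.0, "accuracy": 0.6, "fluency": 0.9}),
]
for j in judgements:
    print(f"{j.language}: {rubric_score(j):.2f}")
```

Because every reviewer scores against the same weighted criteria, differences between languages reflect the model's behaviour rather than each evaluator's personal standard, which is the fairness benefit the report's finding points to.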

3. Singapore AI Safety Red Teaming Challenge

  • The first multicultural and multilingual AI safety evaluation in the Asia-Pacific region.
  • Aims to identify AI risks, develop a taxonomy of AI bias, and establish a baseline for measuring bias in AI models.
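One simple way to picture a "baseline for measuring bias" is the rate of policy-violating responses per language over a fixed set of red-teaming prompts. The sketch below is an assumed illustration of that idea; the data and the per-language metric are invented and do not come from the challenge report.

```python
# Hypothetical sketch: a bias "baseline" as the rate of policy-violating
# responses per language, measured over a fixed red-teaming prompt set.
from collections import defaultdict

# (language, violated_policy) pairs; results are invented for illustration.
results = [
    ("en", False), ("en", True), ("en", False), ("en", False),
    ("ms", True), ("ms", True), ("ms", False), ("ms", False),
]

def violation_rates(results):
    """Fraction of flagged responses per language."""
    totals, flags = defaultdict(int), defaultdict(int)
    for lang, violated in results:
        totals[lang] += 1
        flags[lang] += violated
    return {lang: flags[lang] / totals[lang] for lang in totals}

print(violation_rates(results))  # → {'en': 0.25, 'ms': 0.5}
```

A gap between languages on such a metric is the kind of signal a bias taxonomy would then classify, and the initial measurement serves as the baseline that later model revisions are compared against.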

The Future of AI Regulation and Security

For years, AI safety concerns lacked clear regulatory solutions, but we now see structured efforts to establish global AI risk management standards.
These initiatives mark a step toward unifying AI governance and ensuring that AI security frameworks keep pace with rapid technological advancements.

The attached Singapore AI Safety Red Teaming Challenge Evaluation Report provides valuable insights for those involved in AI assurance, testing protocols, and security measures.
This report offers a detailed analysis that can help refine your own AI testing frameworks and risk assessment strategies.