The AI Standards Every Manufacturer Should Know

Technology | Matt Minner | December 11, 2025

From predictive maintenance to smarter robotics, AI is everywhere, and it’s moving
fast. But along with the opportunity comes a new set of responsibilities: fairness, safety,
data integrity, and trust. That’s where ISO and IEC step in. They’ve built a global
framework of AI standards designed to keep innovation safe, consistent, and
transparent. Think of these as your roadmap for using AI responsibly without slowing
down progress.


Here’s your guide: what each standard does, why it matters, and how it helps
manufacturers like you stay ahead.


ISO/IEC 22989 (2022) — AI Concepts & Terminology
This one gets everyone speaking the same language. It defines what “AI,” “machine
learning,” and “autonomy” actually mean. It is essentially a jargon glossary that cuts
through all the buzzwords and confusion. By putting everyone on the same page, it
creates a shared foundation for every other AI standard and helps teams, vendors, and
partners communicate clearly when tackling AI-related projects.


ISO/IEC 23894 (2023) — AI Risk Management
This is a guide for identifying and managing AI-specific risks, from safety and quality
issues to ethical impacts. It plugs right into your existing risk or quality-management
systems. Perfect if you’re using AI for inspection, scheduling, or maintenance. It’s
adaptable to any plant size or setup. It also encourages transparency with partners
and customers.


ISO/IEC 42001 (2023) — AI Management System
This is the AI version of ISO 9001. It helps you set up a management system that keeps
AI projects accountable, auditable, and aligned with business goals. When AI is
deployed across multiple facilities, this gives leadership a single playbook for oversight
and continuous improvement. It aligns with existing standards like ISO 9001 or 27001.


ISO/IEC 38507 (2022) — AI Governance
This standard provides guidance for boards and executives. It’s about how to oversee
AI responsibly, making sure it supports the company mission and meets regulatory
expectations. It gives leadership the right questions to ask before approving any AI
rollout. This keeps accountability at the top and ensures key decisions are made
before AI is implemented.


ISO/IEC 5338 (2023) — AI Life Cycle Processes
This lays out how to plan, design, build, test, and maintain AI systems. It’s ideal if you’re
developing or buying AI tools for robotics or quality inspection. It ensures they’re
managed like any other engineered system. This integrates with proven software and
systems-engineering practices.


ISO/IEC 42005 (2025) — AI Impact Assessment
This standard assesses how AI affects people, workplaces, and society.
Think of it as a “social safety check” for your AI systems. If your products or operations
use AI that could influence safety, privacy, or fairness, this helps you catch risks early.
It adds visibility around potential misuse or unintended effects.


ISO/IEC 42006 (2025) — AI Audit & Certification Requirements
This sets the rules for organizations that audit and certify AI management systems
(based on 42001). It ensures your AI certification actually means something and that
auditors are qualified and consistent. That, in turn, gives customers and partners
confidence in your systems.


Summary
These seven standards are the guardrails of trustworthy AI. Together, they cover the
full journey from defining what AI is to proving that you’re using it responsibly. For
manufacturers, adopting them isn’t just about compliance; it’s about confidence —
confidence in having tangible guardrails and structure around AI, and confidence
that your AI is safe, ethical, and built to last.


By Matt Minner
Director of Technical Services
Catalyst Connection