Artificial intelligence is now a staple in enterprise environments, but many organizations are still struggling to understand its full footprint and impact across their infrastructure. This is where AI Lifecycle Management becomes essential. In this article, I’ll cover how ISO/IEC 42001 clauses can help organizations implement lifecycle management for AI systems and apply these guidelines to deploy AI safely and responsibly within their operations.

The Obvious: Document the Rationale for Developing (and Deploying) an AI System (A.6.2.2)

Governance, policy, and documentation are steps that are often scoffed at in product and engineering environments. However, ignoring these steps is what lands many organizations in trouble as environment complexity increases and responsibility boundaries blur.

Scenarios

System X suffers a hardware shutdown, and no engineer wants to fix it because the system has no defined owner. The engineer who does get contacted to address the outage is apprehensive about helping because the system isn’t on their department’s list of supported assets.

Application Y is used by engineers to visualize AI model data. Several development teams use the application, but it has no clearly defined owner. An annual risk review determines that several vulnerable dependencies are deployed within the app, but it is unclear what other systems may break if those dependencies are updated.

AI model Z is outputting poor results. Management believes it is simply a poor solution; engineers believe the cause is a breakdown in how the model’s training data was collected and used. It is unclear who was responsible for collecting and processing that data, so management and engineers are at an impasse on how to remediate the ineffective deployment.

These are the types of situations that arise when there is a failure to document the purpose for developing (and deploying) AI systems. As trite and eye-roll-inducing as it may seem to prioritize governance and policy for complex AI environments, it is essential to preventing the roadblock of endless finger-pointing.

[Image: Whose fault is it anyway?]
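This documentation doesn’t need to be heavyweight. As a minimal sketch, assuming the record is kept somewhere machine-readable (the field names, teams, and example entry below are hypothetical, not prescribed by ISO/IEC 42001), it could be as simple as one structured record per system:

```python
from dataclasses import dataclass, field


@dataclass
class AISystemRecord:
    """Minimal record of why an AI system exists and who owns it (A.6.2.2)."""
    system_name: str
    business_rationale: str  # why the system was developed and deployed
    owning_team: str         # accountable owner for outages and reviews
    data_steward: str        # who collected and processed training data
    dependent_systems: list[str] = field(default_factory=list)


# Hypothetical entry for the "Application Y" scenario above: with a record
# like this, the annual risk review knows exactly who to call.
record = AISystemRecord(
    system_name="model-data-visualizer",
    business_rationale="Lets engineers inspect AI model training data and outputs.",
    owning_team="ML Platform Engineering",
    data_steward="Data Engineering",
    dependent_systems=["feature-store", "training-pipeline"],
)
```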

A Diagram is Worth a Thousand Words: Document Design and Development (A.6.2.3)

Considering the pace of AI development and the ever-evolving scope of customer and client requirements, it is imperative that the architectural design of these systems be documented.

Scenario

Your lead security architect designed an ironclad infrastructure to properly deploy AI models in logically segmented environments. Following industry best practices, “defense in depth” controls were introduced.

Over time, technical program managers determine there is a little too much “defense in depth”: onboarding clients and integrating tools into the environment proves substantially difficult. After a year, leadership concedes that the defense-in-depth controls are too tight and need to be loosened. They ask for a review of the overall platform design, but the architecture is so complex, and the controls so multi-faceted, that security engineers and risk analysts struggle to untangle the controls in place, determine which can be loosened, and assess the residual risk of the updated controls.

This is a very real scenario that can be prevented by having teams properly design and document the architecture. Take the time to keep architectural diagrams up to date; that time should be planned into system deployments, not treated as an insignificant deliverable.
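One way to keep those diagrams from going stale is to treat them as code. Here is a rough sketch using the open-source Python diagrams library (which renders through Graphviz); the node names and grouping are illustrative assumptions, not a prescribed architecture:

```python
from diagrams import Cluster, Diagram
from diagrams.aws.compute import EC2
from diagrams.aws.network import ELB

# Renders a PNG describing one segmented model-serving enclave
# sitting behind a load balancer.
with Diagram("AI Platform Segmentation", show=False):
    ingress = ELB("ingress")

    with Cluster("model-serving (logically segmented)"):
        inference_nodes = [EC2("inference-1"), EC2("inference-2")]

    ingress >> inference_nodes
```

Because the diagram is plain code, it can be versioned and reviewed in the same change that modifies the controls it depicts, which keeps the picture honest.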

Why Was This Approved for Deployment? Define Verification and Validation Measures (A.6.2.4)

With this control, we see how the AI lifecycle process integrates with and relies on other steps in the process. Organizations cannot properly specify verification and validation measures for an AI system if they never conducted Step 1: documenting the rationale and purpose for deploying the AI system.

Scenario

An AI model is developed and deployed for autonomous driving. In simulation testing, the model properly guides vehicles through simulated city environments. In real-world testing, it is determined that the model cannot properly navigate European landscapes; it later comes out that the model was never trained on image and video data that included such landscapes. Leadership simply saw the profit opportunity in offering the vehicles in this expanded market.

This is a scenario that can be prevented by establishing defined release criteria. Uniquely developed AI models should have not only a clear specification of the purpose of their deployment, but also clear criteria defining what successful testing looks like before the model is deployed into production.
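To make release criteria enforceable rather than aspirational, they can be encoded as an automated gate in the release pipeline. A minimal sketch, assuming the criteria have already been documented (the metric names and thresholds here are hypothetical):

```python
# Documented release criteria, traced back to the system's stated purpose.
RELEASE_CRITERIA = {
    "accuracy": 0.95,           # minimum acceptable eval accuracy
    "regional_coverage": 0.90,  # minimum share of target geographies in eval data
}


def approve_for_deployment(eval_metrics: dict[str, float]) -> bool:
    """Promote the model only when every documented criterion is satisfied."""
    return all(
        eval_metrics.get(name, 0.0) >= threshold
        for name, threshold in RELEASE_CRITERIA.items()
    )


# The autonomous-driving scenario above would be caught here: an evaluation
# set with no European imagery cannot report adequate regional coverage.
assert not approve_for_deployment({"accuracy": 0.97, "regional_coverage": 0.10})
```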

Measure Twice, Cut Once: Document Deployment Plan (A.6.2.5)

Implementation guidance from Annex B of ISO/IEC 42001 states: “AI systems can be developed in various environments and deployed in others (such as developed on premises and deployed using cloud computing) and the organization should take these differences into account for the deployment plan.”

I would like to create a scenario for this clause, but honestly it is just reassurance that AI and software engineers should continue to follow age-old software development best practices when developing AI solutions, even if leadership is pushing for a rushed deployment. No software engineer worth their salt is going to develop an application in a production environment, but it is worth an explicit call-out nonetheless. The verification and validation criteria can be integrated into the deployment plan. For the Jira organizations out there, create a sprint with clearly defined sub-tasks for successful deployment.
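Outside of Jira, the same idea can live next to the code as a checklist that blocks deployment until every sub-task is complete. A minimal sketch (the task names are illustrative, not a complete plan):

```python
# Deployment plan expressed as checkable sub-tasks, mirroring the sprint
# suggestion above. Each entry is (task, completed).
DEPLOYMENT_PLAN = [
    ("Confirm environment parity (on-prem dev vs. cloud prod)", True),
    ("Run verification and validation suite against release criteria", True),
    ("Update the architectural diagram for this release", False),
    ("Enable event logging and monitoring dashboards", False),
    ("Obtain sign-off from the documented system owner", False),
]


def ready_to_deploy(plan: list[tuple[str, bool]]) -> bool:
    """Deployment proceeds only when no sub-task remains incomplete."""
    incomplete = [task for task, done in plan if not done]
    for task in incomplete:
        print(f"Blocked on: {task}")
    return not incomplete


assert not ready_to_deploy(DEPLOYMENT_PLAN)  # three sub-tasks still open
```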

Autonomous Systems Still Require Human Oversight: Define Elements for Ongoing Operation and Monitoring (A.6.2.6) and Determine When to Enable Event Log Recording (A.6.2.8)

Depending on the use case, some AI solutions evolve by the very nature of machine learning: production data and output data are fed back in to train the model. When continuous learning is an attribute of the production model in question, system performance, resource utilization, and model errors should all be monitored.

It doesn’t take a PhD to discern that a system that takes action autonomously and evolves as it processes more production data should have some level of human oversight. Top management should refer back to Step 1 of the AI lifecycle and keep the purpose of the AI solution in mind; this clarifies which systems and resources should be monitored to ensure the continued success and relevance of the deployed solution.
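In practice, that monitoring can start as simple threshold checks across the three dimensions named above: system performance, resource utilization, and model errors. A minimal sketch with hypothetical metric names and thresholds:

```python
# Illustrative operational thresholds for a continuously learning model.
THRESHOLDS = {
    "p95_latency_ms": 500.0,  # system performance
    "gpu_utilization": 0.90,  # resource utilization
    "error_rate": 0.02,       # model errors
}


def check_health(metrics: dict[str, float]) -> list[str]:
    """Return an alert for every metric breaching its documented threshold."""
    return [
        f"{name} = {metrics[name]} exceeds threshold {limit}"
        for name, limit in THRESHOLDS.items()
        if metrics.get(name, 0.0) > limit
    ]


alerts = check_health({"p95_latency_ms": 620.0, "gpu_utilization": 0.45, "error_rate": 0.01})
# -> ["p95_latency_ms = 620.0 exceeds threshold 500.0"]
```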

Regarding event logging, it would be wise for management to consult with their Security Operations Center (SOC) teams to strategize on how to monitor for unintended events or undesirable performance, if for no other reason than to catch excessive use of costly AI platform resources.
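A lightweight starting point is structured, machine-parsable event records that SOC tooling can filter and alert on. A minimal sketch using Python's standard logging module (the event fields are illustrative assumptions, not prescribed by the standard):

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_platform.events")


def log_inference_event(model_id: str, latency_ms: float, tokens_used: int) -> None:
    """Emit one structured record per inference for downstream SOC analysis."""
    logger.info(json.dumps({
        "event": "inference",
        "model_id": model_id,
        "latency_ms": latency_ms,
        "tokens_used": tokens_used,  # helps flag costly, excessive platform use
    }))


log_inference_event("customer-support-llm", latency_ms=240.0, tokens_used=1800)
```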

“We Wrote It and Published It, Therefore It is Known to All” — Provide Technical Documentation to Relevant Parties (A.6.2.7)

Whether driven by career preservation or simply the pressure to address more urgent deliverables, the mindset among many engineers and GRC professionals appears to be: “We wrote it and published it, therefore it is known to all.” Ironically, users often rely solely on internal knowledge retrieval tools like Glean to uncover relevant documentation — sometimes discovering critical information long after it should have been acted on.

To avoid these silos of information, where multiple teams unknowingly duplicate efforts, a mature AI lifecycle management program should include a clear, intentional communication strategy. This means ensuring that new policies, guidelines, or strategic updates are proactively shared with relevant interested parties, whether they’re engineers improving the AI systems or end users directly affected by them.

Making communication a formal step in the deployment process not only strengthens alignment but also ensures that AI initiatives are understood, adopted, and integrated across the organization.