Solving AI by Solving AI Processes
Everybody in the AI community knows that data is the fuel for the AI engine. Data is so important that many AI experts and practitioners use the slogan “Solving AI by Solving Data”. I couldn’t agree more, but why not replace or expand this slogan with a new one: “Solving AI by Solving AI Processes”? Isn’t the “solving data” part itself a process that must be prepared, scheduled and carried out in order to be successful?
If you are an AI practitioner with even a minimum of experience, you should recognize that your work is made up of many processes that, for simplicity, we can call “AI processes”. Some examples:
- a preliminary study to understand how data must be collected
- research to understand which method is more suitable for data cleaning or selection
- a risk analysis to understand how to deal with potential risks related to your AI system
- a policy to update an outdated AI model
- a policy on how to manage non-explainable or non-transparent AI systems
- a study of the infrastructure needed for deployment at scale
These are only a few cases; the list is longer and the level of abstraction can be lower. I think most AI practitioners or experts today are facing specific challenges that are implicitly made of many AI processes. Just an example: “The model doesn’t have the expected accuracy!” What can we do? Well, react with a list of AI processes to fix the problem (sketched in code right after this list), such as:
- collect meaningful data
- change the model
- implement the risk treatment plan that manages this case
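To make the idea concrete, here is a minimal sketch, in Python, of how a team might encode such a response plan as explicit, owned processes rather than ad-hoc firefighting. All names, owners and thresholds are hypothetical illustrations of mine; nothing here is prescribed by any standard or tool.

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class AIProcess:
    """A named, repeatable activity with a clear owner, e.g. 'collect meaningful data'."""
    name: str
    owner: str
    action: Callable[[], None]  # placeholder for the real work (data collection, retraining, ...)

    def run(self) -> None:
        print(f"[{self.owner}] starting process: {self.name}")
        self.action()


@dataclass
class AccuracyIncident:
    """Raised when a model misses its accuracy target; reacts with a list of AI processes."""
    expected: float
    observed: float
    response_plan: List[AIProcess] = field(default_factory=list)

    def handle(self) -> None:
        if self.observed >= self.expected:
            print("Accuracy meets the target, nothing to do.")
            return
        print(f"Accuracy {self.observed:.2f} below target {self.expected:.2f}, "
              f"running {len(self.response_plan)} AI processes.")
        for process in self.response_plan:
            process.run()


if __name__ == "__main__":
    # Hypothetical processes mirroring the list above; the real work lives elsewhere.
    plan = [
        AIProcess("collect meaningful data", "data team", lambda: None),
        AIProcess("change the model", "ML team", lambda: None),
        AIProcess("implement the risk treatment plan", "risk owner", lambda: None),
    ]
    AccuracyIncident(expected=0.90, observed=0.82, response_plan=plan).handle()
```

The point of the sketch is not the code itself, but that each reaction is a named process with an owner, not an improvised fix.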
Some successful companies have been created to efficiently manage these AI processes for other organizations. Some examples:
- v7labs.com: labelling and data management processes
- credo.ai: governance processes
- wandb.ai: model-related processes
Here too, the list is long and keeps evolving along with the needs of the ecosystem.
We all see what is happening out there: AI is evolving in such a way that systems are ever more powerful and complex. At the same time, this complexity probably demands more complex, structured and managed AI processes. I suggest that organizations that want to start with AI, or that struggle with AI projects, think in terms of AI processes. It is not mandatory to always use third-party services or tools (even though they might be time-saving) if you have the resources and a good, managed strategy with well-defined processes... otherwise, hard times lie ahead.
All of the above is heavily inspired by my experience with AI and by a very recent one in particular: I became an ISO 42001 AIMS Lead Implementer. The ISO 42001 standard is very young (published in December 2023) and many of you AI practitioners probably don’t know anything about it. I’d bet on it! I say this because, at the time of writing, I am probably still the first Lead Implementer in my country (Italy) and among the first in the EU. You might think that ISO stuff is boring, and in some respects you would be right, but if you take a look at this standard it can open your mind to the next level. Or at least, it can help your organization be effective with complex AI processes.
One of the most interesting parts of this standard deals with the people inside an organization and, obviously, AI. Let me explain. If you work in an organization of a certain size, the standard makes it mandatory to create a document called the “AI policy”, which contains a declaration of objectives regarding AI and plans on how to reach them. This document must be aligned with the organization’s strategy for AI and shared internally so that people are aware of it. Well, if you are building any complex AI system that requires strong and extended human cooperation, you should recognize that awareness is a key factor. What if awareness is missing? Maybe an awareness plan in this case is an AI process!
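To show what I mean, here is a toy sketch of an AI policy as a data structure, including a check for the awareness gap. The field names, teams and wording are my own illustration, assumed for the example; they are not the template or wording required by ISO/IEC 42001.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class AIObjective:
    """One AI objective and the plan to reach it, as described in the post."""
    statement: str
    plan: str
    owner: str


@dataclass
class AIPolicy:
    """Illustrative skeleton of an 'AI policy' document (not the standard's template)."""
    organization: str
    strategy_alignment: str
    objectives: List[AIObjective] = field(default_factory=list)
    shared_with: List[str] = field(default_factory=list)  # teams already made aware of the policy

    def awareness_gap(self, all_teams: List[str]) -> List[str]:
        """Teams that have not yet been reached by the awareness plan."""
        return [team for team in all_teams if team not in self.shared_with]


if __name__ == "__main__":
    policy = AIPolicy(
        organization="ExampleCorp",  # hypothetical organization
        strategy_alignment="Support the product strategy with trustworthy AI.",
        objectives=[
            AIObjective(
                statement="Keep production models above the agreed accuracy targets.",
                plan="Quarterly model review and retraining process.",
                owner="ML team",
            )
        ],
        shared_with=["ML team", "data team"],
    )
    # Closing this gap is itself an AI process: the awareness plan.
    print("Teams still to reach:", policy.awareness_gap(["ML team", "data team", "legal", "support"]))
```

Treating the policy as something you can query (who owns which objective, who has not been reached yet) is exactly the kind of managed AI process I am arguing for.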
Maybe the next post will focus on ISO 42001!
If I didn't quote you or if you want to reach out, feel free to contact me.
© [Simone Brazzo] [2025] - Licensed under CC BY 4.0 with the following additional restriction: this content can only be used to train open-source AI models, where training data, model weights, architectures and training procedures are publicly available.