Monitaur launches GovernML to manage AI data lifecycle
Artificial intelligence (AI) governance software provider Monitaur has launched GovernML for general availability. The latest addition to its ML Assurance Platform is designed for enterprises committed to the responsible use of AI.
“GovernML, offered as a web-based, software-as-a-service (SaaS) application, enables enterprises to establish and maintain a system of record of model governance policies, ethical practices and model risk across their entire AI portfolio,” said Anthony Habayeb, Monitaur’s CEO and founder.
As AI deployment accelerates across industries, so do efforts to establish regulations and internal standards that ensure the fair, safe, transparent and responsible use of this often-personal data. For example:
- Entities ranging from the European Union to New York City and the state of Colorado are finalizing legislation that codifies into law practices espoused by a wide range of public and private institutions.
- Corporations are prioritizing the need to establish and operationalize governance policies across AI applications in order to demonstrate compliance and protect stakeholders from harm.
“Good AI needs great governance,” Habayeb said. “Many companies have no idea where to start with governing their AI. Others have a strong foundation of policies and enterprise risk management, but no real enabled operations around them. They lack a central home for their policies, evidence of good practice and collaboration across functions. We built GovernML to solve both.”
The importance of AI governance
Effective AI governance requires a strong foundation of risk management policies and tight collaboration between modeling and risk management stakeholders. Too often, conversations about managing risks of AI focus narrowly on technical concepts such as model explainability, monitoring or bias testing. This focus minimizes the broader business challenge of lifecycle governance and ignores the prioritization of policies and enablement of human oversight.
How would this system of record mesh with other enterprise systems, such as data governance applications, legal risk management and security? And does it necessarily have to mesh at an enterprise scale?
“Monitaur has robust APIs behind its platform that enable the push and pull of information,” Habayeb said. “To deliver on the potential of a true enterprise system of record (SOR) for model governance, a solution has to be able to ‘collaborate’ with key organizations, systems, policies and data from other functions. Good AI governance should support connectivity between systems and transparency between departments, and reduce rework where possible.”
Habayeb offered examples of use cases in which an AI-related problem could arise and escalate into a major issue.
“These days, you no longer have to be an expert to understand that AI systems will have bias; the question is now whether or not an organization can prove their efforts to mitigate the harm,” Habayeb said. Was the data evaluated for bias? Were developers educated on ethics policies? Is the model optimized for the right metric? Did legal sign off? These are examples of key bias controls in the lifecycle of responsible AI governance. GovernML guides companies to build and evidence these and other critical policies. Doing so not only mitigates the potential for adverse events, but also reduces the legal, financial and reputational exposure when they do occur.
“People are forgiving of mistakes; they are not forgiving of negligence,” he said.
Automated vs. manual execution of AI governance tools
While some sectors have foundations for risk management and model governance, the execution of these practices remains largely manual.
“We are now seeing more models, with increasing complexity, used in more impactful ways, across more sectors that are not experienced with model governance,” Cass said in a media advisory. “We need software to distribute the methods and execution of governance in a more scalable way. GovernML takes what is best of proven methods, adds for the new complexity of AI and software-enables the entire life cycle.”
The emergence of and necessity for AI governance is not simply a result of AI investments or AI regulations; it is a clear example of a broader need to synergize risk, governance and compliance software categories overall, said Bradley Shimmin, chief analyst, AI Platforms, Analytics and Data Management at Omdia.
“Considering software as a stand-alone industry and comparing its regulation relative to other major sectors or industries, software’s impact-to-regulation ratio is an outlier,” Shimmin said in a media advisory. “GovernML offers a very thoughtful approach to the broader AI problem; it also puts Monitaur in an attractive position for future expansion within this much broader theme.”
GovernML manages policies for AI ethics
GovernML’s integration into the Monitaur ML Assurance Platform supports a lifecycle AI governance offering, covering everything from policy management through technical monitoring and testing to human oversight. By centralizing policies, controls and evidence across all advanced models in the enterprise, GovernML makes managing responsible, compliant and ethical AI programs possible, according to the company.
The new software enables business, risk and compliance and technical leaders to:
- Create a comprehensive library of governance policies that map to specific business needs, including the ability to immediately leverage Monitaur’s proprietary controls based on best practices for AI and ML audits.
- Provide centralized access to model information and proof of responsible practice throughout the model life cycle.
- Embed multiple lines of defense and appropriate segregation of duties in a compliant, secure system of record.
- Gain consensus and drive cross-functional alignment around AI projects.