AI’s widening enterprise gaps explain why AWS SageMaker is growing

A new report reveals troubling gaps: enterprises are not prioritizing security, compliance, fairness, bias and ethics. The study, conducted by O’Reilly, shows that enterprise AI adoption is struggling to reach maturity, and the lack of prioritization in these areas may be part of the reason why.

O’Reilly’s annual survey of enterprise AI adoption found that just 26% of organizations have AI projects in production, the same percentage as last year. In addition, 31% of enterprises report not using AI in their business today, a figure that is up from 13% last year.

Enterprises rely on their software vendors to integrate new AI functionality into their applications, platforms and toolkits, and on growing internal teams that can extract value from those integrations. According to Gartner, the challenge of AI adoption is clear for many enterprises: only 53% of projects make it out of pilot into production, taking eight months or longer, on average, to create scalable models.

What’s holding AI projects back?  

AI project growth is flat this year. According to O’Reilly’s findings, many enterprises with AI projects in production don’t have dedicated AI specialists or developers overseeing the projects. CIOs of financial services and insurance firms VentureBeat interviewed via email say that AI projects built on a well-defined business case and designed to work around data quality challenges have the highest survival rate. However, those CIOs also caution that it’s essential to keep other C-level executives and board members’ initial enthusiasm for projects on track with updates and short design reviews. O’Reilly’s survey found that 37% of retailers and 35% of financial services firms have AI applications in production.

Financial services CIOs also say real-time risk management models that capitalize on supervised machine learning algorithms and random forest techniques are being pushed to the front of the devops queue today. “We’re seeing the immediate impact of price increases, and it’s making AI- and ML-based financial modeling an urgent priority today,” the CIO of one leading financial services and insurance firm said in an email.
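Below is a minimal sketch, in Python with scikit-learn, of the kind of supervised random-forest risk model those CIOs describe. The dataset, column names and scoring step are hypothetical and not drawn from any firm’s actual pipeline.

```python
# Minimal sketch of a supervised random-forest risk model.
# The CSV file, feature names and label column are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical historical exposures labeled as defaulted (1) or not (0)
df = pd.read_csv("exposures.csv")
features = ["balance", "utilization", "days_past_due", "income"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["defaulted"], test_size=0.2, random_state=42
)

# Random forest is the ensemble technique the CIOs reference
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Score the hold-out set and report how well the model separates risk classes
probs = model.predict_proba(X_test)[:, 1]
print("Hold-out AUC:", roc_auc_score(y_test, probs))
```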

To motivate IT teams to learn AI and ML modeling, some companies offer tuition reimbursement as an incentive. The goal is to build internal teams familiar with the existing IT, database and systems infrastructure that can help create, test and promote models into production. Based on a survey of CIOs (see chart below), overcoming bottlenecks takes a commitment to larger IT budgets, too.

CIOs tell VentureBeat that the best way to improve AI project survival rates is to combine a solid business case with internally trusted, well-vetted data sources. Both create greater credibility, according to the CIOs interviewed.

How data science and machine learning platforms reduce risks  

Nearly seven out of 10 enterprises interviewed (68%) believe unexpected outcomes and predictions from models are their greatest risk. The next greatest reported risks are model interpretability and transparency, and model degradation (both at 61%). Meanwhile, security vulnerabilities are considered a risk by just 42% of respondents, safety by 46%, and fairness, bias and ethics by 51%.

Trusting model outcomes takes so much time and focus that security vulnerabilities are treated as secondary risks and priorities. Devops teams need DSML platforms that support the full scope of the machine learning development lifecycle (MLDLC) with AutoPilot functionality. O’Reilly’s study refers to AutoPilot and its rapid advances in AI-generated coding. However, there’s also a need for an AutoPilot that automatically inspects raw data, selects the most relevant features and identifies the best algorithms. For example, Amazon SageMaker Autopilot, a built-in component of SageMaker Studio, is used by devops teams today to improve model tuning and accuracy.
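A hedged sketch of how a devops team might launch an Autopilot job programmatically with boto3 follows; the job name, S3 paths, target column and IAM role ARN are placeholders, not values from any production account.

```python
# Hedged sketch: launching a SageMaker Autopilot job with boto3.
# The job name, bucket paths, target column and role ARN are placeholders.
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

sm.create_auto_ml_job(
    AutoMLJobName="churn-autopilot-demo",
    InputDataConfig=[{
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://example-bucket/churn/train/",
        }},
        "TargetAttributeName": "churned",   # column Autopilot should predict
    }],
    OutputDataConfig={"S3OutputPath": "s3://example-bucket/churn/output/"},
    RoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
)

# Autopilot inspects the raw data, engineers features and evaluates candidate
# algorithms; poll the job until it completes and reports its best candidate.
status = sm.describe_auto_ml_job(AutoMLJobName="churn-autopilot-demo")
print(status["AutoMLJobStatus"])
```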

SageMaker’s architecture is designed to adapt and flex to changing model building, training, validating and deployment scenarios. SageMaker integrates across AI Services, ML frameworks and infrastructure in the middle of the AWS ML Stack. CIOs tell VentureBeat SageMaker provides greater flexibility in managing notebooks, training, tuning, debugging and deploying models. In short, it provides the model interpretability and transparency enterprises need to see AI as less of a risk.
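To illustrate that train-tune-deploy flow, here is a hedged sketch using the SageMaker Python SDK that trains a model from a custom script and promotes it to a real-time endpoint. The script name, role ARN and S3 paths are assumptions made for the example.

```python
# Hedged sketch of the notebook-to-endpoint flow SageMaker supports:
# train from a custom script, then deploy behind a managed endpoint.
# The entry-point script, role ARN and S3 URIs are placeholders.
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"

estimator = SKLearn(
    entry_point="train_risk_model.py",   # hypothetical training script
    role=role,
    instance_type="ml.m5.xlarge",
    instance_count=1,
    framework_version="1.2-1",
    sagemaker_session=session,
)

# Training runs on managed infrastructure; job logs and debugging output
# are collected alongside the job itself.
estimator.fit({"train": "s3://example-bucket/risk/train/"})

# Promote the trained model to a real-time inference endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```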

DSML platforms, including AWS SageMaker, can help enterprises gain greater visibility across machine learning development lifecycles, reducing the risk of unexpected outcomes and predictions and improving model interpretability and transparency.

SageMaker relies on the AWS Shared Responsibility Model, an AWS framework, to define the extent of its security support versus what customers need to provide. AWS secures up to the software level, as the graphic below shows. Customers are responsible for securing client-side data, server-side encryption and network traffic protection.
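As an illustration of the customer’s side of that split, the sketch below uses boto3 to enforce default server-side KMS encryption on the S3 bucket that holds training data. The bucket name and key ARN are placeholders.

```python
# Hedged sketch of one customer responsibility under the Shared Responsibility
# Model: enforce default server-side encryption on a training-data bucket.
# The bucket name and KMS key ARN are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket="example-training-data",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/example-key-id",
            }
        }]
    },
)
```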

Amazon provides an introductory level of support for Identity and Access Management (IAM) as part of its AWS instances. AWS’ IAM support includes Config Rules and AWS Lambda to create alerts. In addition, AWS’ native IAM has APIs that can integrate into corporate directories and restrict access for users who leave the company or violate access policies. While the Shared Responsibility Model is just a starting point, it’s a useful framework for planning an enterprise-wide cybersecurity strategy. CIOs VentureBeat spoke with say they supplement native IAM support with Privileged Access Management (PAM) and build out their cybersecurity initiatives using the framework as a reference point.
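For example, a team might supplement that baseline with a customer-managed policy scoped to a data-science group, sketched below with boto3. The policy name, group name and listed actions are illustrative assumptions, not a recommended production policy.

```python
# Hedged sketch: a customer-managed IAM policy that scopes SageMaker access
# to a single data-science group. Names and actions are placeholders.
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "sagemaker:CreateTrainingJob",
            "sagemaker:CreateModel",
            "sagemaker:InvokeEndpoint",
        ],
        "Resource": "*",
    }],
}

policy = iam.create_policy(
    PolicyName="DataScienceSageMakerAccess",
    PolicyDocument=json.dumps(policy_document),
)

# Attach the policy to the data-science group; removing a departing user
# from the group revokes this access without touching the policy itself.
iam.attach_group_policy(
    GroupName="data-science",
    PolicyArn=policy["Policy"]["Arn"],
)
```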

AWS’ Shared Responsibility Model delivers a baseline Identity and Access Management (IAM) module that gets enterprises up and running with secure identity management. 

How AI adoption bridges gaps

O’Reilly’s latest survey of AI adoption identifies troubling gaps in the importance enterprises place on security, compliance, fairness, bias and ethics. For example, just 53% of AI projects move from pilot to production, reflecting the lack of integration, visibility and transparency across MLDLCs. Improving how efficiently devops teams, data scientists and researchers create, test, validate and release models is one of SageMaker’s key design goals. It’s an example of how a DSML platform can help reduce model risks and enable AI to deliver more business value over time.