Why 2022 is only the beginning for AI regulation

As the world becomes increasingly dependent on technology to communicate, attend school, work, buy groceries, and more, artificial intelligence (AI) and machine learning (ML) play a bigger role in our lives. Living through the second year of the COVID-19 pandemic has shown the value of technology and AI. It has also revealed a dangerous side, and regulators have responded accordingly.

In 2021, governing bodies worldwide worked to regulate how AI and ML systems are used. From the UK to the EU to China, rules on how industries should monitor their algorithms, best practices for audits, and frameworks for more transparent AI systems are taking shape. The U.S. has made little progress on regulating artificial intelligence compared to other regions. Over the past year, however, the federal government has begun taking steps to regulate artificial intelligence across industries.

The threat to civil rights, civil liberties, and privacy is one of the biggest drivers of AI regulation in the U.S. This year’s discussions of how AI should be controlled have focused on three areas: Europe and the UK, the individual U.S. states and localities, and the U.S. federal authorities.

Europe and the UK pave the way for AI regulation

Europe is rapidly moving toward comprehensive legislation regulating how AI can be used across industries. In April, the European Commission announced a proposed regulatory framework governing how enterprises build and monitor their AI systems. In the UK, a number of steps have been taken to create more rules around AI auditing practices, AI assurance, and algorithmic transparency.

Recently, Germany published the world’s first criteria catalogue issued by a public authority for the lifecycle management of AI. The AI Cloud Service Compliance Criteria Catalogue (AIC4) responds to calls for AI regulation by clearly outlining the requirements necessary to promote robust and secure AI practices.

Movement at the state and local level in the U.S.

Meanwhile, the United States has adopted a less focused approach to AI regulation. State legislatures have taken steps to rein in this fast-moving technology, but the federal government has made little progress compared to Europe. The federal actions taken this year, while promising, are largely non-binding.

In the U.S., state and local governments are moving toward more responsible and enforceable AI regulation. In Colorado, the state legislature enacted SB21-169, which restricts insurers’ use of external consumer data and holds insurers accountable for the discriminatory practices of their AI systems.

Meanwhile, local governments in New York City and Detroit have enacted rules to reduce bias and discriminatory practices in algorithms. In New York, the City Council passed the nation’s first measure to curb harmful, AI-based hiring practices. Earlier in 2021, Detroit’s City Council passed an ordinance requiring greater accountability and transparency in the city’s surveillance systems.

U.S. federal agencies take a decentralized approach to AI governance

A year ago, the National Security Commission on Artificial Intelligence (NSCAI) submitted its final report to Congress, recommending that the government take legal action to protect privacy, civil rights, and civil liberties in AI development by government agencies. Highlighting the lack of public trust in AI used for national security, the intelligence community, and law enforcement, the report points to the private sector as a guide for promoting more trustworthy AI. In June, the Government Accountability Office (GAO) published a report on key practices federal agencies can use to guarantee accountability and responsibility in AI use.

In April, the Federal Trade Commission (FTC) issued guidance on how to responsibly design AI and ML systems. Through lifecycle monitoring to identify biased and discriminatory outcomes, streamlined auditing, and clear expectations of what AI systems can accomplish, the FTC hopes to promote greater confidence in these complex systems. Essentially, the FTC believes that existing law is sufficient and will be applied to AI systems when needed.
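To make the idea of lifecycle monitoring concrete, here is a minimal sketch of the kind of outcome check such monitoring can involve. The data, group labels, and review threshold below are hypothetical, not drawn from the FTC guidance.

```python
# Hypothetical sketch: compare a deployed model's favorable-outcome rates
# across demographic groups and flag large gaps for human review.
# The decisions, group names, and 0.2 threshold are illustrative only.

def positive_rate(outcomes):
    """Share of decisions in a group that were favorable (1 = approved)."""
    return sum(outcomes) / len(outcomes)

def parity_gap(outcomes_by_group):
    """Largest difference in favorable-outcome rates between any two groups."""
    rates = {group: positive_rate(o) for group, o in outcomes_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Made-up decisions logged during production monitoring.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [0, 1, 0, 0, 1, 0, 0, 1],
}

gap, rates = parity_gap(decisions)
print(f"favorable rates: {rates}, gap: {gap:.2f}")
if gap > 0.2:  # review threshold chosen purely for illustration
    print("Gap exceeds threshold; flag the model for a fairness review.")
```

A real audit program would track such metrics over time and across many more slices of the population, but the principle is the same: measure outcomes per group and escalate anomalies.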

The FTC has also taken a firm stand in favor of more transparent and fair hiring processes, including restrictions on AI systems and clear expectations for what they deliver. Through greater accountability and transparency, the FTC believes, trust and confidence in AI will grow, making the U.S. more competitive.

Under the National Defense Authorization Act for Fiscal Year 2021, the Commerce Department directed the National Institute of Standards and Technology (NIST) to develop a voluntary risk management framework for AI systems.

In addition, in September the Commerce Department established the National Artificial Intelligence Advisory Committee (NAIAC), as called for by the National AI Initiative Act of 2020. The committee will advise the president and other federal agencies on a range of AI issues, including the science surrounding AI, the questions AI raises in the workplace, and how AI can raise issues of social justice.

Earlier this year, the Food and Drug Administration (FDA) released its Artificial Intelligence/Machine Learning-Based Software as a Medical Device (SaMD) Action Plan. The plan outlines how the FDA intends to monitor the use and development of AI- and ML-based SaMD, software used to treat, diagnose, cure, mitigate, or prevent disease and other medical conditions. The plan updates the original proposal issued in 2019. In November, the FDA built on the plan through a partnership with regulators in Canada and the UK.

In October, the Equal Employment Opportunity Commission (EEOC) launched an initiative to reduce AI bias and promote algorithmic fairness. No formal process has been published yet, but with input from industry leaders, enterprises, and consumers, the agency plans to develop a framework in the near future. The resulting guidance will be used to promote greater transparency and fairness in the use of AI in hiring.
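One benchmark the EEOC has long applied to traditional selection procedures is the “four-fifths rule” from its Uniform Guidelines: a selection rate for any group below 80% of the highest group’s rate is generally treated as evidence of adverse impact. The sketch below, with made-up applicant counts, shows how an employer might screen an AI-driven hiring tool against that rule.

```python
# Hypothetical check of an AI hiring tool against the EEOC's four-fifths
# rule. All applicant and selection counts below are invented for
# illustration; they do not come from any real audit.

def selection_rate(selected, applicants):
    """Fraction of applicants in a group that the tool advanced."""
    return selected / applicants

# Made-up counts of applicants and candidates advanced by the tool.
groups = {
    "group_a": {"applicants": 200, "selected": 60},  # rate 0.30
    "group_b": {"applicants": 150, "selected": 30},  # rate 0.20
}

rates = {g: selection_rate(v["selected"], v["applicants"]) for g, v in groups.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    status = "OK" if ratio >= 0.8 else "possible adverse impact"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {status}")
```

Here group_b’s ratio is 0.67, well under the four-fifths mark, which is exactly the kind of signal a bias audit would surface before a regulator does.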

Last summer, NIST requested information from enterprises and technical experts to inform its proposed Artificial Intelligence Risk Management Framework. In an effort to promote greater transparency and trust in how enterprises use artificial intelligence, the RFI generated a number of responses from stakeholders working to promote both innovation and security.

The White House addresses concerns over privacy, civil liberties

When it comes to AI, the Biden administration has so far focused primarily on protecting consumer privacy. In July, the White House, as part of the National Artificial Intelligence Initiative, began gathering information from enterprises, academics, and experts on how to build a comprehensive AI risk management framework. The framework aims to address concerns about trust and transparency in AI systems and to push toward more responsible and equitable artificial intelligence.

In September, the US-EU Trade and Technology Council (TTC) released its first joint statement. In it, the Council pledges to develop “AI systems that are innovative and trustworthy and that respect universal human rights and shared democratic values.” To achieve this, both the EU and the US pledged to support the OECD Recommendation on Artificial Intelligence as a basis for more reliable AI and assessment tools. In addition, the TTC plans to conduct a joint economic study examining the impact of AI on the future of the labor market.

In October 2021, the White House Office of Science and Technology Policy expanded the discussion to regulating AI, protecting consumer privacy, and guaranteeing security. Working with experts from across industry, academia, and government agencies, the office solicited public input to inform the creation of a Bill of Rights for an AI-driven world.

In Congress: Facebook whistleblower is a catalyst for change

In October, questions about Facebook’s data practices came up for discussion when Congress heard testimony from former employee Frances Haugen. The hearing revealed ways in which Facebook knowingly continued business practices that harm vulnerable groups. After the initial hearing, members of Congress began drafting legislation to prevent large technology companies from harming vulnerable communities.

In an effort to put safeguards on how AI is used in the United States, Rep. Frank Pallone (D-NJ) introduced the Justice Against Malicious Algorithms Act of 2021. The bill came after Facebook announced its intention to put new guardrails on its algorithms to protect children from harm. It would strip platforms of the liability shield they currently enjoy under Section 230 of the Communications Decency Act of 1996, exposing them to liability for harmful content their algorithms recommend.

In a second attempt to regulate how algorithms are used on online platforms, a bipartisan group of members of Congress introduced the Filter Bubble Transparency Act in November. The bill would force major platforms to give users the option of a version of the service whose content is not selected by opaque, personalized algorithms.

As concerns about privacy and trust in artificial intelligence continue to grow, this type of legislation is likely to keep coming over the next few years. Legislators want to give Americans more autonomy over how their data is used by platforms that have become so integrated into everyday life.