Discussions about regulating artificial intelligence will ramp up next year, with enforceable rules to follow in the years after, Deloitte forecasts.

Image: Robot adding check marks (Alexander Limbach/Shutterstock)

So far, artificial intelligence (AI) is a new enough technology in the business world that it’s mostly evaded the long arm of regulatory agencies and standards. But with mounting concerns over privacy and other sensitive areas, that grace period is about to end, according to predictions released on Wednesday by consulting firm Deloitte.



Looking at the overall AI landscape, including machine learning, deep learning and neural networks, Deloitte said it believes that next year will pave the way for greater discussion of regulating these popular but sometimes problematic technologies. Those discussions will lead to enforceable regulations in 2023 and beyond, the firm said.

Fears have arisen over AI in a few areas. Since the technology relies on learning, it’s naturally going to make mistakes along the way. But those mistakes have real-world implications. AI has also sparked privacy fears as many see the technology as intrusive, especially as used in public places. Of course, cybercriminals have been misusing AI to impersonate people and run other scams to steal money.

The ball has already started rolling on AI regulation. This year, both the European Union and the US Federal Trade Commission (FTC) have released proposals and papers aimed at regulating AI more stringently. China has proposed a set of regulations governing tech companies, some of which encompass AI regulation.

There are a few reasons why regulators are eyeing AI more closely, according to Deloitte.

First, the technology is much more powerful and capable than it was a few years ago. Speedier processors, improved software and bigger sets of data have helped AI become more prevalent.

Second, regulators have gotten more worried about social bias, discrimination and privacy issues almost inherent in the use of machine learning. Companies that use AI have already bumped into controversy over the embarrassing snafus sometimes made by the technology.

In an August 2021 paper (PDF) cited by Deloitte, US FTC Commissioner Rebecca Kelly Slaughter wrote: “Mounting evidence reveals that algorithmic decisions can produce biased, discriminatory, and unfair outcomes in a variety of high-stakes economic spheres including employment, credit, health care, and housing.”

And in a specific example described in Deloitte’s research, a company was trying to hire more women, but the AI tool insisted on recruiting men. Though the business tried to remove this bias, the problem continued. In the end, the company simply gave up on the AI tool altogether.

Third, if any one country or government sets its own AI regulations, businesses in that region could gain an advantage over those in other countries.

However, challenges have already surfaced in how AI could be regulated, according to Deloitte.

Why a machine learning tool makes a certain decision is not always easily understood. As such, the technology is more difficult to pin down compared with a more conventional program. The quality of the data used to train AI also can be hard to manage in a regulatory framework. The EU’s draft document on AI regulation says that “training, validation and testing data sets shall be relevant, representative, free of errors and complete.” But by its nature, AI is going to make errors as it learns, so this standard may be impossible to meet.


Looking into its crystal ball for the next few years, Deloitte offers a few predictions about how new AI regulations may affect the business world.

  • Vendors and other organizations that use AI may simply turn off any AI-enabled features in countries or regions that have imposed strict regulations. Alternatively, they may continue their status quo and just pay any regulatory fines as a business cost.
  • Large regions such as the EU, the US and China may cook up their own individual and conflicting regulations on AI, posing obstacles for businesses that try to adhere to all of them.
  • But one set of AI regulations could emerge as the benchmark, similar to what the EU’s General Data Protection Regulation (GDPR) has achieved. In that case, companies that do business internationally might have an easier time with compliance.
  • Finally, to stave off any type of stringent regulation, AI vendors and other companies might join forces to adopt a type of self-regulation. This could prompt regulators to back off, but certainly not entirely.

“Even if that last scenario is what actually happens, regulators are unlikely to step completely aside,” Deloitte said. “It’s a nearly foregone conclusion that more regulations over AI will be enacted in the very near term. Though it’s not clear exactly what those regulations will look like, it is likely that they will materially affect AI’s use.”
