
The UK government published recommendations for the artificial intelligence industry on Wednesday, outlining a comprehensive approach to regulating the technology at a time when it has reached frenzied levels of hype.
In a white paper to be tabled in Parliament, the Department for Science, Innovation and Technology (DSIT) will outline five principles it wants companies to follow. They are: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.
Instead of setting new rules, the government is asking regulators to enforce existing rules and inform companies about their obligations under the white paper.
It tasked the Health and Safety Executive, the Equality and Human Rights Commission, and the Competition and Markets Authority with coming up with “tailored, context-specific approaches that suit the way AI is actually being used in their sectors.”
“Over the next twelve months, regulators will issue practical guidance to organizations as well as other tools and resources, such as risk assessment templates, to determine how best to apply these principles in their areas,” the government said.
“When parliamentary time permits, legislation may be introduced to ensure that regulators consider the principles consistently.”
Maya Pindeus, CEO and co-founder of AI startup Humanising Autonomy, said the government’s move is a “first step” towards regulating AI.
“There needs to be a bit of a strong narrative,” she said. “I look forward to seeing it. It’s like planting the seed for it.”
However, she added, “It’s incredibly difficult to regulate technology as technology. You want to advance it; you don’t want to hinder any progress when it affects us in some way.”
The recommendations come at a timely moment. ChatGPT, the popular AI chatbot developed by Microsoft-backed OpenAI, has sparked a wave of demand for the technology, and people are using the tool for everything from writing school essays to drafting legal opinions.
ChatGPT has already become one of the fastest-growing consumer applications of all time, attracting 100 million monthly active users as of February. But experts have raised concerns about the technology’s negative effects, including the potential for plagiarism and discrimination against women and ethnic minorities.
AI ethicists are concerned about bias in the data used to train AI models. Algorithms have been shown to skew in favor of men — white men in particular — putting women and minorities at a disadvantage.
There are also fears of jobs being lost to automation. On Tuesday, Goldman Sachs warned that more than 300 million jobs could be at risk of being wiped out by generative AI products.
The government wants companies that incorporate AI into their businesses to ensure they provide an adequate level of transparency about how their algorithms are developed and used. “Organizations must be able to communicate when and how it is used and explain the system’s decision-making process at an appropriate level of detail that matches the risks posed by the use of AI,” the DSIT said.
DSIT said companies should also offer users a way to challenge decisions taken by AI-based tools. Platforms that host user-generated content, such as Facebook, TikTok and YouTube, often use automated systems to remove content flagged as being against their guidelines.
AI, which is estimated to contribute £3.7 billion ($4.6 billion) to the UK economy every year, must be used “in a way that complies with existing UK laws, for example the Equality Act 2010 or UK GDPR, and must not discriminate against individuals or create unfair commercial outcomes,” DSIT added.
On Monday, Michelle Donelan, the secretary of state for science, innovation and technology, visited the offices of AI startup DeepMind in London, a government spokesperson said.
“Artificial intelligence is no longer the stuff of science fiction, and the pace of AI development is staggering, so we need regulations to ensure it develops safely,” Donelan said in a statement on Wednesday.
“Our new approach is based on strong principles so that people can trust businesses to unleash this technology of tomorrow.”
Lila Ibrahim, DeepMind’s chief operating officer and a member of the UK’s AI Council, said AI is a “transformational technology,” but that it “can only reach its full potential if it is trusted, which requires public and private partnership in the spirit of pioneering responsibly.”
“The UK’s proposed context-driven approach will help regulation keep pace with the development of AI, support innovation and mitigate future risks,” Ibrahim said.
This comes after other countries have come up with their own regimes to regulate AI. In China, the government is requiring tech companies to hand over details on their prized recommendation algorithms, while the European Union has proposed its own rules for the industry.
Not everyone is convinced by the UK government’s approach to regulating AI. John Buyers, head of AI at law firm Osborne Clarke, said the move to delegate responsibility for supervising the technology among regulators risks creating a “complicated regulatory patchwork full of holes.”
“The risk with the current approach is that a problematic AI system will need to present itself in the right format to trigger a regulator’s jurisdiction, and moreover the regulator in question will need to have the right enforcement powers in place to take decisive and effective action to remedy the harm caused and generate a sufficient deterrent effect to incentivize compliance in the industry,” Buyers told CNBC via email.
The EU, by contrast, has proposed a “top-down regulatory framework” for AI.