UK prepares AI rulebook two months after EU AI Act
https://dataconomy.ru/2022/07/19/uk-ai-rulebook-and-eu-ai-act/ | Tue, 19 Jul 2022

The British government has frequently emphasized its goal of becoming a global “AI superpower.” Today it revealed a new “AI rulebook” that it hopes will help regulate the industry, spur innovation, and increase public confidence in the technology. Under the AI rulebook, artificial intelligence regulation will be less centralized than in the EU; instead, existing regulatory authorities will be given the freedom to make decisions based on the circumstances at hand.

AI rulebook seeks to support innovation

The UK’s approach is built on six core principles that regulators must adhere to, with the intention of giving them the flexibility to implement these in ways suited to how AI is used in their particular industries.

Damian Collins, the digital minister, commented on the measures:

“We want to make sure the UK has the right rules to empower businesses and protect people as AI and the use of data keeps changing the ways we live and work.

It is vital that our rules offer clarity to businesses, confidence to investors and boost public trust. Our flexible approach will help us shape the future of AI and cement our global position as a science and tech superpower.”


At the moment, it can be challenging for businesses to navigate existing regulations and understand how far they apply to AI. The government is also worried that innovation may be impeded, and that regulators will find it harder to protect the public, if AI legislation does not keep pace with technological advancement.

The AI rulebook portrays artificial intelligence as a “general purpose technology,” similar to electricity or the internet, that will have a significant impact on many aspects of our lives and whose effects will vary significantly by context and application, many of which we probably cannot even predict today.

According to the plans unveiled today, the UK’s approach seeks to give regulators and their industries as much latitude as possible. It remains to be seen whether this gives organizations the clarity they need, but the expectation is that the strategy will give them more freedom to invest.


This is a different approach from the EU AI Act, which will be overseen by a single regulatory agency and seeks to standardize regulation across all member states. The rulebook describes the EU’s regulatory strategy as resting on a “relatively fixed definition in its legislative proposals.”

“Whilst such an approach can support efforts to harmonize rules across multiple countries, we do not believe this approach is right for the UK. We do not think that it captures the full application of AI and its regulatory implications. Our concern is that this lack of granularity could hinder innovation,” the AI rulebook states.

Instead, in what the UK calls a “Brexit seizing moment,” the AI rulebook outlines the fundamental characteristics of AI to help define the framework’s scope, while allowing regulators to develop more specific definitions of AI for their individual domains or industries.

“This is in line with the government’s view that we should regulate the use of AI rather than the technology itself – and a detailed universally applicable definition is therefore not needed. Rather, by setting out these core characteristics, developers and users can have greater certainty about scope and the nature of UK regulatory concerns while still enabling flexibility – recognising that AI may take forms we cannot easily define today – while still supporting coordination and coherence,” the AI rulebook adds.


In light of this, the AI rulebook suggests creating a “pro-innovation framework” for regulating artificial intelligence technologies. This framework would be supported by a set of principles that are:

  • Context-specific: They propose regulating AI according to its use and the effects it has on people, communities, and businesses within a specific context, and tasking regulators with designing and enacting suitable regulatory responses. This strategy will encourage innovation.
  • Pro-innovation and risk-based: They propose concentrating on issues where there is demonstrable evidence of real risk or missed opportunity, and want regulators to focus on genuine threats rather than imagined or minor ones. The aim is to promote innovation without erecting unnecessary obstacles in its path.
  • Coherent: A set of cross-sectoral principles tailored to the distinctive characteristics of AI is proposed, and regulators are asked to interpret, prioritize, and apply these principles within their respective sectors and domains. They will look for ways to support and encourage regulatory cooperation, creating coherence and boosting innovation by making the framework as simple to use as possible.
  • Proportionate and adaptable: To keep their approach adjustable, they want to set out the cross-sectoral principles on a non-statutory basis at first, though they will keep this under review. They will ask regulators to start with light-touch options such as voluntary measures or guidance.

“We think this is preferable to a single framework with a fixed, central list of risks and mitigations. Such a framework applied across all sectors would limit the ability to respond in a proportionate manner by failing to allow for different levels of risk presented by seemingly similar applications of AI in different contexts.

This could lead to unnecessary regulation and stifle innovation. A fixed list of risks also could quickly become outdated and does not offer flexibility. 


A centralized approach would also not benefit from the expertise of our experienced regulators who are best placed to identify and respond to the emerging risks through the increased use of AI technologies within their domains,” the government stated. 

Cross-sectoral principles

The rulebook acknowledges that the UK’s strategy carries risks and challenges. Compared with a centralized model, the context-driven approach delivers less uniformity, which could cause confusion and less certainty for enterprises. The UK therefore wants to ensure it handles “common cross-cutting challenges in a coherent and streamlined way” by adding a set of overarching principles to the strategy.

The cross-sectoral principles in the rulebook outline how the UK believes well-regulated AI use should look and build on the OECD Principles on AI. Existing regulators will interpret and implement the principles, and the government is exploring how it might strongly encourage the adoption of a “proportionate and risk-based approach.”


The AI rulebook summarizes the cross-sectoral principles under six headings:

  • Ensure that AI is used safely
  • Ensure that AI is technically secure and functions as designed
  • Make sure that AI is appropriately transparent and explainable
  • Embed considerations of fairness into AI
  • Define legal persons’ responsibility for AI governance
  • Clarify routes to redress or contestability

The principles will be put into practice by regulators including Ofcom, the Competition and Markets Authority, the Information Commissioner’s Office, the Financial Conduct Authority, and the Medicines and Healthcare products Regulatory Agency. They will be urged to consider “lighter touch” methods, such as guidance, voluntary measures, and setting up sandboxes.

Conclusion

The EU AI Act’s approach will probably shape how AI regulation is handled globally, much as GDPR did. While context is crucial, there are numerous concerns linked to AI that could significantly affect people’s lives. Flexibility is desirable, but the public and consumers need clear channels for reporting or contesting the use of AI, and access to information about the decision-making processes involved. The UK clearly wants a hands-off strategy that encourages investment, but one must hope this won’t come at the expense of decency, clarity, and equity.

The EU AI Act: Regulating the future of artificial intelligence
https://dataconomy.ru/2022/05/13/eu-ai-act-regulates-artifical-intelligence/ | Fri, 13 May 2022

The European Union is concerned about the lack of comprehensive regulation of artificial intelligence. The EU AI Act is an important step that will shape the future of artificial intelligence in the context of personal data protection.

It’s a lawless world for artificial intelligence in today’s society: critical decisions about people’s lives are increasingly being made by AI programs without any regulation or accountability. The European Union’s proposed solution is the AI Act.

This can result in the imprisonment of innocent people, poor academic performance among students, and even financial crises. The AI Act is the first law in the world designed to regulate the entire sector and avert these harms. If the EU succeeds, it could establish a new global standard for AI governance.

What does the EU AI Act propose?

Here’s a brief summary of everything you need to know about the EU’s AI Act. Members of the European Parliament and EU member countries are currently amending the legislation.

The AI Act is very ambitious in its goals. It would require extra checks on “high-risk” applications of AI, those with the greatest potential to harm people. These might include systems for grading exams, recruiting workers, or assisting judges in legal and judicial decisions. The bill’s first draft also restricts uses of AI deemed “unacceptable,” such as scoring people’s trustworthiness based on their reputation.
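The tiered, risk-based structure described above can be sketched as a small lookup. This is purely illustrative: the tier names reflect the proposal's risk categories, but the example use cases and the `classify` helper are hypothetical, not an official classification.

```python
# Illustrative sketch of the draft EU AI Act's risk tiers.
# The tier names follow the proposal; the example mappings and
# this helper are hypothetical, for illustration only.

RISK_TIERS = {
    "unacceptable": ["social scoring of trustworthiness"],
    "high": ["exam grading", "recruitment screening",
             "judicial decision support"],
    "limited": ["chatbots", "deepfake generators"],
    "minimal": ["spam filters", "video-game AI"],
}

def classify(use_case: str) -> str:
    """Return the risk tier of a known example use case, else 'unclassified'."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "unclassified"
```

Under this reading, obligations scale with the tier: `unacceptable` uses are banned outright, while `high` uses trigger the extra checks the bill describes.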


The proposed legislation would also ban law enforcement use of facial recognition in public spaces. A vocal group of influential voices, including members of the European Parliament and countries such as Germany, wants a total prohibition on its use by both government and corporate bodies, arguing that it enables mass surveillance.

If the EU is able to enact this, it would be one of the most stringent bans yet on facial recognition technology. San Francisco and Virginia have imposed limits on facial recognition, but the EU’s prohibition would apply to 27 nations with a combined population of over 447 million people.

How will the EU AI Act affect people?

By requiring that algorithmic decisions receive human review and approval, the bill aims to prevent people from being harmed by AI. According to Brando Benifei, an Italian member of the European Parliament who is a key player in preparing amendments to the bill, people can trust that they will be safeguarded from the most harmful forms of AI.

The AI Act also requires people to be notified if they encounter deepfakes, biometric recognition technologies, or AI applications that claim to be able to read emotions. Lawmakers are also discussing whether the legislation should include a system for individuals to file complaints and seek compensation if they have been damaged by an AI system.

One of the EU bodies working on amending the bill is also calling for a prohibition on predictive policing technologies. Predictive policing systems use artificial intelligence to evaluate massive data sets, either to proactively deploy police to high-crime areas or to try to forecast whether someone will commit a crime. These algorithms are highly contentious, with critics alleging that they are frequently racially biased and lack transparency.

Are there any examples of such legislation outside the EU?

The GDPR, the European Union’s data protection regulation, is one of the EU’s best-known tech exports. It has been emulated from California to India. The EU’s approach to AI, which focuses on the riskiest applications, is a model other advanced nations could embrace. If Europeans can figure out how to regulate the technology effectively, it might serve as a template for other countries wanting to do the same.

“US companies, in their compliance with the EU AI Act, will also end up raising their standards for American consumers with regard to transparency and accountability,” explained Marc Rotenberg, head of the Center for AI and Digital Policy.

The bill is also being watched closely by the Biden administration. The US is home to some of the world’s biggest AI labs, such as those at Google AI, Meta, and OpenAI, and leads multiple different global rankings in AI research, so the White House wants to know how any regulation might apply to these companies. For now, influential US government figures such as National Security Advisor Jake Sullivan, Secretary of Commerce Gina Raimondo, and Lynne Parker, who is leading the White House’s AI effort, have welcomed Europe’s effort to regulate AI. 



“This is a sharp contrast to how the US viewed the development of GDPR, which at the time people in the US said would end the internet, eclipse the sun, and end life on the planet as we know it,” said Rotenberg.

Despite some unavoidable wariness, the United States has compelling reasons to embrace the bill. It is extremely concerned about China’s rising tech influence. According to Raimondo, for America, maintaining a Western edge in technology is still a question of “democratic values” prevailing. It wants to keep close ties with the EU, a “like-minded ally,” and prevent it from drifting away, Fedscoop reports.

What kind of obstacles are there?

Some of the requirements in the bill are simply impossible to fulfill right now. The bill’s initial draft stated that data sets should be free of errors and that humans should be able to fully understand how AI systems operate. But verifying that a data set is completely error-free would take hundreds of hours of human checking, and even today’s neural networks are so complex that their creators don’t know exactly how they reach their judgments.

Regulators and external auditors are also wary of the mandates that tech businesses would have to implement in order to comply with the legislation.

“The current drafting is creating a lot of discomfort because people feel that they actually can’t comply with the regulations as currently drafted,” says Miriam Vogel, CEO of the nonprofit EqualAI, who also heads the newly formed National AI Advisory Committee, which advises the White House on AI policy.


There’s also a heated debate over whether the AI Act should ban facial recognition outright. It’s a contentious issue because EU nations dislike it when Brussels tries to tell them how to handle national security and law enforcement.

Some countries, such as France, are considering special rules allowing the use of facial recognition to protect national security. In contrast, the new German government, another major European nation and an influential voice in EU decision-making, has stated that it supports a total ban on face scanning in public places.

There will also be debate about which types of AI should be labeled “high risk.” The AI Act covers a variety of AI applications, from lie detection tests to systems for allocating welfare payments. Two political factions are at odds: one fears the legislation’s broad scope will stifle innovation, while the other claims the bill does not go far enough to protect individuals from serious harm.

What effect will the law have on technology development?

A frequent complaint from Silicon Valley lobbyists is that the new rules will add to the burden on AI firms. The EU disagrees: the European Commission, the EU’s executive body, argues that the AI Act would cover only the riskiest category of AI applications, which it estimates at 5 to 15% of all AI applications.

“Tech companies should be reassured that we want to give them a stable, clear, legally sound set of rules so that they can develop most of AI with very limited regulation,” explained Benifei. 


Organizations that do not comply with the AI Act could be fined up to €30 million (about $31 million) or 6% of worldwide annual turnover. Europe has shown a willingness to fine tech businesses in the past: in 2021, Amazon was fined €746 million ($775 million) for failing to comply with the GDPR, and in 2018 Google was fined €4.3 billion ($4.5 billion) for violating EU antitrust rules.
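As a rough illustration of how the penalty cap above works, the following sketch assumes, as in the draft text, that the applicable maximum is the higher of the two figures; the article itself does not spell that detail out, so treat it as an assumption:

```python
# Sketch of the AI Act's maximum-penalty rule as described above:
# up to EUR 30 million or 6% of worldwide annual turnover.
# The "whichever is higher" reading is an assumption taken from
# the draft text, not stated in the article.

def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for the most serious infringements."""
    return max(30_000_000, 0.06 * worldwide_annual_turnover_eur)
```

For a firm with €2 billion in annual turnover the cap would be €120 million, while for smaller firms the €30 million floor dominates.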

When will the EU AI Act come into effect? 

It will be at least another year before a final text is agreed, and several years before firms must comply. Hammering out the fine points of such a comprehensive bill, with so many contentious components, may well take longer than expected: the GDPR took more than four years to negotiate and six years to come into force in the EU. Anything is conceivable in the world of EU lawmaking.
