What will EU AI Act change for real? – Dataconomy
https://dataconomy.ru/2023/12/11/what-will-eu-ai-act-change-for-real/
Mon, 11 Dec 2023

The EU AI Act, a groundbreaking piece of legislation to govern artificial intelligence, has been agreed upon by European Union policymakers, setting a precedent for the most comprehensive framework to date for overseeing this transformative technology.

EU AI Act discussions took 38 hours

This consensus on the EU AI Act emerged following extensive discussions, spanning nearly 38 hours, between legislators and policymakers.

“The AI Act is a global first. A unique legal framework for the development of AI you can trust. And for the safety and fundamental rights of people and businesses. A commitment we took in our political guidelines – and we delivered. I welcome today’s political agreement,” stated EU chief Ursula von der Leyen.

Since the introduction of OpenAI’s ChatGPT last year, which significantly raised public awareness of the swiftly evolving AI sector, there has been a notable acceleration in the efforts to pass the AI Act. Initially proposed by the EU’s executive branch in 2021, the EU AI Act is now widely regarded as a model for governments worldwide. It aims to harness AI’s potential benefits while mitigating various risks, such as misinformation, job displacement, and copyright infringement.

What will EU AI Act change for real (Image: Kerem Gülen/DALL-E 3)

A preliminary political agreement on the Artificial Intelligence Act was reached after negotiators from the European Parliament and the EU’s 27 member states resolved substantial disagreements on contentious issues, including generative AI and law enforcement’s use of facial recognition technology.

European Commissioner Thierry Breton celebrated the breakthrough with a “Deal!” tweet, marking the finalization of a political agreement on the Artificial Intelligence Act.

This sentiment was echoed by the parliamentary committee spearheading the EU Parliament’s negotiation efforts, announcing the accord on the EU AI Act. Initially slowed by debates over regulating language models that utilize online data and AI’s application in police and intelligence operations, the legislation is now poised for approval by EU member states and the Parliament.

The law will mandate that technology companies operating within the EU disclose the data underpinning their AI systems and conduct rigorous testing, particularly for high-stakes uses like autonomous vehicles and healthcare applications. It prohibits the blanket collection of images from the internet or security cameras for facial recognition databases, though it allows for “real-time” facial recognition by law enforcement in tackling terrorism and severe criminal activity.

Technology companies that fail to comply with the EU AI Act will be subject to stringent financial penalties, facing fines up to seven percent of their global revenue. The severity of the fines will be contingent on the nature of the violation and the size of the company. This EU legislation stands out as the most thorough attempt yet to establish regulatory oversight over AI, amidst a growing assortment of guidelines and regulations around the world.




Internationally, other nations are advancing in their own directions. In the United States, President Joe Biden issued an executive order last October, concentrating on AI’s influence on national security and issues of discrimination. Meanwhile, China has introduced regulations mandating that AI technologies align with “socialist core values”. In contrast, countries like the UK and Japan have opted for a more relaxed, less interventionist stance towards AI regulation.

The race to regulate AI

The EU initially took the forefront in the global effort to establish AI regulations, unveiling its initial draft in 2021. However, the surge in generative AI’s popularity necessitated swift updates to the proposal, which is seen as a potential global standard. Generative AI systems, such as OpenAI’s ChatGPT, have captivated the global audience with their capacity to generate human-like text, images, and music. However, they have also sparked concerns about their impact on employment, privacy, copyright protection, and even human safety.

In response, countries such as the US, the UK, and China, along with international groups including the G7, have begun introducing their own regulatory frameworks for AI, though they still trail Europe's progress. The final text of the EU AI Act awaits formal endorsement by the European Parliament's 705 members before the upcoming EU-wide elections, a step expected to be largely procedural.




The AI Act’s initial design aimed to address risks associated with various AI functionalities, categorizing them from low to unacceptable risk. However, the scope was broadened to include foundational models like OpenAI’s ChatGPT and Google’s Bard chatbot. These foundational models, crucial for general-purpose AI services, were a major point of contention in Europe. Despite resistance, notably from France advocating for self-regulation to boost European generative AI firms against major U.S. competitors like Microsoft-backed OpenAI, a provisional compromise was reached early in the negotiations.

Known as large language models, these systems are trained on extensive datasets of text and images from the internet. Unlike traditional AI, which processes data and performs tasks based on pre-set rules, generative AI can create novel content, marking a significant evolution in AI capabilities.

What will EU AI Act change for real (Image: Kerem Gülen/DALL-E 3)

Key changes awaiting

The EU AI Act is set to bring significant changes to the regulation of artificial intelligence, particularly within the European Union. Key changes and impacts include:

  • Establishes the most comprehensive framework for AI oversight in the EU, influencing global standards.
  • Requires technology companies in the EU to disclose AI training data and conduct rigorous testing.
  • Targets critical AI applications in areas like autonomous vehicles and healthcare.
  • Prohibits indiscriminate scraping of images for facial recognition databases, with limited exceptions for law enforcement.
  • Introduces stringent financial penalties for non-compliance, with fines up to seven percent of global revenue.
  • Positions Europe as a leader in global AI regulation.
  • Balances AI benefits with risks like misinformation, job displacement, and copyright infringement.
  • Expands scope to include foundational models like ChatGPT and Google’s Bard, addressing challenges in generative AI.
  • Sets regulatory benchmarks for managing AI’s impact on employment, privacy, and safety.

Featured image credit: Kerem Gülen/DALL-E 3

UK prepares AI rulebook two months after EU AI Act – Dataconomy
https://dataconomy.ru/2022/07/19/uk-ai-rulebook-and-eu-ai-act/
Tue, 19 Jul 2022

The British government has frequently emphasized its goal of becoming a global "AI superpower." Today, it revealed a new "AI rulebook" that it hopes will help regulate the industry, spur innovation, and increase public confidence in the technology. Under the AI rulebook, artificial intelligence regulation will be less centralized than in the EU; instead, existing regulatory authorities will have the freedom to make decisions based on the circumstances at hand.

AI rulebook seeks to support innovation

The UK's approach is built on six core principles that regulators must adhere to, with the intention of giving them the flexibility to implement those principles in ways that suit how AI is used in particular industries.

Damian Collins, the digital minister, provided a comment on the measures:

“We want to make sure the UK has the right rules to empower businesses and protect people as AI and the use of data keeps changing the ways we live and work.

It is vital that our rules offer clarity to businesses, confidence to investors and boost public trust. Our flexible approach will help us shape the future of AI and cement our global position as a science and tech superpower.”


At the moment, it can be challenging for businesses to navigate existing regulations and understand the extent to which they apply to AI. The government is also concerned that innovation may be impeded, and that regulators will find it harder to protect the public, if AI legislation does not keep pace with technological advancement.

The AI rulebook portrays artificial intelligence as a "general purpose technology," similar to electricity or the internet, that will have a significant impact on many aspects of our lives and will vary considerably depending on context and application, many of which we probably cannot even predict today.

According to the plans unveiled today, the UK's approach seeks to give regulators and their industries as much latitude as possible. It remains to be seen whether this gives organizations the clarity they need, but the expectation is that the strategy will give them more freedom to invest.


The AI rulebook takes a different approach from the EU AI Act, which will be overseen by a single regulatory agency and seeks to standardize regulation across all member states. The rulebook describes the EU's regulatory strategy as relying on a "relatively fixed definition in its legislative proposals."

“Whilst such an approach can support efforts to harmonize rules across multiple countries, we do not believe this approach is right for the UK. We do not think that it captures the full application of AI and its regulatory implications. Our concern is that this lack of granularity could hinder innovation,” the AI rulebook states.

Instead, in what the UK refers to as a “Brexit seizing moment,” the AI rulebook outlines the fundamental aspects of artificial intelligence (AI) to help define the framework’s breadth while allowing regulators to develop more particular definitions of AI for their individual domains or industries.

“This is in line with the government’s view that we should regulate the use of AI rather than the technology itself – and a detailed universally applicable definition is therefore not needed. Rather, by setting out these core characteristics, developers and users can have greater certainty about scope and the nature of UK regulatory concerns while still enabling flexibility – recognising that AI may take forms we cannot easily define today – while still supporting coordination and coherence,” the AI rulebook adds.


In light of this, the AI rulebook suggests creating a “pro-innovation framework” for regulating artificial intelligence technologies. This framework would be supported by a set of principles that are:

  • Context-specific: They suggest regulating AI in accordance with its application and the effects it has on people, communities, and enterprises within a specific environment, and giving regulators the task of creating and enacting suitable legislative responses. This strategy will encourage innovation.
  • Pro-innovation and risk-based: They suggest concentrating on problems where there is demonstrable proof of actual risk or lost opportunities. And they want regulators to pay more attention to real threats than imagined or minor ones related to AI. They aim to promote innovation while avoiding erecting pointless obstacles in its path.
  • Coherent: A set of cross-sectoral principles customized to the unique properties of AI are proposed, and regulators are requested to understand, prioritize, and apply these principles within their respective sectors and domains. They will search for ways to assist and encourage regulatory cooperation in order to create coherence and boost innovation by making the framework as simple to use as possible.
  • Proportionate and adaptable: To keep their approach adjustable, they want to first set out the cross-sectoral principles on a non-statutory basis, though they will keep this under review. They will ask regulators to start with a light touch, using options such as voluntary measures or guidelines.

“We think this is preferable to a single framework with a fixed, central list of risks and mitigations. Such a framework applied across all sectors would limit the ability to respond in a proportionate manner by failing to allow for different levels of risk presented by seemingly similar applications of AI in different contexts.

This could lead to unnecessary regulation and stifle innovation. A fixed list of risks could also quickly become outdated and does not offer flexibility.


A centralized approach would also not benefit from the expertise of our experienced regulators who are best placed to identify and respond to the emerging risks through the increased use of AI technologies within their domains,” the government stated. 

Cross-sectoral principles

The rulebook acknowledges that the UK's strategy carries its own risks and challenges. Compared to a centralized model, the context-driven approach delivers less uniformity, which could cause confusion and less certainty for enterprises. As a result, the UK wants to ensure it handles "common cross-cutting challenges in a coherent and streamlined way" by adding a set of overarching principles to the strategy.

The cross-sectoral principles in the rulebook outline how the UK believes well-regulated AI should behave, and build on the OECD Principles on AI. Existing regulators will interpret and put the principles into practice, and the government is exploring how it might strongly encourage the adoption of a "proportionate and risk-based approach."


The AI rulebook summarizes the cross-sectoral principles under six headings:

  • Ensure that AI is used safely
  • Ensure that AI is technically secure and functions as designed
  • Make sure that AI is appropriately transparent and explainable
  • Embed considerations of fairness into AI
  • Define legal persons’ responsibility for AI governance
  • Clarify routes to redress or contestability

The principles will be put into practice by regulators including Ofcom, the Competition and Markets Authority, the Information Commissioner's Office, the Financial Conduct Authority, and the Medicines and Healthcare products Regulatory Agency. They will be urged to consider "lighter touch" methods, such as issuing guidance, encouraging voluntary action, and setting up sandboxes.

Conclusion

In this case, the EU AI Act's approach will probably shape how AI regulation is handled globally, much as GDPR did. While context is crucial, there are numerous risks linked to AI that could significantly affect people's lives. Flexibility is desirable, but the general public and consumers need clear channels for reporting or contesting the use of AI, and access to information about how decisions are made. The UK clearly wants a hands-off strategy that encourages investment, but one must hope this won't come at the expense of decency, clarity, and equity.
