Artificial intelligence laws and regulations are still a new concept for most countries, and the current state of rules is neither universal nor consistent. AI's constructive and harmful roles in every sector make the need for agreed-upon rules clear.
Artificial intelligence (AI), the creation of computer systems that can learn and make decisions with little or no human involvement, has the potential to revolutionize and foster innovation in business and government. More goods and services are entering the market as AI research and technology continue to advance. For instance, businesses are creating AI to help people manage their homes and to let the elderly live independently for longer. AI is applied in many aspects of modern life, including self-driving cars, digital assistants, and healthcare technology.
However, concerns about potential abuse or unforeseen consequences of AI have motivated initiatives to examine the technology and create standards.
Artificial intelligence laws and regulations
Already, AI is enhancing healthcare, connecting people in new ways, and drastically increasing productivity. But when applied incorrectly or irresponsibly, AI can result in job losses, prejudiced or discriminatory outcomes, and other harms.
According to many in the scientific community, deep learning models in particular require rules and restrictions on their development and use in order to prevent unwarranted harm. Artificial intelligence laws and regulations are still up for debate.
AI regulation debate
According to the majority of AI experts and policymakers, a straightforward set of regulatory policies will soon be required: computing power continues to rise, new AI and data science startups appear almost daily, and the amount of data that businesses collect on individuals grows exponentially.
Many national governments have already established artificial intelligence laws and regulations about how data should and shouldn't be gathered and used, though these rules are occasionally ambiguous. When discussing AI regulation and how it should be implemented, governments frequently collaborate with major corporations.
Some legal regulations also govern how explicable AI must be. Many machine learning and deep learning algorithms operate as black boxes, or their inner workings are classified as proprietary technology and kept private. As a result, organizations risk missing a biased output if they don't fully understand how a deep learning model reaches its decisions.
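To make this concrete, here is a minimal sketch of one sanity check organizations can run on a black-box model's outputs without seeing its inner workings: the disparate impact ratio, a rule-of-thumb fairness metric. The column names and the 80% threshold are illustrative assumptions, not requirements from any specific law.

```python
# Minimal sketch: auditing a black-box model's outputs for disparate impact
# without access to its inner workings. Column names and the 80% threshold
# are illustrative assumptions, not requirements from any specific law.
import pandas as pd

def disparate_impact(predictions: pd.Series, groups: pd.Series) -> float:
    """Ratio of favorable-outcome rates between groups (the "80% rule"
    flags ratios below 0.8 as a sign of possible bias)."""
    rates = predictions.groupby(groups).mean()  # favorable rate per group
    return rates.min() / rates.max()

# Made-up model outputs: 1 = favorable decision (e.g., loan approved).
df = pd.DataFrame({
    "approved": [1, 1, 0, 1, 0, 0, 1, 0],
    "group":    ["a", "a", "a", "a", "b", "b", "b", "b"],
})
print(f"disparate impact ratio: {disparate_impact(df['approved'], df['group']):.2f}")
# 0.33 here, well below the 0.8 rule of thumb
```

Checks like this only look at outcomes; they cannot explain why the model decided as it did, which is exactly the gap that explicability rules try to close.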
What are the legal issues of artificial intelligence?
Organizations employing personal information for AI may experience difficulties when striving to follow international data protection rules. Data protection laws are in place in most countries, and they typically cover the gathering, use, processing, disclosure, and security of personal data. Certain regulations may also prohibit transfers of personal information across borders.
The following objectives underlie the majority of data protection laws:
- Minimizing the volume of personal data an organization keeps on a person.
- Ensuring that an individual’s expectations are met on how an organization handles personal information.
- Ensuring the accuracy of personal data while taking the organization’s data processing goals into account.
- Holding companies that collect personal data accountable for abuses of data protection.
Why are law and regulation important in AI development?
AI needs to be regulated for two reasons. First, businesses and governments are using AI to make decisions that could significantly affect our daily lives. For instance, algorithms that grade academic performance can be catastrophic when they go wrong. In the UK, the Secretary of State for Education used an algorithm to calculate each student's final exam mark.
As a result, over 40% of students received worse grades than their teachers had previously assigned, according to an article published by The Guardian. Besides being inaccurate, the algorithm also favored pupils in private schools over those in state schools. The private sector has also exposed the limitations of AI: in one instance, Apple's credit card reportedly offered women lower credit limits than men, the BBC reported.
The second reason is that whoever makes a choice that affects us is accountable to us. Human rights law outlines the minimum standards of behavior that everyone is entitled to. If those standards are not upheld and someone experiences harm, it grants them the right to seek redress. Governments are responsible for ensuring that these standards are kept and that those who violate them are held accountable, typically through administrative, civil, or criminal law. That implies that everyone, including businesses and governments, must adhere to specific norms when making decisions.
People who violate established norms and cause harm to another person must take responsibility for their actions. However, there are already hints that the firms developing AI might escape liability for the problems they cause. For instance, when a pedestrian was killed by an Uber self-driving car in 2018, it was initially unclear who would be held accountable: the backup driver, Uber, or the car's manufacturer. That is why artificial intelligence laws and regulations are vital.
Worldwide AI laws and regulations 2022
It's no secret that countries have been preparing to establish artificial intelligence laws and regulations for some time, given the public indignation generated by viral news articles about the risks of AI. For those of us engaged in these initiatives, it felt as though every other week of 2021 brought new guidance, standards, or a Request for Information (RFI) from some official body, signaling an impending and significant transformation.
Although 2021 may have been a significant year for AI regulation, 2022 will be a whole different story for new artificial intelligence laws and regulations.
How many countries have AI regulations?
At least 60 nations have adopted artificial intelligence laws and regulations since 2017, a flurry of activity that almost matches the rate at which new AI is being deployed. The growth of AI governance raises concerns about impending obstacles to international collaboration. That is to say, any new legislation will have a significant impact on global markets due to the increasing prevalence of AI in both physical products and online services.
This picture is further complicated by the many ways AI can be trained and used. For instance, cloud-hosted AI systems can be accessed remotely from any location with an internet connection. Retraining and transfer learning allow teams operating in different nations to cooperatively build an AI model from a variety of datasets. And thanks to edge and federated machine learning techniques, globally distributed physical devices can now share data that influences how their AI models work.
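As an illustration of why this kind of cooperation complicates governance, here is a minimal sketch of federated averaging, the core idea behind federated learning: each site trains on its own private data, and only model weights cross borders. The linear model and synthetic data are assumptions made for the sake of a runnable example.

```python
# Minimal sketch of federated averaging (FedAvg): each "national" site trains
# on its own private data and shares only model weights, never raw records.
# Pure NumPy; the linear model and synthetic data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def local_update(w, X, y, lr=0.1, steps=20):
    """A few steps of gradient descent on one site's private data."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three sites, each holding a dataset that never leaves its jurisdiction.
sites = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    sites.append((X, y))

w_global = np.zeros(2)
for _ in range(10):
    # Each site trains locally from the shared global weights ...
    local_ws = [local_update(w_global.copy(), X, y) for X, y in sites]
    # ... and a coordinator averages the resulting weights.
    w_global = np.mean(local_ws, axis=0)

print("recovered weights:", w_global)  # approaches [2.0, -1.0]
```

The regulatory puzzle follows directly: the coordinator, the sites, and the deployed model can each sit in a different jurisdiction, yet they jointly produce one decision-making system.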
These factors make AI governance more challenging, but they shouldn't be used as a justification to forgo important safeguards (we won't go over all of them here). The optimal outcome would be to implement effective governmental supervision of AI while still enabling these international AI supply networks. A more united international strategy for artificial intelligence laws and regulations might also improve supervision, direct research toward common problems, and encourage the exchange of best practices, code, and data.
Artificial intelligence laws and regulations in Europe
The absence of thorough regulation of artificial intelligence concerns the European Union. The EU AI Act, a significant step, will influence the future of artificial intelligence in the context of personal data protection.
Today, artificial intelligence operates in a largely lawless environment: AI algorithms increasingly make important decisions about people's lives without oversight or accountability. The AI Act is the European Union's proposed remedy.
As a result of unregulated AI, innocent individuals may be imprisoned, students may perform poorly in their studies, and people may face financial hardship. The AI Act is the first proposed law that aims to regulate the entire industry and prevent these harms. If the EU succeeds, it might set a new international standard for artificial intelligence laws and regulations.
EU AI Act
Open-source AI regulation is mentioned in the Artificial Intelligence Act (AIA), which is now under discussion in the EU. However, severely restricting the use, sharing, and distribution of general-purpose, open-source AI (GPAI) can be seen as a step backward.
The EU's AIA will establish a procedure for government oversight and self-certification of various categories of high-risk AI systems, transparency requirements for AI systems that interact with people, and a ban on a few "unacceptable" uses of AI systems. Although the law has been thoroughly reviewed in other Brookings Institution analyses, a few of its sections are particularly crucial for how it will influence global algorithmic governance. The three categories of requirements most likely to have global implications, each warranting independent examination, are high-risk AI in products, high-risk AI in human services, and AI transparency requirements.
These artificial intelligence laws and regulations will undoubtedly have some extraterritorial effects: they will influence the creation and application of many AI systems globally and serve as inspiration for additional legislative initiatives. The concept of a "Brussels effect" implies that the EU can unilaterally establish standards that become universally accepted, not through coercion but through the appeal of its 450-million-strong consumer market, giving EU rules significant influence beyond its borders.
The General Data Protection Regulation (GDPR) and the ePrivacy Directive, two EU laws governing data privacy, show how the "Brussels effect" emerges in two related ways: "de facto" and "de jure." Rather than creating separate procedures for Europeans, many websites worldwide have adopted the EU's mandates to ask all users for consent to process personal data and to use cookies.
This demonstrates the de facto Brussels effect, in which businesses universally adhere to EU regulations to standardize a good or service and streamline their operations. It is frequently followed by the de jure Brussels effect, in which other nations enact formal laws consistent with EU law, partly because multinational corporations vehemently oppose regulations that would interfere with their newly standardized processes. This is one reason many other nations have passed conforming or comparable data protection laws.
To fully grasp the potential impact of these artificial intelligence laws and regulations, one should consider both of these factors, much as in data privacy: how will the AIA shape corporate behavior worldwide (the de facto Brussels effect), and how much unilateral influence will it have on international lawmaking (the de jure Brussels effect)?
In addition to the AIA's primary objectives of safeguarding EU consumers and, to a lesser extent, encouraging EU AI innovation, MEPs have cited the Brussels effect as justification for passing it as soon as possible.
It is widely held among EU policymakers that being the first large jurisdiction to enact horizontal AI regulation will confer a considerable and long-lasting economic advantage. But according to research from the Centre for European Policy Studies, Europe may lose its competitive advantage in digital governance as other nations invest in digital regulatory capacity, a key facilitator of the EU's influence on global rulemaking, and catch up to the EU.
Another academic study finds less benefit in regulatory competition, which attempts to attract businesses through clear and effective laws, citing factors such as the high cost of relocating AI companies to the EU and the difficulty of quantifying the effects of artificial intelligence laws and regulations.
These broader viewpoints give grounds for doubting the AIA's global impact, but they do not examine the AIA's specific requirements in relation to the actual business models of digital enterprises. Looking at how the specific rules of the AIA interact with the architecture of AI systems used by technology corporations suggests a more subdued worldwide influence.
Notably, this requires predicting how corporate behavior will shift in reaction to the AIA, a challenging task fraught with the possibility of error. Despite this risk, both charting a path toward global consensus and assessing the genuine consequences of the AIA depend on anticipating the responses of corporate players.
The proposed AIA covers a wide spectrum of AI systems used in already regulated industries such as aviation, automobiles, boats, elevators, medical equipment, and industrial machinery. In these cases, the requirements for high-risk AI are incorporated into the existing conformity assessment process by sectoral regulators, such as the EU Aviation Safety Agency for aircraft, or by a combination of approved third-party organizations and a central EU body, as with medical devices. As a result, extraterritorial businesses that sell regulated goods in the EU have already undergone the conformity assessment procedure. These artificial intelligence laws and regulations simply modify the details of this oversight; they make no changes to its scope or method.
These modifications are not minor, though. In general, they demand that businesses incorporating AI systems into regulated goods intended for sale in the EU give those AI systems at least some specialized attention: implementing a risk management process, adhering to higher data standards, thoroughly documenting the systems, routinely logging their operation, informing users of their purpose, enabling human oversight, and monitoring them continually.
While the majority of these artificial intelligence laws and regulations are probably new, sectoral regulators that have dealt with AI for a while have likely already put some of them in place. As a result, AI systems within regulated products will need to be independently documented, evaluated, and monitored rather than merely assessed as part of the product's overall functionality. This raises the bar for integrating AI systems into products.
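To give a feel for what such record-keeping and oversight obligations might look like in practice, here is a hypothetical sketch of a decision wrapper that logs every model decision and routes low-confidence cases to a human reviewer. The thresholds, field names, and review rule are our own illustrative assumptions, not text from the AIA.

```python
# Hypothetical sketch: logging decisions and enabling human oversight for a
# high-risk AI system. Thresholds, field names, and the review rule are
# illustrative assumptions, not requirements quoted from the AIA.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_audit")

REVIEW_MARGIN = 0.2  # scores this close to the boundary go to a human

def audited_decision(model_score: float, subject_id: str) -> dict:
    """Record one decision event and flag uncertain cases for human review."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "subject_id": subject_id,
        "model_score": model_score,
        "decision": "approve" if model_score >= 0.5 else "deny",
        "needs_human_review": abs(model_score - 0.5) < REVIEW_MARGIN,
    }
    # In practice this would go to an append-only, tamper-evident store.
    log.info(json.dumps(record))
    return record

result = audited_decision(0.55, "applicant-42")
print(result["needs_human_review"])  # True: too close to the boundary
```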
Germany’s role in the EU AI Act
Germany is actively participating in the AI regulation debate: AI is called a "digital key technology" in the federal government's Coalition Treaty, and a European AIA is broadly supported. In an article published in the German newspaper Frankfurter Allgemeine Zeitung, data privacy specialists contend that AI should be strictly regulated in line with the EU's General Data Protection Regulation: "In most circumstances, AI is useful and safe, and once it has demonstrated its fitness, its use may even be ethically justified, for instance in the field of healthcare or the battle against crime."
Due to the AIA's extraterritorial effect, U.S.-based businesses should take it seriously: much like the GDPR, it will probably cover AI systems operated in the United States if the "output" of the AI is "used" in the EU. The proposed sanctions, which may amount to as much as 6% of the breaching company's annual global revenue, are extremely harsh and much greater than those set forth by the GDPR.
The discussion of artificial intelligence laws and regulations in Germany appears to have migrated entirely to the EU level. In a written answer to a formal information request by the Alternative für Deutschland (AfD) party, the German government stated that "negotiations [on the AIA on the EU level] are ongoing."
The German government views the regulation's scope, the definition of "AI system," and the scope of prohibited and high-risk AI systems as crucial to moving negotiations between EU member states forward. For instance, the new German administration has stated that it favors a complete prohibition on AI-based facial recognition in public spaces.
German business and trade organizations are currently pressuring the EU institutions to change the AI Act, though most of German industry supports the AIA concept. The industry's stance is best summed up by Iris Plöger, a member of the executive board of the Federation of German Industries (BDI): "With the draft [AIA] regulation, the EU Commission presents an initial proposal for a legal framework for AI that is shaped by European values. It is right that the proposal focuses on AI systems that may be associated with particularly high risks."
She also voices the BDI's fear that excessive regulation of AI will hamper the early development of innovative applications of an essential technology. According to her, the legislative framework provided by the AIA must be well balanced so that European businesses can combine their industrial strength in the AI sector and compete effectively against nations like China, the USA, or Israel.
Artificial intelligence laws and regulations in the United Kingdom
The British government has frequently underlined its ambition to become a global "AI superpower." In July, it unveiled a new "AI rulebook" in an effort to govern the sector, promote innovation, and boost public trust in the technology. The rulebook states that regulation of artificial intelligence will be less centralized than in the EU, instead giving current regulatory authorities the discretion to decide according to the situation.
The UK’s approach is based on six fundamental principles that regulators must follow, with the goal of giving regulators flexibility to execute these in ways that fit the usage of AI in different industries.
Damian Collins, the digital minister, provided a comment on the measures:
“We want to make sure the UK has the right rules to empower businesses and protect people as AI and the use of data keep changing the ways we live and work. It is vital that our rules offer clarity to businesses, confidence to investors, and boost public trust. Our flexible approach will help us shape the future of AI and cement our global position as a science and tech superpower.”
Businesses may currently find it difficult to understand and navigate how existing legislation applies to AI. The government is also concerned that if artificial intelligence laws and regulations do not keep pace with technological progress, innovation may be hampered and it will become harder for regulators to safeguard the public.
The AI rulebook describes artificial intelligence as a "general purpose technology", comparable to electricity or the internet, that will have a substantial impact on many aspects of our lives and will vary significantly depending on the context and application, many of which we cannot yet forecast.
According to the plans presented, the UK's approach to artificial intelligence laws and regulations aims to give regulators and their industries as much leeway as possible. The idea is that this strategy will give companies more freedom to invest, but it remains to be seen whether it actually delivers the clarity they need.
This AI rulebook takes a different tack from the EU AI Act, which will be overseen by a single regulatory body and aims to harmonize the law across all member states. According to the rulebook, the EU's regulatory approach rests on a "relatively fixed definition in its legislative proposals."
“Whilst such an approach can support efforts to harmonize rules across multiple countries, we do not believe this approach is right for the UK. We do not think that it captures the full application of AI and its regulatory implications. Our concern is that this lack of granularity could hinder innovation,” the AI rulebook states.
Instead, the AI rulebook outlines the fundamental characteristics of artificial intelligence (AI) to help define the framework’s breadth while allowing regulators to develop more specific definitions of AI for their particular domains or industries, in what the UK refers to as a “Brexit seizing moment.”
“This is in line with the government’s view that we should regulate the use of AI rather than the technology itself – and a detailed universally applicable definition is therefore not needed. Rather, by setting out these core characteristics, developers and users can have greater certainty about the scope and the nature of UK regulatory concerns while still enabling flexibility – recognizing that AI may take forms we cannot easily define today – while still supporting coordination and coherence,” the AI rulebook adds.
In light of this, the artificial intelligence laws and regulations of the UK suggest creating a “pro-innovation framework” for regulating artificial intelligence technologies. This framework would be supported by a set of principles that are:
- Context-specific: They suggest regulating AI in accordance with its application and the effects it has on people, communities, and enterprises within a specific environment and giving regulators the task of creating and enacting suitable legislative responses. This strategy will encourage innovation.
- Pro-innovation and risk-based: They suggest concentrating on problems where there is demonstrable proof of actual risk or lost opportunities. And they want regulators to pay more attention to real threats than imagined or minor ones related to AI. They aim to promote innovation while avoiding erecting pointless obstacles in its path.
- Coherent: A set of cross-sectoral principles customized to the unique properties of AI is proposed, and regulators are asked to understand, prioritize, and apply these principles within their respective sectors and domains. They will look for ways to assist and encourage regulatory cooperation, making the framework as simple to use as possible in order to create coherence and boost innovation.
- Proportionate and adaptable: To keep their approach adjustable, they want to first set out the cross-sectoral principles on a non-statutory basis, though they will keep this under review. They will ask regulators to take a light touch at first, with options like voluntary measures or guidance.
Artificial intelligence laws and regulations in the United States
States have passed their own artificial intelligence laws and regulations, leading to a fragmented approach in the United States up to this point. The majority of the laws that have been passed center on creating various committees to decide how state agencies can use AI technology and to research AI’s possible effects on the workforce and consumers. Common state legislation that is currently pending takes it a step further and would govern the responsibility and transparency of AI systems when they analyze and make choices based on consumer data.
The National AI Initiative was established after the U.S. Congress passed the National AI Initiative Act in January 2021 to strengthen and coordinate AI research, development, demonstration, and education activities across all U.S. departments and agencies. The Act involved numerous administrative agencies, including the Federal Trade Commission (FTC), the Department of Defense, the Department of Agriculture, the Department of Education, and the Department of Health and Human Services, and created new offices and task forces aimed at implementing a national strategy for artificial intelligence laws and regulations.
Among the pending pieces of national legislation is the Algorithmic Accountability Act of 2022, presented in both houses of Congress in February 2022. The proposed Act would instruct the FTC to draft regulations requiring "covered entities," companies meeting certain criteria, to conduct impact assessments when using automated decision-making processes, especially those powered by AI or machine learning. This responds to reports that such systems can produce biased and discriminatory outcomes.
Although the FTC has not yet published artificial intelligence laws and regulations, the subject is on the agency's agenda. In a memo released in April 2021, the FTC warned businesses that deploying AI that produces discriminatory outcomes violates Section 5 of the FTC Act, which forbids unfair or deceptive acts. The FTC may soon go further: in June 2022, the agency said it would file an Advance Notice of Proposed Rulemaking to "guarantee that algorithmic decision-making does not result in detrimental discrimination," with the public comment period closing in August 2022.
The FTC also sent a report to Congress outlining potential uses of AI to combat online harms like scams, deepfakes, and the sale of opioids, but cautioned against relying too heavily on these tools because of their propensity to yield inaccurate, biased, and discriminatory results.
Companies should also carefully consider whether other, non-AI-specific regulations could expose them to liability for their use of AI technology. For example, in May 2022 the U.S. Equal Employment Opportunity Commission (EEOC) issued guidance warning companies that using algorithmic decision-making tools to assess job applicants and employees could violate the Americans with Disabilities Act by, among other things, intentionally or unintentionally screening out individuals with disabilities.
Other American government organizations are starting to contribute to shaping artificial intelligence laws and regulations. In November 2021, the White House Office of Science and Technology Policy invited stakeholders from a variety of industries to participate in creating a "Bill of Rights for an Automated Society."
Topics like the use of AI in the criminal justice system, equal opportunity, consumer rights, and the healthcare system might all be included in such a Bill of Rights. Additionally, to create “a voluntary risk management framework for trustworthy AI systems,” the National Institute of Standards and Technology (NIST), a division of the U.S. Department of Commerce, is collaborating with stakeholders. The final product of this initiative could be comparable to the voluntary regulatory framework that the EU has suggested.
The overarching theme of all current and proposed AI policies worldwide is maintaining AI's accountability, transparency, and fairness. For businesses using AI technology, keeping systems compliant with the many rules designed to achieve these aims may be challenging and expensive. Two elements of AI's decision-making process make oversight particularly difficult (see the sketch after this list):
- Opaqueness: users can influence data inputs and outputs but frequently cannot articulate how, and from which data points, the system reached a conclusion.
- Frequent adaptation: systems change over time as they learn from new data.
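The second point is easy to demonstrate. The sketch below, a minimal example using scikit-learn's online SGDClassifier on synthetic data, shows how a continually learning model's answer for the same applicant can change as new data arrives, which is precisely what makes point-in-time audits insufficient.

```python
# Sketch of "frequent adaptation": an online learner's answer for the *same*
# person can change as new data arrives, complicating audits. Synthetic data.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier(loss="log_loss", random_state=0)
fixed_applicant = np.array([[0.5, 0.5]])

# First batch: positives cluster around (1, 1), negatives around (-1, -1).
X = np.vstack([rng.normal(loc=[1, 1], size=(100, 2)),
               rng.normal(loc=[-1, -1], size=(100, 2))])
y = np.array([1] * 100 + [0] * 100)
model.partial_fit(X, y, classes=[0, 1])
print("decision after batch 1:", model.predict(fixed_applicant)[0])  # typically 1

# The world drifts: later batches reverse the pattern, and the model adapts.
for _ in range(5):
    X = np.vstack([rng.normal(loc=[-1, -1], size=(100, 2)),
                   rng.normal(loc=[1, 1], size=(100, 2))])
    model.partial_fit(X, y)
print("decision after drift:  ", model.predict(fixed_applicant)[0])  # typically 0
```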
Policymakers must take care not to overwhelm businesses, ensuring that stakeholders can still make use of AI technology's significant benefits in a cost-effective way. The EU and China are currently preparing artificial intelligence laws and regulations, and the United States has the chance to watch the results and see whether their strategies strike a good balance. To help set the global norm for AI regulatory requirements, the U.S. may want to expedite the adoption of laws with a similar purpose.
Artificial intelligence laws and regulations in China
China has taken the lead in advancing artificial intelligence laws and regulations past the proposal stage. In March 2022, China approved a law regulating how businesses use algorithms in online recommendation systems, mandating that such services uphold moral and ethical standards, be accountable and transparent, and "disseminate positive energy."
Under the law, businesses must notify users when an AI algorithm is used to decide what content to show them and give them the choice to opt out of being targeted. The law also forbids algorithms that present consumers with different prices based on personal information. We anticipate that as AI legislation spreads around the world, it will reflect similar themes.
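A hypothetical sketch of what complying with these two requirements could look like in application code follows; the function names, fields, and notice wording are invented for illustration and are not taken from the Chinese provisions.

```python
# Hypothetical sketch of the two headline rules: disclose algorithmic
# curation and honor an opt-out, and never vary prices on personal data.
# All names and wording are invented for illustration.
from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    opted_out: bool  # the user declined algorithmic targeting

BASE_PRICE = 9.99  # one price for everyone

def feed_for(user: User, ranked: list[str], chronological: list[str]) -> dict:
    """Return a content feed plus the required disclosure notice."""
    if user.opted_out:
        return {"items": chronological,
                "notice": "Chronological feed; no algorithmic targeting."}
    return {"items": ranked,
            "notice": "This feed is ranked by a recommendation algorithm."}

def price_for(user: User) -> float:
    # Deliberately ignores user attributes: no personalized pricing.
    return BASE_PRICE

print(feed_for(User("u1", opted_out=True), ["a", "b"], ["b", "a"])["notice"])
```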
Artificial intelligence laws and regulations in India
Although India has no dedicated data protection law, Sections 43A and 72A of the Information Technology Act protect personal data and, similar to the GDPR, provide a right to compensation for unauthorized disclosure of personal information. In 2017, the Supreme Court deemed the right to privacy a fundamental right protected by the Indian Constitution.
By 2035, AI has the potential to add 957 billion US dollars to India's economy, around 15% of its current gross value added. In the years to come, AI will be able to affect everyone's life in some way. NITI Aayog, the government's policy commission, launched a number of programs on AI applications in 2018.
The Ministry of Electronics and Information Technology established four committees to focus on and examine artificial intelligence laws and regulations. A Joint Parliamentary Committee is now debating the Personal Data Protection (PDP) Bill, 2019, a proposed data protection statute that will be enacted into law once both chambers of Parliament approve it. In India, the use of AI is progressing more quickly than the laws put in place to control it, and industries have begun upskilling their workforce with the help of AI technology.
The recently released New Education Policy places a strong emphasis on teaching coding to pupils as early as Class VI. In the coming years, India could serve as a center for cutting-edge AI technologies.
Cyril Amarchand Mangaldas is perhaps the first law practice in India to apply AI, mostly to analyze and adapt contracts and other legal documents. At an event hosted by the Supreme Court Bar Association (SCBA), CJI SA Bobde spoke about increased AI use in the judicial system, particularly in docket management and decision-making. In developing nations like India, however, resistance to this new trend may prevent the regularization of AI usage. There is also concern that AI could have negative effects in a country where many people lack digital literacy and live in poverty.
The consequences of informal regulation of AI
The White House's recently announced guidance on artificial intelligence laws and regulations can establish a framework for future rulemaking or legislation. The administration's dedication to a sectoral approach is good news: because artificial intelligence is essentially a collection of statistical techniques that can be applied across many sectors of the economy, having a federal AI commission enforce one-size-fits-all regulations makes little sense. The White House guidance reasonably recommends that sectoral regulators create rules for AI applications under their purview. R. David Edelman, a former White House official, makes a similar argument in a recent op-ed about the need to avoid treating AI as a single entity.
Unfortunately, the guidance also clings to an antiquated, hands-off strategy. It encourages regulators to view their own actions as threats to innovation: "Avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth," regulators are told. Regulation is framed as an expense, a delay, a burden, or a barrier to be used only as a last resort.
This theory rejects the notion that policies promoting transparency, accountability, and fairness might themselves foster innovation and progress in artificial intelligence. In the modern world, the actual responsibility of AI regulators is to develop a set of artificial intelligence laws and regulations that simultaneously protect the general public and foster corporate innovation, not to sacrifice one in favor of the other.
Conclusion
Artificial intelligence laws and regulations affect not only individuals but also democracy and society at large. And as with all legal systems, good oversight determines whether they are effective. Although it is not yet clear exactly what this coordination should look like, supervisory authorities must work together more closely and more frequently, which is why attention to how oversight is organized remains important.