Dr Vassilia Orfanou, PhD, Post Doc
Writes for the Headline Diplomat eMagazine, LUDCI.eu
With thanks to Silicon Luxembourg, LIST, and all other important contributors who supported the publication of this article.
Introduction
Not long ago, we found ourselves in the uncharted territory of artificial intelligence innovation, reminiscent of the Wild West. Concepts like self-driving cars mastering the rules of the road, and generative AI tools excelling at conversation, content creation, creativity, and even robotic surgery, were just beginning to take shape.
Major tech players such as OpenAI, Microsoft, Google, Amazon, X, and Meta were embroiled in a frenzied race to stake their claim and cast their charm in a domain where caution had always been the norm.
Almost overnight, artificial intelligence has opened up numerous possibilities for the future. With such remarkable progress comes the weighty responsibility to mitigate the potential “dark side” of the technology.
The World’s first AI Law
On February 2nd, 2024, the European Union stepped in with the AI Act, known as the world’s first comprehensive AI law. Its aim is to guarantee the safety of artificial intelligence systems (AIS), safeguard fundamental rights, and protect both democracy and the development of businesses.
According to some stakeholders and observers, this could prove to be a brake on innovation and a heavy constraint on small businesses and startups. This article critically examines the implications of the AI Act for businesses, weighing its potential benefits against the critical challenges companies must address as they adapt to an ever-evolving regulatory landscape.
The EU’s AI Act Necessitates Responsible AI
AI is generally considered very powerful. Some experts believe that minimizing risks posed by AI should be a “global priority” alongside other risks of societal proportions, such as nuclear war or pandemics.
The EU has taken a decisive step forward here. The AI Act addresses various aspects of AI development and use, including transparency, accountability, and safety.
Francesco Ferrero, Director of IT for Innovative Services at LIST, says “The EU AI Act marks a positive step in regulating AI, recognizing its power and necessity for oversight. The Act aims to foster inclusive development and equal AI access while promoting safety and innovation. However, if confined to Europe, it might stifle European innovation and hinder its business attractiveness”.
Francesco continues, “Exemptions for open AI tech and the obligation for member states to establish AI regulatory sandboxes can mitigate this as they provide controlled environments for testing under oversight. Organisations such as LIST, which is the sole Luxembourg partner of an EU Testing and Experimentation Facility for AI and has developed its own AI sandbox to assess the bias of AI models, can help companies, especially SMEs and start-ups, navigate these uncharted waters.”
To compete globally, he says, “Europe must prioritize becoming a knowledge and technology superpower alongside being a regulatory one. Initiatives at both national and European levels, akin to the Chips Act, are essential. Luxembourg could spearhead efforts to prioritize AI research, following the example of the semiconductor sector, to reduce strategic dependence and foster excellence in AI development.”
Key Features of the AI Act
These are some of the most important regulatory frameworks of the world’s first AI regulation:
Focus on high-risk systems
The Act categorizes AI systems based on their potential risks – unacceptable risk, high risk, limited risk and minimal or no risk.
First, unacceptable risk models like the controversial social scoring system used in China are banned. “All AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned, from social scoring by governments to toys using voice assistance that encourages dangerous behavior,” part of the act reads.
There is a focus on high-risk systems, like credit scoring, CV-sorting software for recruitment, and facial recognition. They will face stricter regulations compared to lower-risk applications.
Automated biometric recognition in public spaces, such as AI-based facial recognition, remains permitted, but it should only be used “for the risk assessments of natural persons for the purpose of law enforcement.”
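To make the four-tier structure concrete, here is a minimal, purely illustrative Python sketch of how an organization might label its AI use cases by tier. The keyword map and tier descriptions are assumptions drawn from the examples above, not a legal classification tool; real classification requires analysis of the Act itself (notably its Annex III).

```python
from enum import Enum

class RiskTier(Enum):
    # Tier descriptions paraphrase the Act's risk-based approach.
    UNACCEPTABLE = "banned (e.g. government social scoring)"
    HIGH = "strictly regulated (e.g. CV sorting, credit scoring)"
    LIMITED = "transparency obligations (e.g. chatbots)"
    MINIMAL = "largely unregulated (e.g. spam filters)"

# Hypothetical use-case map for illustration only; a real inventory
# would rest on legal analysis, not keyword lookup.
EXAMPLE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "cv sorting": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "facial recognition": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Map a use-case label to its illustrative risk tier.

    Unknown use cases fall through to MINIMAL, mirroring the Act's
    default of leaving unlisted applications largely unregulated.
    """
    return EXAMPLE_TIERS.get(use_case.lower(), RiskTier.MINIMAL)
```

For example, `classify("CV sorting")` returns `RiskTier.HIGH`, while an unlisted use case such as weather forecasting falls through to `RiskTier.MINIMAL`.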
Generative AI – Is it Caught in the Middle?
According to the Act, generative AI developments will “not be classified as high-risk.” In the future, however, developers of AI-based models such as ChatGPT or Gemini will have to prove compliance with copyright regulations and design their models to prevent the generation of illegal content. Less strict regulations apply to open-source models.
“The technical documentation should be kept up to date, appropriately throughout the lifetime of the AI system…Furthermore, high-risk AI systems should technically allow for the automatic recording of events, using logs, over the duration of the lifetime of the system.” – the AI Act reads.
Transparency in Training Data
AI companies like OpenAI, X, and Google must provide a “detailed summary” of the content they use to train their models. This is so that creators can check whether their works were used, according to Taylor Wessing. However, there will be no right to information about specific works, and the question of a claim to remuneration has not been regulated.
The AI Act as a Double-Edged Sword for Businesses: Benefits and Challenges
The implementation of the AI Act carries profound implications for organizations involved in the development, deployment, or use of AI systems. Below, we explore both the advantages and the challenges of the new EU regulation:
Benefits with a lasting global impact
The EU AI Act may be limited to Europe, but according to industry experts it will have a lasting global impact. The benefits are as follows:
Potential for Global Benchmarking
The AI Act, being the first comprehensive regulatory scheme for AI, has the potential to set an EU benchmark for responsible AI development. Misch Strotz, CEO at LetzAI says, “At LetzAI we’re very happy with the outcome of the AI Act. Our product was built to solve the many challenges that artists and rights holders were having with existing generative AI products”. Strotz continues, “LetzAI is opt-in, so users bring the content, and we’re committed to guaranteeing safety this way. So, for us, all of the proposed requirements make sense, and we are happy to contribute our part”.
Considering this perspective, the AI Act establishes a standard for Europe, yet it may not be the exclusive framework adopted globally. Nevertheless, other nations and regions could draw upon its principles, while formulating their own AI regulations, potentially fostering a more cohesive, global approach to AI governance.
“Not only is there value in regulating AI systems, but being among the first major governments to do so will have broad global impact to the benefit of the EU—often referred to as the ‘Brussels Effect’,” writes Alex Engler, a former fellow in Governance Studies at the Center for Technology Innovation, for Brookings.
Increased Efficiency and Productivity
AI adoption has long posed a serious reputational challenge for many businesses, which is why some have decided to steer clear altogether, arguably the wrong decision. The AI Act is expected to encourage adoption by businesses concerned about their reputation amid questions about AI ethics. This will lead to more development and use of responsible AI to automate tasks, streamline processes, and optimize workflows, increasing efficiency and productivity.
Enhanced Customer/Social Experience
Customers and businesses will benefit from the AI Act because it promotes the development of responsible AI that avoids biases which could negatively impact the customer experience. By using AI technologies that comply with the AI Act, businesses can analyze customer data, personalize offerings, and provide better customer service.
Societal AI Needs Regulation
Regulation is key in technology meant for society. Ben Zhao, Professor of Computer Science at the University of Chicago, says in a forum: “Regulation makes a lot of sense because this is a problem that, in the worst case, has world-ending implications. And if we want to have any hope of doing it ‘right,’ we need to give it time.”
Reduced Operational Costs
According to some experts, the “overcomplicated governance system” may initially increase costs due to compliance requirements. In the long run, however, it could lead to savings through efficient AI use and wider adoption.
Through the automation of routine tasks and the optimization of processes, AI technologies deployed in compliance with the AI Act can help businesses reduce operational costs, identify areas for savings, and improve profitability over time.
Competitive Edge
By implementing AI technologies in compliance with the AI Act, businesses will have a competitive edge. They will be able to optimize pricing strategies, improve customer retention rates, and identify new market opportunities.
Challenges stifling Innovation
While the AI Act boasts several positive aspects, it also faces challenges and criticisms, as elaborated below.
Burdensome Obligations for Startups/Small Businesses
Some observers believe that the “burdensome” obligations will slow down progress and hamper competitiveness for startups and smaller companies, limiting their ability to develop and bring innovative AI solutions to market.
“The regulation could stifle innovation and make it more difficult for European companies to compete with companies from other parts of the world, such as the United States and China,” comments Ronit Saini, a marketing and AI enthusiast focused on increasing sales and customer experience via AI.
Premature Regulation for the Current AI (ANI)
For some schools of thought, we have not yet reached the level where regulation is necessary. “We should regulate when we eventually get closer to Artificial General Intelligence (AGI),” says Thia Kai Xin, senior data scientist at Refinitiv and co-founder of DataScience SG.
“What we have achieved so far is Artificial Narrow Intelligence (ANI) – single purpose AI like AlphaGo, self-driving cars, machine learning models like recommendation systems, deep learning translation, etc. Artificial General Intelligence (AGI) – AI that matches human level ability across a variety of tasks – is not yet achieved.”
But she explains that this stage precedes the last, Artificial Super Intelligence (ASI): AI “beyond human level ability across a variety of tasks.”
Underregulation Concerns
Some, however, still express concerns about underregulation. For example, the use of facial recognition in police investigations has garnered significant attention and sparked numerous discussions. The member states originally agreed that the use of remote biometric recognition should be fundamentally banned. Some also question whether users’ privacy will be effectively protected.
‘Immature’ AI Technologies and Costs
Integrating AI technologies as required by the AI Act, despite still being in an “immature” stage and under development, can be costly. It may necessitate substantial investments in hardware, software, storage, and skilled personnel. This could potentially pose financial hurdles for businesses, particularly smaller ones.
“AI standards remain incomplete and immature relative to those in comparable industries,” argue Hadrien Pouget and Ranj Zuhdi, experts at the Carnegie Endowment, in a commentary on the Act.
Skilled Personnel Challenges
Finally, implementing AI technologies necessitated by the AI Act may require specialized technical skills and ongoing training to keep up with advancements in AI. This poses challenges for businesses in acquiring and retaining skilled personnel. These challenges threaten to make compliance expensive and enforcement inconsistent.
Significant as these challenges are, they also present opportunities that can result in positive developments. Addressing compliance requirements will drive innovation in developing more secure, transparent, and explainable AI systems. Moreover, by prioritizing responsible AI development, Europe can establish itself as a leader in ethical AI practices.
Stakeholder Perspectives on Regulatory Compliance
The Corporate angle
A group of more than 160 company executives in Europe, including those of Siemens, Carrefour, Renault, and Airbus, wrote to EU lawmakers raising “serious concerns” about the EU AI Act, as reported by CNN: “In our assessment, the draft legislation would jeopardize Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing.”
Established players fear that the AI Act could hinder Europe’s technological advancement without effectively addressing core challenges. This suggests a need for ongoing dialogue between regulators and the industry to ensure regulations foster, rather than hinder, responsible innovation.
The view of Startups and Small Enterprises
Christopher Saleh, CEO at Socratezzz says: “I think it is a smart move. As long as they don’t get too overzealous. Transparency, basic accountability, and ethical integrity are good. By encouraging companies to implement these rules ahead of their enforcement, the EU is sending a clear signal that it takes AI ethics and accountability seriously and that it expects companies to do the same.”
Saleh’s perspective reflects a cautious optimism toward regulation. Evidently, SMEs or startups like Socratezzz see the value in transparency and ethical guidelines but worry about overly burdensome regulations. However, they appreciate clear expectations from the EU regarding AI ethics.
On the other hand, Xavier Amatrain, co-founder, machine learning expert, and CTO at Curai, argues that regulators might not fully understand the technology, potentially hindering its development. “Honestly, if we had to regulate AI any time soon, we would not know how to do it. What’s even worse, we could let people with absolutely no understanding of the technology do it,” says Amatrain, expressing a strong opposition to general AI regulation.
“This would be worse than having let the governments regulate the Internet in the 80s. Therefore, AI as such should not be regulated. What should be heavily regulated is its use in dangerous applications, such as guns or weapons.”
While Amatrain makes a good point about focusing on specific harmful applications of AI, it should be noted that his comments predate the AI Act, which directly addresses his concern by establishing a risk-based regulatory framework.
The Government standpoint
German Digital Minister, Volker Wissing says on reaching a compromise to approve the AI Act despite opposition to regulate foundation models: “The wrangling over the German position on the AI Act came to an end today with an acceptable compromise. The negotiated compromise lays the foundations for the development of trustworthy AI. Without the use of artificial intelligence, there will be no competitiveness in the future.”
Wissing’s comment highlights the government’s goal of striking a balance. He acknowledges the need for regulation to ensure trustworthy AI while emphasizing its importance for future competitiveness. This suggests a pragmatic approach that aims to promote responsible AI development.
The Policy approach
Ursula von der Leyen, President of the European Commission, said in a press statement on the political agreement on the EU AI Act: “Until the Act will be fully applicable, we will support businesses and developers to anticipate the new rules. Around 100 companies have already expressed their interest in joining our AI Pact, by which they would commit voluntarily to implement key obligations of the Act ahead of the legal deadline. Our AI Act will also make a substantial contribution to the development of global guardrails for trustworthy AI.” The statement outlines the EU’s commitment to supporting businesses during the transition.
Finding a Balance Between Innovation and Risk
While some contend that the AI Act may not fully grasp the interests of the tech industry, it is also true that the Act’s risk-based approach provides a pragmatic and balanced solution.
“First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements,” The Future of Life Institute highlights the Act’s key tiered structure in its EU AI Act-dedicated website. “Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated.”
Admittedly, the regulated risk areas can potentially affect some major AI innovations in the future. But models with “unacceptable risks” that the Act is trying to regulate are not currently used in the EU. For instance, the more intrusive Chinese social scoring/credit system is not the model used for creditworthiness or for similar purposes in the West.
Additionally, “high risk” models are not banned but will be regulated. For example, “CV-scanning tools that rank job applicants,” which fall within the high-risk category, are known for “algorithmic bias.” According to Oxford research, such “complex algorithms” are known to discriminate against “women, ethnic minorities, people with disabilities and other legally protected groups.”
For instance, “It has been shown that in the US labor market, African-American names are systematically discriminated against, while white names receive more callbacks for interviews,” writes Julius Schulte, a Data Scientist and Strategic Intelligence at the World Economic Forum.
The report also reveals that the “algorithm that Amazon employed to screen job applicants reportedly penalized words, such as ‘women’ or the names of women’s colleges on applicants’ CVs.” This insight led to the scrapping of the AI tool in 2018.
So, for the time being, the direct impact on the industry is likely minimal. The models and businesses most affected by the Act are clearly those raising ethical and social concerns. By focusing stricter regulations on high-risk applications, the AI Act allows for more innovation in lower-risk areas. Such an approach fosters responsible development while mitigating the potential dangers posed by powerful AI.
Conclusion
The AI Act presents a dual landscape of challenges and opportunities for businesses. Successfully navigating its regulations demands thoughtful deliberation. Rather than focusing solely on potential drawbacks, embracing the Act may foster responsible AI development and provide a strategic edge.
The Act does not seem to stifle innovation; it provides a framework for developing safe, secure, and ethical AI solutions, helping businesses embrace these principles to unlock new opportunities while still innovating responsibly.
By prioritizing compliance with the AI Act, businesses can demonstrate their commitment to responsible AI practices, thus building trust with customers, partners, and regulators, while remaining competitive.
Call to Action
European businesses employing AI are encouraged to initiate an internal AI inventory. This involves assessing their existing and future AI systems to determine their risk classification under the Act. Subsequently, developing a compliance strategy becomes imperative. This strategy will entail adjustments to development protocols, data management procedures, and risk mitigation approaches.
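As a minimal sketch of what such an internal AI inventory might look like in practice, the following Python fragment models an inventory record and flags high-risk systems for a dedicated compliance review. The field names, the example records, and the `needs_compliance_review` helper are hypothetical illustrations, not anything prescribed by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal AI inventory."""
    name: str
    purpose: str
    risk_tier: str  # e.g. "unacceptable", "high", "limited", "minimal"
    data_sources: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)

def needs_compliance_review(record: AISystemRecord) -> bool:
    # Under the Act's tiered approach, high-risk systems face the
    # strictest obligations, so flag them for a dedicated review.
    return record.risk_tier == "high"

# Illustrative inventory: a recruitment tool (high-risk under the Act's
# examples) and a customer-support chatbot (transparency obligations).
inventory = [
    AISystemRecord("resume-screener", "CV sorting for recruitment", "high"),
    AISystemRecord("faq-bot", "customer support chatbot", "limited"),
]
flagged = [r.name for r in inventory if needs_compliance_review(r)]
```

Running this sketch leaves only the recruitment tool in `flagged`, matching the Act’s treatment of CV-sorting software as high-risk; a real inventory would of course add owners, documentation links, and logging obligations per system.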
Furthermore, investing in employee training and upskilling is recommended. Engaging with policymakers and actively participating in dialogues concerning AI governance is essential to ensure that the regulatory direction is mutually beneficial for all stakeholders.
References
Amatrain, X. (2018). Should artificial intelligence be regulated? Quora. Available at https://www.quora.com/Should-artificial-intelligence-be-regulated?topAns=53516215. Retrieved on April 14, 2023.
Chan, K. (2024). The E.U. Has Passed the World’s First Comprehensive AI Law, Time. Available at https://time.com/6903563/eu-ai-act-law-aritificial-intelligence-passes/. Retrieved on April 9, 2024.
Dastin, J. (2018). Insight – Amazon scraps secret AI recruiting tool that showed bias against women, Reuters. Available at https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G/. Retrieved on April 14, 2024.
European Parliament (2023). EU AI Act: first regulation on artificial intelligence. Available at https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence. Retrieved on April 9, 2024.
European Parliament (2024). Artificial Intelligence Act – European Parliament. Available at https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.pdf. Retrieved on April 9, 2024.
Kelly-Lyth, A. (2021). Challenging Biased Hiring Algorithms, Oxford Journal of Legal Studies. Available at https://academic.oup.com/ojls/article-abstract/41/4/899/6166290. Retrieved on April 14, 2024.
Kradodomski, A. (2024). The EU’s new AI Act could have a global impact, Chatham House. Available at https://www.chathamhouse.org/2024/03/eus-new-ai-act-could-have-global-impact. Retrieved on April 9, 2024.
NDTV World (2023). Tackling Risks From AI Should Be “Global Priority,” Say Experts, Agence France-Presse. Available at https://www.ndtv.com/world-news/tackling-risks-from-ai-should-be-global-priority-say-experts-4080112. Retrieved on April 9, 2024.
Papakonstantinou, V. (2023). Remote Biometric Identification and Emotion Recognition in the Context of Law Enforcement, EUCrim. Available at https://eucrim.eu/articles/remote-biometric-identification-and-emotion-recognition-in-the-context-of-law-enforcement/. Retrieved on April 9, 2024.
Pouget, H. and Zuhdi, R. (2024). AI and Product Safety Standards Under the EU AI Act, Carnegie Endowment for International Peace. Available at https://carnegieendowment.org/2024/03/05/ai-and-product-safety-standards-under-eu-ai-act-pub-91870. Retrieved on April 9, 2024.
Pownall, C. (2019). Understanding the Reputational Risks of AI, Research Gate. Available at https://www.researchgate.net/publication/340088726_Understanding_the_Reputational_Risks_of_AI. Retrieved on April 14, 2024.
Saini, R. (2023). Should Europe be concerned about AI regulations curbing its competitiveness? Quora. Available at https://www.quora.com/Should-Europe-be-concerned-about-AI-regulations-curbing-its-competitiveness. Retrieved on April 9, 2024.
Schulte, J. (2019). AI-assisted recruitment is biased. Here’s how to make it more fair, World Economic Forum. Available at https://www.weforum.org/agenda/2019/05/ai-assisted-recruitment-is-biased-heres-how-to-beat-it/. Retrieved on April 14, 2024.
Wessing, T. (2024). The EU AI Act and general-purpose AI, Lexicology. Available at https://www.lexology.com/library/detail.aspx?g=28a3cf78-9186-44a8-8dfa-902a47f349b1. Retrieved on April 9, 2024.
Zenner, K. (2024). Some personal reflections on the EU AI Act: a bittersweet ending, Kai Zenner (Digitizing Europe). Available at https://www.kaizenner.eu/post/reflections-on-aiact. Retrieved on April 14, 2024.
Zhao, B.Y. (2018). Should artificial intelligence be regulated? Quora. Available at https://www.quora.com/Should-artificial-intelligence-be-regulated. Retrieved on April 9, 2024.