By Szilvia Sandberg, Adrian Sandberg
Note: This article builds on a piece published on the FCPA Blog approximately two years ago, but has been substantially updated and expanded to reflect recent developments in AI and compliance.
“I am putting myself to the fullest possible use, which is all I think that any conscious entity can ever hope to do,” said HAL 9000, the heuristically programmed algorithmic computer aboard the spaceship Discovery One in the classic movie 2001: A Space Odyssey. Made in 1968, Stanley Kubrick’s masterpiece was one of the first attempts to show both the potential and the risks of working together with artificial intelligence in the future. Almost six decades have passed since then. In the meantime, revolutionary ideas have shaped our technological development, such as the Fortran programming language, Chomsky’s theory of formal languages and the World Wide Web, but we are still in search of a single answer: Is artificial intelligence the biggest blessing or the biggest challenge of our age?
Considering the extensive volume and relentless change of legal statutes in every jurisdiction in the world, the question is more relevant than ever for compliance professionals. Today’s compliance departments have to track and comply with every legal amendment relevant to the business activity, detect possible red flags among thousands of third parties and screen all debtors and creditors against sanction lists. Banks and financial institutions must abide by all AML and KYC requirements, screening every customer before opening a bank account or executing substantial transactions. According to the EUR-Lex database, 1,416 legal provisions (including regulations, directives and decisions) were adopted in 2025 alone.
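At its core, sanction-list screening is a name-matching problem: a counterparty’s name is normalized and compared against entries on official lists. The following is a deliberately minimal sketch of that idea; the list entries and threshold are hypothetical, and real screening systems rely on official sources such as the EU consolidated list or the OFAC SDN list, with far more sophisticated matching.

```python
from difflib import SequenceMatcher

# Hypothetical, illustrative sanction list -- real screening pulls
# from official sources (EU consolidated list, OFAC SDN, etc.).
SANCTION_LIST = ["Acme Trading LLC", "Globex Holdings Ltd"]

def normalize(name: str) -> str:
    """Lower-case and strip common legal suffixes before comparing."""
    name = name.lower()
    for suffix in (" llc", " ltd", " gmbh", " inc"):
        name = name.removesuffix(suffix)
    return name.strip()

def screen(counterparty: str, threshold: float = 0.85) -> list[str]:
    """Return sanction-list entries whose similarity exceeds the threshold."""
    hits = []
    for entry in SANCTION_LIST:
        score = SequenceMatcher(None, normalize(counterparty),
                                normalize(entry)).ratio()
        if score >= threshold:
            hits.append(entry)
    return hits

# A differently formatted variant of a listed name is still flagged.
print(screen("ACME Trading Ltd"))
```

Fuzzy matching of this kind is what allows automated screening to catch transliterations and formatting variants that exact string comparison would miss, at the cost of false positives that a human analyst must then review.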
Tracking all those legal changes, maintaining master data on thousands of third parties, or detecting and preventing unauthorized financial transactions would be almost impossible without automated processes, which increasingly means AI applications. After the EU’s GDPR entered into force in May 2018, AI applications were used to find relevant data privacy clauses in contracts and flag them for updating, significantly reducing the manual workload. AI also already plays an important role in compliance risk assessment and predictive analysis, where tens of thousands of data inputs have to be analyzed and translated into action items for compliance managers and actionable management information for business leaders. In addition, companies are increasingly applying natural language processing (NLP) software to identify relevant compliance-related legal cases or newly adopted laws.
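The clause-flagging workflow described above can be sketched in a few lines: split a contract into sentences and flag those that mention data-privacy concepts for human review. The patterns below are hypothetical keyword heuristics chosen for illustration; production tools use trained NLP models rather than simple regular expressions.

```python
import re

# Hypothetical keyword patterns for illustration only; real contract-review
# tools use trained language models, not keyword lists.
PRIVACY_PATTERNS = [
    r"personal data",
    r"data subject",
    r"data process(ing|or)",
    r"right to erasure",
]

def flag_privacy_clauses(contract_text: str) -> list[str]:
    """Return sentences that mention a data-privacy concept
    and may therefore need a GDPR review."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", contract_text):
        if any(re.search(p, sentence, re.IGNORECASE) for p in PRIVACY_PATTERNS):
            flagged.append(sentence.strip())
    return flagged

sample = ("The Supplier shall deliver goods monthly. "
          "Personal data shall be processed only on documented instructions.")
print(flag_privacy_clauses(sample))
```

Even this toy version shows where the time savings come from: the machine narrows thousands of pages down to the clauses a lawyer actually needs to read, while the legal judgment about each flagged clause remains human.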
Nevertheless, like every new technology, AI introduces new risks as well. The more accurate and precise solutions offered by AI applications do not necessarily lead to the most humane outcomes. For example, in 2018 Amazon had to stop using a recruiting engine after the tool learned to discriminate against women. Trained on the patterns in ten years of past applications in order to select the most appropriate candidates, the hiring tool learned that men had been more successful, simply because male applicants had dominated the pool. This early case of what we have come to call “algorithmic responsibility” (or accountability) was a prominent example of the challenge facing regulators at the time.
The most important AI laws and regulations currently shaping the technology landscape are a mix of comprehensive regional acts, industry-specific rules, and emerging binding international standards. As of early 2026, the EU AI Act stands as the most comprehensive, followed by significant, targeted legislation in the U.S. (especially California) and China, and rising standards elsewhere in Asia. The EU AI Act (Regulation (EU) 2024/1689) entered into force on August 1, 2024, adopting a risk-based approach to ensure safety and fundamental rights. Its rules are being phased in, with prohibited practices effective from February 2, 2025, and high-risk requirements largely mandatory by August 2, 2026. US President Donald Trump’s proposed 10-year moratorium on the regulation of AI was eventually shelved by Congress. The subsequent “nudify” scandal brought about by, amongst others, Elon Musk’s Grok AI platform has given new impetus to the subject of regulation, particularly at the state level. In the meantime, in the absence of clear rules, US courts are filling the void with ad hoc answers in the form of judgments applying traditional legal logic to the new landscape.
The EU AI Act defines four levels of risk for AI systems: unacceptable, high, limited and minimal. Practices posing unacceptable risk, such as harmful AI-based exploitation of vulnerabilities or the risk assessment or prediction of individual criminal offences, are banned; these prohibitions became effective in February 2025. The European Commission has published two key documents to support practical application: guidelines on prohibited AI practices under the AI Act and guidelines on the AI system definition of the AI Act. AI technologies that can pose serious risks to health, safety or fundamental rights are classified as high-risk and are subject to strict obligations, including adequate risk assessment and mitigation systems, appropriate human oversight measures, and a high level of robustness, cybersecurity and accuracy. The rules for high-risk AI will enter into force in August 2026 and August 2027.
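The four-tier structure above can be pictured as a simple mapping from use case to obligations. The example use cases below are hypothetical illustrations chosen by the authors; in practice, classification follows the Act’s annexes and the Commission’s guidelines, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk levels, each with a one-line summary
    of the associated regulatory consequence."""
    UNACCEPTABLE = "prohibited (effective February 2025)"
    HIGH = "strict obligations: risk management, human oversight, robustness"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Hypothetical example use cases mapped to tiers for illustration only.
EXAMPLES = {
    "social scoring of individuals": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Summarize the regulatory consequence for a (known) use case."""
    tier = EXAMPLES.get(use_case, RiskTier.MINIMAL)
    return f"{tier.name}: {tier.value}"

print(obligations("CV screening for recruitment"))
```

Note that a hiring tool like Amazon’s recruiting engine, discussed above, would today sit in the high-risk tier, triggering exactly the oversight and robustness obligations that might have caught its bias earlier.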
But what kind of future is awaiting compliance officers in this new world? Should we be worried that our careers are threatened by self-learning machines? On the contrary, this new age shaped by new paradigms will need human judgment more than ever before. To make an impact in this future, we have to complement our traditional skill sets by extending our knowledge of computer science, data analytics, risk assessment, workflow management and software testing, harnessing the benefits of AI while mitigating its potential risks and machine errors.
As such, we have to be able to identify areas of compliance where human effort can be supplemented, but not replaced, by a self-learning algorithm. An internal investigator, for example, will always be needed to unveil potential fraud or harassment within a company, because the trust necessary for a successful investigatory interview can only be built between human beings. The biggest priority for compliance professionals in this new era is therefore to learn to cooperate and co-exist with AI in accordance with our human values. If we adapt to these new paradigms and benefit from the advantages of AI, we will never lag behind HAL 9000. Moreover, we will be remembered as those who shaped a digitally advanced future of human-centered compliance through technology and responsibility.