
AI Regulations and LLM Regulations: Past, Present, and Future


    What Are AI Regulations? 

    AI regulations are guidelines and rules designed to govern the development, application, and impact of artificial intelligence (AI) technologies. These regulations are essential to ensuring that AI is used responsibly and ethically and that it does not pose harm or threats to society. They cover a range of issues, from data privacy and security to ethical considerations and even matters of national security.

    AI is a rapidly evolving field, and its widespread adoption raises a host of complex legal, ethical, and societal issues. These include privacy concerns, potential job losses due to automation, the risk of AI-enabled cyber attacks, and even existential risks to humanity. AI regulations are therefore not just about controlling the technology itself, but also about managing its impacts on society and individuals.

    Without proper regulations, there is a risk that AI could be misused or abused, leading to negative outcomes. For example, AI could be used to spread disinformation or propaganda, leading to societal unrest. It could also be used to carry out cyber attacks, or to invade people’s privacy. In the worst-case scenario, uncontrolled AI could even pose an existential risk to humanity.

    About this Explainer:

    This content is part of a series about AI technology.


    Should AI Be Regulated? 

    The question of whether AI should be regulated is a contentious one. On one hand, some argue that regulation is necessary to prevent misuse of the technology and to ensure it is developed and used ethically. On the other hand, others worry that too much regulation could stifle innovation and hinder the growth of the AI industry.

    There is a growing consensus, supported by leading technology companies such as OpenAI, Google, and Microsoft, that regulation is necessary. However, it should be carefully designed to allow the continued development of AI technologies, and take into account the dynamic and unpredictable nature of the field.

    Regulation can provide a framework for the responsible use of AI, ensuring that it is used in a way that is ethical, fair, and does not harm society. It can also provide safeguards against potential abuses of the technology. However, it’s important that regulation does not become overly restrictive, as this could diminish the societal and economic value of AI innovations.

    Tips from the expert

    Steve Moore

    Steve Moore is Vice President and Chief Security Strategist at Exabeam, helping drive solutions for threat detection and advising customers on security programs and breach response. He is the host of “The New CISO Podcast,” a Forbes Tech Council member, and Co-founder of TEN18 at Exabeam.

    In my experience, here are tips to enhance compliance and optimize approaches in light of AI regulations:

    Develop a clear explainability policy for all AI systems
    Ensure that your organization has internal protocols to articulate how AI models make decisions. Use explainable AI (XAI) techniques to provide clear, understandable outputs for stakeholders and regulators, as in the sketch below.
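
    A minimal sketch of one XAI technique, assuming Python with the open source shap library and a toy scikit-learn model; the dataset and model are illustrative placeholders, not part of any policy requirement.

```python
# Attribute a model's predictions to its input features using SHAP,
# one widely used XAI technique. All data here is synthetic.
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=4, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# TreeExplainer decomposes each prediction into per-feature contributions,
# giving reviewers a concrete answer to "why did the model output this value?"
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

# Contributions for the first prediction, one value per input feature
print(shap_values[0])
```

    Outputs like these can be attached to decision records so that stakeholders and auditors see per-decision evidence rather than a black-box score.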

    Conduct preemptive compliance audits
    Regularly assess AI systems against emerging global regulations, even if they’re not yet enforceable in your jurisdiction. Staying ahead helps prevent costly adjustments when compliance becomes mandatory.

    Adopt AI ethics-by-design frameworks
    Integrate ethical principles into every stage of AI development, from concept to deployment. This includes fairness, accountability, and privacy, ensuring systems naturally align with regulatory expectations.

    Establish a multidisciplinary AI governance team
    Create teams that include legal experts, ethicists, engineers, and domain-specific professionals to ensure compliance while maintaining operational and ethical standards.

    Invest in continuous monitoring for bias and discrimination
    Implement tools that identify and measure biases in real time. This is particularly important for LLMs and high-risk AI applications, both to maintain fairness and to meet evolving regulatory transparency requirements; see the sketch below.
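
    As a toy illustration, the sketch below computes one common fairness metric, the demographic parity gap, over a batch of model decisions; the decisions, protected attribute, and alert threshold are all hypothetical.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-outcome rates between two groups;
    a value near 0 suggests parity."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical batch of binary model decisions and a protected attribute
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

gap = demographic_parity_difference(y_pred, group)
if gap > 0.2:  # illustrative alert threshold, not a regulatory number
    print(f"Bias alert: demographic parity gap = {gap:.2f}")
```

    In production, a check like this would run continuously over live decision logs, with alerts routed to the governance team described above.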


    Key Parameters Shaping Regulations and Their Outcomes

    Transparency, Fairness, and Explainability

    The principles of transparency, fairness, and explainability are fundamental to ensuring that AI is used responsibly and ethically:

    • Transparency refers to openness about how an AI system is built and operated: what data it is trained on, how it is designed, and how it reaches decisions. This openness allows users to understand why a system behaves the way it does and helps build trust in the technology.
    • Fairness is about ensuring that AI systems do not discriminate against certain groups or individuals. This is crucial for preventing harmful uses of AI and for ensuring that the benefits of AI are shared equitably across society.
    • Explainability refers to the ability to explain an AI system’s individual decisions in terms that people can understand. It complements transparency and ensures that users affected by a decision can see how it was reached.

    Risk-Based Approach

    A risk-based approach to AI regulation involves assessing the potential risks posed by AI technologies, and then designing regulations to mitigate these risks. This approach allows for a nuanced and flexible regulatory framework, which can adapt to the rapidly evolving nature of AI technology.

    The risk-based approach involves the following steps (a brief sketch follows the list):

    • Identifying potential risks
    • Assessing their likelihood and potential impact
    • Identifying how risks could manifest across different types of AI technology
    • Developing strategies to manage these risks in specific contexts
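
    The sketch below illustrates the assessment step with a simple likelihood-times-impact risk register; the risks, scores, and ordering logic are hypothetical placeholders, not prescribed by any regulation.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

# Hypothetical risk register for an LLM-based customer service system
register = [
    AIRisk("Biased responses to protected groups", likelihood=3, impact=4),
    AIRisk("Leakage of personal data in outputs", likelihood=2, impact=5),
    AIRisk("Prompt injection enabling misuse", likelihood=4, impact=3),
]

# Highest-scoring risks warrant the strictest controls, mirroring the
# tiered obligations of risk-based frameworks such as the EU AI Act.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name}")
```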

    Addressing Security Risks and Malicious Actors

    While the risk-based approach focuses more broadly on any risk arising from AI technology, the security aspect of AI regulation specifically emphasizes potential misuse of AI technology, and its potential use in cyber attacks.

    Mitigating security risk involves measures such as improving the security of AI systems, implementing ethical guidelines, and establishing mechanisms for accountability and redress.

    Addressing malicious actors involves cooperation among the private sector, cybersecurity researchers, and law enforcement bodies, ensuring that both the private and public sectors have the means to combat cybercrime that makes use of AI technology.

    Institutional Approach

    An institutional approach to AI regulation involves establishing dedicated institutions or bodies to oversee the development and use of AI. These institutions would be responsible for enforcing regulations, monitoring compliance, and addressing any issues or concerns that arise.

    An institutional approach can provide a more robust and effective regulatory framework, as it allows for a more coordinated and comprehensive oversight of AI. It can also provide a platform for dialogue and cooperation between different stakeholders, including government, industry, academia, and civil society.

    International Harmonization

    International harmonization refers to the process of aligning AI regulations across different countries or regions. This can help to ensure a level playing field for AI development and use, and can prevent regulatory arbitrage, where companies move to countries with less stringent regulations.

    International harmonization can also help to foster cooperation and collaboration between countries, and can help to ensure that the benefits and risks of AI are managed on a global scale. However, achieving international harmonization can be challenging, due to differences in legal systems, cultural norms, and political systems.

    The need for international harmonization is becoming increasingly urgent, as AI technologies become more global in nature. With AI systems being developed and deployed across borders, there is a need for a coordinated global approach to managing the risks and benefits of this technology.


    History of AI Regulations and Laws Around the World 

    United States of America

    In the United States, there is currently no comprehensive federal legislation regulating the use of AI. In October 2022, the Biden administration issued the Blueprint for an AI Bill of Rights, covering principles such as data privacy, notice and explanation, protection from algorithmic discrimination, safety and effectiveness, and human alternatives and fallbacks. In June 2023, US lawmakers introduced the National AI Commission Act, which would create a federal commission to review the United States’ approach to AI regulation.

    In October 2023, the White House announced the AI Executive Order, also known as EO 14110, which provides a framework for the development, deployment, and governance of AI technologies in federal agencies and across the nation. It aims to promote AI innovation while ensuring AI development is ethical, secure, and respectful of privacy and civil rights.

    United Kingdom

    The United Kingdom has taken proactive steps towards creating a regulatory environment that supports the ethical development of AI technologies. The UK’s AI strategy is focused on leveraging the country’s strengths in governance, ethics, and innovation. One key development is the establishment of the Centre for Data Ethics and Innovation (CDEI), which advises the government on AI and data governance issues. 

    The CDEI works on developing frameworks and guidelines to ensure that AI technologies are used responsibly. Furthermore, the UK’s approach emphasizes the importance of public sector innovation, ethical standards, and building a skilled workforce to maintain its leadership in the AI domain.

    European Union

    The European Union is at the forefront of establishing comprehensive regulations for artificial intelligence, with the AI Act, adopted in 2024, highlighting its commitment to safe and ethical AI development. This groundbreaking regulation aims to create a balanced framework that promotes innovation while protecting citizens’ rights. It categorizes AI systems based on the risk they pose, applying stricter requirements to high-risk applications in areas such as employment, education, and law enforcement.

    The AI Act’s focus on transparency, accountability, and data governance sets a precedent for regulatory approaches worldwide. Additionally, the establishment of a European Artificial Intelligence Board underlines the EU’s dedication to harmonizing AI regulations across member states, ensuring a unified and effective approach to AI governance.

    Japan

    Japan’s AI strategy emphasizes the integration of AI technologies into society and the economy, fostering innovation while addressing ethical and security concerns. The government has launched initiatives to promote AI research and development, aiming to strengthen Japan’s competitiveness in the global AI market. 

    Japan focuses on creating an environment that encourages collaboration between industry, academia, and government to accelerate AI applications in various sectors, including healthcare, manufacturing, and transportation. Moreover, Japan is working on establishing guidelines for AI ethics, focusing on transparency, user privacy, and security to ensure the responsible use of AI technologies.

    Australia

    Australia’s approach to AI regulation involves creating a supportive environment for innovation while ensuring ethical and responsible development. The Australian government has introduced a national AI strategy that outlines its vision for Australia to become a leader in developing and applying responsible AI. This strategy includes investing in AI research and development, establishing ethical frameworks for AI use, and promoting international collaboration. 

    Australia is also focusing on building public trust in AI technologies by emphasizing transparency, accountability, and inclusivity in AI applications, ensuring that they contribute positively to society and the economy.

    Canada

    Canada is recognized for its significant contributions to AI research and is actively working on frameworks to guide the ethical development and deployment of AI technologies. The Canadian government has introduced strategies that aim to position Canada as a world-leading destination for AI innovation. 

    These strategies emphasize ethical AI development, support for high-quality research, and the promotion of economic growth through AI technologies. Canada’s approach includes initiatives to foster a skilled AI workforce and to establish collaborative partnerships between the government, academia, and industry to leverage AI for social and economic benefits.

    Brazil

    Brazil is taking steps to integrate AI into its economic and social development, recognizing the potential of AI technologies to drive innovation. The Brazilian government has initiated discussions on AI policy frameworks, focusing on ethical guidelines, innovation promotion, and the development of AI skills among the workforce. 

    Brazil aims to leverage AI for addressing national challenges, such as healthcare, education, and environmental sustainability. The country is also looking at international cooperation to ensure that it remains aligned with global standards and practices in the development and use of AI technologies.

    China

    China has articulated a bold vision to become a world leader in AI by 2030, underpinning this ambition with substantial investments in AI research and development. The Chinese government’s approach to AI regulation emphasizes state-led coordination, aiming to harness AI’s potential to drive economic growth and technological innovation while maintaining social stability. 

    China’s AI strategy includes the development of new AI technologies, applications, and industries, supported by policies that encourage collaboration between government, industry, and academia. Additionally, China has stated its intention to create ethical standards for AI that address issues of privacy, security, and fairness.


    What Regulations Are Emerging to Oversee Large Language Models (LLMs)? 

    LLMs such as OpenAI’s ChatGPT, Google’s Gemini, and Meta’s Llama have entered mainstream use, and their capabilities are rapidly advancing. AI industry experts, governments, and even some of the organizations developing these models have voiced concerns about their potential risks to society. In light of these concerns, specific regulations are evolving to govern the global use of LLMs.

    U.S. Algorithmic Accountability Act (AAA)

    The Algorithmic Accountability Act, introduced in the United States, aims to give consumers more transparency and control over the automated decision-making systems that impact their lives. If passed, it would require companies to conduct impact assessments of their AI systems, including LLMs, for:

    • Bias and discrimination: Ensuring systems do not perpetuate unfair bias or discrimination.
    • Data privacy: Examining how personal data is used and protected.
    • Algorithmic accountability: Companies would need to disclose how their LLMs make decisions, the data they use, and the potential impacts on consumers.

    U.S. National Artificial Intelligence Initiative Act

    The National Artificial Intelligence Initiative Act establishes a comprehensive national program in the United States to accelerate AI research and application, including for LLMs. This Act focuses on:

    • Supporting AI research: Encouraging the development of AI, which encompasses LLMs, through grants and initiatives.
    • Ethical standards and policies: The Act promotes the creation of AI ethics standards and policies, potentially influencing how LLMs are trained and used.
    • International collaboration: It aims to establish guidelines for international cooperation in AI research, which could shape how LLMs are developed and governed globally.

    European Union AI Act

    The EU AI Act represents one of the first major legislative frameworks specifically targeting AI systems. Its aim is to ensure that AI systems are safe, respect EU laws on privacy and data protection, and are implemented in a manner that prevents discrimination.

    Here are the potential impacts of the EU AI Act on LLM systems:

    • Risk assessment: LLMs may be categorized based on the risk they pose to rights and safety. High-risk applications, such as those affecting critical infrastructure, employment, or personal data, will face stricter scrutiny.
    • Transparency requirements: There may be mandates for LLMs to disclose when individuals are interacting with an AI rather than a human, ensuring transparency in communications (a minimal sketch of such a disclosure follows this list).
    • Quality and data governance: The Act could enforce rigorous data governance to prevent biases in AI output, which is especially crucial for LLMs trained on vast datasets.
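
    As a minimal sketch of the transparency requirement, the snippet below wraps a hypothetical LLM call so the first reply in a conversation carries an explicit AI disclosure; generate_reply is a stand-in, not a real provider API, and the disclosure wording is illustrative.

```python
# Hypothetical compliance wrapper: prepend an AI disclosure to the first
# reply of a conversation, as transparency mandates may require.
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def generate_reply(user_message: str) -> str:
    # Stand-in for a real LLM call; any provider SDK would slot in here.
    return f"(model reply to: {user_message!r})"

def respond(user_message: str, is_first_turn: bool) -> str:
    reply = generate_reply(user_message)
    return f"{AI_DISCLOSURE}\n\n{reply}" if is_first_turn else reply

print(respond("When will my order arrive?", is_first_turn=True))
```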

    European Commission Guidelines for Trustworthy AI

    In 2019, the European Commission’s High-Level Expert Group on Artificial Intelligence published a document called “Ethics Guidelines for Trustworthy AI,” which establishes a framework for achieving trustworthy AI. Although the guidelines predate the mainstream adoption of LLMs, their requirements apply directly to such models and emphasize:

    • Human agency and oversight: AI should support human autonomy and decision-making rather than undermine them.
    • Robustness and safety: LLMs must be secure and reliable, operating without unintended harm.
    • Privacy and data governance: The guidelines strongly focus on protecting personal data, a critical aspect given the data-hungry nature of LLMs.
    • Transparency: The guidelines stress the importance of explainability, meaning that LLMs should be understandable to experts and laypeople alike.

    Learn more:

    Read our detailed explainer about LLM security.


    Exabeam Fusion: The Leading AI-Powered SIEM

    Exabeam offers an AI-powered experience across the entire threat detection, investigation, and response (TDIR) workflow. A combination of pattern-matching rules and ML-based behavior models automatically detects potential security threats, such as credential-based attacks, insider threats, and ransomware activity, by identifying high-risk user and entity activity. Industry-leading user and entity behavior analytics (UEBA) baselines normal activity for all users and entities, presenting all notable events chronologically.

    Smart Timelines highlight the risk associated with each event, saving an analyst from writing hundreds of queries. Machine learning automates the alert triage workflow, adding UEBA context to dynamically identify, prioritize, and escalate alerts requiring the most attention.

    The Exabeam platform can orchestrate and automate repeated workflows across more than 100 third-party products, with actions and operations ranging from semi- to fully automated. Exabeam Outcomes Navigator maps the feeds coming into Exabeam products against the most common security use cases and suggests ways to improve coverage.

    Learn More About Exabeam

    Learn about the Exabeam platform and expand your knowledge of information security with our collection of white papers, podcasts, webinars, and more.

    • Blog: What’s New in LogRhythm SIEM October 2025

    • Blog: What’s New with New-Scale in October 2025: Measurable, Automated, Everywhere Security Operations

    • Blog: Catching the Quiet Threats: When Normal Isn’t Safe

    • Blog: UEBA vs. XDR: Rethinking SIEM Augmentation in the AI Era