AI adoption is rapidly expanding, and so are AI regulations. It’s time to cut through the noise so you can stay informed about the key concepts and standards you need to adopt AI responsibly. Grammarly has been a builder and buyer of AI technology for over 15 years, giving us a unique understanding of the complexities of AI compliance. In this blog post, we’ll explore the key considerations for AI regulations, drawing on insights we’ve refined over the years, so you can navigate emerging regulations with ease.

The Evolution of AI Regulations

AI laws have their heritage in privacy regulation, growing out of laws like the General Data Protection Regulation (GDPR), which laid the foundation for rules around data collection, fairness, and transparency. The GDPR, which took effect in 2018, marked a significant shift in data privacy law. One of its key goals was ensuring that enterprise technology companies, particularly those in the US, treated the personal data of European residents fairly and transparently. The GDPR influenced subsequent regulations like the California Consumer Privacy Act (CCPA) and other state-specific laws. Together, these laws laid the groundwork for today’s AI regulations, particularly in areas like fairness and disclosure around data collection, use, and retention.

Today, the AI regulatory environment is expanding rapidly. In the US, there is a mix of White House executive orders, federal and state initiatives, and actions by existing regulatory agencies, such as the Federal Trade Commission. Most of these offer guidance for future AI regulation, while in Europe, the EU AI Act (AIA) is already in effect. The AIA is particularly noteworthy because it sets a “floor” for AI safety across the European Union. Just as the EU regulates airplane safety, legislating so that no plane flies without meeting safety standards, it wants to ensure that AI is deployed safely.

US executive orders and the push to regulate AI

The recent Executive Order on Artificial Intelligence issued by President Biden on October 30, 2023, aims to guide the safe, secure, and trustworthy development and use of AI across various sectors. The order includes provisions for the improvement of AI safety and security standards, the protection of civil rights, and the advancement of national AI innovation. 

One of its significant aspects is the directive for increased transparency and safety assessments for AI systems, particularly those capable of influencing critical infrastructure or posing significant risks.

Several measures are mandated under this order:

  • Federal agencies are required to develop guidelines for AI systems that address cybersecurity and other national security risks.
  • Future guidance must also ensure that AI developers meet compliance and reporting requirements, including disclosing critical information regarding AI safety and security.
  • The order also promotes innovation through investments and initiatives to expand AI research and talent.

The response from the AI community and industry has generally been positive, viewing the order as a step forward in balancing innovation with regulation and safety. However, some critics question how burdensome the order will be to put into practice. There are also open questions about its effect; it is not a regulation in itself, but it directs agencies to enact regulations.

Translating regulations into implementation

A strong in-house legal team can help security and compliance teams translate these regulations into business and engineering requirements, but turning legal language into concrete controls is much easier with established guidance. That’s where AI frameworks and standards come into play. Here are three frameworks that every AI builder should understand and consider following:

  • NIST AI Risk Management Framework: In early 2023, the National Institute of Standards and Technology (NIST) released the AI Risk Management Framework to help organizations assess whether they’ve identified the risks associated with AI, specifically the trustworthiness considerations in designing, developing, and using AI products. (A tracking sketch follows this list.)
  • ISO 23894: ISO, the International Organization for Standardization, developed its own guidance on AI risk management to ensure products and services are safe, reliable, and of high quality.
  • ISO 42001: ISO also published the world’s first AI management standard, which is certifiable, meaning an organization can be audited by an independent third party to prove that it meets the standard’s requirements.
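
To show how a team might translate a framework like this into day-to-day tracking, here is a minimal sketch. The NIST AI RMF organizes its guidance into four functions (Govern, Map, Measure, Manage); the specific controls and statuses below are hypothetical examples for illustration, not requirements of the framework.

```python
# Hypothetical tracker mapping the NIST AI RMF's four functions to
# example internal controls and their review status. The control
# descriptions are illustrative, not taken from the framework itself.
controls = {
    "GOVERN":  [("AI risk policy approved by leadership", "done")],
    "MAP":     [("Inventory of AI systems and their intended uses", "done")],
    "MEASURE": [("Bias and robustness evaluation before each release", "in progress")],
    "MANAGE":  [("Incident response runbook for model failures", "planned")],
}

for function, items in controls.items():
    for description, status in items:
        print(f"{function:<8} | {status:<12} | {description}")
```

Even a lightweight mapping like this makes gaps visible: every function should have at least one control, and every control should have an owner and a status.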

With that background, let’s discuss how to use these learnings when you want to procure AI for your own company.

A 3-Step Framework for AI Procurement and Compliance

When procuring AI services, it’s wise to follow a structured framework to ensure compliance. At Grammarly, we continuously monitor best practices for AI vendor review to adapt to changing market standards. Today, we use a three-step process when bringing on AI services:

  1. Identify “go/no-go” decisions. Identify critical deal-breakers that determine whether your company will move forward with an AI vendor. For instance, if a vendor is unable to meet cybersecurity standards or lacks SOC 2 compliance, it’s a clear no-go. Additionally, consider your company’s stance on whether its data can be used for model training. Given the types of data shared with a product, you may require a firm commitment from vendors that they will use your organization’s data only to provide their services and for no other purpose. Other important factors are the length of the vendor’s retention policies and whether the vendor’s employees are prevented from accessing your data (a practice known as “eyes off”). A minimal checklist sketch follows this list.
  2. Understand data flow and architecture. Once you’ve established your go/no-go criteria, conduct thorough due diligence on the vendor’s data flow and architecture. Understand the workflow between the vendor and its proprietary or third-party LLM (large language model) provider, and ensure that your identifiable data (if it’s even needed to provide the vendor’s services) is protected, de-identified, encrypted, and, if necessary, segregated from other datasets. The redaction sketch after this list shows one simple form of de-identification.
  3. Perform ongoing monitoring. Compliance doesn’t end after the initial procurement. Regularly review whether the AI is still being used as expected, if the type of data shared has changed, or if there are new vendor agreement terms that might raise concerns. This is similar to general procurement practices but with a sharper focus on AI-related risks.
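
To make step 1 concrete, here is a minimal sketch of a go/no-go evaluation, assuming a handful of illustrative criteria. The field names, retention threshold, and SOC 2 check are hypothetical examples for this post, not Grammarly’s actual questionnaire.

```python
from dataclasses import dataclass

@dataclass
class VendorReview:
    """Illustrative deal-breaker criteria for an AI vendor review."""
    soc2_compliant: bool                 # independent security attestation
    no_training_on_customer_data: bool   # contractual commitment from the vendor
    eyes_off: bool                       # vendor employees cannot access your data
    retention_days: int                  # how long the vendor retains your data

def go_no_go(review: VendorReview, max_retention_days: int = 30) -> bool:
    """Return True only if every deal-breaker criterion is satisfied."""
    return (
        review.soc2_compliant
        and review.no_training_on_customer_data
        and review.eyes_off
        and review.retention_days <= max_retention_days
    )

# A vendor that trains on customer data fails, regardless of its other answers.
vendor = VendorReview(
    soc2_compliant=True,
    no_training_on_customer_data=False,
    eyes_off=True,
    retention_days=14,
)
print(go_no_go(vendor))  # False
```

Encoding the criteria this way keeps deal-breakers binary and auditable: a single failed criterion ends the review before deeper due diligence begins.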
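For step 2, here is a minimal sketch of one form of de-identification: redacting obvious identifiers before text leaves your boundary for a third-party LLM provider. The regex patterns are deliberately simplistic examples, not a production PII detector.

```python
import re

# Illustrative redaction pass applied before text is sent to a
# third-party LLM provider. The patterns below are simplistic
# examples; production systems use more robust PII detection.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def deidentify(text: str) -> str:
    """Replace obvious identifiers with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or 555-867-5309."
print(deidentify(prompt))
# Contact Jane at [EMAIL] or [PHONE].
```

A pass like this misses names and other context-dependent identifiers (note that “Jane” survives); real deployments typically layer on named-entity recognition or a dedicated PII-detection service.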

Several teams are involved in third-party vendor reviews, such as procurement, privacy, compliance, security, legal, and IT, and each plays a different and important role. When a vendor has an AI product or feature, we also bring in our responsible AI team. The process begins with having vendors fill out our general questionnaire, which covers all the go/no-go and data flow and architecture points described above.

Grammarly’s Journey to Compliance

Grammarly’s commitment to responsible and safe AI has been a hallmark of our values and a North Star for how product features and enhancements are designed. We strive to be an ethical company that takes care of and protects the users who entrust us with their words and thoughts. And when the time comes, likely soon, that the US federal government regulates AI, Grammarly will be positioned for it.

At Grammarly, we’ve made AI compliance a priority by integrating industry standards and frameworks into our operations. For example, when the NIST AI Risk Management Framework and ISO AI risk management guidelines were released in early 2023, we quickly adopted them, incorporating these controls into our broader compliance framework. We’re also on track to achieve certification for ISO 42001, the world’s first global AI management standard, by early next year.

This commitment to compliance is ongoing. As new frameworks and tools emerge, such as ISACA’s AI Audit Toolkit and MIT’s AI Risk Repository, we continually refine our processes to stay ahead of the curve. We also have a dedicated responsible AI team that has developed our own internal frameworks, which are available for public use.

AI regulations are complex and rapidly evolving, but by following a structured framework and staying informed about emerging standards, you can navigate this landscape with confidence. At Grammarly, our experience as both a provider and deployer of AI technology has taught us valuable lessons in AI compliance, which we proudly share so that companies around the globe can protect their customers, employees, data, and brand reputation. Talk to our team to learn more about Grammarly’s approach to secure, compliant, and responsible AI.
