Generative AI is sweeping through industries. Organizations that move swiftly to adopt and implement gen AI will gain a competitive edge, while those on the sidelines risk falling behind.
In a recent Grammarly Business webinar, Scott Roberts, CISO of UiPath, said, “You have to embrace generative AI, you cannot stick your head in the sand . . . Those who take an ostrich approach aren’t going to have very good luck.” Fortunately, most business leaders agree—a recent study conducted by Forrester on behalf of Grammarly found that by 2025, 97% of organizations will have implemented gen AI.¹
While speedy adoption is important, so is evaluating possible security risks, privacy concerns, and the quality of various gen AI tools. In fact, that same Forrester study found that 64% of organizations reported they don’t know how to evaluate the security of a potential AI vendor.¹ Companies need to understand not just how gen AI security works, but also how hackers are weaponizing this technology—and how they can use it to sharpen their own defenses.
Building Better Defenses
With the allure of gen AI comes new risks to data privacy and security. Rushing to onboard a new technology creates opportunities for malicious programs to infiltrate. Those risks shouldn’t deter adoption but rather help define strategies and influence policy. “We’re in this moment where generative AI is new and there isn’t a lot of structure. We have to navigate that, we have to ask a lot of questions, understand what the risks are, and educate our users,” said Jay Schwitzgebel, CISO at ModMed.
When searching for potential gen AI solutions, it’s important to determine whether a vendor reflects your organization’s policies regarding data privacy and user control. Roberts, Schwitzgebel, and Grammarly CISO Suha Can agreed that reviewing terms and conditions, especially around data collection and usage, is critical to proper vendor evaluation. Understand what information is accessible when you implement a gen AI solution, how much control you have over that access, and where your data goes throughout the entire process. Being able to trace your data flow will help determine whether a vendor aligns with your preferences.
Once you’ve selected a secure AI solution supplier, there are ways to use gen AI for data defense. Processes that would normally require hours of employee effort, such as security documentation, can be accelerated with generative AI. By streamlining these tasks, organizations can develop, test, and strengthen their security measures—all while freeing up employees to focus on other high-value tasks.
The checks and balances that form the bedrock of data security can’t disappear with the addition of generative AI. With so much still unknown about gen AI, those measures are even more vital. “All the things that quality software development teams have known for years, things like peer review and quality assurance, still have to happen,” Schwitzgebel said. Ensuring data security and privacy is a two-part task:
- Understand the technology behind your gen AI solution and how your chosen vendor approaches user data.
- Maintain the information security measures protecting your organization’s data.
Being Diligent to Avoid Pitfalls
In addition to managing the risks of gen AI within your organization, external threats exist as well. Since the dawn of the Internet age, companies have had to train employees on email security, raise awareness, and create policies around data sharing and storage. Generative AI is having a profound impact on data scams, hacks, and malware.
Take phishing emails—a common scam where attackers trick people into revealing sensitive information or installing malicious programs. Modern filters on email applications are able to catch and discard most of these attacks before they reach the inbox. Even when one slips through, it may be easy to spot; spelling mistakes, an unrelated email address, or incorrect information within the email itself could be signs of a fake. But what if everything about the email was perfect?
“In the past, you might look for grammatical mistakes or other errors in phishing emails, but through large language models, attackers can create perfect phishing attacks that pass traditional filters,” Roberts said.
Threats from the malicious use of gen AI extend far beyond phishing emails. Malware development could be accelerated, creating stronger, more resilient programs that infiltrate and override security systems. These threats shouldn’t be a roadblock to adoption, but rather a reminder for business leaders to maintain and improve upon existing security measures across their organization. By being diligent, organizations can mitigate gen AI risks while maximizing its potential.
The Role of Responsible AI
Responsible gen AI is the key to unlocking incredible business value while safeguarding data. According to the Gartner® Hype Cycle™ for AI 2023, “Responsible AI, trust and security will be necessary for safe exploitation of generative AI.”² Identifying a single solution that works across your organization can prevent IT headaches in the long run. Separate technologies for different teams create challenges in data management, cross-team collaboration, and information security. Having one gen AI solution can also streamline workflows by aligning messaging and providing greater clarity across teams.
“Generative AI should be handled as a platform with secure, paved roads, so your entire workforce knows how to use generative AI safely.”–Suha Can, CISO at Grammarly
How the value of gen AI is maximized will be specific to the needs of an organization, but there are several key elements that define quality solutions:
- Enterprise-grade security
- Regulatory compliance
- Data privacy
- User control
With Grammarly, teams get access to generative AI backed by more than 14 years of responsible AI development and a dedicated team of experts ensuring quality performance. Their data is protected by a simple policy—we have not, do not, and will not sell your data. Plus, with users in control of data accessibility, continuous testing to improve security, and content filters to reduce bias and inaccuracies, we empower teams without risking what matters most.
Mitigate Risk to Maximize Potential
Generative AI is ushering in a new era across industries, but not without its own share of risks. This shouldn’t deter adoption but rather inform it. With proper evaluation protocols and a focus on security, leaders can find gen AI partners ready to support their success.
To learn more about Grammarly’s generative AI solution and how you can chart a secure path to adoption, click here to get in touch with a product expert.
Going to the Gartner IT Symposium/Xpo in Orlando, Florida? Our CISO Suha Can is debuting a framework for secure gen AI adoption—check it out.
1. Commissioned study conducted by Forrester Consulting on behalf of Grammarly | Maximizing Business Potential With Generative AI: The Path to Transformation, July 2023.
2. Gartner, Hype Cycle for Artificial Intelligence, 2023, Afraz Jaffri, July 19, 2023.
Gartner does not endorse any vendor, product, or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s Research & Advisory organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and HYPE CYCLE is a registered trademark of Gartner, Inc. and/or its affiliates and are used herein with permission. All rights reserved.