
From Banning to Embracing: The Next Phase of AI in Education

In just one full academic year with generative AI, academic leaders have flipped 180 degrees. Provosts and CIOs are no longer asking, “How do we ban AI?” Now the question keeping them up at night is, “How do we embrace AI across an institution and empower our students to use these tools?”

According to research from the Digital Education Council, 86% of students in higher education say they regularly use ChatGPT in their studies. Fifty-four percent of those students say they use it weekly. And while they’re not adopting AI at the same pace as students, over a third of faculty use AI at least monthly. Whether or not schools are prepared, AI is a tool that students, faculty, and professionals already rely on.

The 2025 AI Trends That Matter
Download the 2025 AI Shortlist for more action steps on the trends that matter.

The challenge of a holistic approach to AI

Now that the question has shifted to AI adoption, the new challenge is how to implement it responsibly while preserving student learning.

We’re seeing two dynamics play out: increased adoption of AI on the one hand and increased academic integrity violations on the other. Sixty-eight percent of instructors say that AI is negatively impacting academic integrity. That concern is echoed by the 47% of students, almost half, who admit that AI makes it easier to cheat, and it is borne out by a rise in reported disciplinary and integrity violations.

Institutions know that they need to adopt AI, but they are still figuring out how to balance AI innovation with academic integrity. I’ve seen four primary approaches that institutions take toward AI: detection forward, academic integrity forward, responsible use forward, or innovation forward. 

This is not a maturity scale with one approach being better or worse than the others. Rather, it’s a reflection of the wide range of institutional and individual faculty postures toward a still-emerging and powerful new technology. What is interesting is that there is no longer a meaningful group of institutions taking an avoidance approach to AI. Rather than prioritizing a single path, it’s time to shift to a holistic approach to ensure that students understand both the potential and the pitfalls AI presents.

AI shifting from adversary to ally

Given the gradual but meaningful shift toward a thoughtful implementation of AI across institutions, decision-makers need a framework to aid them in this journey:

  • Security: The foundation of the framework covers data security, privacy, and technical reliability.
    • With AI making headlines for training models on unsanctioned publications, institutions are concerned that their institutional and end-user data will be used to train ongoing model improvements.
    • CIOs and CTOs should prioritize vendors with safeguards in place to ensure that no customer data is used to train their models or shared with third parties. Some institutions are even building their own in-house models to mitigate these concerns.
    • Going into the second academic year with AI, security remains essential for AI adoption, but it is now table stakes. Truly driving impactful and responsible adoption requires a more proactive integration of AI into teaching and learning.
  • Transparency: The middle layer of the framework focuses on implementing the software in equitable, accessible, learning-forward ways. 
    • Decision-makers should be able to answer yes to the following questions when evaluating a vendor: Is it clear how AI operates, and are the tools accessible to all students in an equitable way? Are expectations for when and how to use these tools communicated to students in syllabi and academic integrity policies? Are students equipped with a nuanced, context-based understanding of appropriate and inappropriate AI use?
  • Trust: Effective implementation and clear expectations are the building blocks of the framework’s top layer.
    • Transparent rollout, policies, acceptable use practices, and enforcement mechanisms establish trust between students, faculty, and administrators. This stage is also where more thoughtful and impactful AI rollouts can happen, leading to AI literacy initiatives.

In 2025, we’ll see institutions strive to reach the topmost layer of the framework—trust. This will unlock the biggest transformation for pedagogy and student learning. There are several essential steps that institutions can take to get there:

  1. Establish clear and consistent policies: It’s vital to create policies that guide both educators and students in their use of AI. These guidelines should clarify what constitutes responsible AI use and the expectations surrounding it.
  2. Develop AI literacy programs: Educational stakeholders—both faculty and students—must be educated about responsible AI use. Institutions can implement programs that focus on AI literacy, ensuring that everyone understands both the opportunities and risks associated with these technologies.
  3. Utilize AI detectors judiciously: While AI detection tools can offer insights, they should be integrated into a broader framework of academic integrity. Institutions must acknowledge the limitations of these tools and avoid using them in isolation.
  4. Foster open conversations: Encouraging dialogue between faculty and students about AI use is crucial. Tools like Grammarly Authorship, which generate comprehensive reports detailing the origin of content—whether human-typed, AI-generated, or edited—can serve as valuable conversation starters. These discussions can help demystify AI and promote collaboration rather than suspicion.

The potential of AI as an empowering tool

When used responsibly, AI tools can enhance learning by providing personalized educational experiences, facilitating access to information, and streamlining administrative tasks. AI can offer tailored resources, assist in research, and even provide feedback on essays—all of which, when grounded in ethical use, can deepen a student’s engagement and understanding of the material.

Integrating AI into educational settings can also free educators from repetitive tasks, allowing them to focus on what matters: fostering critical thinking, creativity, and interpersonal skills. As institutions embrace AI as a partner in education, they’ll cultivate an environment in which innovation thrives, and where students learn to harness technology for long-term success.

As we navigate the growing presence of AI in education, institutions must take a proactive stance in establishing clear policies and promoting responsible use. Educational institutions can harness AI’s potential as an empowering resource by educating faculty and students, fostering open dialogues, and using tools judiciously. This will ensure that future generations are not merely prepared for the present but equipped to thrive in an ever-changing world.

