
Teaching With AI: From Policing to Partnership in TBL Classrooms

Written by Vanesse Tang Jia Yi | Jan 28, 2026

Generative AI has the potential to be a valuable partner in helping students learn and think critically.

In Team-Based Learning (TBL), students are already empowered to develop their critical thinking skills through activities such as application exercises, in which they come together to solve problems using course materials and other approved resources. 

AI could simply be another resource added to the workflow.

In higher education, using AI is becoming common student behavior. In the Digital Education Council's 2024 global survey, 86% of university students reported using AI in their studies.

This behavior is unlikely to end at graduation. Chances are that students will carry these tools into their future workplaces. As such, we believe banning or policing AI usage isn’t helpful. 

The question for TBL classrooms is not whether we should allow students to use AI. The question is, “Is AI adding any value to their learning context?” If yes, then how can we teach them to use it with sound judgment and transparency?

Educators Can and Should Model Responsible AI Use

Educators use AI too. According to our “2025 Survey Insight Report: Use of AI in Team-Based Learning”, educators use AI to create test questions, streamline course planning, conduct research, provide feedback, and more. 

That reality is already shaping student expectations. One TBL educator shared a moment that’s becoming common: a student asked, “If you’re using AI as my teacher, doesn’t that mean I can use AI to teach myself instead?”

This reflects a growing misconception among students that AI output is equivalent to human expertise. In practice, educators who use AI in TBL planning know the opposite is true: AI can produce a fast first draft, but it still requires human judgment to identify weak assumptions, verify claims, and refine the response to align with disciplinary standards and learning goals.

Human expertise remains essential, and educators can lead the way by showing students what using AI well looks like in practice. However, modeling that relies on the educator alone will not scale.

To sustain responsible use, educators need consistent expectations and routines for AI use across TBL classes.

How to Build a Culture of Responsible AI Usage

A culture of responsible AI usage helps students build durable habits for working with AI in ways that support learning rather than replace it. Here are four practical tips that work well in TBL classrooms.

Tip #1: Build core AI literacy into team routines

First, students should recognize potential bias or one-sided reasoning, since AI can present a confident “best answer” while quietly omitting important perspectives, trade-offs, or contextual factors. TBL educators could require teams to ask AI for an alternative viewpoint, then compare what changed and what was missing in the first answer.

Second, students should verify key claims because plausibility isn't the same as accuracy. AI can sound right while being unsupported, outdated, or simply wrong. Instructors could ask teams, when reporting their answers, to cite the sources they used to verify their most important claims (course readings, guidelines, lecture notes).

Third, students need to understand AI's limitations. Educators can encourage teams to name one thing AI might have missed in a given scenario (constraints, nuance, rubric expectations) and explain how they adjusted their final answer.

Tip #2: Discourage anthropomorphizing AI

Instruct students to interact with AI as a tool, not a person. Let them know that saying “please” or “thank you” for the AI’s output is unnecessary. This is less about etiquette and more about mindset. 

Students are more likely to question, verify, and edit outputs when they are encouraged to see AI as a tool they control rather than an authority they defer to.

Tip #3: Give students a prompt framework

A prompt framework is a simple, repeatable structure for writing prompts and reviewing outputs. You can provide one for them to use, such as the RHODES framework. 

Using a prompt framework supports critical thinking because it requires students to specify the goal, context, and constraints upfront, which makes the AI's output easier to evaluate.

In TBL, teams need AI outputs that are specific enough to evaluate, not just polished responses. Structured prompts reveal assumptions and options for teams to critique and verify.
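To make this concrete, here is a simplified, hypothetical example of a structured prompt (an illustration of the idea, not the full RHODES framework): "We are a team answering an application exercise on [topic]. Our goal is to choose the best of options A, B, and C under [constraints], using [course readings] as our reference. List the trade-offs of each option and state the assumptions behind your reasoning." Because the goal, context, and constraints are spelled out, the team can check the output directly against them instead of accepting a polished but vague answer.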

Tip #4: Normalize transparency with AI-use declarations

Encourage or require students to disclose how they used AI, what they used it for, and what they checked or changed. In TBL, this can be easily incorporated into reporting. 

For example, teams can add a one-line note when they present: “AI was used to generate alternative options; we verified claims X and Y using the course materials or guidelines and revised the justification accordingly.” 

This keeps the focus on reasoning and evidence, not tool-policing. It also makes students' AI use coachable in the moment, which is exactly what a transparent approach should achieve.

Making Responsible Use the Default

The practical takeaway is simple: responsible AI usage does not happen by accident. It needs clear expectations and routines that make good judgment and transparency the default in TBL. 

When students know what using AI well looks like and are prompted to show their process, AI becomes something educators can coach. Over time, it becomes a learning partner that supports better reasoning, not something that needs constant policing.

To learn more about integrating AI into TBL, you can watch the panel discussion below with Drs. Richard Plunkett and Amy Stone. Our panelists shared practical examples, highlighted what’s worked, and reflected on challenges and lessons learned.