Trust, Fairness and Responsibility
The responsible use of AI in academic work must align with JIBC’s Student Academic Integrity Policy. While AI tools can assist in learning—such as brainstorming ideas, summarizing content, or enhancing writing—they must be used ethically and with transparency. Submitting AI-generated work without proper acknowledgement may be considered a violation of academic integrity.
Faculty are encouraged to clearly outline their expectations regarding AI use in each course, as appropriate use may vary by discipline and assignment type. Until institutional policies evolve further, instructors are best positioned to set boundaries and communicate acceptable practices to their students.
At this time, no specific GenAI tools or software are officially endorsed for instructional or student use. Ongoing discussions are underway to better understand the pedagogical opportunities, challenges, and risks associated with these tools. Educators are encouraged to use AI thoughtfully and to consult with their program areas, as well as teaching and learning support units such as the Centre for Teaching, Learning, and Innovation (CTLI) and the Writing Centre & Library, when determining appropriate use in their courses.
The use of GenAI detection tools to evaluate student work is not recommended at this time. While concerns about academic integrity are valid, the current generation of AI detection technologies poses significant pedagogical, ethical, and legal challenges, including ongoing concerns about the validity, reliability, and legitimacy of these tools and important considerations related to surveillance and student trust.
Many AI detection tools claim high levels of accuracy, but these claims have not been independently verified. Without clear evidence of reliability and validity, even small error rates can result in false accusations or overlooked instances of misconduct, particularly when these tools are used at scale.
AI detectors do not provide clear or reviewable evidence. Unlike plagiarism checkers, they offer no source against which flagged content can be compared, making it difficult for instructors or students to verify results.
Submitting student work to third-party detectors may violate privacy legislation (e.g., BC’s FIPPA) and student intellectual property rights. Students must give informed consent before their work is uploaded to external services—consent must be voluntary and without penalty for refusal.
SafeAssign, a tool available in JIBC’s Blackboard, is built to detect plagiarism against known sources and is not equipped to identify AI-generated writing. Its use should not be confused with AI detection, and it should not be relied upon for that purpose.
Relying on surveillance tools can erode trust in the classroom. Instead of focusing on detection, instructors are encouraged to design assessments that foster transparency, critical thinking, and responsible AI use.
Given the growing presence of generative AI (GenAI) tools, it is important for instructors to engage in open, transparent conversations with students about their use in the learning environment. Students may have varying understandings and assumptions about what is acceptable, which can lead to confusion, unintentional misuse, or concerns about fairness.
To support academic integrity and student learning, instructors are encouraged to:
Clearly communicate expectations around whether and how AI tools can be used in course activities, assignments, or exams.
Explain your rationale, including how AI use aligns—or does not align—with the learning outcomes of the course.
Include guidance in the course syllabus so that expectations are available in writing (see the “Syllabus Guide” section for examples).
Discuss both benefits and risks of using AI, including its potential to support learning and the importance of acknowledging its use.
Encourage student questions to ensure clarity and reduce anxiety around appropriate use.
These conversations foster trust and create space for ethical decision-making. As AI continues to evolve, setting clear and consistent expectations with students helps ensure fairness, academic integrity, and a shared understanding of how to engage with emerging tools responsibly.
While the use of generative AI (GenAI) in education continues to grow, it’s important to recognize that not all learners may feel comfortable using these tools. Some may have ethical, privacy, accessibility, or personal concerns. Instructors are encouraged to approach these situations with flexibility, empathy, and open communication.
Be transparent: Clearly state in your syllabus or assignment guide when and how GenAI is expected to be used.
Provide rationale: Explain how GenAI use supports learning outcomes, skill development, or course goals.
Create space for dialogue: Let students know they can approach you if they have concerns or prefer not to use AI tools.
Assess case-by-case: If AI is not essential to the learning objective, consider offering alternative methods to meet the same outcome.
Protect student choice: Learners should not be penalized for opting out of AI use, especially if an alternative demonstration of learning is possible.
Sample syllabus statement: This course may include the use of generative AI tools for certain assignments or learning activities. If you have concerns or are not comfortable using these tools, please connect with me early in the term or before the assignment is due. Accommodations or alternative approaches may be considered on a case-by-case basis to ensure all students can fully participate and demonstrate their learning in a way that aligns with the course outcomes.