Setting Expectations for Programs, Courses and Assignments
To help students navigate these tools ethically and effectively, we strongly encourage faculty to include a short, clear statement in their course syllabi about whether, how, and when students may use generative AI tools.
You may wish to include:
The following sample syllabus statements offer flexible options faculty can adapt based on the level of AI integration appropriate for their course. These statements are designed to promote academic integrity while supporting meaningful learning. For practical applications and guidance, see the section titled “Real Life Examples at JIBC.”
The use of generative AI tools (e.g., ChatGPT, Microsoft Copilot, Gemini) to complete or support any graded assignments or assessments in this course is not permitted unless explicitly stated otherwise. Submitting work generated by or heavily supported by AI tools may be considered academic misconduct under JIBC’s Student Academic Integrity Policy.
Students are permitted to use generative AI tools for specific purposes in this course, such as brainstorming ideas, summarizing content, or organizing research notes. However, AI-generated text must not be submitted as final work unless specifically approved. All use of AI tools must be acknowledged. Undisclosed or improper use may be considered academic misconduct.
This course encourages the thoughtful use of generative AI tools (e.g., ChatGPT) as part of the learning process. Students may use these tools to support their understanding, explore new ideas, or draft initial content. However, students are ultimately responsible for the accuracy, originality, and integrity of their work. AI-assisted content must be clearly acknowledged and cited following instructor guidance. For privacy reasons, do not input personal or sensitive information into any AI platform.
The AI Readiness scale helps instructors determine the appropriate level of AI use in their courses. It outlines six levels, from no AI involvement to full integration, ensuring alignment with learning outcomes and academic integrity expectations.
As generative AI (GenAI) tools like ChatGPT become more commonly used in academic settings—for brainstorming, summarizing, drafting, or even data analysis—it’s essential to cite them properly. Transparent citation practices not only uphold academic integrity but also clarify how these tools contributed to your work.
APA style:
In-text citation: (OpenAI, 2025)
Narrative citation: OpenAI (2025)
Reference:
OpenAI. (2025). ChatGPT (May 14 version) [Large language model]. https://chat.openai.com/chat
Example: ChatGPT emphasized the role of fossil fuel consumption in climate change when prompted with “What causes climate change?” (OpenAI, 2025).
Chicago style:
Footnote:
OpenAI’s ChatGPT, response to query from author, May 15, 2025.
Bibliography:
OpenAI’s ChatGPT. Response to “What are the main challenges of climate adaptation?” ChatGPT, OpenAI, May 15, 2025. https://chat.openai.com/chat
MLA style:
Citation:
OpenAI. “ChatGPT.” ChatGPT, https://chat.openai.com/chat. Accessed 20 May 2025.
IEEE style:
Citation:
[1] ChatGPT, response to author query. OpenAI [Online]. Available: https://chat.openai.com/chat (Accessed: May 20, 2025).
The emergence of Generative AI (GenAI) is reshaping how we approach assessment in education. Its capacity to assist with complex tasks, from summarizing information to generating thoughtful written responses, is transforming how we think about assessment design. This presents an exciting opportunity to evolve our practices in ways that uphold academic integrity while enhancing learning outcomes.
Faculty are beginning to reimagine assessments to foster deeper learning. By thoughtfully integrating AI, we can support the development of key competencies such as critical thinking, ethical decision-making, and professional judgment — all essential for success in the high-stakes, real-world contexts our learners are preparing for.
As generative AI (GenAI) tools become more common in academic settings, assessment design should evolve to foster authentic learning, critical thinking, and ethical engagement with AI. The goal is not to ban its use, but to guide learners in using these tools responsibly and reflectively. Here are four strategies to support meaningful integration:
1. Encourage learners to develop case analyses or action plans based on real-world or current events. These complex, context-rich tasks promote originality and deeper engagement.
2. Create opportunities for students to share how they used GenAI in their learning process. This builds ethical awareness and fosters transparency around tool use.
3. Evaluate drafts, revisions, peer feedback, and reflective components, not just the final product. This highlights learner growth and discourages over-reliance on AI-generated content.
4. Design assignments that require connections to local policies, community needs, or specific scenarios. This encourages learners to apply AI in ways that are relevant and grounded.