Project Background


Addressing the impact of AI on higher education.

On 30 November 2022, OpenAI publicly released ChatGPT (Chat Generative Pre-trained Transformer), a generative artificial intelligence (AI) chatbot based on a large language model (LLM). Generative AI works by receiving input from the user in the form of a prompt and generating output based on its prior training, or machine learning. Technical approaches to machine learning vary, but all require a training dataset, which is how a generative AI tool like ChatGPT learns to respond to human prompts in a versatile and dialogic way. Other generative AI tools, including OpenAI's DALL-E, generate images and other forms of media in response to prompts, but ChatGPT and other text-based generative AI applications have taken centre stage in conversations about topics ranging from future job impacts to intellectual property concerns and more.
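
To make this prompt-and-response pattern concrete, the short Python sketch below sends a single prompt to a hosted LLM and prints the generated reply. It is a minimal illustration, assuming the openai Python package (version 1.x) and an API key set in the OPENAI_API_KEY environment variable; the model name and prompt are illustrative placeholders rather than specifics of this study.

    from openai import OpenAI

    # The client reads the OPENAI_API_KEY environment variable by default.
    client = OpenAI()

    # Send one user prompt; the model generates a reply based on patterns
    # learned from its training data.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "user",
             "content": "Explain in two sentences what a large language model is."}
        ],
    )

    print(response.choices[0].message.content)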

AI & Education

With the emergence and continued growth of publicly accessible generative AI tools, several areas of the education sector are experiencing significant disruption. Academic integrity remains one of the key concerns, especially for institutions of higher education, which aim to cultivate ethical, transparent, and effective teaching, learning, and research practices. For instance, suspected student use of generative AI to complete coursework raises the question of competency and mastery, i.e., the extent to which knowledge and skills have been demonstrated and learning objectives have been met. If generative AI is completing assessments, either partially or fully, that are meant to evaluate student performance, how can institutions and employers be sure that graduates have indeed acquired the knowledge and skills that a degree is meant to represent? At the same time, the growing prevalence of generative AI in the workplace suggests that colleges and universities may need to make generative AI guidance and training a core part of the curriculum if they are to become active leaders in developing principles and practices for the use of generative AI across industries.

Current Research Landscape

Given the still-nascent state of publicly accessible generative AI tools, research on generative AI in teaching and learning contexts is only beginning to emerge. The current literature includes initial survey results from institutions around the world that report on, and seek to better understand, student and faculty knowledge, perceptions, and use of generative AI technology, as well as their attitudes about its role in higher education and the world at large. Such results may help guide administrators and support services in drafting new policy and creating resources for students and faculty, but they reveal little tangible evidence of the impact generative AI has already had, and continues to have, on students' learning experiences and on their performance on assessments that, in theory, measure the extent to which a course's learning objectives have been met. Some related studies have focused on contextualizing generative AI within conversations about academic integrity, job readiness, diversity and inclusion, and accessibility, while others have examined generative AI's potential outside the classroom, for example as a personalized tutor for students or a grading assistant for instructors. Still others have identified risks and limits of the technology, such as instances of bias or false information (known as hallucinations) in generated content, or have focused on the problems with using AI-detection software to identify student use of generative AI.

While most of this research intersects, explicitly or implicitly, with the topic of assessment, there is a clear gap in the literature: data-driven analysis of what (if anything) instructors are specifically doing to address the potential and actual use of generative AI by students, and of the outcomes of such interventions. Many post-secondary institutions across Canada have revised their academic integrity policies to include language on generative AI, and some have provided faculty with resources for communicating expectations about generative AI in their courses, whether that means prohibiting the technology altogether or allowing responsible, transparent use for certain tasks. Still, there exists little material (scholarly or otherwise) offering instructors guidance, whether theoretical principles they can incorporate or concrete steps they can take, for designing assessments that either directly integrate generative AI into the student's task or mitigate its impact, rendering it another useful but limited tool alongside search engines, calculators, text-to-speech software, and the like.

Our Study’s Intervention

Our study aims not only to reveal and describe the actions (or lack thereof) that instructors are taking to address the impact of generative AI on the integrity and authenticity of assessments in their courses, but also to establish a set of principles and best practices to guide other instructors who are struggling or unsure where to start when (re)designing an assessment. In contrast to the nascent research on generative AI in teaching and learning contexts, there already exists a rich body of literature on assessment design in theory and practice, on which the proposed study will build. Key pillars of assessment design for accessibility, inclusion, and social justice will be central to framing the principles that emerge from our analysis, as these subfields have already pushed the limits of traditional thinking about assessment to promote an educational environment that removes barriers to learning. These foundations must be foregrounded given the well-documented digital divide: an approach to generative AI without frameworks of anti-oppression in place would risk perpetuating the very systemic inequities that such technology should be leveraged to help diminish.

Survey Closed!

Thank you to all the participants who completed our survey and submitted sample assessments. We are currently interviewing participants and hope to publish results within the coming months. Stay tuned!

Please contact Ben Lee Taylor with any questions or concerns.

This study has been reviewed and received ethics clearance from the McMaster Research Ethics Board (Project #6636).