
Webinar Descriptions

Pathways to Partnerships in Online Literacy Education: Process, AI Literacy and Co-Constructive Writing in the Age of AI

Empowering Tutors and Faculty to Equip Students with AI Literacy for the Writing Process

Webinar Leader: Justin Cary

Date of Webinar: September 26, 2025 from 12:00-1:00 pm EDT

Click here to register

Overview

This webinar has the following learning outcomes:

1. Co-Constructive AI Writing Partnerships

Participants will be able to apply AI as a metacognitive partner to co-construct online course content. This involves using the tool to generate and reflect on multiple perspectives, leading to an iterative process of inquiry and revision that enhances their understanding of digital and online content.

2. Metacognitive Course Design for Online Education

Participants will be able to analyze how an AI tool can aid in metacognitive online course design. They will use the tool to generate diverse pedagogical approaches and then evaluate the effectiveness of these online strategies for fostering student engagement and collaboration in a digital learning environment.

3. Fostering Critical Reflection with AI

Participants will be able to evaluate the outputs of an AI tool to foster students' critical reflection and online literacy. This includes identifying potential biases or misinformation within digital content and creating activities that require students to document and reflect on their interactions with the AI.

4. Developing a Process-Oriented Framework for Online Writing with AI

Participants will be able to design a personal framework for integrating AI into the online writing process.

Webinar Leader Bio

Justin Cary is a Senior Lecturer in the Writing, Rhetoric and Digital Studies Department (WRDS) and has been teaching at Charlotte for over ten years. In October 2024, Justin served on the AI in Teaching and Learning Task Force as the College of Humanities & Earth and Social Sciences (CHESS) representative, working with colleagues across Charlotte to collect viewpoints and perspectives from CHESS administrators, faculty and students around AI in teaching and learning. In Spring 2025, Justin served as a Center for Teaching and Learning (CTL) AI Faculty Fellow, building a database of AI Use Case Stories and collaborating on the CTL 3rd Annual AI Summit for Smarter Learning.  

Currently, Justin is exploring new and exciting pathways for incorporating responsible, critical, and ethical AI Literacy frameworks into First-Year Writing and Writing Studies. He is discovering co-constructive, metacognitive applications for harnessing the potential of AI in writing to build upon the foundational skills and habits of critical thinking, communication, reflection, process, and rhetoric offered across connected disciplines in the humanities and beyond.

Critically Responding to Institutional GenAI Mandates in Online and Offline Writing Programs: A Heuristic for WPAs and WCDs

Webinar Leaders: Stacy Wittstock, N. Claire Jackson, & Jennifer Burke Reifman

Date of Webinar: November 10, 2025 from 12:00-1:00 pm EST

Click here to register

Overview

Since November 2022, the landscape around Generative AI, particularly in educational contexts, has evolved in a number of directions. Currently, many of our institutions are investing heavily in GenAI technologies and mandating AI literacy curricula; at the same time, intense debates about the ethics and consequences of these products are both challenging the morality of their use and raising serious questions about their potential impact on student learning. Given this context, Writing Program and Writing Center Administrators may wonder how to respond ethically, efficiently, and responsibly to institutional mandates related to these technologies.

In this workshop, three new and untenured WPAs/WCDs will discuss how they have taken up these efforts. The presenters will examine emerging understandings of what “AI Literacy” is, including McIntyre, Fernandes, and Sano-Franchini’s (2025) “critical digital cultural literacies”; Söken and Nygreen’s (2024) situating of AI literacy in a broader framework of critical media literacy; and Thornley and Rosenberg’s (2024) bridging of AI literacy and information literacy, among others. Presenters will also consider how to balance the documented “learning loss” from the use of GenAI tools (Gerlich, 2025; Kosmyna et al., 2025) with the imperative to teach AI literacy, and explore pathways for instructor and student agency that do not assume, as we’ve been told, that resistance is futile.

After engaging with this emerging work, we will describe our own contexts, in which we have been mandated to integrate GenAI and must navigate doing so. We will describe our efforts to meet these mandates by focusing on how GenAI integration may or may not align with course learning outcomes and course modalities at our institutions, considering issues of professional development and the overall fit of GenAI products with already-established curricula. We then invite participants to discuss institutional mandates that may be impacting their own programs and to consider how the information presented in this webinar might help them think through potential approaches. More specifically, we’ll provide a heuristic aimed at encouraging participants to examine their own programmatic outcomes and consider the extent to which AI literacy does or does not align with existing outcomes and/or course modalities, discuss what it would mean to integrate AI literacy into these outcomes and courses, and develop materials for teaching AI literacy tied to their own outcomes. We will also provide a space for participants to strategize ways to respond critically to institutional hype while ensuring their continued place in the conversation.

This webinar explores how data students generate is used to train GAI without informed consent, allowing Big Technology (BigTech) and Educational Technology (EdTech) companies to profit from student data and labor. Discussing AI through a lens of surveillance and privacy shows students that the content they submit to, upload to, and share with AI generators is monitored and often not attributed as their own intellectual property. Intellectual property is a global topic that is important for educators and students to address within and beyond the classroom. Such conversations in the classroom can create space for discussions to happen outside of it, shaping how public discourse surrounding GAI can and should include intellectual property concerns.

Whether or not instructors decide to implement GAI in the classroom, they should still have conversations with students about topics such as intellectual property and surveillance.

Participants will gain:

  • Strategies for connecting learning outcomes with GenAI literacy as a concept
  • Approaches to responding critically to institutional hype 
  • Materials and assessments for students developed from student learning outcomes (SLOs)
  • Approaches for considering integration of GenAI across course modalities

Webinar Leader Bios

Please contact presenters for more information.

Beginning the Conversation: AI-Generated Texts and Social Reflection as Starting Points for Building Rhetorical Knowledge

Webinar Leaders: Meghan Velez, Kara Taczak, & Alicia Lienhart

Date of Webinar: January 23, 2026 from 12:00-1:00 pm EST

Click here to register

Overview

According to a recent New Yorker article, college students use ChatGPT to be “resourceful” and “efficient,” yet many suggest they don’t retain any learning when they use it. The article’s main premise centered on what might happen after AI destroys college writing. But it, like many other media publications, asks the wrong types of questions. Instead of making connections to writing studies’ core threshold concepts, such as “all writers have more to learn” or “writing is (also) always a cognitive activity,” such coverage focuses on rigid, narrow versions of academic writing, reinforcing to students (and fellow educators) that writing is not rhetorical. As the continuing proliferation of AI technologies forces us to reenvision our definitions of what writing is and what it means, it becomes even more important to invite our students to do so as well. This webinar will discuss one such invitation: the use of social reflection and AI-generated texts in first-year writing courses as a way of empowering writers to theorize and question their processes, practices, beliefs, attitudes, and understandings about writing (Dryer et al., 2015).

The webinar will also trace the journey the facilitators have taken, working with multiple contact points to communicate what AI is, what it is not, and what it could be across the disciplines. We will examine the ethical knots around citational issues through a series of hypothetical scenarios that will serve as springboards for compassionate discussions about AI use. The facilitators will theoretically ground the opening discussion with a heuristic to tease out the implications of ethical AI use. The session will then move into an interactive component that allows participants to discuss and find productive spaces for potential collaboration or continued discourse. This webinar does not seek to reinvent the wheel; rather, we wish to build on existing knowledge and ethical perspectives across the disciplines to help generate a more unified, productive discussion about AI in academia.

Webinar Leader Bios

Please contact presenters for more information.

From Compliance to Co-Authorship: Designing Student-Inclusive AI Policies in Online Writing Instruction

Webinar Leader: Sydney Sullivan

Date of Webinar: April 3, 2026 from 12:00-1:00 pm EDT

Click here to register

Overview

This webinar explores a student-inclusive approach to AI policy design in online writing instruction. Rather than imposing static, instructor-written policies about generative AI, participants will learn how to co-author classroom guidelines with students—promoting critical literacy, rhetorical awareness, and shared responsibility in digital spaces.

We will explore a replicable, four-phase model for implementing participatory AI policy development in online settings. This includes activities such as anonymous polling, asynchronous forums, Padlet-based brainstorming, and collaborative syllabus clause drafting. Drawing from threshold concepts in Writing Studies—particularly authorship, the rhetorical situation, and writing as knowledge-making (Adler-Kassner & Wardle, 2015)—the session shows how policy writing can itself become a writing assignment that strengthens rhetorical thinking and digital agency.

The approach is further grounded in Asao Inoue’s labor-based assessment theory, which reimagines classroom authority through dialogic, values-based co-creation (Inoue, 2019). Additionally, the session draws on Sambell and McDowell’s (1998) insights about the “hidden curriculum” embedded in policy and assessment design—offering strategies to make implicit expectations visible and student-centered. Sambell, McDowell, and Montgomery (2013) extend this work into the online learning context, where assessment-for-learning practices can foster trust and transparency.

In light of recent institutional moves toward adopting enterprise AI tools like ChatGPT Edu (e.g., San Diego State University IT Division, 2025), which are often introduced without inclusive dialogue, the session addresses the need for critical AI literacy frameworks that resist top-down implementation. Rather than framing policy as punishment or risk mitigation, the webinar encourages attendees to treat it as a collaborative artifact that evolves in response to student voices, technological shifts, and institutional tensions.

Interactive Components:

  • Live Polling + Jamboard Mapping: Participants reflect on how their current AI policies (or lack thereof) reflect assumptions about student ethics, learning, and authorship.
  • Collaborative Padlet Activity: Participants brainstorm student values and concerns around AI to simulate a co-writing session.
  • Syllabus Clause Remix Workshop: Attendees revise traditional top-down AI policies into participatory versions with inclusive and flexible language.
  • Resource Roundtable: The session concludes with access to editable templates, student-facing prompts, and survey tools to pilot in attendees’ own classrooms.

Participants will:

  • Identify limitations and implicit messages in instructor-centered AI use policies.
  • Understand how threshold concepts and anti-racist assessment frameworks support student-inclusive policy design.
  • Design at least one activity to involve students in ethical, rhetorical reflection on AI tool use.
  • Draft a co-authored, student-facing AI policy clause for online or hybrid syllabi.
  • Access shareable, adaptable resources to support implementation and future reflection.

Webinar Leader Bios

Please contact presenters for more information.
