DUNMORE, Pa. — Penn State Scranton Lecturer of Information Sciences and Technology Fred Aebli’s contribution to the Online Learning Consortium (OLC) Accelerate 2025 Conference, “How ChatGPT Turned My Final Exams Into AI-Powered Learning Labs,” received a Best-in-Track designation in the Innovative and Effective Digital Learning Design track. OLC leadership selected the session as one that would resonate with the consortium’s community, and Aebli will give his presentation a second time via a live OLC webinar on Feb. 17, as part of a series featuring OLC Best-in-Track honorees.
OLC Accelerate showcases groundbreaking research and highly effective practices in online and digital learning across K-12, higher education and corporate learning and development.
Aebli, who has taught at Penn State Scranton for over 20 years, came up with the subject matter of his presentation while grappling with the question: How do we get students to a fundamental knowledge level to use artificial intelligence (AI) effectively without using AI?
“Since the arrival of OpenAI’s ChatGPT, and having spent a great deal of time teaching and using tech, I realized we still need to have good creative thinkers and problem solvers,” he said. “Although we all have an AI in our pocket with our phones, our prior knowledge of certain subject matter allows us to use AI quicker to solve problems. I then looked at how we assessed knowledge with exams and then looked at the goals and objectives of the course.”
Aebli subsequently built exams that met the goals and objectives of the course, “but then, I also felt that since [the students] are heading into the working world, they need to be challenged while being allowed to use AI,” he explained. “So, the split exam/quiz idea surfaced.”
Students were first given a closed-resource exam component that tested their fundamental knowledge, then moved to the next part of the exam, where they could use any resource, from the internet to AI chatbots, to formulate a solution, with the stipulation that they reflect on the solution those tools provided.
Administering the exam was daunting at first, Aebli said.
“I had never used the Respondus LockDown Browser in a fully in-person setting, so I wasn’t sure how it would behave with multiple students working simultaneously in the room,” he said. “That uncertainty added some tension going in. However, once the exam began, it ran smoothly. The technology held up, the logistics worked, and the experience was far more seamless than I anticipated.”
One of the most striking things he said he observed, particularly in his upper-division courses, was how the students engaged with the reflective portions of the exam.
“When students reached those questions, they didn’t rush,” he said. “They leaned back in their chairs rather than leaning forward to simply ‘get it done.’ They paused. They thought. I even allowed students to step into the hallway, provided they did not interact with one another, and many took advantage of that space to reflect. Watching that happen in real time confirmed what I was hoping to achieve: deeper thinking rather than reactive answering.”
When asked whether he had any concerns about students cheating, or not learning the material, Aebli emphasized that while there is considerable discourse online about the topic, the reality is that even when students use external tools, they must still understand what those tools produce and, more importantly, how to apply that information correctly. Without foundational knowledge, the outputs are meaningless, he said.