Editor's note: This story is from the Penn State College of Education's fall 2024 magazine.
UNIVERSITY PARK, Pa. — The generative artificial intelligence (AI) revolution is altering the landscape in higher education. Generative AI, which is capable of generating text, images and videos, can enhance learning experiences. However, as highlighted in a report by global higher education outlet Inside Higher Ed, there are also concerns about the quality of education delivered through AI-driven platforms as well as ethical issues such as plagiarism and environmental harm.
Penn State College of Education faculty members are working to help students harness the powers of AI by becoming active, responsible practitioners of the technology while also understanding its challenges.
Classroom learning with ChatGPT
Marcela Borge, associate professor of education specializing in learning, design and technology (LDT), said she is empowering her students to unlock the full potential of generative AI by demonstrating how to use the systems to support learning processes as they solve complex problems. The College of Education’s LDT program focuses on theoretically and ethically informed design, study and advancement of learning environments that are technologically enhanced and culturally situated to provide student support.
“The level of sophistication that you can get from interactions with AI technology and with the products that you can get it to create is really dependent on the level of sophistication and knowledge of the user,” she said.
Borge’s research interests are at the intersection of learning, cognition and design. Her current research focuses on unpacking group cognition and student needs to design new interaction models and technological tools to enhance learning. This fall, she is teaching “LDT 577: Computer Supported Collaborative Learning (CSCL),” as well as an online course, “LDT 843: Technology for Good and Evil.”
One of the most significant AI developments in recent years is generative AI systems, like ChatGPT, that are more accessible to the public, Borge said. ChatGPT, developed by OpenAI, can generate human-like conversational responses and enables users to refine and steer a conversation towards a desired length, format, style, level of detail and language to simulate real-world discussion.
The increasingly widespread use of ChatGPT by college students in their coursework has prompted concerns about plagiarism and lack of original thinking, according to a report in Wired. One of the main concepts that Borge focuses on in her CSCL class is empowering learners by helping them understand how these technologies can support complex collaborative thinking processes.
She said she also helps students develop sophisticated techniques to generate prompts for ChatGPT that help them think about and improve how they carry out sense-making processes. For example, in a paper published in the International Journal of Computer-Supported Collaborative Learning, Borge discussed different types of conversational moves that humans can use when partnering with chat-based agents to support domain content and collaborative process learning. These moves can include prompts for interactional expectation, domain topic exploration and co-regulation of domain-related thinking during discussion, she explained. There are also post-discussion moves that users can employ to support joint reflection between the human and the AI agent to help the human evaluate how well they engaged in sense-making and where and how they can improve. These skills may carry over to everyday communication that doesn’t involve technology, Borge added.
“By having individual students have conversations with ChatGPT, they are able to practice higher-quality moves we know are conducive to problem solving,” Borge said. “They can get in the habit of using conversational moves that we know help human conversation as well.”
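To make the idea concrete, a library of conversational-move prompts might look like the following highly simplified sketch. The move names echo the categories Borge describes (interactional expectation, topic exploration, co-regulation, joint reflection), but the wording and structure are invented for illustration and are not her published taxonomy:

```python
# Illustrative sketch: prompt templates for conversational moves a student
# might use when partnering with a chat-based agent. Move names follow the
# categories described above; the template wording is an assumption.

MOVE_TEMPLATES = {
    # Set expectations for how the discussion should unfold.
    "interactional_expectation": (
        "Before we begin: challenge my reasoning and ask me to justify "
        "claims rather than simply agreeing with me."
    ),
    # Explore the domain topic together.
    "topic_exploration": (
        "Let's examine {topic} from two competing perspectives. "
        "Present one, then ask me to argue the other."
    ),
    # Co-regulate domain-related thinking mid-discussion.
    "co_regulation": (
        "Pause: summarize the positions we've developed so far on {topic} "
        "and point out any gaps in our reasoning."
    ),
    # Post-discussion joint reflection on the sense-making process.
    "joint_reflection": (
        "Now evaluate our discussion of {topic}: where did I engage in "
        "genuine sense-making, and where could I improve?"
    ),
}

def build_prompt(move: str, topic: str = "") -> str:
    """Fill a move template with the discussion topic."""
    return MOVE_TEMPLATES[move].format(topic=topic)

print(build_prompt("topic_exploration", topic="algorithmic bias"))
```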
Borge said that she has proactively dealt with the academic integrity issues related to ChatGPT by requiring students to turn in both the transcript of their “collaborative” process with ChatGPT and the final product when they work with the AI system. The students share their work with the class and engage in discussions on strategies.
“For me, plagiarism is not an issue because how I’m doing the work has changed,” Borge said. “I want to see evidence that they are introducing important concepts from the class readings.”
Borge said she doesn’t see her job as policing.
“My job is to provide the students with interesting opportunities to learn new things and become better learners.”
Transforming human resources through AI
AI offers transformative potential for human resource management by automating repetitive tasks, enhancing decision-making and boosting efficiency, according to William Rothwell, distinguished professor of education in the Workforce Education and Development program. The program is designed to promote excellence, opportunity and leadership among professionals in the field of workforce education and development, including professionals employed in secondary and postsecondary education institutions, social services, employee organizations and private sector businesses.
AI has dramatically altered the employee recruitment process, Rothwell said. Rather than using job boards, companies are developing databases of applicants that are screened by AI technologies. According to a report by Gartner, a consulting firm that conducts and shares technology research through private consulting, executive programs and conferences, 76% of human resource leaders believe that if their organization does not adopt and implement AI solutions in the next 12 to 24 months, it will lag in organizational success compared to those that do. Such solutions include virtual assistants, chatbots and the processing of unstructured data such as email text, text files and video.
“Sometimes, a human being will never see all the applicants for a certain position,” Rothwell said. “AI will sort candidates based on keywords or other factors.”
AI can sift through hundreds of résumés in significantly less time than it would take a human, Rothwell said. This is an advantage in the sense that algorithms can identify 80% of qualified candidates, he added, but it is a double-edged sword. Due to the lack of personalization in the process, the technology can skip over some qualified applicants and select other people who are unqualified. The efficacy of the AI systems highly depends on the competence of the people managing them, Rothwell said.
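The keyword-based sorting Rothwell describes can be sketched in a few lines. This is a minimal toy, with invented résumés and keywords; real screening systems use far richer features, and they inherit the same pitfall that anything outside the keyword list is invisible to them:

```python
# Minimal sketch of keyword-based resume screening. Keywords and resumes
# are invented examples for illustration only.

def score_resume(text: str, keywords: list[str]) -> int:
    """Count how many required keywords appear in a resume."""
    text_lower = text.lower()
    return sum(1 for kw in keywords if kw.lower() in text_lower)

def rank_candidates(resumes: dict[str, str], keywords: list[str]) -> list[str]:
    """Return candidate names sorted by descending keyword score."""
    return sorted(
        resumes,
        key=lambda name: score_resume(resumes[name], keywords),
        reverse=True,
    )

keywords = ["Python", "machine learning", "SQL"]
resumes = {
    "Avery": "Data analyst with Python and SQL experience.",
    "Blake": "Background in machine learning, Python, and SQL pipelines.",
    "Casey": "Project manager; strong communication skills.",
}
print(rank_candidates(resumes, keywords))  # ['Blake', 'Avery', 'Casey']
```

Note that Casey scores zero despite possibly being qualified, a tiny illustration of how keyword screening can skip over candidates whose résumés use different vocabulary.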
Also, according to Pluralsight, an online education company, because AI relies on historical data to make decisions, AI models could adopt any biases inherent in that data and apply them when making decisions.
“We have to weigh the benefits versus the costs,” Rothwell said. “In the hands of a genius, AI magnifies the genius. But in the hands of an idiot, AI magnifies the idiocy.”
Facilitating online student dialogue with machine learning
Machine learning is a type of artificial intelligence that enables a program or algorithm to learn and improve its processes from experience. Priya Sharma, associate professor of education with a focus on developing and using emerging technologies for teaching and learning in formal and informal contexts, researches the impact of AI machine learning models on student engagement and interaction in online classes.
Sharma is a collaborator on a project that has been working to develop a prototype learning analytics dashboard that provides instructors with insights into students’ cognitive and behavioral engagement in online discussions. The project emerged from the researchers’ experiences as designers and facilitators of online higher education courses. By capturing students’ online engagement, they said they hope to support instructors in providing feedback and modifying their pedagogical design. The dashboard, which is built on a machine learning back end, displays social networks that represent whom students talk to within the discussion board, along with discourse analyses that provide qualitative assessment of the content of student posts.
“What we were trying to do is help instructors to look at student discussions online and get an idea of the quality of discussion,” Sharma said. “Was it more engaged or on autopilot? To what extent are people talking to each other?”
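The social-network component of such a dashboard can be sketched, in highly simplified form, as a weighted directed graph built from reply records. The records and field names below are invented for illustration and are not the project's actual data or schema:

```python
# Sketch: count who replies to whom on a discussion board, producing the
# weighted edges of a social network. Data is invented for illustration.
from collections import Counter

# Each record: (author, replied_to); replied_to is None for a new thread.
posts = [
    ("ana", None), ("ben", "ana"), ("cam", "ana"),
    ("ana", "ben"), ("ben", "cam"), ("cam", "ana"),
]

# Weighted directed edges: (replier, original poster) -> reply count.
edges = Counter((author, target) for author, target in posts if target)

for (src, dst), weight in sorted(edges.items()):
    print(f"{src} -> {dst}: {weight}")
```

An instructor-facing dashboard would render these edges as a network diagram; students with few or no incoming edges are easy to spot as candidates for a check-in.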
A paper based on this work has been published in Educational Technology Research and Development. More recently, the team has analyzed student engagement in remote learning environments by investigating whether the language students produce in online discussions indicates their cognitive engagement in collaborative activities. They used a computational linguistic tool called the Tool for the Automatic Analysis of Lexical Sophistication (TAALES) to measure various indices of language, including sophistication, connections and a wide range of other sub-constructs. By comparing the indices of the produced language with the researchers’ qualitative assessment of post quality, the team sought to determine how readily machine learning could add helpful metrics to the dashboard. They presented their work in March, and it was published in the Proceedings of the 55th Association for Computing Machinery Technical Symposium on Computer Science Education.
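The underlying idea, checking whether automatically computed language indices track human quality ratings, can be illustrated with a toy example. The indices below (mean word length, type-token ratio) are simplified stand-ins, not TAALES's actual measures, and the posts and ratings are invented:

```python
# Toy illustration: compute simple lexical indices for student posts and
# check whether they correlate with hypothetical instructor ratings.
# These indices are crude stand-ins for TAALES's measures.
import math

def lexical_indices(post: str) -> dict[str, float]:
    words = post.lower().split()
    return {
        "mean_word_length": sum(len(w) for w in words) / len(words),
        "type_token_ratio": len(set(words)) / len(words),
    }

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

posts = [
    "i agree",
    "interesting point but what about the counterexample",
    "the theoretical framework presupposes measurable collaborative engagement",
]
ratings = [1.0, 2.0, 3.0]  # hypothetical instructor quality ratings

lengths = [lexical_indices(p)["mean_word_length"] for p in posts]
print(round(pearson(lengths, ratings), 2))
```

A strong correlation on real data would suggest the index is worth surfacing in the dashboard; a weak one would argue for keeping human judgment in the loop.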
“Can we use machine learning to engage with people’s learning?” Sharma said. “We’re using machine learning not just as a way to grade but as a way for instructors to engage in a positive way with students’ learning.”
In the same line of research, Sharma and her colleagues are using large language models (LLMs) such as ChatGPT to evaluate discussion boards. The goal is to understand how to use LLMs to generate automated feedback on student performance but with instructor oversight.
“There is a crucial need for a human in the loop,” Sharma said. “AI can do some things really well, but it has to be trained. We have so much insight into what works and what doesn’t, so we’re trying to integrate humans into the loop as safeguards.”
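The human-in-the-loop workflow Sharma describes might be sketched as a review queue: the model drafts feedback, but nothing reaches students until an instructor signs off. The draft generator below is a stub standing in for a real LLM call, and all names are invented for illustration:

```python
# Sketch of a human-in-the-loop feedback pipeline: LLM-drafted feedback
# is held for instructor approval before release. The draft generator is
# a stub, not a real LLM call.
from dataclasses import dataclass

@dataclass
class FeedbackItem:
    student: str
    draft: str
    approved: bool = False  # instructor must flip this before release

def draft_feedback(post: str) -> str:
    """Stub for an LLM call that drafts feedback on a discussion post."""
    return f"Thanks for your post. Consider elaborating on: {post[:40]}"

def review_queue(posts: dict[str, str]) -> list[FeedbackItem]:
    """Generate drafts; all start unapproved, awaiting instructor review."""
    return [FeedbackItem(s, draft_feedback(p)) for s, p in posts.items()]

queue = review_queue({"dee": "Collaboration depends on shared goals."})
queue[0].approved = True  # instructor edits and signs off
released = [item for item in queue if item.approved]
print(len(released), "feedback item(s) released to students")
```

The design choice is simply that approval defaults to false: the safeguard is structural, not a matter of remembering to check.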
Using AI to advance educational equity
Exploring how AI may enhance equity in teaching and learning is an open challenge that has many facets, said ChanMin Kim, associate professor of education who researches various designs for improving equity through education. Kim is principal investigator of two projects on this topic, in collaboration with researchers from the College of Information Sciences and Technology and the College of Education at Penn State University Park and Penn State Hazleton. The team is working with pre-service and in-service teachers on improving ambient cues in classrooms — attributes of a physical environment that can influence people's perceptions and behaviors, like décor and background music — for minoritized students.
An example of ambient cues, Kim said, can be found in an educational video series that is often shown in science, technology, engineering and mathematics (STEM) classrooms in which a white male instructor teaches about science and engineering. An inequitable aspect of those videos, she explained, is that the instructor is positioned as the expert, while students of color are portrayed as struggling to understand scientific content.
According to Kim, finding equitable video materials for teaching in elementary STEM classrooms is not a simple task because elementary STEM teachers have a high workload — for example, each STEM teacher in the school district with which Kim works teaches all grades, averaging 250 students per week — and videos need to be played in their entirety to determine the appropriateness of content and visual cues.
Kim and her collaborators are working with STEM teachers to co-design a sociotechnical AI system that automatically identifies and classifies equitable video materials, especially for minoritized students. The researchers are also investigating the use of AI image-generation techniques for teacher education. They want to use AI to design content containing equitable ambient cues in classrooms to promote diversity, equity, inclusion and belonging.
“The impact of these projects will be immediately broad considering the students who would benefit from equitable materials with ambient cues with which they positively associate themselves,” Kim said.
Examining ethical considerations of AI
Despite the potential benefits espoused by some faculty members, AI is not without its pitfalls, including detrimental environmental impacts, according to Dylan Paré, assistant professor of education who focuses on social issues surrounding technology. Paré expressed concerns about the environmental issues surrounding AI and how they connect with ethical issues and education.
“AI has substantial environmental, social and financial costs that are essential to know about as we determine as a society what role AI should play in our futures, including energy and freshwater use, the spread of low quality and biased information and loss of jobs to AI,” Paré said. “ChatGPT queries use 10 times more electricity than a Google search query and use 500 milliliters of water for every 20 questions. Microsoft's emissions have grown by 29% since 2020 as it constructed more data centers to support AI.”
Over the course of their career, Paré has integrated their background in gender and cultural studies with a comprehensive understanding of social and ethical issues surrounding technologies. Supporting students' AI literacy must include developing their critical awareness of AI dangers, they said.
“As educators, we are faced with a profound ethical dilemma,” Paré said. “Is it right to require our students to engage with AI such as ChatGPT, given the considerable environmental harms that threaten all our futures, especially younger generations who will not experience a stable climate because of our actions?”