Enhancing Assessment and Feedback in Game Design Programs: Leveraging Generative AI for Efficient and Meaningful Evaluation
James Hutson
Lindenwood University, USA
https://orcid.org/0000-0002-0578-6052
Ben Fulcher
Lindenwood University, USA
https://orcid.org/0009-0005-4338-6067
Jeremiah Ratican
Lindenwood University, USA
https://orcid.org/0000-0002-6814-8672
ABSTRACT
The integration of generative AI tools in game design education offers promising ways to streamline the grading, assessment, and feedback processes that are typically labor-intensive. In game design programs, faculty often deal with varied file formats, including 3D models, executable prototypes, videos, and complex game design documents. Traditional methods of assessment and feedback, primarily text-based, struggle to provide timely and actionable insights for students. Furthermore, only a small percentage of top students consistently review and apply feedback, leading to inefficiencies. This article explores how generative AI tools can augment these processes by automating aspects of grading, generating more personalized and meaningful feedback, and addressing the time-intensive nature of reviewing diverse file formats. Key strategies are discussed, including the use of rubrics tailored for AI-based assessment, automated prompts for narrative-driven assignments, and the application of AI in reviewing complex project builds. The objective is to create more time for faculty to engage in live mentoring and hands-on learning activities, which research shows to be more effective. Practical examples of various game design assignments, including build reviews and document evaluations, are provided to illustrate these new approaches. This shift promises to enhance student engagement and improve learning outcomes.
KEYWORDS
Generative AI; Game Design Education; Automated Assessment; Feedback Efficiency; Student Engagement.
1. INTRODUCTION
The current state of grading, assessment, and feedback strategies in higher education has undergone significant transformation in recent years (Winstone & Boud, 2022). Traditional assessment methods often rely on labor-intensive processes, including grading papers, exams, and projects manually, which can lead to delays in providing timely feedback to students (Ewell et al., 2011). Across disciplines, faculty members are often tasked with grading large volumes of work, making it difficult to provide individualized feedback that can promote student improvement (Couper, 2018). Furthermore, research shows that only a fraction of students, typically the highest achievers, fully engage with the feedback provided, which undermines the effectiveness of these efforts (Kritek, 2015; Stevens & Levi, 2023; Winstone & Carless, 2019). Many institutions are beginning to explore ways to automate and streamline assessment through technology, recognizing that more efficient grading practices can enhance learning outcomes by enabling more timely feedback (Crompton & Burke, 2023; Heil & Ifenthaler, 2023; Hutson & Plate, 2023). Nevertheless, the balance between quality and efficiency in grading remains a key challenge for educators (Wang et al., 2023).
In disciplines such as the sciences, humanities, and social sciences, grading rubrics are often employed to bring consistency to assessment processes. Rubrics provide a structured framework that allows faculty to evaluate student work based on clear criteria, facilitating both grading and feedback (Wollny et al., 2021). While rubrics can enhance transparency and fairness, they do not always accommodate the complex or creative nature of assignments in fields like the arts or engineering, where qualitative judgment is crucial (Fitch, 2023). Faculty members in these disciplines frequently express concerns about the time commitment required to assess student work thoroughly, especially in courses with large enrollments (Winstone & Carless, 2019). This situation has led to increased interest in integrating AI-powered tools that can handle repetitive or low-complexity grading tasks, freeing educators to focus on providing more in-depth feedback on higher-level work. AI tools also offer the potential to personalize feedback for students, ensuring that it is relevant and actionable (Crompton & Burke, 2023).
Despite these advancements, feedback practices remain inconsistent across higher education, with some students receiving only limited or superficial comments on their work (Jin et al., 2024). Research indicates that feedback is most effective when it is specific, timely, and linked directly to learning outcomes (Bennett, 2011). However, the pressures of grading large volumes of work can lead to generic or delayed feedback, reducing its impact. Furthermore, the reliance on written feedback may not suit all learning styles, prompting calls for more diverse and multimodal approaches, such as video or audio feedback (Wollny et al., 2021). Emerging technologies, including generative AI (GAI), offer promising solutions to these issues by providing tools that can automate aspects of feedback generation, ensuring that students receive personalized, actionable insights without overwhelming faculty. As higher education institutions continue to evolve, the integration of AI in assessment processes may hold the key to more efficient, equitable, and student-centered learning experiences (Zouhaier, 2023).
In game design education, assessment practices have evolved to accommodate the diverse and complex nature of student outputs, ranging from game prototypes to 3D models and branching narratives (Adams, 2014). Unlike traditional disciplines, game design courses often require the submission of varied file formats, including executable game builds, 3D modeling files, narrative documents, and gameplay videos (Udeozor et al., 2023; Wollny et al., 2021). These formats present unique challenges for both students and faculty, as the transferability of these files between different hardware or software environments can lead to inconsistencies in evaluation. As a result, many faculty members have adopted a more flexible approach, allowing students to submit media such as videos of gameplay or links to online prototypes to ensure assessability across different platforms (Wolz, Ault, & Nakra, 2007). Rubrics are commonly employed to provide a structured framework for evaluating creative elements such as game mechanics, storytelling, and user experience (Petrova, 2020). However, grading remains labor-intensive, as each project often requires in-depth review and personalized feedback to guide students in refining their design skills.
Moreover, feedback in game design programs is crucial for iterative learning, as students often work in development cycles where continuous improvement is key to success. Faculty generally provide detailed, typed feedback on game builds and design documents, focusing on both technical and creative aspects (Winstone & Carless, 2019). However, video feedback, though more detailed, is often impractical due to the large volume of students in these programs. The emphasis on personalized feedback is particularly important in game design, as only a fraction of students—typically the most engaged—actively apply the suggestions they receive (Wollny et al., 2021). This challenge has led to growing interest in incorporating AI-powered tools that can help automate the assessment of technical aspects, such as code functionality or game physics, while allowing faculty to focus on providing more meaningful insights into the design and narrative components. This approach not only reduces the time burden on educators but also ensures that students receive timely, relevant feedback that can be implemented in their next development cycle.
In response to the challenges posed by traditional assessment strategies, faculty in game design programs have devised innovative approaches that address the unique demands of the discipline. These strategies are specifically tailored to accommodate the variety of file types used in game design, such as executable builds, 3D models, and narrative documents, ensuring smoother transfer and review across different platforms. Additionally, faculty have begun incorporating AI-powered tools to assist with grading technical aspects of projects, such as code analysis and physics simulations, allowing more time for qualitative feedback on creative elements. Furthermore, the design of assignments in game design has shifted to include prompts that encourage iterative learning, requiring students to submit multiple versions of their work for continuous improvement. By providing clear rubrics and prompts that guide students in understanding project requirements and expectations, these strategies foster a more efficient and targeted feedback process. Ultimately, these faculty-devised strategies aim to address the limitations of traditional assessment by streamlining evaluation and feedback, enhancing student engagement, and improving learning outcomes in game design education.
2. LITERATURE REVIEW
Gamification has extended far beyond its origins in the gaming and entertainment industry and has been widely adopted in educational contexts, demonstrating significant pedagogical benefits. The rise of game-based learning (GBL) is due in part to its adaptability across different industries and demographics. This expansion is rooted in the demonstrable effectiveness of gamification in enhancing content delivery and engagement. It has proven useful not only in business software and entertainment but also in the realm of education. As Exterman (2021) notes, research shows that when learning is framed as entertaining, it becomes more engaging, and learning outcomes are more easily achieved. Educators have increasingly recognized the value of incorporating game-like elements into classroom instruction, such as point systems, badges, leaderboards, and trophies, to motivate student behaviors. Additionally, game-based learning can involve the direct use of games to facilitate learning, as seen in simulations and educational games (Kapp, 2012). Despite the growing adoption of these techniques, many educators still fail to apply them systematically or within a scaffolded instructional framework (Tang, 2014), limiting their potential impact on learning outcomes.
The infusion of gamification into various aspects of life, including education, represents a broader cultural and technological shift driven by advancements in game development and increased accessibility for users from diverse backgrounds. The benefits of gamification, particularly in content delivery and engagement, have been well-documented across domains, including mobility, business software, and entertainment (Exterman, 2021). In education, the gamification of learning enhances student engagement by introducing goals, achievements, and other motivational strategies that make the learning process more interactive and meaningful. Popular platforms such as Khan Academy and Duolingo exemplify the successful application of these gaming methodologies, particularly in secondary education. Despite the advances in digital and participatory culture that permeate students’ daily lives, traditional educational models remain largely unchanged, with students still grouped into segregated levels under the guidance of a single instructor (Squire, 2011). This creates a disconnect between the participatory media culture students engage with outside of school and the largely passive consumption model they experience in the classroom (Jenkins, 2006; Black, 2008).
The rise of participatory media has given students unprecedented access to create and share content across various platforms, often without formal training, using tools like YouTube, Reddit, and other social media outlets (Jenkins, 2006). In contrast, traditional education continues to operate on a one-to-many instructional model, leaving little room for immediate feedback or community-based learning (Squire, 2011). As Levin and Arafeh (2002) revealed in their study, K-12 students increasingly turn to online resources to find information that is personalized, relevant, and available on demand. This shift toward online learning environments allows students to access tailored content aligned with their interests and abilities, providing more immediate and meaningful feedback compared to traditional classroom settings.
As the academy reevaluates its role in the digital age, it becomes essential to rethink how learning outcomes are achieved. Studies have shown that incorporating participatory elements into lesson plans can significantly improve student retention and engagement in the learning process. Dastyar (2019) emphasizes that motivational factors, when combined with participatory learning, lead to enhanced academic performance and motivation among students. Similarly, games are emblematic of a larger cultural shift toward participatory media, offering models for experiential and interactive learning. The act of gaming transforms students from passive consumers of information into active participants who shape their learning experiences through their interaction with games and peers (Squire, 2011). This participatory model can be effectively adopted in educational settings, creating a more engaging and dynamic learning environment for students.
At the same time, research on grading, assessment, and feedback in game design education is relatively sparse, with much of the existing literature focusing instead on game-based learning and assessments rather than the specific pedagogical needs of game design education. This gap is significant because game design as a discipline requires a unique approach to assessment that balances technical proficiency with creativity. Most studies on assessment frameworks within educational contexts are geared toward evaluating learning through games rather than evaluating students’ work in game design courses. For example, frameworks like Evidence-Centered Design (ECD) are primarily used for assessing skills within immersive environments but do not address the intricacies of assessing game design projects themselves (Serrano-Laguna et al., 2018).
Within game design education, traditional methods of grading often fall short due to the diversity of student outputs such as prototypes, 3D models, and complex narratives. Assessment frameworks must incorporate both formative and summative feedback to guide students through iterative development processes. Educators frequently use rubrics tailored to game design’s creative and technical aspects, but the literature lacks comprehensive studies on the effectiveness of these rubrics in fostering student learning. Studies have shown that while rubrics help students understand project expectations, they often need to be supplemented with detailed feedback that emphasizes creativity alongside technical skills (Burke, 2010; Morton et al., 2021).
Moreover, feedback plays a crucial role in game design education because the development process is iterative. Formative assessments, such as peer reviews and design critiques, are commonly used to give students continuous feedback on their projects. These formative assessments help students refine their designs before they reach the final stages of development. However, the existing literature on feedback in game design education suggests that feedback is most effective when it is specific and action-oriented, helping students make meaningful improvements. Educators often struggle to balance providing detailed feedback with the time-intensive nature of reviewing complex game design projects (Easterday et al., 2017).
One challenge that remains largely unaddressed in the literature is the need for scalable assessment methods that can handle large class sizes while maintaining quality feedback. While automated grading tools have been explored in programming education, there is little research on their application in game design courses. The integration of AI-driven tools to assess technical aspects such as code and physics simulations has shown promise in reducing the workload for educators, allowing them to focus on providing feedback on creative elements (Zarkoob, 2024). However, empirical studies on the effectiveness of such tools in game design education are lacking.
Even as we note the limitations of the applicable studies, the significant ethical challenges of AI, particularly concerning fairness, transparency, and bias in evaluation processes, need to be discussed. AI systems are not neutral; they reflect the biases present in the data used to train them and the decisions made by their designers. These inherent biases in AI can perpetuate or even exacerbate existing social inequalities, disproportionately affecting marginalized groups (Ferrara, 2023). In education, where fairness and equity are paramount, such biases raise critical concerns about the integrity of AI-assisted assessments and feedback systems.
One of the main sources of bias in AI is the reliance on non-representative or skewed datasets, which can result in unfair outcomes. For instance, if an AI system is trained primarily on data from students in well-resourced schools, it may not accurately assess the work of students from under-resourced schools, leading to biased evaluations that reinforce educational disparities. Ferrara (2023) stresses the urgency of addressing these biases to prevent the entrenchment of inequalities in educational settings. These biases not only affect individual student assessments but can also have long-term implications for students’ academic and professional trajectories. Additionally, cognitive biases—stemming from the subjective decisions made by those who design and implement AI systems—can further exacerbate issues of fairness (Chen et al., 2023).
AI-generated feedback systems, while offering personalized insights, also present risks of bias. These systems may inadvertently favor certain groups of students over others based on their learning history, socioeconomic background, or even gender (Ntoutsi et al., 2020). For example, gender bias has been a particular focus in AI and machine learning (ML), with scholars emphasizing the need to integrate diversity and gender theory into AI system development to mitigate such biases (Hall & Ellis, 2023; Kordzadeh & Ghasemaghaei, 2022; Leavy, 2018). Bias in AI feedback can result in unequal learning experiences, where certain students receive more detailed, constructive feedback than others, undermining the principle of equity in education.
Moreover, transparency in AI decision-making processes is critical to maintaining trust in educational systems. Students and educators must understand how AI-driven assessments and feedback are generated to ensure accountability. Alam (2023) emphasizes the importance of addressing concerns related to privacy, security, and biases in AI applications in education. Without transparency, there is a risk that AI systems will perpetuate hidden biases, making it difficult to detect and rectify unfair outcomes. Strategies for enhancing transparency include the use of explainable AI (XAI) tools that clarify the decision-making processes of AI systems, enabling educators and students to understand how conclusions are reached.
To mitigate these biases and promote fairness, several strategies must be implemented. First, an interdisciplinary approach to AI design and deployment is necessary, involving ethicists, educators, and technologists in the development of AI systems (Srivastava et al., 2023). This collaborative approach ensures that diverse perspectives are considered, reducing the likelihood of bias. Participatory Design (PD) is a key methodology that can foster inclusivity by involving a wide range of stakeholders in the AI design process. By incorporating the voices of students, educators, and marginalized communities, PD helps ensure that AI systems are responsive to the needs of all users (Zytko et al., 2022). PD has been shown to reduce the negative societal impacts of AI and enhance its positive contributions, particularly for vulnerable populations.
In addition to PD, Value Sensitive Design (VSD) is an essential framework for embedding ethical principles into AI systems. VSD focuses on integrating values such as transparency, justice, fairness, and privacy into the design process (Jacobs et al., 2021). By prioritizing these values, VSD ensures that AI systems are not only functional but also socially responsible. For example, Umbrello (2019) demonstrates how VSD can be used to align AI systems with shared societal values, making them more equitable and transparent.
Lastly, continuous monitoring and evaluation of AI systems are crucial to identifying and correcting biases as they arise. Tools that visualize and measure fairness, such as those explored by Mashhadi, Zolyomi, and Quedado (2022), can help educators and developers understand how biases manifest in AI systems and take steps to mitigate them. Incorporating fairness and bias themes into AI education, as Islam et al. (2022) suggest, is also critical for training future technologists who are equipped to design AI systems that promote equity and inclusivity.
Therefore, while research on the application of AI tools for assessment in higher education has grown significantly in recent years (Crompton & Burke, 2023; Heil & Ifenthaler, 2023), there remains a notable gap in the literature concerning specific strategies for game design education. The unique challenges presented by game design programs, including the evaluation of diverse file formats, complex project builds, and creative elements, have not been adequately addressed in the context of AI-assisted assessment (Adams, 2014; Udeozor et al., 2023). Despite the potential benefits of AI in streamlining grading processes and enhancing feedback quality, as demonstrated in other disciplines (Wang et al., 2023; Zouhaier, 2023), tailored approaches for game design education are lacking. The following sections will discuss novel strategies that leverage AI tools to address the specific needs of game design programs, aiming to improve assessment efficiency, feedback quality, and student engagement in this rapidly evolving field.
3. RECOMMENDATIONS
Game design education presents a unique set of challenges for faculty when it comes to assessment and feedback, distinguishing it from many other academic disciplines. These challenges stem from the diverse and complex nature of game design projects, which often involve multiple file formats, large builds, and creative elements that are difficult to evaluate using traditional assessment methods (Gui et al., 2023; Moreno-Ger et al., 2008).
Variety of File Formats
Game design projects present a unique challenge due to the wide array of file formats involved, making assessment a complex task for faculty. For instance, a single student project might include an executable file (.exe) for a playable game prototype, alongside compressed folders (.zip) containing numerous assets and source files. These projects often incorporate various image files, primarily JPEG and sometimes PNG, showcasing concept art, textures, and user interface designs. Additionally, students frequently submit video files demonstrating gameplay mechanics or narrative sequences (Kramarzewski & De Nucci, 2023). Text documents, typically in .docx format, are also common for game design documents, level design plans, and project reports (Brunnberg, 2020). To illustrate, a student’s final project submission might consist of a playable .exe file, a folder of 3D models and textures, a gameplay video, and a 20-page game design document. This diversity in file types poses significant challenges for faculty in terms of file management, ensuring accessibility across different systems, and maintaining consistent evaluation standards across varied project components.
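To make the file-management burden concrete, the following minimal Python sketch audits a submission folder against the deliverable categories described above. The extension groupings, folder layout, and function name are illustrative assumptions rather than a prescribed standard:

```python
from pathlib import Path

# Illustrative mapping of deliverable categories to file extensions;
# a real course would tune this per assignment.
EXPECTED = {
    "build": {".exe", ".zip"},
    "art": {".jpg", ".jpeg", ".png"},
    "video": {".mp4", ".mov"},
    "document": {".docx", ".pdf"},
}

def audit_submission(folder: str) -> dict:
    """Report which deliverable categories are present in a submission."""
    found = {p.suffix.lower() for p in Path(folder).rglob("*") if p.suffix}
    return {category: bool(exts & found) for category, exts in EXPECTED.items()}

# Example: audit_submission("student_042/final_project")
# might return {"build": True, "art": True, "video": False, "document": True}
```

Even a small check like this lets faculty see at a glance whether a multi-part submission is complete before any grading begins.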
Interoperability Issues
Large game builds frequently encounter interoperability problems when transferred between student and faculty devices, presenting a significant hurdle in the assessment process. These issues can arise due to differences in hardware specifications, such as when a student develops a graphically intensive game on a high-end PC and the build then struggles to run on a faculty member’s standard-issue laptop. Varying software versions or game engines can also cause complications; for example, a project created in Unity 2022.2 might not open correctly in Unity 2021.3 installed on a faculty machine. Operating system incompatibilities further exacerbate this issue, as demonstrated when a Windows-based game fails to run on a macOS system. To circumvent these challenges, faculty often resort to alternative submission methods. For instance, students might be asked to submit screenshots showcasing key game elements, record video walkthroughs of their gameplay, or provide links to online playable versions hosted on platforms like itch.io. While these alternatives ensure that faculty can assess the projects, they often fail to capture the full interactive experience of the original game, potentially impacting the depth and accuracy of the evaluation (Stănescu et al., 2013).
Time-Intensive Grading Process
The multifaceted nature of game design projects renders the grading process particularly time-intensive, requiring faculty to evaluate a diverse range of elements within each submission. Visual components, such as character models, environmental designs, and user interface layouts, demand careful scrutiny for their artistic merit and functional effectiveness. Textual content, including narrative scripts, design documents, and technical specifications, must be assessed for coherence, creativity, and adherence to game design principles. Interactive elements like gameplay mechanics and overall user experience require hands-on testing to evaluate their effectiveness and entertainment value. Furthermore, the technical execution of the project, including code quality and performance optimization, needs expert evaluation. For example, grading a single student’s final project might involve reviewing a 50-page game design document, playtesting a 30-minute game demo, analyzing code structure and efficiency, and evaluating the artistic cohesion of various game assets. This comprehensive evaluation process can easily consume several hours per student, leaving faculty with limited time for providing personalized feedback or engaging in one-on-one mentoring sessions, which are crucial for student development in this highly creative and technical field (Seering et al., 2019).
Limited Engagement with Feedback
Despite the significant time and effort invested in providing detailed feedback, faculty in game design programs consistently report a concerning trend: only a small percentage of students, typically the most academically engaged ones, thoroughly read and apply the feedback provided. This phenomenon raises critical questions about the efficiency and effectiveness of current feedback methods in game design education. For instance, an instructor might spend an hour crafting comprehensive feedback on a student’s level design, offering insights on pacing, difficulty curve, and environmental storytelling, only to find that the student makes minimal changes in subsequent iterations. This limited engagement is particularly problematic in game design, where iterative development and continuous improvement are fundamental to the creative process. The reasons for this lack of engagement can vary; some students may feel overwhelmed by the volume of feedback across multiple project components, while others might struggle to translate written feedback into actionable improvements in their game designs. Additionally, the technical nature of some feedback, such as suggestions for optimizing render pipelines or refactoring code structures, may be challenging for students to implement without further guidance. This situation not only affects individual student progress but also impacts the overall educational effectiveness of the program, as valuable instructional insights fail to translate into improved student work and skill development (McGinness et al., 2020).
Proposed Strategies: AI-Assisted Rubric Generation
To address the unique challenges in game design assessment, we propose the following strategies that leverage generative AI tools. Developing customizable, AI-powered rubrics can significantly streamline the assessment process for game design projects. These rubrics would be capable of adapting to various project types and file formats, incorporating both technical and creative evaluation criteria. For instance, an AI system could analyze the project requirements and automatically generate a rubric that includes relevant criteria for assessing gameplay mechanics, visual aesthetics, narrative design, and technical implementation. The system could draw from a vast database of game design principles and industry standards to ensure comprehensive coverage. For example, when evaluating a 3D action game, the AI might include criteria for character controller responsiveness, camera functionality, level design complexity, and visual effects quality. The rubric could also dynamically adjust its weighting based on the project’s focus, giving more emphasis to narrative elements for story-driven games or mechanics for puzzle games. This adaptability ensures that each project is evaluated fairly and comprehensively, regardless of its specific focus or format (Wolf et al., 2023).
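A minimal sketch of how such rubric generation might be scripted appears below, assuming the OpenAI Python client; the model name, prompt wording, and function signature are illustrative, and a production system would validate and version the generated rubrics against course outcomes:

```python
from openai import OpenAI  # assumes the OpenAI Python client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_rubric(project_brief: str, genre: str, focus: str) -> str:
    """Ask a language model to draft a weighted rubric for one assignment."""
    prompt = (
        "Draft a grading rubric for a student game design project.\n"
        f"Genre: {genre}. Primary focus: {focus}.\n"
        "Include criteria for gameplay mechanics, visual aesthetics, "
        "narrative design, and technical implementation, and weight the "
        f"criteria toward the stated focus.\nProject brief:\n{project_brief}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# e.g., generate_rubric(brief_text, genre="3D action", focus="mechanics")
```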
Automated File Analysis
Implementing AI tools capable of analyzing different file types can provide initial assessments on technical aspects, freeing up faculty time for more nuanced creative evaluation. For image files, AI could assess elements like color palette coherence, compositional balance, and adherence to art style guidelines. In video analysis, the system could evaluate frame rates, visual consistency, and even basic gameplay flow. For code files, AI could perform static analysis to check for best practices, potential bugs, and optimization opportunities. For instance, when analyzing a student’s 3D model submissions, the AI could automatically check for polygon count, texture resolution, and rigging quality, flagging any issues that might impact game performance. Similarly, for a gameplay video, the AI could track player progression, identify potential difficulty spikes, and assess the pacing of key events. This automated analysis would provide faculty with a solid foundation for their assessment, allowing them to focus their expertise on evaluating the more subjective and creative aspects of each project (Sezen, 2024).
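As one illustration of these technical checks, the sketch below flags polygon-count, watertightness, and texture-resolution issues, assuming the open-source trimesh and Pillow libraries; the numeric budgets are hypothetical course thresholds, not engine requirements:

```python
import trimesh          # assumes the trimesh library is installed
from PIL import Image   # assumes Pillow is installed

MAX_TRIS = 50_000    # hypothetical triangle budget for a real-time asset
MAX_TEXTURE = 2048   # hypothetical per-side texture resolution limit

def check_model(mesh_path: str) -> list[str]:
    """Flag technical issues in a 3D model before human review."""
    issues = []
    mesh = trimesh.load(mesh_path, force="mesh")
    if len(mesh.faces) > MAX_TRIS:
        issues.append(f"polygon count {len(mesh.faces)} exceeds {MAX_TRIS}")
    if not mesh.is_watertight:
        issues.append("mesh is not watertight (possible holes)")
    return issues

def check_texture(image_path: str) -> list[str]:
    """Flag oversized or non-power-of-two textures."""
    width, height = Image.open(image_path).size
    issues = []
    if max(width, height) > MAX_TEXTURE:
        issues.append(f"texture {width}x{height} exceeds the {MAX_TEXTURE} limit")
    if width & (width - 1) or height & (height - 1):
        issues.append("texture dimensions are not powers of two")
    return issues
```

Flags like these would be passed to faculty as a pre-screening report, not as grades in themselves.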
Intelligent Prompt Generation
Utilizing AI to create tailored prompts for different project stages can encourage students to submit more standardized and assessable deliverables. These prompts would be dynamically generated based on the project type, current development stage, and individual student progress. For example, during the concept phase of a role-playing game project, the AI might generate prompts like “Describe three unique features of your game’s magic system and how they impact gameplay” or “Outline the character progression system, including at least five distinct skills or attributes.” As the project moves into production, prompts could become more specific, such as “Provide a video demonstration of the inventory system, showcasing item sorting and equipment comparison features.” By guiding students with these tailored prompts, faculty can ensure that submissions address key aspects of game design more consistently, making assessment more straightforward and comprehensive (Sudhakaran et al., 2023).
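One lightweight way to realize this, sketched below, keys prompt templates to the development stage; a fuller system would have a language model draft the templates from each student’s project type and prior submissions. The stages and placeholder fields are illustrative:

```python
from string import Template

# Illustrative templates keyed by development stage; placeholders left
# unfilled are preserved rather than raising errors.
STAGE_PROMPTS = {
    "concept": [
        Template("Describe three unique features of your game's $system "
                 "and how they impact gameplay."),
        Template("Outline the character progression system, including at "
                 "least $n distinct skills or attributes."),
    ],
    "production": [
        Template("Provide a video demonstration of the $feature, "
                 "showcasing $details."),
    ],
}

def prompts_for(stage: str, **fields: str) -> list[str]:
    """Fill the stage-appropriate templates for one student project."""
    return [t.safe_substitute(fields) for t in STAGE_PROMPTS.get(stage, [])]

# e.g., prompts_for("production", feature="inventory system",
#                   details="item sorting and equipment comparison features")
```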
Personalized Feedback Synthesis
Employing generative AI to compile and synthesize feedback from multiple sources can create cohesive, actionable recommendations for each student. This system would integrate data from rubrics, automated file analysis, and faculty input to generate comprehensive feedback reports. For instance, if the rubric indicates a low score in “level design,” the automated analysis detects inconsistent difficulty curves in gameplay videos, and the faculty notes lack of player guidance, the AI could synthesize this information into a coherent feedback statement. It might read: “Your level design would benefit from more consistent difficulty progression. Consider adding more visual cues to guide the player through complex areas and review the spike in difficulty at the 10-minute mark of your gameplay video.” This synthesized feedback would not only highlight issues but also provide specific, actionable suggestions for improvement, making it easier for students to understand and apply the feedback they receive (Liu et al., 2023).
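The synthesis step can be framed as assembling a single prompt from the three feedback sources, as in the sketch below; the five-point rubric scale and field names are assumptions, and the assembled prompt would be sent to the same chat model used for rubric drafting:

```python
def synthesis_prompt(rubric_scores: dict, auto_flags: list[str],
                     faculty_notes: str) -> str:
    """Merge rubric, automated, and human feedback into one LLM prompt."""
    low = [c for c, s in rubric_scores.items() if s < 3]  # assumes a 5-point scale
    return (
        "Synthesize cohesive, actionable feedback for a game design "
        "student. Be specific, encouraging, and suggest concrete next steps.\n"
        f"Low-scoring rubric criteria: {', '.join(low) or 'none'}\n"
        f"Automated analysis flags: {'; '.join(auto_flags) or 'none'}\n"
        f"Instructor notes: {faculty_notes}"
    )

# e.g., synthesis_prompt(
#     {"level design": 2, "art direction": 4},
#     ["inconsistent difficulty curve detected in gameplay video"],
#     "Lacks player guidance in complex areas.")
```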
Interactive Feedback Platforms
Developing AI-powered platforms that allow students to engage with their feedback through guided questions and reflections can increase the likelihood of feedback application. These platforms could present feedback in an interactive format, prompting students to reflect on specific aspects of their work and plan improvements. For example, after receiving feedback on character design, the platform might ask questions like “How does your character’s visual design reflect their backstory?” or “What changes could you make to better align the character’s abilities with the game’s core mechanics?” The AI could then provide additional resources or examples based on the student’s responses. This interactive approach encourages deeper engagement with the feedback, helping students internalize the critiques and actively plan their next steps. Additionally, the platform could track student interactions with feedback over time, allowing faculty to identify areas where students might need additional support or clarification (Anjum et al., 2024).
By implementing these AI-assisted strategies, game design programs can address the unique challenges of assessment in this field. These approaches aim to streamline the evaluation process, provide more consistent and comprehensive feedback, and increase student engagement with the feedback they receive. Ultimately, this can lead to more effective learning outcomes and better prepare students for the realities of professional game development.
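As a concrete, if simplified, stand-in for the interactive platform described above, the command-line sketch below pairs feedback topics with reflective questions and logs student responses for later faculty review; the question bank, log format, and function names are purely illustrative:

```python
import json
import time

# Hypothetical bank of reflective questions keyed by feedback topic.
REFLECTIONS = {
    "character design": "How does your character's visual design reflect "
                        "their backstory?",
    "core mechanics": "What changes could better align your character's "
                      "abilities with the game's core mechanics?",
}

def reflect(feedback_topics: list[str], log_path: str = "reflections.json"):
    """Prompt the student to reflect on each topic and log the responses."""
    entries = []
    for topic in feedback_topics:
        question = REFLECTIONS.get(
            topic, f"What will you change about {topic} in your next iteration?")
        answer = input(f"{question}\n> ")
        entries.append({"topic": topic, "question": question,
                        "answer": answer, "timestamp": time.time()})
    with open(log_path, "w") as f:
        json.dump(entries, f, indent=2)  # faculty can review engagement later
```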
Practical Application: AI-Assisted Assessment of Branching Narratives
To illustrate the practical application of AI in game design assessment, we can examine a specific example: evaluating a branching narrative assignment submitted as an HTML file. This type of assignment presents unique challenges that are characteristic of game design education, differentiating it from more traditional academic disciplines. Unlike many other fields where faculty primarily deal with standard document formats such as Word files or images, game design assignments often involve complex, interactive elements. Branching narratives, commonly used in adventure games or interactive fiction, exemplify this complexity. These narratives are frequently created using specialized tools that output HTML files, which can be particularly challenging and time-consuming for faculty to review manually.
To address this challenge, an AI-assisted approach can be implemented. In this example, ChatGPT-4 was utilized to aid in the assessment process. The procedure involved uploading an anonymized student assignment (the HTML file containing the branching narrative) to ChatGPT-4 and prompting the AI with the instruction: “Summarize the existing branching narrative in the file attached.” The output is as follows:
The branching narrative “Echoes of Temptation” centers around a cursed village where the player character must navigate complex choices involving the village’s past, a demon’s bargain, and various rituals to either save or doom the inhabitants. Key decision points include:
• The Demon’s Deal: The player encounters a demon offering to lift the village’s curse in exchange for the elder’s life. The deal involves persuasion, possession, and multiple moral dilemmas.
• The Ritual: The player must gather specific items (a forest flower, mirrors, charms, blood, and a chant) to perform a ritual that could lift the curse. The outcome depends on how successfully these items are gathered and used.
• Village Interactions: Players engage with villagers, the blacksmith, herbalist, and elder, each providing critical tasks, information, and side quests to fix the well, uncover the elder’s dark past, and locate hidden artifacts.
• Multiple Endings: Choices culminate in various endings depending on whether the player completes the ritual correctly, accepts or rejects the demon’s deal, and other key actions, leading to outcomes like the village’s recovery, failure, or the player becoming bound to the demon’s will.
The narrative focuses on moral decisions, exploration, and strategy, with significant weight on dialogue and interactions with characters.
The AI-generated summary provides a comprehensive overview of the narrative structure, including several key elements. It outlines the main storyline and identifies critical plot points, giving the assessor a quick understanding of the narrative’s core. The summary also quantifies the number of distinct branches or endings, providing insight into the complexity and replay value of the interactive story. Furthermore, it analyzes the depth of the branching, indicating how many levels of decisions are available to the player or reader, which is crucial for understanding the narrative’s complexity and player agency.
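Where a submission follows a known export format, part of this structural analysis can be performed without a language model at all. The sketch below assumes a Twine-style HTML export, in which each passage is a <tw-passagedata> element and links use the [[display->Target]] or [[Target]] syntax; narratives built with other authoring tools would need different parsers:

```python
import re

def narrative_stats(html_path: str) -> dict:
    """Count passages, decision points, and endings in a Twine-style export."""
    html = open(html_path, encoding="utf-8").read()
    # Each passage: <tw-passagedata ... name="...">body</tw-passagedata>
    passages = dict(re.findall(
        r'<tw-passagedata[^>]*\sname="([^"]+)"[^>]*>(.*?)</tw-passagedata>',
        html, flags=re.DOTALL))
    # Extract link targets, handling [[Target]], [[text->Target]], [[text|Target]]
    links = {name: [m.split("->")[-1].split("|")[-1]
                    for m in re.findall(r"\[\[(.*?)\]\]", body)]
             for name, body in passages.items()}
    return {
        "passages": len(passages),
        "decision_points": sum(len(out) > 1 for out in links.values()),
        "endings": sum(not out for out in links.values()),
    }
```

Pairing such deterministic counts with the language model’s qualitative summary gives faculty both verifiable structural metrics and interpretive context.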
Next, a follow-up prompt sought to evaluate the narrative without going through the various iterations of each scenario while still providing meaningful analysis and feedback. The following prompt was used: “You are providing feedback for this branching narrative assignment for a student. Use the criteria that focuses on playability, narrative structure and efficacy and relative sophistication and output with paragraphs on how to improve.” The output is as follows:
Feedback on the Branching Narrative Assignment:
Playability:
The story’s structure offers clear branching paths, allowing players to feel in control of the narrative. Each decision point is well integrated into the storyline, maintaining a sense of immersion. However, some choices lack immediate consequences, which can hinder the player’s engagement. For example, accepting the demon’s deal doesn’t lead to significant variations until later in the story. Introducing more immediate feedback or smaller consequences for choices early on can enhance the sense of interactivity.
Suggestions for Improvement:
1. Add Immediate Feedback: After major decisions like “accept the demon’s deal,” add smaller branching events, such as changes in dialogue or environment, that reflect the player’s choice more visibly.
2. Enhance Fail States: Introduce more instances where the player might fail or face setbacks if they make poor decisions, increasing tension and challenge.
Narrative Structure:
The narrative effectively captures a gothic, eerie atmosphere, which suits the tone of the story. The demon’s manipulation of characters and the sense of impending doom create a compelling storyline. However, some sections of the narrative are text-heavy, with long passages that could break the player’s sense of immediacy and agency. The player might lose focus if the text doesn’t lead to choices soon enough.
Suggestions for Improvement:
3. Break Up Long Texts: For longer passages (e.g., dialogue with the demon), break them into smaller chunks where players can make minor decisions in between, even if they are narrative choices that only affect how much information is revealed.
4. Intertwine Lore with Action: When delivering exposition (such as explaining the curse), embed it within more dynamic interactions. For instance, learning about the curse could happen while investigating a physical object in the environment.
Efficacy:
The narrative effectively builds suspense and tension, with clear stakes introduced for each character and event. However, the pacing of some plot points could be smoother. In particular, the transition between scenes like the demon’s deal and exploring the village can feel abrupt. Consider adding narrative glue that helps to transition between larger decisions and more exploratory sections.
Suggestions for Improvement:
5. Smooth Transitions: Use small narrative prompts between larger scenes (e.g., after making a deal with the demon, the player could briefly reflect on their decision while walking to their next destination).
6. Pacing Adjustments: If the player takes a deal or refuses it, have subtle variations in pacing (e.g., the environment could change slightly, reflecting the demon’s influence or lack thereof).
Sophistication and Output:
There’s a rich variety of branching paths that are well thought out, providing a broad scope of player choice. However, the consequences of some decisions could be more deeply explored. In particular, choices such as forcing open doors or eavesdropping don’t always seem to result in noticeable shifts in the plot. Players may feel like their agency is reduced if different choices lead to similar outcomes.
Suggestions for Improvement:
7. Consequential Choices: Ensure that even small decisions (e.g., eavesdropping or knocking) lead to unique outcomes, even if they are minor. This enhances the illusion of control and deepens engagement.
8. Refine the Endings: If the player reaches an ending, like being bound to the demon, consider creating multiple variations of this based on prior choices. This will reward the player for their journey and emphasize the uniqueness of their path.
In summary, your branching narrative is highly immersive, with strong thematic elements. By enhancing the consequences of player choices and refining transitions, the narrative can become even more engaging and dynamic. Keep experimenting with these elements to ensure players feel truly in control of their story’s outcome.
A significant component of the AI analysis is the identification of key decision points and their consequences. This information is vital for assessing how well the student has implemented cause-and-effect relationships within the narrative structure. The AI also identifies recurring themes or motifs across different branches, offering insights into the narrative’s cohesion and the student’s ability to maintain consistent themes despite diverging plotlines.
The benefits of this AI-assisted approach are manifold. Primarily, it offers a substantial time-saving advantage. Faculty can quickly grasp the structure and complexity of each narrative without the need to manually explore every branch, which can be exceptionally time-consuming for complex narratives. This efficiency allows for more thorough assessment of a larger number of submissions within a given timeframe. Moreover, this method enhances consistency in the assessment process. By providing a standardized overview for each student’s work, it ensures a fair comparison across different submissions. This standardization is particularly valuable in game design education, where the creative and technical aspects of assignments can vary widely.
Another significant advantage is that it allows faculty to focus their expertise on assessing the qualitative aspects of the work. With the structural elements already summarized, instructors can dedicate more time to evaluating the quality of writing, depth of character development, and overall narrative cohesion. This shift in focus aligns well with the creative and analytical skills that are crucial in game design. An additional benefit of this approach is its potential for identifying cases of plagiarism. Unusual similarities in AI-generated summaries across different submissions could help flag potential instances of academic dishonesty, a concern in any educational setting but particularly complex in creative fields like game design.
This AI-assisted assessment method can be effectively integrated with other proposed AI strategies in game design education. For instance, the AI-generated summary can inform the creation of a tailored rubric for each student’s unique narrative, ensuring that the assessment criteria are appropriately matched to the specific structure and content of each submission. In terms of feedback, the AI summary can be combined with faculty input to generate comprehensive, personalized feedback on both the narrative structure and content. This synthesized feedback can provide students with a more holistic understanding of their work’s strengths and areas for improvement. Furthermore, this approach can be incorporated into interactive feedback platforms. Students could be prompted to reflect on specific elements of their narrative structure as summarized by the AI, encouraging deeper engagement with the feedback and fostering metacognitive skills essential for game design.
By leveraging AI in this manner, faculty can more efficiently and effectively assess complex, interactive assignments like branching narratives. This approach directly addresses one of the unique challenges of game design education: the need to evaluate non-standard, interactive content that doesn’t fit traditional assessment models. As game design education continues to evolve, such AI-assisted methods may become increasingly valuable in maintaining high standards of assessment while managing the unique demands of the field.
4. DISCUSSION
The findings presented in this study reveal significant potential for the application of generative AI tools within game design education, especially when addressing the challenges of diverse file formats, iterative feedback, and time-intensive grading processes. Game design educators frequently encounter various project submissions that encompass a wide range of file types, from executable prototypes and 3D models to video demonstrations and narrative design documents. This diversity, while essential for capturing the multifaceted nature of game design, complicates traditional assessment methods, which are not always equipped to handle such technical and creative diversity. AI tools present an opportunity to streamline this process by automating certain evaluative aspects, enabling more efficient grading and feedback mechanisms.
While the primary focus of this study has been on the assessment of game prototypes and 3D assets, AI tools can be further extended to evaluate other common file types in game design education, such as:
1. Code Files: AI tools could analyze scripts used in game engines (e.g., C#, Python) to check for syntax errors, adherence to coding best practices, and performance optimization. For example, tools like ChatGPT or OpenAI Codex could automate the grading of game logic and detect potential inefficiencies or bugs within student submissions (a file-routing sketch follows this list).
2. Narrative Files: Interactive stories often rely on branching narratives, submitted as HTML or other text-based files. AI can analyze the complexity, coherence, and branching structure of narratives, providing instructors with a summary of possible story outcomes, as well as evaluating thematic consistency and player agency within the narrative.
3. Video Files: AI can assess gameplay videos for frame rate stability, identify visual inconsistencies, and evaluate user experience elements. Through object recognition and gameplay analysis, the system could detect core mechanics, analyze level progression, and highlight areas where the game might not meet design standards.
4. 3D Models and Animations: For asset-heavy projects, AI tools could assess technical quality, such as polygon count, texture mapping, and rigging integrity. AI-driven evaluations of artistic coherence and style alignment can provide students with insights that otherwise might be labor-intensive to deliver.
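A first step toward this kind of multi-format evaluation is simply routing each file to an appropriate analyzer. The sketch below dispatches by file extension; the analyzer names are hypothetical hooks for the checks described in the list above:

```python
from pathlib import Path

# Hypothetical analyzer hooks corresponding to the four categories above.
ANALYZERS = {
    ".cs": "analyze_code", ".py": "analyze_code",
    ".html": "analyze_narrative",
    ".mp4": "analyze_video", ".mov": "analyze_video",
    ".fbx": "analyze_model", ".obj": "analyze_model",
}

def route_submission(folder: str) -> dict[str, list[str]]:
    """Group a submission's files by the analyzer that should review them."""
    routed: dict[str, list[str]] = {}
    for path in Path(folder).rglob("*"):
        analyzer = ANALYZERS.get(path.suffix.lower())
        if analyzer:
            routed.setdefault(analyzer, []).append(str(path))
    return routed
```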
The incorporation of AI tools in game design education offers transformative potential; however, several limitations must be acknowledged. A primary challenge lies in the ability of AI to effectively evaluate the subjective and creative aspects of game design. While AI excels at assessing technical components—such as code functionality and physics simulations—it struggles with more nuanced areas that require human judgment. Creative decisions, such as the originality of a game’s artistic direction, the depth of its narrative, or the emotional impact on players, are difficult for AI to evaluate comprehensively. These elements of game design embody an artistic dimension that current AI technologies are unable to fully grasp or quantify. Another significant limitation involves the issue of interoperability. AI-based assessments rely heavily on the compatibility of software and hardware environments, which can present challenges when game design projects are developed in various game engines or software versions. For instance, a game created in a newer version of a game engine may not be accessible for AI analysis on a system running an older version. These compatibility issues can result in incomplete evaluations and complicate the assessment process for both faculty and students.
Scalability also poses a problem in the widespread use of AI for grading. While AI can streamline the evaluation of technical aspects, ensuring that feedback remains personalized and relevant across large class sizes is a considerable challenge. Personalized feedback is essential for student development in game design, as it encourages iterative improvement and creative refinement. Over-reliance on AI could lead to feedback that is too generic, limiting its impact on student growth and learning. Additionally, like game projects themselves, AI tools are not immune to bugs and technical errors. Faults in file analysis or incorrect assessments can mislead students, resulting in potentially unfair grades. This underlines the need for human oversight to mitigate the risks associated with over-dependence on AI technologies.
To fully leverage the benefits of AI in game design education, educators should adopt several key strategies. A hybrid approach that combines the efficiency of AI-driven assessments with the qualitative insights provided by human instructors is crucial. While AI can manage routine technical evaluations, human judgment is necessary for assessing creative components such as narrative cohesion and artistic innovation. By allowing AI to handle technical evaluations, instructors can focus their efforts on providing deeper, more personalized feedback on the creative and narrative aspects of student projects. In addition, AI models used in game design courses should be carefully calibrated to align with specific educational objectives. This involves training AI systems using rubrics that reflect course learning outcomes. As these models are fine-tuned over time, they can provide more tailored feedback that resonates with both the technical and creative demands of game design.
Interactive platforms powered by AI could also enhance the feedback process by encouraging student engagement with the feedback they receive. These platforms could present feedback through reflective questions and suggested next steps, prompting students to think critically about their work and apply insights to future iterations. Such systems could increase student receptiveness to feedback and foster a culture of continuous improvement. Moreover, AI can be integrated into the early stages of game development, where it can assist students in organizing their projects. For example, AI-powered project management tools could help students manage assets, document their design decisions, and keep track of development progress. These tools could ensure a more structured and efficient workflow, ultimately leading to higher-quality final submissions.
Faculty training is essential for the effective implementation of AI tools in game design education. Workshops and professional development opportunities should be provided to ensure that instructors are well-equipped to integrate AI into their teaching practices. Training should focus on how to use AI in ways that complement, rather than replace, human feedback, ensuring a balanced approach to student assessment. Thus, while AI holds significant promise for streamlining assessment in game design education, it cannot replace the essential role of human judgment, particularly when evaluating creative and artistic elements. A hybrid approach that integrates AI for technical tasks with human feedback for creative aspects offers the most effective path forward. As AI systems evolve, future iterations should aim to increase their adaptability to the diverse and complex nature of game design, ensuring that they support learning outcomes and foster student growth.
While AI holds significant potential for transforming assessment and feedback processes in game design education, it is crucial to acknowledge its practical and technological limitations. These limitations can hinder the seamless integration of AI into educational environments, particularly in creative disciplines like game design, where subjective judgment and interpretive analysis are paramount. One of the most prominent limitations of AI in game design education lies in its ability to effectively evaluate creative projects. AI systems excel at analyzing structured data and performing tasks that require the recognition of patterns, but they struggle with assessing the nuanced, subjective elements that characterize creative works. Game design is a multifaceted discipline involving both technical competencies, such as coding and mechanics, and creative expression, such as storytelling, art direction, and user experience design. While AI can provide feedback on more technical aspects, such as code efficiency or 3D model optimization, it lacks the interpretive capabilities to evaluate the aesthetic or emotional impact of a game, which is critical for comprehensive feedback in game design education (Ntoutsi et al., 2020).
For instance, evaluating narrative complexity, player engagement, or emotional resonance in a game is inherently subjective, requiring a deep understanding of human experience and creative intent—something AI cannot yet fully grasp. Creative elements like visual aesthetics, storytelling, and gameplay experience involve interpretative evaluation, which relies on human judgment and context. AI’s inability to account for these subjective factors can lead to overly simplistic or incomplete assessments that do not fully address the depth and complexity of a student’s creative work (Leavy, 2018). This limitation underscores the need for a hybrid assessment approach that combines AI’s efficiency in evaluating technical aspects with human instructors’ nuanced judgment in creative evaluation.
Another significant constraint is interoperability between AI tools and the various platforms used in game design. Game design projects are often developed using a range of software and tools, including game engines like Unity or Unreal Engine, 3D modeling software such as Blender, and audio middleware like FMOD. Each of these platforms has unique file formats, workflows, and technical requirements, making it difficult for AI systems to interface with them seamlessly. AI-driven assessment tools may not always be compatible with the diverse range of file formats used in game design, such as executable game builds, 3D assets, narrative scripts, and gameplay videos. This creates challenges in assessing projects that involve multiple file types or complex integrations between different game design tools (Mashhadi, Zolyomi, & Quedado, 2022).
For example, an AI system trained to evaluate narrative documents in plain text formats may struggle to assess interactive branching narratives submitted as HTML or as executable builds. Similarly, an AI tool designed for evaluating code performance in C# may not be compatible with a project built in a different programming language or using a custom game engine. These interoperability challenges can result in incomplete or inaccurate assessments, where the AI fails to properly analyze critical components of the project. In some cases, students may be required to submit alternative versions of their projects, such as gameplay videos instead of interactive builds, to ensure that the AI can evaluate their work. However, this approach often reduces the depth of the assessment, as the AI cannot fully experience the interactivity and player agency central to game design (Stănescu et al., 2013).
Additionally, the reliance on specific versions of game design software and platforms can create further challenges. Game engines and software tools are frequently updated, and compatibility issues can arise when AI systems are not adapted to these updates. For example, a game developed using a newer version of Unity might not run properly in an AI assessment tool built for an older version of the engine, resulting in errors or incomplete evaluations. These technical barriers complicate the integration of AI into game design education, as both students and educators must navigate the technical requirements and limitations of their chosen tools alongside the AI systems in use (Zhou et al., 2023).
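To make this concrete, the Python sketch below shows a pre-flight compatibility check of the kind an assessment pipeline might run before attempting an automated review. The supported-version list is a hypothetical assumption; the one grounded detail is that Unity projects record their editor version in ProjectSettings/ProjectVersion.txt, which is what the sketch reads.

```python
# Minimal sketch of a pre-flight engine-version check. SUPPORTED_MAJORS is a
# hypothetical list of versions the (assumed) assessment tool was tested on.
from pathlib import Path

SUPPORTED_MAJORS = {"2021", "2022"}  # assumption: tested engine versions

def unity_version(project_root: Path) -> str | None:
    """Read the editor version Unity records in ProjectSettings/ProjectVersion.txt."""
    version_file = project_root / "ProjectSettings" / "ProjectVersion.txt"
    if not version_file.exists():
        return None
    for line in version_file.read_text().splitlines():
        if line.startswith("m_EditorVersion:"):
            return line.split(":", 1)[1].strip()  # e.g. "2022.3.10f1"
    return None

def compatible(project_root: Path) -> bool:
    version = unity_version(project_root)
    return version is not None and version.split(".")[0] in SUPPORTED_MAJORS

if __name__ == "__main__":
    print(compatible(Path("student_submissions/project_01")))  # hypothetical path
```

Failing closed in this way lets a pipeline route incompatible builds to human review instead of producing an unreliable automated evaluation.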
To address these limitations, it is essential to adopt flexible and adaptable AI tools that can interface with multiple file formats and game design platforms. AI developers must work closely with educators and game design professionals to ensure that AI systems are capable of handling the diverse range of projects and platforms used in the field. Furthermore, the development of AI systems that incorporate more sophisticated models of creative evaluation, perhaps through machine learning algorithms trained on a diverse array of creative works, may help bridge the gap between technical and creative assessment. However, until AI is capable of fully addressing these challenges, human oversight and involvement remain crucial in the assessment of creative game design projects.
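As an illustration, a flexible pipeline might begin with nothing more elaborate than dispatching each submission to a format-appropriate reviewer and explicitly routing unsupported formats to a human. The sketch below assumes hypothetical handler functions; real ones would wrap engine-specific tooling, video sampling, or an LLM call.

```python
# Minimal sketch of format dispatch for mixed game design submissions.
# All handlers are hypothetical placeholders, not real integrations.
from pathlib import Path

def review_narrative_doc(path: Path) -> str:
    return f"text analysis of {path.name}"        # e.g. send contents to an LLM

def review_gameplay_video(path: Path) -> str:
    return f"frame sampling of {path.name}"       # e.g. transcribe + sample frames

def review_unsupported(path: Path) -> str:
    return f"{path.name}: route to human review"  # AI cannot assess this format

HANDLERS = {
    ".txt": review_narrative_doc, ".md": review_narrative_doc,
    ".mp4": review_gameplay_video, ".mov": review_gameplay_video,
}

def assess(submission: Path) -> str:
    handler = HANDLERS.get(submission.suffix.lower(), review_unsupported)
    return handler(submission)

if __name__ == "__main__":
    for f in [Path("design_doc.md"), Path("build.exe"), Path("playthrough.mp4")]:
        print(assess(f))
```

The explicit fallback handler embodies the hybrid approach argued for above: whatever the AI cannot interpret is handed to an instructor rather than silently skipped.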
5. CONCLUSION
The integration of AI into game design education holds considerable potential for revolutionizing the way assessments and feedback are delivered, particularly in terms of personalization. AI’s ability to generate customized feedback for individual students represents a significant shift from traditional, often one-size-fits-all approaches. By analyzing student performance data, AI systems can tailor feedback to address specific areas where each student may be struggling, thereby providing more actionable insights. For example, an AI tool might identify that a student consistently underperforms in game mechanics design but excels in narrative development. The system could then offer personalized guidance, suggesting resources or strategies to improve mechanics design while acknowledging the student’s strength in narrative elements. This targeted feedback can help students focus on their areas for improvement, promoting a more individualized learning path (Zouhaier, 2023).
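A minimal sketch of this kind of performance aggregation, assuming hypothetical rubric criteria and scores, might look like the following: average each criterion across a student's submission history, then target feedback at the weakest one while acknowledging the strongest.

```python
# Minimal sketch: aggregating rubric scores to target feedback.
# Criterion names and scores are invented sample data.
from statistics import mean

# Rubric scores (0-5) for one student across several assignments.
history = [
    {"mechanics": 2, "narrative": 5, "art": 4},
    {"mechanics": 3, "narrative": 4, "art": 4},
    {"mechanics": 2, "narrative": 5, "art": 3},
]

averages = {criterion: mean(a[criterion] for a in history)
            for criterion in history[0]}
weakest = min(averages, key=averages.get)
strongest = max(averages, key=averages.get)

print(f"Focus feedback on '{weakest}' (avg {averages[weakest]:.1f}), "
      f"while acknowledging strength in '{strongest}'.")
```

A production system would of course draw on richer signals than rubric rows, but even this simple aggregation captures the mechanics-versus-narrative example described above.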
Furthermore, the capacity to provide real-time feedback ensures that students receive timely support, allowing them to make iterative improvements more efficiently. This immediacy of feedback, paired with its specificity, fosters a more active and engaged learning process. Students are no longer left to decipher broad or delayed critiques; instead, they receive precise and timely suggestions that align with their unique learning trajectories. As a result, the learning experience becomes more dynamic and responsive, enabling students to progress at their own pace and according to their specific needs (Winstone & Carless, 2019).
By personalizing feedback in this way, AI also enhances student motivation and engagement. Tailored feedback is inherently more relevant to the individual learner, making it easier for students to see the direct link between their efforts and the improvements they need to make. This personalization fosters a sense of ownership and responsibility over their learning process, which can lead to deeper engagement with course materials and a more proactive approach to problem-solving. When students feel that the feedback they receive is relevant and specific to their learning needs, they are more likely to apply it effectively, leading to better learning outcomes (Dastyar, 2019).
Additionally, the ability to adapt feedback to different learning styles can further enhance its effectiveness. For example, some students may benefit from visual feedback, such as charts or diagrams, while others may prefer more text-based explanations. AI systems can be designed to detect these preferences and adjust the format of feedback accordingly, making the learning experience more inclusive and accessible to a diverse range of learners. This adaptability ensures that feedback is not only personalized in terms of content but also in its mode of delivery, catering to individual student preferences and learning styles.
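As a simple illustration, assuming a stated preference flag and hypothetical scores, the delivery format might be switched as follows; a real system would generate actual charts or diagrams rather than the crude text bars used here.

```python
# Minimal sketch: adapting feedback delivery to a stated preference.
# The preference values, scores, and deliver() helper are all hypothetical.

def deliver(scores: dict[str, int], preference: str = "text") -> str:
    if preference == "visual":
        # crude text bar chart standing in for a generated diagram
        return "\n".join(f"{k:<12} {'#' * v}" for k, v in scores.items())
    return "\n".join(f"{k}: {v}/5" for k, v in scores.items())

print(deliver({"mechanics": 2, "narrative": 5}, preference="visual"))
```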
In sum, the capacity to personalize feedback significantly enhances its role in game design education, fostering deeper student engagement and promoting improved learning outcomes. By providing targeted, real-time, and adaptable feedback, AI helps create a more responsive and student-centered learning environment. This not only streamlines the learning process but also ensures that each student receives the support they need to develop their skills effectively, ultimately leading to a more successful and fulfilling educational experience. As AI technology continues to evolve, its potential to further personalize and enhance feedback will play a crucial role in shaping the future of education, particularly in creative and complex disciplines like game design.
DATA AVAILABILITY
Data available upon request.
CONFLICTS OF INTEREST
The authors declare that there is no conflict of interest regarding the publication of this paper.
REFERENCES
Adams, E. (2014). Fundamentals of game design. Pearson Education.
(2023). Developing a curriculum for ethical and responsible AI: A university course on safety, fairness, privacy, and ethics to prepare next generation of AI professionals. In Intelligent Communication Technologies and Virtual Mobile Networks (pp. 879-894). Springer Nature Singapore. https://doi.org/10.1007/978-981-99-1767-9_64
(2024, May). The Ink Splotch Effect: A case study on ChatGPT as a co-creative game designer. In Proceedings of the 19th International Conference on the Foundations of Digital Games (pp. 1-15). https://doi.org/10.1145/3649921.3650010
(2011). Formative assessment: A critical review. Assessment in Education: Principles, Policy & Practice, 18(1), 5-25. https://doi.org/10.1080/0969594X.2010.513678
(2008). Adolescents and online fan fiction. Peter Lang.
(2020). How tools shape the game authoring process. https://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1449483&dswid=2939
(2010). From standards to rubrics in six steps: Tools for assessing student learning. Corwin Press.
(2023). Human-centered design to address biases in artificial intelligence. Journal of Medical Internet Research, 25, e43251. https://doi.org/10.2196/43251
(2018). The struggle is real: Investigating the challenge of assigning a failing clinical grade. Nursing Education Perspectives, 39(3), 132-138. https://doi.org/10.1097/01.NEP.0000000000000295
(2023). Artificial intelligence in higher education: The state of the field. International Journal of Educational Technology in Higher Education, 20(1), 22. https://doi.org/10.1186/s41239-023-00392-8
Dastyar (2019). The investigation of the effectiveness of participatory learning education on students motivation and academic achievement. International Journal of Advanced Research and Publications, 3(8), 165-170.
(2017). Designing crowdcritique systems for formative feedback. International Journal of Artificial Intelligence in Education, 27, 623-663. https://doi.org/10.1007/s40593-016-0125-9
(2011). Down and in: Assessment practices at the program level. National Institute for Learning Outcomes Assessment. http://www.learningoutcomesassessment.org
(2021). The gamification of everything. DevOps.com. https://devops.com/the-gamification-of-everything/
(2023). Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies. Sci, 6, 3. https://doi.org/10.3390/sci6010003
(2023). Art, assessment and uncertainty. Art, Design & Communication in Higher Education, 22(2), 263-275. https://doi.org/10.1386/adch_00077_1
(2023). Effectiveness of digital educational game and game design in STEM learning: A meta-analytic review. International Journal of STEM Education, 10(1), 36. https://doi.org/10.1186/s40594-023-00424-9
(2023). A systematic review of socio-technical gender bias in AI algorithms. Online Information Review, 47(7), 1264-1279. https://doi.org/10.1108/OIR-08-2021-0452
(2023). Online assessment in higher education: A systematic review. Online Learning, 27(1), 187-218. https://doi.org/10.24059/olj.v27i1.3398
(2023). Enhancing institutional assessment and reporting through conversational technologies: Exploring the potential of AI-powered tools and natural language processing. Journal of Artificial Intelligence and Robotics, 1(1), 11-22. https://doi.org/10.59232/AIR-V1I1P102
Mashhadi, A., Zolyomi, A., & Quedado, J. (2022, March). Incorporating the concepts of fairness and bias into an undergraduate computer science course to promote fair automated decision systems. In Proceedings of the 53rd ACM Technical Symposium on Computer Science Education V. 2 (p. 1075). https://doi.org/10.1145/3478432.3499043
(2021). Value sensitive design and power in socio-technical ecosystems. Internet Policy Review, 10(3), 1-26. https://doi.org/10.14763/2021.3.1580
Jenkins, H. (2006). Convergence culture. New York University Press.
(2024). Effects of student engagement in peer feedback on writing performance in higher education. Interactive Learning Environments, 32(1), 128-143. https://doi.org/10.1080/10494820.2022.2081209
Kapp, K. M. (2012). The gamification of learning and instruction: Game-based methods and strategies for training and education. Pfeiffer. https://doi.org/10.1145/2207270.2211316
(2022). Algorithmic bias: Review, synthesis, and future research directions. European Journal of Information Systems, 31(3), 388-409. https://doi.org/10.1080/0960085X.2021.1927212
(2023). Practical game design: A modern and comprehensive guide to video game design. Packt Publishing Ltd.
(2015). Strategies for effective feedback. Annals of the American Thoracic Society, 12(4), 557-560. https://doi.org/10.1513/AnnalsATS.201411-524FR
Leavy, S. (2018, May). Gender bias in artificial intelligence: The need for diversity and gender theory in machine learning. In Proceedings of the 1st International Workshop on Gender Equality in Software Engineering (pp. 14-16). https://doi.org/10.1145/3195570.3195580
(2003). The digital disconnect: The widening gap between Internet-savvy students and their schools. Pew Internet & American Life Project, ii-30.
(2023). Synthesizing game levels for collaborative gameplay in a shared virtual environment. ACM Transactions on Interactive Intelligent Systems, 13(1), 1-36. https://doi.org/10.1145/3558773
(2022, April). A case study of integrating fairness visualization tools in machine learning education. In CHI Conference on Human Factors in Computing Systems Extended Abstracts (pp. 1-7). https://doi.org/10.1145/3491101.3503568
(2020). An educational intervention to increase student engagement in feedback. Medical Teacher, 42(11), 1289-1297. https://doi.org/10.1080/0142159X.2020.1804055
(2008). Educational game design for online education. Computers in Human Behavior, 24(6), 2530-2540. https://doi.org/10.1016/j.chb.2008.03.012
(2021). Sharing the construction of assessment rubrics with students: A model for collaborative rubric construction. Journal of University Teaching & Learning Practice, 18(4). https://doi.org/10.53761/1.18.4.9
Ntoutsi, E., et al. (2020). Bias in data-driven artificial intelligence systems—An introductory survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 10(3), e1356. https://doi.org/10.1002/widm.1356
(2020). Checklists and rubrics: Do they really promote learning? In ICERI2020 Proceedings (pp. 4906-4911). IATED. https://doi.org/10.21125/iceri.2020.1069
(2019, October). Peer feedback processes in the game industry. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play (pp. 427-438). https://doi.org/10.1145/3311350.3347176
(2018). A methodology for assessing the effectiveness of serious games and for inferring player learning outcomes. Multimedia Tools and Applications, 77, 2849-2871. https://doi.org/10.1007/s11042-017-4467-6
(2024). Analog prototyping for digital game design. In Encyclopedia of Computer Graphics and Games (pp. 102-104). Springer International Publishing. https://doi.org/10.1007/978-3-031-23161-2_135
Squire, K. (2011). Video games and learning: Teaching and participatory culture in the digital age. Teachers College Press.
Stănescu et al. (2013). Interoperability strategies for serious games development. Internet Learning, 2(1), 373-378. https://doi.org/10.18278/il.2.1.4
Stevens, D. D., & Levi, A. J. (2023). Introduction to rubrics: An assessment tool to save grading time, convey effective feedback, and promote student learning. Routledge.
(2023, July). Prompt-guided level generation. In Proceedings of the Companion Conference on Genetic and Evolutionary Computation (pp. 179-182). https://doi.org/10.1145/3583133.3590656
(2023). Artificial intelligence and machine learning in ophthalmology: A review. Indian Journal of Ophthalmology, 71(1), 11-17. https://doi.org/10.4103/ijo.IJO_1569_22
(2014, July). Gamification: Metacognitive scaffolding towards long term goals? In UMAP Workshops (pp. 7-11).
(2023). Game-based assessment framework for virtual reality, augmented reality and digital game-based learning. International Journal of Educational Technology in Higher Education, 20(36). https://doi.org/10.1186/s41239-023-00405-6
(2020). Combinatory and complementary practices of values and virtues in design: A reply to Reijers and Gordijn. Filosofia, (65), 107-121.
(2023). Exploring the potential impact of artificial intelligence (AI) on international students in higher education: Generative AI, chatbots, analytics, and international student success. Applied Sciences, 13(11), 6716. https://doi.org/10.3390/app13116716
Winstone, N., & Carless, D. (2019). Designing effective feedback processes in higher education: A learning-focused approach. Routledge. https://doi.org/10.4324/9781351115940
Winstone, N., & Boud, D. (2022). The need to disentangle assessment and feedback in higher education. Studies in Higher Education, 47(3), 656-667. https://doi.org/10.1080/03075079.2020.1779687
(2023). Explainable feedback for learning based on rubric-based multimodal assessment analytics with AI. https://aisop.de/EPEPLA/Feedback.pdf
(2021). Are we there yet? A systematic literature review on chatbots in education. Frontiers in Artificial Intelligence, 4. https://doi.org/10.3389/frai.2021.654924
(2007). Teaching game design through cross-disciplinary content and individualized student deliverables. Journal of Game Development, 2(2), 19-32.
(2024). AI-powered methods for academic assessment: Overcoming scalability challenges in large university classrooms and conference review [Doctoral dissertation, University of British Columbia].
(2023). The impact of artificial intelligence on higher education: An empirical study. European Journal of Educational Sciences, 10(1), 17-33. https://doi.org/10.19044/ejes.v10no1a17
(2022, April). Participatory design of AI systems: Opportunities and challenges across diverse users, relationships, and application domains. In CHI Conference on Human Factors in Computing Systems Extended Abstracts (pp. 1-4). https://doi.org/10.1145/3491101.3516506