Vishnunarayan Girishan Prabhu1, Bulent Soykan2, Roger Azevedo1, Sean Mondesire1, and Ghaith Rabadi1
1 – School of Modeling, Simulation, and Training, University of Central Florida, FL, USA
2 – Institute for Simulation and Training, University of Central Florida, FL, USA
Abstract: Building and sustaining digital twins and simulation models is a complex, resource-intensive process that requires technical fluency, validation expertise, and ongoing adaptation as systems evolve. At the School of Modeling, Simulation, and Training at the University of Central Florida, we are exploring how Artificial Intelligence (AI), particularly large language models (LLMs), can be embedded in our coursework to support students in key tasks such as abstraction, data collection and analysis, code generation, model development, model validation, design of simulation experiments, and long-term model maintenance across the end-to-end development lifecycle.
By integrating AI tools into the modeling workflow, we aim to strengthen students’ critical thinking skills, reduce extraneous cognitive load, enhance practical skills, and accelerate the development and refinement of simulation models and digital twins. These tools will operate within a faculty-guided, expert-in-the-loop framework, ensuring human oversight reinforces academic rigor, supports learning, and maintains model quality. This approach also underscores the importance of faculty readiness: we are developing strategies to prepare instructors to effectively guide AI-supported learning environments, such as incorporating LLMs for automated code generation, simulation scenario design, adaptive feedback on model structure, and AI-powered agents for real-time model debugging, and to foster responsible, pedagogically grounded use of these technologies.
This vision is not only about improving education, but also about preparing a future-ready workforce equipped to co-develop, validate, and sustain AI-augmented systems as adoption accelerates across industries. While this effort is still in the conceptual stage, we see it as a catalyst for innovation in modeling and simulation education and a foundation for statewide collaboration on ethical, scalable, responsible, and accessible AI integration.
Integrating AI into Modeling and Simulation Education
As educators in modeling and simulation, we face a critical challenge: how do we prepare students for a workforce where AI tools play a central role in technical tasks? At the University of Central Florida’s School of Modeling, Simulation, and Training, we’re developing a practical framework that puts faculty readiness at the center of AI integration into coursework.
Foundations of AI-Enhanced Learning
Before diving into practical implementation, we must understand the cognitive implications of AI integration in education. Recent empirical evidence suggests that while AI tools can enhance productivity, they may also lead to unintended consequences such as decreased human agency, overreliance on AI outputs, and what researchers term “metacognitive laziness” – a decline in learners’ ability to monitor and regulate their own learning processes (Fan et al., 2025).
Understanding Self-Regulatory Skills in AI-Enhanced Learning
Effective learning with AI requires students to develop robust self-regulatory skills across multiple interconnected dimensions (Azevedo and Wiedbusch, 2023). These skills don’t develop automatically. Faculty must explicitly teach and model them, which is why instructor preparation is paramount. Without this foundation, students risk becoming passive consumers of AI-generated content rather than active learners who use AI to enhance their understanding, learning, and problem solving. As Azevedo and Gašević (2019) note, the integration of advanced learning technologies presents unique challenges for self-regulated learning that require careful consideration of how students monitor and control their learning processes.
Why Faculty Readiness Comes First
Before students can effectively use large language models (LLMs) in their coursework, instructors need to understand both the capabilities and limitations of these tools (Weidlich et al., 2025). Think of it like teaching simulation methodology. You wouldn’t have students build stochastic models without first ensuring instructors understand both how probability distributions can capture system uncertainty and how modeling assumptions can mask critical real-world behaviors.
Our motivation is simple yet profound. Complex simulation models require significant time and expertise to develop, validate, and maintain. By strategically integrating AI tools, we can help students focus on critical thinking and design decisions rather than getting bogged down in syntax errors or routine coding tasks, making the entire process faster and more educationally efficient (Takerngsaksiri et al., 2024).
The Faculty-in-the-Loop Framework: A Step-by-Step Guide
Start with Low-Stakes Experiments
Begin by introducing AI tools in non-critical assignments where mistakes such as conceptual misunderstandings or incorrect implementations become learning opportunities. Here’s what could work:
During the first two weeks, we recommend focusing on fundamental concept exploration and evaluation exercises. Students would use LLMs to explain existing simulation concepts, with a specific task: ask the AI to explain a discrete event simulation algorithm, a method where systems change at specific points in time rather than continuously, then identify any gaps in its explanation. For example, students might prompt the AI with: “Explain how entities move through a queuing system in simulation” and then evaluate whether it correctly addresses concepts such as service time distributions and queue behavior. This approach develops critical thinking skills that involve understanding and evaluating the elements of a task while familiarizing students with AI capabilities. The key learning outcome is that students begin to see AI as a tool that requires verification rather than a source of absolute truth, which requires them to develop and apply existing and newly acquired self-regulatory skills such as metacognitive monitoring and the use of cognitive strategies.
In weeks three and four, we suggest progressing to assisted code generation. Students would first write pseudocode for their simulation logic, then use LLMs to translate it into working code. For instance, students might write pseudocode for an M/M/1 queue: “Initialize server as idle, Create empty queue, While simulation time < end time: Generate next arrival, If server idle then begin service, else add to queue.” They would then prompt the AI to convert this to Python or their chosen language. Faculty members could review this process by comparing AI-generated code with hand-written examples, helping students understand that AI is a drafting tool, not a replacement for fundamental understanding. This phase should build student confidence in identifying and correcting errors in AI-generated content while developing the self-regulatory skills of monitoring and evaluation.
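To make the translation step concrete, here is a minimal sketch of what the M/M/1 pseudocode above might become in plain Python, using a next-event loop; the arrival and service rates, seed, and run length are illustrative assumptions, not values from any specific assignment:

```python
import random

def mm1_simulation(arrival_rate=0.2, service_rate=1/3, end_time=1000.0, seed=42):
    """Minimal next-event M/M/1 simulation following the pseudocode above."""
    rng = random.Random(seed)
    clock = 0.0
    server_busy = False                       # "Initialize server as idle"
    queue = []                                # "Create empty queue" (arrival times)
    next_arrival = rng.expovariate(arrival_rate)
    next_departure = float("inf")
    waits = []

    while clock < end_time:                   # "While simulation time < end time"
        if next_arrival <= next_departure:    # next event is an arrival
            clock = next_arrival
            if not server_busy:               # "If server idle then begin service"
                server_busy = True
                waits.append(0.0)
                next_departure = clock + rng.expovariate(service_rate)
            else:                             # "else add to queue"
                queue.append(clock)
            next_arrival = clock + rng.expovariate(arrival_rate)
        else:                                 # next event is a departure
            clock = next_departure
            if queue:
                arrived = queue.pop(0)        # next customer begins service
                waits.append(clock - arrived)
                next_departure = clock + rng.expovariate(service_rate)
            else:
                server_busy = False
                next_departure = float("inf")
    return sum(waits) / len(waits) if waits else 0.0

print(f"Average wait in queue: {mm1_simulation():.2f} minutes")
```

Comparing a hand-written draft like this against AI-generated code gives students a concrete baseline for spotting logic errors, such as a departure event that forgets to pull the next customer from the queue.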
Build Progressive Complexity
Once students grasp basic AI interaction, we recommend scaling to more complex modeling tasks. Model abstraction, the process of simplifying real-world systems into computational representations, presents an interesting contrast between traditional and AI-enhanced approaches. Traditionally, students struggle to identify key system components when faced with complex scenarios. With our proposed AI-enhanced approach, students would use LLMs to brainstorm system boundaries and key variables, but faculty would guide verification of these suggestions through collaborative review sessions.
For example, when modeling a manufacturing system, students might prompt: “Given this manufacturing system with three production lines, two quality checkpoints, and variable demand patterns, what are the essential entities, attributes, and processes for a discrete event simulation?” The AI’s response becomes a starting point for deeper discussion about modeling choices and trade-offs, not an endpoint. Faculty would then lead discussions asking: “Why did the AI suggest these entities? What might the AI have missed? How would the model change if we included worker fatigue or machine breakdowns?”
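One way to ground such a review session is to have students encode the AI’s suggestions as a code skeleton. The sketch below shows one possible encoding using Python dataclasses; every class name, field, and value here is a hypothetical illustration of what an LLM might propose, not a prescribed model:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical entities and attributes an LLM might suggest for the
# manufacturing scenario; all names and fields are illustrative assumptions.

@dataclass
class Part:
    part_id: int
    line_id: int                  # which of the three production lines made it
    passed_inspection: bool = False

@dataclass
class QualityCheckpoint:
    name: str
    defect_rate: float            # fraction of parts rejected at this checkpoint

@dataclass
class ProductionLine:
    line_id: int
    cycle_time_min: float         # average minutes per part
    buffer: List[Part] = field(default_factory=list)
```

Seeing the abstraction laid out this way makes omissions tangible: no field anywhere captures worker fatigue or machine breakdowns, which is exactly the gap faculty want students to notice.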
Verification during the design phase addresses another common challenge. Verification scenarios, test cases designed to ensure that a model accurately represents the real system, often confuse students, who overlook critical edge cases. Here, we propose using AI to generate comprehensive test cases. For instance, when modeling an emergency department, students could prompt the AI to suggest edge cases such as natural disasters creating mass casualties, pandemic conditions affecting staff availability, or system failures during peak hours. Faculty would then guide evaluation of which AI-suggested tests are meaningful and realistic, explicitly teaching the self-regulatory skill of critical evaluation.
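In practice, AI-suggested edge cases can be captured as parameter overrides that students run against their baseline model. The following sketch assumes the model configuration is a plain dictionary; the scenario names and values are hypothetical examples, not outputs from any specific LLM:

```python
# Hypothetical edge-case scenarios for an emergency department model.
# All names and parameter values are illustrative assumptions.
edge_case_scenarios = [
    {"name": "mass_casualty_surge", "arrival_multiplier": 5.0, "duration_hr": 6},
    {"name": "pandemic_staffing", "staff_availability": 0.6, "duration_hr": 72},
    {"name": "peak_hour_outage", "registration_down_hr": 2.0, "start_hr": 18},
]

def apply_scenario(baseline: dict, scenario: dict) -> dict:
    """Overlay one scenario's parameters on the baseline configuration.
    Running the actual simulation is left to the students' own model."""
    config = dict(baseline)  # copy so the baseline stays untouched
    config.update({k: v for k, v in scenario.items() if k != "name"})
    return config

baseline = {"arrival_multiplier": 1.0, "staff_availability": 1.0}
for scenario in edge_case_scenarios:
    print(scenario["name"], "->", apply_scenario(baseline, scenario))
```

Deciding which of these scenarios merits a full simulation run is precisely the critical-evaluation exercise described above.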
Implement Structured Oversight
The “faculty-in-the-loop” isn’t just a catchphrase; it’s an essential quality control mechanism that we anticipate will require additional hours of faculty time per week during initial implementation. We recommend a structured approach involving weekly review cycles where students submit both their work and their AI interaction logs. Faculty would review not just the output, but the prompting process itself, leading to rich class discussions about what worked, what failed, and why. Through these conversations, we expect to iteratively improve our collective prompting strategies.
Our proposed assessment rubric reflects this holistic approach. We plan to allocate 40% to the technical correctness of the model, another 20% to the appropriate use of AI tools, 20% to critical evaluation of AI outputs, and 20% to documentation and justification of design decisions. Specifically, the “appropriate use” criterion would evaluate whether students: (1) used AI for concept exploration before implementation, (2) verified AI suggestions against course materials, (3) documented their prompting strategy, and (4) demonstrated awareness of when NOT to use AI. This distribution should ensure students can’t simply rely on AI to produce work without understanding it deeply (Fernández‐Sánchez et al., 2025).
Scaling Strategies: From Pilot to Program
- Phase 1: Faculty Development (Semester 1)
We recommend starting with a cohort of interested instructors to create a foundation for success. Weekly workshops should cover both technical and pedagogical aspects through structured progression. Technical training would focus on hands-on experimentation with LLM capabilities and limitations, building effective prompts for simulation concepts, and understanding AI hallucinations in technical contexts. Pedagogical training would address the cognitive science of learning with AI tools, strategies for developing student self-regulatory skills, assessment approaches that promote deep learning, and techniques for balancing scaffolding with independence. Faculty could collaborate to develop assignment templates, build a shared repository of effective prompts and use cases, and hold “failure analysis” sessions to learn collectively from what doesn’t work, building shared wisdom about common pitfalls in AI integration.
- Phase 2: Controlled Rollout (Semester 2)
We suggest selecting one to two courses for initial implementation while maintaining traditional sections as control groups. This would generate comparative data on learning outcomes and allow adjustments based on both student and faculty feedback. Key metrics to track could include conceptual understanding, practical skills, self-regulatory development, and long-term retention. The controlled nature of this rollout should help identify which aspects of AI integration translate well across different course topics and which need subject-specific adaptation.
- Phase 3: Program Integration (Year 2)
By the second year, programs should be ready to develop standardized AI literacy modules that can be integrated across the curriculum. Creating discipline-specific prompt libraries that reflect the unique needs of different modeling domains would support broader adoption. Peer mentoring systems would likely emerge naturally as early adopters help colleagues navigate AI integration. Finally, formally integrating AI competencies into program learning objectives would ensure graduates are prepared for an AI-augmented workplace.
Practical Tips for Getting Started
For individual faculty members, the journey should begin with their most routine, time-consuming tasks. Experimenting with AI for generating practice problems or creating variations of existing exercises offers low-risk starting points. For example, use AI to generate multiple scenarios for a queuing theory problem: “Create 5 variations of a bank teller queuing problem with different arrival rates, service patterns, and customer priorities.” Documenting successful prompts and sharing them with colleagues helps build institutional knowledge; what works for one instructor often sparks ideas for others. Most importantly, always verify AI outputs before classroom use to maintain students’ trust in instructors and institutional credibility.
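As a sketch of the same kind of variation generation done locally, the snippet below enumerates parameter combinations for bank-teller variants; the rates, distributions, and queue disciplines are illustrative assumptions:

```python
import itertools
import random

# Illustrative parameter ranges for bank-teller queuing problem variants.
arrival_rates = [0.5, 1.0, 2.0]                        # customers per minute
service_patterns = ["exponential", "uniform", "lognormal"]
priority_schemes = ["FIFO", "priority customers"]

# Draw 5 distinct variants from the full combination space.
variants = random.Random(7).sample(
    list(itertools.product(arrival_rates, service_patterns, priority_schemes)), k=5
)
for i, (rate, dist, scheme) in enumerate(variants, start=1):
    print(f"Variant {i}: arrivals at {rate}/min, {dist} service, {scheme} discipline")
```

Either route works; the point is that instructors verify each generated variant is solvable and pedagogically sound before it reaches students.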
Department heads play a crucial role in successful implementation. Allocating dedicated time for faculty AI training, not just optional workshops, signals institutional commitment. Creating innovation grants for early adopters builds momentum and rewards experimentation. Clear guidelines on acceptable AI use provide necessary boundaries, while investment in collaborative spaces for faculty experimentation fosters innovation.
Students need guidance on treating AI as a study partner rather than an answer key. They should always validate AI-generated code or models and document their prompting process. The focus must remain on understanding why AI suggestions work or don’t work, developing critical thinking skills that distinguish competent professionals from mere tool users.
Common Pitfalls and How to Avoid Them
Over-reliance on AI emerges as a predictable pitfall. Two extreme approaches exist: fully embracing AI as an inevitable tool with emphasis on responsible use and prompt engineering, or requiring complete manual work before AI is allowed. We propose a middle ground where students first use AI to generate detailed algorithmic steps, then verify and calculate fundamentals manually before applying AI for implementation. For example, in a basic queuing system, students would validate steps like arrival, service, and exit, and compute metrics such as waiting times and utilization before using AI to code the simulation. This ensures fundamental understanding precedes automation. When students understand the underlying logic, they can better evaluate and improve AI-generated solutions (Nathaniel et al., 2025).
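For the queuing example, the manual check can be as simple as evaluating the standard M/M/1 steady-state formulas before any code exists. A minimal sketch, using rates that match the single-server example in the next paragraph:

```python
# Hand-check of M/M/1 steady-state metrics before any AI-generated code.
lam = 1 / 5   # arrival rate: one customer every 5 minutes on average
mu = 1 / 3    # service rate: one service every 3 minutes on average

rho = lam / mu            # server utilization = 0.6
Wq = rho / (mu - lam)     # mean wait in queue = 4.5 minutes
Lq = lam * Wq             # mean number waiting = 0.9 (Little's law)

print(f"utilization = {rho:.2f}, Wq = {Wq:.1f} min, Lq = {Lq:.2f}")
```

Any AI-generated simulation whose long-run averages drift far from these values signals a bug worth hunting before trusting the model further.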
Inconsistent AI outputs present another anticipated challenge. Rather than viewing this as a weakness, we propose teaching prompt engineering as a core skill. Students would practice achieving consistent results through precise language, learning how specificity shapes AI responses. For example, compare results from a vague prompt (“make a simulation”) with a specific one (“create a discrete event simulation in Python using SimPy for a single-server queue with exponential inter-arrival times (mean = 5 minutes) and service times (mean = 3 minutes)”). This skill should prove invaluable in professional settings where clear communication with AI tools determines productivity.
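As a reference point for what the specific prompt should elicit, here is a minimal SimPy sketch of that single-server queue; the seed and run length are arbitrary choices made here for reproducibility:

```python
import random
import simpy

def customer(env, server, service_mean, waits):
    """One customer: queue for the server, record the wait, get served."""
    arrive = env.now
    with server.request() as req:
        yield req                                   # wait in the FIFO queue
        waits.append(env.now - arrive)
        yield env.timeout(random.expovariate(1 / service_mean))

def source(env, server, interarrival_mean, service_mean, waits):
    """Generate customers with exponential inter-arrival times."""
    while True:
        yield env.timeout(random.expovariate(1 / interarrival_mean))
        env.process(customer(env, server, service_mean, waits))

random.seed(1)
env = simpy.Environment()
server = simpy.Resource(env, capacity=1)            # single server
waits = []
env.process(source(env, server, interarrival_mean=5.0, service_mean=3.0, waits=waits))
env.run(until=100_000)                              # simulated minutes
print(f"avg wait: {sum(waits) / len(waits):.2f} min over {len(waits)} customers")
```

With these rates, the hand-check from the previous example predicts a mean queue wait near 4.5 minutes, giving students a concrete target for validating whatever the AI produces.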
Academic integrity requires thoughtful assignment design. Instead of prohibiting AI use, we recommend designing assignments that require it but assess understanding (Tolk et al., 2023). For example, asking students to “use AI to generate three different modeling approaches for this hospital emergency room scenario, then analyze trade-offs between model complexity and computational efficiency” creates an assignment where AI use enhances rather than undermines learning objectives.
Measuring Success: Early Indicators
To evaluate the effectiveness of this framework, we plan to use multiple assessment methods. Pre- and post-implementation questionnaires will measure changes in student confidence, AI literacy, understanding of simulation concepts, and self-regulatory skills. We’ll track specific assignment goals to see if students achieve better model validation, more comprehensive documentation, and deeper analytical thinking. Traditional metrics like assignment scores and project quality will be compared between AI-enhanced and traditional sections. Additionally, we’ll conduct focus groups with both faculty and students to gather qualitative insights about the learning experience and areas for improvement.
The Path Forward
Integrating AI into modeling and simulation education isn’t about replacing traditional teaching methods; it’s about strategic augmentation that enhances human cognitive capabilities. By prioritizing faculty readiness, grounding our approach in cognitive science, and implementing structured oversight, we can leverage AI’s power while maintaining academic rigor and developing students’ self-regulatory skills.
The future workforce will need professionals who can collaborate with AI tools to build, validate, and maintain increasingly complex models. Our role as educators is to provide the framework and guidance to develop these skills responsibly.
Next Steps for Your Institution
Consider beginning with a small faculty working group interested in exploring AI integration. This group could start by experimenting with AI tools in their own work before bringing ideas to the classroom. Once comfortable, identify a suitable course for pilot implementation, ideally one where the instructor is enthusiastic and students are open to experimentation. Develop assessment criteria that value both the learning process and outcomes, focusing on critical thinking and appropriate AI use rather than just final products. As you implement your pilot, maintain opportunities for comparison with traditional approaches to understand what works best. Most importantly, create channels for sharing experiences, challenges, and successes within your institution and the broader academic community.
The integration of AI into modeling and simulation education is not in the distant future; it’s happening now. By taking a faculty-first approach and building structured frameworks for implementation, we can ensure our students are prepared for the AI-augmented workplace while maintaining the critical thinking skills that make them truly valuable professionals.
Works Cited
Azevedo, Roger, and Dragan Gašević. “Analyzing multimodal multichannel data about self-regulated learning with advanced learning technologies: Issues and challenges.” Computers in Human Behavior 96 (2019): 207-210.
Azevedo, Roger, and Megan Wiedbusch. “Theories of metacognition and pedagogy applied to AIED systems.” Handbook of Artificial Intelligence in Education. Edward Elgar Publishing, 2023. 45-67.
Fan, Yizhou, et al. “Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance.” British Journal of Educational Technology 56.2 (2025): 489-530.
Fernández‐Sánchez, Andrea, Juan José Lorenzo‐Castiñeiras, and Ana Sánchez‐Bello. “Navigating the future of pedagogy: The integration of AI tools in developing educational assessment rubrics.” European Journal of Education 60.1 (2025): e12826.
Nathaniel, Jemimah, et al. “Investigating the impact of Generative AI integration on the Sustenance of Higher-Order Thinking Skills and understanding of programming logic in Programming Education.” Computers and Education: Artificial Intelligence (2025): 100460.
Takerngsaksiri, Wannita, et al. “Students’ perspectives on ai code completion: Benefits and challenges.” 2024 IEEE 48th Annual Computers, Software, and Applications Conference (COMPSAC). IEEE, 2024.
Tolk, Andreas, et al. “Chances and challenges of ChatGPT and similar models for education in M&S.” 2023 Winter Simulation Conference (WSC). IEEE, 2023.
Weidlich, J., et al. “ChatGPT in Education: An Effect in Search of a Cause.” Journal of Computer Assisted Learning 41.5 (2025).
Author Note: This article was drafted with assistance from AI tools for initial structuring and editing. All pedagogical frameworks, implementation strategies, and institutional experiences described are based on actual practices at UCF’s School of Modeling, Simulation, and Training. The final content has been extensively reviewed and revised by human experts to ensure accuracy and practical applicability.