Affective Generative AI for Adaptive and Inclusive eLearning: Prompt Engineering, Ethics and Pedagogical Innovation

Spyridon Kontis1[0009-0009-7892-6966] and Sofia Anastasiadou1[0000-0001-6404-5003]
1 University of Western Macedonia, Greece
dmw00034@uowm.gr,
sanastasiadou@uowm.gr
DOI: 10.46793/eLearning2025.126K

Abstract. Over the past few years, artificial intelligence (AI) has moved from being a background tool to becoming a central actor in education [49]. Generative AI is transforming the way learners interact with knowledge, offering personalized pathways, adaptive content, and new forms of digital collaboration. Yet most existing systems remain focused on efficiency and performance, while overlooking the emotional side of learning: factors such as motivation, frustration, and anxiety that often determine whether a student succeeds or disengages [1], [2].

This paper proposes the idea of Affective Generative AI in eLearning, combining large language models (LLMs), prompt engineering, and emotion-aware computing to design learning environments that are not only intelligent but also empathetic and inclusive. We argue that digital tutors capable of recognizing affective cues can adapt their responses in real time, providing encouragement, reframing explanations, or reducing cognitive load, thereby supporting both well-being and achievement [3], [4].
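As a purely illustrative sketch (not taken from this paper), the kind of affect-conditioned prompt engineering described above might be approximated as follows; the affect labels, strategy texts, and function name are all hypothetical:

```python
# Illustrative sketch only: one simple way a detected affective state could
# condition the system prompt sent to an LLM tutor. The affect categories
# and adaptation strategies below are hypothetical examples, not the
# framework proposed in the paper.

AFFECT_STRATEGIES = {
    "frustration": "Offer encouragement, then re-explain with a simpler, step-by-step example.",
    "anxiety": "Use a calm, reassuring tone and break the task into small, low-stakes steps.",
    "boredom": "Raise the challenge slightly and connect the topic to the learner's interests.",
    "engaged": "Maintain the current pace and introduce an optional extension question.",
}

def build_tutor_prompt(topic: str, affect: str) -> str:
    """Compose a system prompt that adapts the tutor's style to the learner's affective state."""
    strategy = AFFECT_STRATEGIES.get(
        affect, "Respond neutrally and check the learner's understanding."
    )
    return (
        f"You are an empathetic digital tutor teaching '{topic}'. "
        f"The learner currently shows signs of {affect}. {strategy}"
    )

print(build_tutor_prompt("fractions", "frustration"))
```

In a full system the affect label would come from an emotion-recognition component rather than being passed in directly, and the strategy table would be grounded in pedagogical evidence; the sketch only shows where the affective signal enters the prompt.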

At the same time, handling emotional data [47] raises critical ethical and legal [12] questions. Issues of privacy, bias [51], and transparency must be addressed if such systems are to be trusted and responsibly deployed [5]–[7]. Our conceptual framework seeks to balance pedagogical innovation with these concerns, highlighting a path towards human-centered [53] AI in education that values inclusion, equity, and emotional resilience alongside cognitive performance.

Keywords: Generative AI, Affective Computing, Prompt Engineering, Adaptive Learning, LLMs, AI Ethics, Inclusive Education.

References

  1. Chennupati, A. (2024). The evolution of AI: What does the future hold in the next two years. World Journal of Advanced Engineering Technology and Sciences, 12(1), 022–028. https://doi.org/10.30574/wjaets.2024.12.1.0176
  2. Cheong, B. C. (2024). Transparency and accountability in AI systems: Safeguarding wellbeing in the age of algorithmic decision–making. Frontiers in Human Dynamics, 6, 1421273. https://doi.org/10.3389/fhumd.2024.1421273
  3. Chipiga, I. V. (2023). Legal entropy in International Law. Russian Journal of Legal Studies, 10(2), 113–120. https://doi.org/10.17816/RJLS322823
  4. Fomin, V. V., & Astromskis, P. (2023). The black box problem. In J. S. Gordon (Ed.), Philosophy and Human Rights (pp. 112–125). Brill. https://doi.org/10.1163/9789004682900_012
  5. Frosio, G. (2024). Algorithmic enforcement tools: Governing opacity with due process. In S. Francese & R. King (Eds.), Driving Forensic Innovation in the 21st Century (pp. 195–218). Springer. https://doi.org/10.1007/978-3-031-56556-4_9
  6. Huang, K., Joshi, A., DunAnon., & Hamilton, N. (2024). AI regulations. In K. Huang, Y. Wang, B. Goertzel, Y. LiAnon. Wright, & J. Ponnapalli (Eds.), Generative AI Security (pp. 61–98). Springer. https://doi.org/10.1007/978-3-031-54252-7_3
  7. Koridena, B. K., Malaiyappan, J. N. A., & Tadimarri, A. (2024). Ethical considerations in the development and deployment of AI systems. European Journal of Technology, 8(2), 41–53. https://doi.org/10.47672/ejt.1890
  8. Kostic, M. M. (2014). The elusive nature of entropy and its physical meaning. Entropy, 16(2), 953–967. https://doi.org/10.3390/e16020953
  9. Miazi, M. A. N. (2023). Interplay of legal frameworks and artificial intelligence (AI): A global perspective. Law and Policy Review, 2(2), 01–25. https://doi.org/10.32350/lrp.22.01
  10. Mylrea, M., & Robinson, N. (2023). Artificial Intelligence (AI) trust framework and maturity model: Applying an entropy lens to improve security, privacy, and ethical AI. Entropy, 25(10), 1429. https://doi.org/10.3390/e25101429
  11. Nove, I. C., Taddeo, M., & FloridiAnon. (2023). Accountability in artificial intelligence: What it is and how it works. AI & Society, 39(4), 1871–1882. https://doi.org/10.1007/s00146-023-01635-y
  12. Papademetriou, C. (2012). To what extent is the Turing Test still important? Pliroforiki, 1(22), 28–32.
  13. Papademetriou, C., Ragazou, K., & Garefalakis, A. (2023). Adapting Artificial Intelligence in Cypriot Hotel Industry: The views of hotel managers. In Eurasia Business and Economics Society Conference Proceedings (pp. 143–155).
  14. Papademetriou, C., Ragazou, K., Garefalakis, A., & Anon. (2024). The role of Artificial Intelligence in employee wellness and mental health. In Human Resource Strategies in the Era of Artificial Intelligence (pp. 29–54). IGI Global. https://doi.org/10.4018/979-8-3693-6412-3.ch002
  15. Poli, P. K. R., PamidiAnon., & PoliAnon. K. R. (2025). Unraveling the ethical conundrum of artificial intelligence: A synthesis of literature and case studies. Augmented Human Research, 10(1), 2. https://doi.org/10.1007/s41133-024-00077-5
  16. Rashid, A. B., & Kausik, M. A. (2024). AI revolutionizing industries worldwide: A comprehensive overview of its diverse applications. Hybrid Advances, 7, 100277. https://doi.org/10.1016/j.hybadv.2024.100277
  17. RawasAnon. (2024). AI: The future of humanity. Discover Artificial Intelligence, 4, 25. https://doi.org/10.1007/s44163-024-00118-3
  18. Robles, P., & Mallinson, D. J. (2023). Catching up with AI: Pushing toward a cohesive governance framework. Politics & Policy, 51(3), 355–372. https://doi.org/10.1111/polp.12529
  19. Scholes, M. S. (2025). Artificial intelligence and uncertainty. Risk Sciences, 1, 100004. https://doi.org/10.1016/j.risk.2024.100004
  20. Anon., & Kougias, I. (2019). Legal issues within ambient intelligence environments. In 2019 10th International Conference on Information, Intelligence, Systems and Applications (IISA) (pp. 1–2). IEEE. https://doi.org/10.1109/IISA.2019.8900748
  21. Anon., & Kougias, I. (2024). The AI spectrum under the doctrine of necessity: Towards the flexibility of the future legal praxis. Computers and Artificial Intelligence, 2(1), 1258. https://doi.org/10.59400/cai.v2i1.1258
  22. Sichelman, T. (2021). Quantifying legal entropy. Frontiers in Physics, 9, 665054. https://doi.org/10.3389/fphy.2021.665054
  23. SouravlasAnon., & Anon. (2022). On implementing social community clouds based on Markov models. IEEE Transactions on Computational Social Systems. https://doi.org/10.1109/TCSS.2022.3213273
  24. SouravlasAnon., Anon., Economides, T., & KatsavounisAnon. (2023). Probabilistic community detection in social networks. IEEE Access, 11, 25629–25641. https://doi.org/10.1109/ACCESS.2023.3257021
  25. SouravlasAnon., Anon., & Kostoglou, I. (2022). A novel method for general hierarchical system modeling via colored Petri nets based on transition extractions from real data sets. Applied Sciences, 13(1), 339. https://doi.org/10.3390/app13010339
  26. SouravlasAnon., Anon., Tantalaki, N., & KatsavounisAnon. (2022). A fair, dynamic load balanced task distribution strategy for heterogeneous cloud platforms based on Markov process modeling. IEEE Access, 10, 26149–26162. https://doi.org/10.1109/ACCESS.2022.3157435
  27. Vinothkumar, J., & Karunamurthy, A. (2023). Recent advancements in artificial intelligence technology: Trends and implications. International Journal of Multidisciplinary Scientific Research and Development, 2(1), 01–11. https://doi.org/10.54368/qijmsrd.2.1.0003
  28. WachterAnon. (2024). Limitations and loopholes in the EU AI Act and AI liability directives. SSRN, 26(3), 671–718. https://doi.org/10.2139/ssrn.4924553
  29. Wischmeyer, T. (2020). Artificial intelligence and transparency: Opening the black box. In Regulating Artificial Intelligence (pp. 75–101). Springer. https://doi.org/10.1007/978-3-030-32361-5_4
  30. Wong, A. (2020). The laws and regulation of AI and autonomous systems. In L. Strous, R. Johnson, D. Grier, & D. Swade (Eds.), Unimagined Futures – ICT Opportunities and Challenges (Vol. 555, pp. 38–54). Springer. https://doi.org/10.1007/978-3-030-64246-4_4
  31. Xi, R. (2025). On emerging technologies: The old regime, and the proactivity. Cardozo International & Comparative Law Review, 8(1).
  32. Zaidan, E., & Ibrahim, I. A. (2024). AI governance in a complex and rapidly changing regulatory landscape: A global perspective. Humanities & Social Sciences Communications, 11(1), 1121. https://doi.org/10.1057/s41599-024-03560-x
  33. Bailenson, J. N. (2018). Experience on demand: What virtual reality is, how it works, and what it can do. W. W. Norton & Company.
  34. Cummings, M. L. (2018). Artificial intelligence and the future of education. AI Magazine, 39(3), 25–36. https://doi.org/10.1609/aimag.v39i3.2807
  35. Dignum, V. (2019). Responsible Artificial Intelligence: How to develop and use AI in a responsible way. Springer. https://doi.org/10.1007/978-3-030-30371-6
  36. FloridiAnon., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1). https://doi.org/10.1162/99608f92.8cd550d1
  37. Koene, A., Dowthwaite, SethAnon., & Webb, H. (2021). Algorithmic bias: Impact on children's mental health. The Lancet Child & Adolescent Health, 5(3), 174–176. https://doi.org/10.1016/S2352-4642(21)00013-5
  38. Mittelstadt, B. D., Allo, P., Taddeo, M., WachterAnon., & FloridiAnon. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2). https://doi.org/10.1177/2053951716679679
  39. Riedl, M. O. (2020). Human–centered artificial intelligence and narrative intelligence. Communications of the ACM, 63(12), 62–71. https://doi.org/10.1145/3381831
  40. Risser, H. M., & Bottoms, B. L. (2020). Emotion recognition, AI, and children's rights. Journal of Children and Media, 14(3), 325–341. https://doi.org/10.1080/17482798.2020.1778487
  41. Anon., & Kougias, I. (2021a). The legalhood of artificial intelligence: AI applications as energy services. Journal of Artificial Intelligence and Systems, 3, 83–92.
  42. Anon., & Kougias, I. (2021b). Category theory as interpretation law model in artificial intelligence era. Journal of Artificial Intelligence and Systems, 3, 35–47.
  43. Sharkey, N. (2019). Autonomous weapons systems, killer robots and human dignity. Law, Innovation and Technology, 11(1), 107–128.
  44. Anon., & Anon. (2025). Autopoietic co-evolution of AI and law. In S. Anon., A. Masouras, & L. Anastasiadis (Eds.), Modern Perspectives on Artificial Intelligence and Law (pp. 81–88). IGI Global Scientific Publishing.
  45. Kalogera, C., Anon., Anon., & Anon. (2025). The digital DNA of modern workforce. In S. Anon., P. Liargovas, & L. Anastasiadis (Eds.), Harnessing Business Intelligence for Modern Talent Management (pp. 379–390). IGI Global.
  46. Anon., Anon., AnastasiadisAnon., & Anon. (2025). Legal entropy in AI governance. In S. Anon., A. Masouras, & L. Anastasiadis (Eds.), Modern Perspectives on Artificial Intelligence and Law (pp. 193–206). IGI Global Scientific Publishing.
  47. Anon., AnastasiadisAnon., & Anon. (2024). The role of data in environmental, social, and governance strategy. In C. PapademetriouAnon., K. Ragazou, A. Garefalakis, & S. Papalexandris (Eds.), ESG and Total Quality Management in Human Resources (pp. 201–218). IGI Global Scientific Publishing.
  48. SouravlasAnon., & Anon. (2021). Pipelined dynamic scheduling of big data streams. Applied Sciences, 10(14), 4796.
  49. SouravlasAnon., & Anon. (2022). On implementing social community clouds based on Markov models. IEEE Transactions on Computational Social Systems.
  50. SouravlasAnon., Anon., & KatsavounisAnon. (2021a). More on pipelined dynamic scheduling of big data streams. Applied Sciences, 11(1), 61.
  51. SouravlasAnon., Anon., & KatsavounisAnon. (2021b). A survey on the recent advances of deep community detection. Applied Sciences, 11(16), 7179.
  52. Tantalaki, N., SouravlasAnon., Roumeliotis, M., & KatsavounisAnon. (2020). Pipeline based linear scheduling of big data streams in the cloud. IEEE Access, 8, 117182–117202.
  53. SouravlasAnon., KatsavounisAnon., & Anon. (2021c). On modeling and simulation of resource allocation policies in cloud computing using colored Petri nets. Applied Sciences, 10(16), 5644.
  54. SouravlasAnon., Anon., Tantalaki, N., & KatsavounisAnon. (2022). A fair, dynamic load balanced task distribution strategy for heterogeneous cloud platforms based on Markov process modeling. IEEE Access, 10, 26149–26162.
  55. SouravlasAnon., Anon., & Kostoglou, I. (2023a). A novel method for general hierarchical system modeling via colored Petri nets based on transition extractions from real data sets. Applied Sciences, 13, 339.


Source: Proceedings of the 16th International Conference on e-Learning (ELEARNING2025)