Artificial intelligence (AI) is a difficult subject with respect to its use in education. AI is currently going through another period of extreme hype as a panacea for education, and it sits at the top of the peak of expectations mainly because of successful applications outside education, such as in finance, marketing and medical research. Furthermore, the term ‘AI’ is increasingly used (incorrectly) as a catch-all term for any computational activity.

Even in education, there are different possible areas of AI application. Zeide (2019) makes a very useful distinction between institutional, student support and instructional applications (Figure 9.4.2 below).

While AI applications for institutional or student support purposes are very important, this chapter focuses on the pedagogical possibilities of different media and technologies (what Zeide calls “instructional” applications). In particular, the focus in this section will be on the role of AI as a form of medium or technology for teaching and learning, its pedagogical possibilities, and its strengths and weaknesses in this area.

AI is really a subset of computing, so all the general affordances of computing in education established in Section 5 of Chapter 8 also apply to AI. This section aims to identify the additional potential that AI can offer to teaching and learning. This means focusing particularly on its role as a medium rather than a technology for education, and looking beyond the computational aspects of AI to its pedagogical role.

9.4.2 What is artificial intelligence?

McCarthy’s (1956, cited in Russell and Norvig, 2010) original definition of artificial intelligence is:

every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.

Zawacki-Richter et al. (2019), in a review of the literature on AI in higher education, report that those authors who defined artificial intelligence tended to describe it as:

intelligent computing systems or intelligent agents with human characteristics, such as the ability to memorize knowledge, perceive and manipulate their environment in a similar way to humans, and understand human natural language.

There are three basic computing requirements that differentiate ‘modern’ AI from other computing applications:

  • access to large amounts of data;
  • large-scale computing power to manage and analyze the data;
  • powerful and relevant algorithms for data analysis.

9.4.3 Why use artificial intelligence to teach and learn?

There are two different goals for the general use of artificial intelligence. The first is to increase the efficiency of a system or organization, primarily by reducing high labor costs, that is, by replacing relatively expensive human workers with relatively less expensive machines (automation). Politicians, business people, and policy makers see a growing movement toward automation as a way to reduce the costs of education, since in education the main cost is teachers and instructors.

The second goal is to increase the effectiveness of teaching and learning: in economic terms, to increase outputs, i.e. better learning outcomes and greater benefits for the same or less cost. To this end, artificial intelligence would be used to support the role of teachers and instructors.

These are understandable goals, but we’ll see later in this section that such goals to date are mostly aspirational rather than real.

In terms of this book, a key focus is on developing the knowledge and skills required of students in the digital age. The key test for artificial intelligence is to what extent it can help in the development of these higher order skills.

9.4.4 Affordances and examples of using AI in teaching and learning

Zawacki-Richter et al. (2019), in their review of the literature on AI in education, initially identified 2,656 research papers in English or Spanish, then narrowed the list by removing duplicates, limiting it to articles in peer-reviewed journals published between 2007 and 2018, and removing articles that turned out not to be about the use of AI in education. This left 145 articles for analysis. Zawacki-Richter et al. then categorized these 145 articles into different uses of AI in education. This section is largely based on that classification. (It should be noted that, of the 145 articles, only 92 focused on student instruction/support; the rest focused on institutional uses such as identifying at-risk students prior to admission.)

The Zawacki-Richter study offers insight into the main ways AI was used in education for teaching and learning over the ten years from 2007 to 2018, which is as close as we can get to its ‘affordances’. The three main broad ‘instruction’ categories from the study (with considerable overlap) are provided below, followed by some specific examples. (I have omitted Zawacki-Richter et al.’s profiling and prediction category, which relates to administrative issues such as admissions, course scheduling, and early warning systems for at-risk students.)

Intelligent tutoring systems (29 of 92 articles reviewed by Zawacki-Richter et al.)

Intelligent tutoring systems can:

  • provide learning content to learners while supporting them through adaptive feedback and hints to resolve questions related to the content, as well as detect learners’ difficulties/mistakes when working with the content or exercises;
  • select learning materials based on students’ needs, for example providing specific recommendations on the type of reading material and exercises to undertake, as well as personalized courses of action;
  • facilitate collaboration between students, for example by providing automatic feedback, generating automatic questions for discussion, and analyzing the process.

Assessment and grading (36 of 92)

AI supports assessment and grading through:

  • automated grading;
  • feedback, including a variety of student-facing tools, such as intelligent agents that provide prompts or guidance to students when they are confused or stuck in their work;
  • assessment of student understanding, participation, and academic integrity.

Adaptive systems and personalization (27 of 92)

AI enables adaptive systems and personalization of learning by:

  • teaching course content, then diagnosing strengths or gaps in learner knowledge and providing automated feedback;
  • recommending personalized content;
  • supporting teachers in learning design by recommending appropriate teaching strategies based on student performance;
  • supporting the representation of knowledge in concept maps.

Klutka et al. (2018) identified various uses of AI for teaching and learning in US universities:

  • ECoach, developed at the University of Michigan, provides formative feedback primarily for large classes in STEM fields. It tracks students’ progress through a course and directs them to appropriate actions and activities in a personalized way;
  • sentiment analysis (using students’ facial expressions to measure their level of engagement with their studies);
  • an application to monitor student participation in discussion forums; and
  • an application that groups commonly shared quiz mistakes so the instructor can respond once to the whole group rather than to each student individually.

Chatbots

A chatbot is a program that simulates human conversation or ‘chat’ through text or voice interactions (Rouse, 2018). Chatbots are, in particular, a tool for automating communications with students. Bayne (2014) describes one such application in a MOOC with 90,000 subscribers. Much of the student activity took place outside the Coursera platform, on social media. The five academics teaching the MOOC were active on Twitter, each with large networks, and Twitter activity around the course hashtag (#edcmooc) was high across all instances of the course (for example, a total of 180,000 tweets were exchanged in the first edition of the MOOC). A ‘Teacherbot’ was designed to cycle through tweets using the course hashtag, using keywords to identify ‘issues’ and then choosing pre-designed responses to these issues, which often involved directing students to more specific research on a theme. For a review of the research on chatbots in education, see Winkler and Söllner (2018).

Automated essay grading

Thompson (2022) offers a simple, mainly teacher-oriented explanation of how automated essay scoring (AES) works.

The first and most important thing to know is that there is no algorithm that “reads” student essays. Instead, you need to train an algorithm…. In fact, you have to score the essays (or at least a large sample of them) and then use that data to tune the machine learning algorithm.

This involves identifying the rubrics you use to grade essays. After marking and grading a large number of essays, the score (for example, on a five-point scale) assigned on each rubric is used to train the model. You then run tests with various AES models, passing your marked essays through them multiple times, to identify the model whose scores best correlate with your own. The models effectively ‘learn’ to improve as you mark more essays.
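Thompson’s workflow — score a sample of essays by hand, fit a model to those scores, then check how well the model’s scores correlate with the human ones — can be illustrated with a toy model. Everything here is a deliberately minimal sketch: a single crude feature (essay length) stands in for the rich linguistic feature sets real engines use, and the ‘training sample’ is invented.

```python
# Toy sketch of the AES workflow: train on human-scored essays, then
# check how well predictions correlate with the human scores. Real
# engines use many linguistic features; here essay length stands in.

def features(essay: str) -> float:
    return float(len(essay.split()))  # crude proxy feature: word count

def fit_linear(xs, ys):
    """Least-squares fit y = a*x + b (the 'training' step)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical training sample: (essay, human score on a five-point scale).
scored = [
    ("Short answer.", 1),
    ("A slightly longer answer with a little more development of the idea.", 2),
    ("A mid-length essay " + "that develops its argument " * 5, 3),
    ("A longer essay " + "with evidence and some counter-arguments " * 10, 4),
    ("A full essay " + "with a thesis, evidence, counter-arguments and a conclusion " * 15, 5),
]

xs = [features(e) for e, _ in scored]
ys = [float(s) for _, s in scored]
a, b = fit_linear(xs, ys)  # 'train' the scorer on the human scores

def predict(essay: str) -> float:
    return a * features(essay) + b

r = pearson([predict(e) for e, _ in scored], ys)
print(f"correlation with human scores: {r:.2f}")
```

Real AES engines follow the same loop with hundreds of features, larger samples and cross-validation; the point of the sketch is only that the model learns to imitate human scores — it never ‘reads’ anything.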

Thompson puts it this way:

There is a balance between simplicity and accuracy. Complex models can be accurate but take days to run; a simpler model might take two hours but lose 5% accuracy…. The general consensus in the research is that AES algorithms perform about as well as a second human rater, and therefore do very well in that role. However, you should not use them as the sole rater.

Natural language processing (NLP) artificial intelligence systems, often called automated essay scoring engines, are now used as either the primary or the secondary grader on standardized tests in at least 21 states in the United States (Feathers, 2019). According to Feathers:

Essay scoring engines don’t actually analyze the quality of writing. They are trained on sets of hundreds of example essays to recognize patterns that correlate to higher or lower human-assigned scores. They then predict what score a human would assign to an essay, based on those patterns.

However, Feathers claims that research by psychometricians and AI experts shows that these tools are susceptible to a common flaw in AI: bias against certain demographic groups (see Ongweso, 2019).

Lazendic et al. (2018) provide a detailed description of the scheme for machine grading in Australian secondary schools. They state:

It is … crucially important to recognize that the human scoring models, which are developed for each NAPLAN writing prompt, and their consistent application, ensure and maintain the validity of NAPLAN writing assessments. Consequently, the statistical reliability of human scoring results is fundamentally related to, and is the key evidence for, the validity of NAPLAN writing marking.

In other words, the assessment must be grounded in consistent human judgment. However, it was later announced (Hendry, 2018) that Australian education ministers had agreed not to introduce automated essay grading for the NAPLAN writing tests, following calls from teacher groups to reject the proposal.

Perelman (2013) developed a program called the BABEL generator that strung together sophisticated words and sentences into nonsense essays. The nonsense essays consistently received high, sometimes perfect, scores from several different scoring engines. Mayfield (2013) and Parachuri (2013) also offer thoughtful analyses of the problems of automated essay scoring. For a good overview of where automated essay grading is headed, see Kumar and Boulanger (2020).
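Perelman’s result is easy to reproduce in miniature: any scorer that leans on surface proxies will reward fluent-sounding nonsense. The scorer below is a toy with invented weights (word count and average word length); it does not represent any real engine’s algorithm, but it shows the failure mode.

```python
# Miniature version of Perelman's BABEL point: a scorer built on surface
# proxies cannot tell eloquence from nonsense. The feature weights are
# invented for illustration -- no real engine publishes its weights.

def surface_score(essay: str) -> float:
    words = essay.split()
    length = len(words)
    avg_word_len = sum(len(w) for w in words) / length
    # hypothetical weights: reward long essays and long words, cap at 5
    return min(5.0, 0.02 * length + 0.5 * avg_word_len)

coherent = "Assessment should value clear reasoning over vocabulary."
babel = ("Epistemological promulgations insatiably extrapolate "
         "quintessential obfuscation " * 20)

print(surface_score(coherent), surface_score(babel))
```

The gibberish, padded with long rare words, hits the ceiling score while the short coherent sentence does not — exactly the weakness the BABEL generator exploited against real engines.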

AES may eventually have the potential to score at scale the large number of essays in national exams such as NAPLAN in Australia or the General Certificate of Secondary Education in the UK. However, such methods remain impractical for most individual teachers. At the time of writing, despite considerable pressure to use automated essay grading for standardized tests, the technology still faces many unanswered questions.

Online proctoring

As a result of the Covid-19 pandemic, there has been a particularly rapid increase in the use of AI-based proctoring services to check whether students taking tests at home are cheating. A surprisingly large number of companies offer online proctoring services, including Examity, Mercer/Mettle, Proctortrack, OnVUE (by Pearson Publishing), Meazure Learning (formerly ProctorU), and Proctorio. They use cameras installed on students’ computers, or provided by the company, at home or at the testing location. Online proctoring is done in two main ways: live, with a remote person (usually hired by the company) watching; or automated; sometimes a mix of both is used.

Increasingly, these services use AI to identify deceptive behavior, such as:

  • a student’s face not matching the photo on the ID uploaded before the exam;
  • ‘distraction’: the student’s movements outside the camera angle during the exam;
  • other people in the room;
  • unidentified human sounds;
  • books or other documents on the desk;
  • anomalies in a 360-degree scan of the room in which the student is taking the exam.

Some companies use AI to generate a ‘credibility index’ for each result. Students are usually required to provide personal information such as name, address, student number, and sometimes credit card information. Neither the students nor the institution or school that requires the use of the proctoring service has any control over the use of this personal data, which can sometimes be shared with third parties.
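A ‘credibility index’ of this kind can be sketched as a simple weighted-penalty aggregator over the kinds of events listed above. The event names, weights, and review threshold below are all invented for illustration; vendors do not disclose how their indices are actually computed, which is itself part of the transparency problem.

```python
# Illustrative rule-based 'credibility index': start at 1.0 and deduct a
# weighted penalty for each AI-flagged event. Event names follow the list
# above; the weights and the review threshold are hypothetical.

PENALTIES = {
    "face_mismatch": 0.50,       # face does not match uploaded ID photo
    "gaze_away": 0.05,           # movement outside the camera angle
    "extra_person": 0.30,        # another person detected in the room
    "unidentified_audio": 0.10,  # unexplained human sounds
    "materials_on_desk": 0.15,   # books or documents on the desk
}

def credibility_index(events: list[str]) -> float:
    """Aggregate flagged events into a single 0.0-1.0 score."""
    score = 1.0
    for event in events:
        score -= PENALTIES.get(event, 0.0)
    return max(0.0, round(score, 2))

def needs_human_review(events: list[str], threshold: float = 0.7) -> bool:
    """Route low-credibility sessions to a human proctor."""
    return credibility_index(events) < threshold

print(credibility_index(["gaze_away", "gaze_away"]))
print(needs_human_review(["extra_person", "unidentified_audio"]))
```

Even this toy makes the fairness problem visible: the weights silently encode judgments (how suspicious is looking away?) that students cannot inspect or contest.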

The confidential information collected by online proctoring companies raises many dilemmas for students, and for parents, who are automatically excluded from the testing process.

Nigam et al. (2021) conducted a systematic review of 43 articles on AI-based and non-AI-based proctoring systems published between 2015 and 2021. They concluded:

‘Our analysis … reveals that security issues associated with AIPS [AI-based proctoring systems] are multiplying and are of legitimate concern. The main issues include security and privacy, ethical issues, trust in AI-based technology, lack of training in the use of technology, costs, and many more. It is hard to know whether the benefits of these online proctoring technologies outweigh their risks. The most reasonable conclusion at present is that the ethical justification of these technologies and their various capabilities requires us to rigorously ensure that a balance is struck between disadvantages and potential benefits.’

Online proctoring is a good example of an attempt to adapt 19th-century methods to 21st-century technology. Online assessment is discussed in more detail in Chapter 6.8.4, which shows that assessment can be done in a number of ways in online education, using, for example, continuous assessment, automatic tracking of student learning through an LMS/VLE, or e-portfolios, which allow students to create an authentic compilation of their work. What should be avoided is the intrusiveness, lack of privacy and lack of transparency of AI-based proctoring services.

9.4.5 Strengths and weaknesses

There are several ways to assess the value of the unique features of particular AI applications in teaching and learning:

  • Does the application build on the three main characteristics of ‘modern’ AI: massive data sets, massive computing power, and powerful and relevant algorithms?
  • Does the application have clear benefits in terms of performance over other media, and in particular general computer applications?
  • Does the application facilitate the development of the necessary skills and knowledge in the digital age?
  • Is there any unintended bias built into the algorithms? Does it seem to discriminate against certain categories of people?
  • Is the application ethical in terms of student and teacher/instructor privacy and their rights in an open and democratic society?
  • Are the application’s results ‘explainable’? For example, can a teacher/instructor, or those responsible for the application, understand and explain to students how the results or decisions made by the AI application were reached?

These issues are addressed below.

Is it really ‘modern’ AI that is being applied in teaching and learning?

Judging by the articles published in peer-reviewed journals, very few of the so-called applications of AI in teaching and learning meet the criteria of big data, massive computing power, and powerful and relevant algorithms. Much of the intelligent tutoring within mainstream education is what might be called ‘old’ AI: there is not much processing involved and the data sets are relatively small. Many of the so-called AI papers focused on intelligent tutoring and adaptive learning describe what are really just general computing applications.

In fact, so-called intelligent tutoring systems, automated grading of multiple-choice tests, and automated feedback on such tests have been around since the early 1980s. The closest to modern AI applications appear to be automated essay grading of standardized tests administered across an entire educational system, and AI-based online proctoring, even though there are serious problems with both. More development is clearly needed to make automated essay grading and AI-based online proctoring reliable and secure practices.

The main advantage that Klutka et al. (2018) identified for AI is that it opens the possibility for higher education services to be scaled at an unprecedented rate, both inside and outside the classroom. However, it is difficult to see how ‘modern’ AI could be used within today’s education system, where class sizes, or even entire academic departments, and therefore data points, are relatively small in terms of the numbers needed for ‘modern’ AI. It cannot be said that modern artificial intelligence has been tried in teaching and learning and has failed; it hasn’t even been tried.

Applications outside the current formal education system are more realistic: for MOOCs, for example, or for corporate training on an international scale, or for distance-teaching universities with large numbers of students. The big data requirement suggests that the entire education system could be massively disrupted if the necessary scale could be achieved by offering modern AI-based education outside existing education systems, for example by large internet corporations that could leverage the personal data of their mass consumer markets.

However, there is still a long way to go before AI makes this feasible. This is not to say that there might not be such applications of modern AI in the future, but right now, in the words of the old English bobby, ‘Move along now, there’s nothing to see here.’

However, for the sake of discussion, let’s assume that the definition of AI given here is too strict and that most of the applications discussed in this section are examples of AI. How do these AI applications meet the other criteria above?

Do the applications facilitate the development of the necessary skills and knowledge in the digital age?

This does not seem to be the case in most of the so-called AI applications for teaching and learning today. They are heavily focused on content presentation and comprehension testing. In particular, Zawacki-Richter et al. point out that most AI development for teaching and learning, or at least most of the research papers, comes from computer scientists, not educators. Since AI tends to be developed by computer scientists, it tends to use models of learning based on how computers or computer networks work (since, of course, it will be a computer that has to operate the AI). As a result, such AI applications tend to adopt a strongly behaviorist model of learning: present/test/feedback. Lynch (2017) argues that:

If AI is to benefit education, the connection between AI developers and learning science experts will need to be strengthened. Otherwise, AI will simply “discover” new ways to teach poorly and perpetuate misconceptions about teaching and learning.

Comprehension and understanding are important foundational skills, but AI so far is not helping students develop higher-order skills such as critical thinking, problem solving, creativity and knowledge management. In fact, Klutka et al. (2018) state that AI can handle many of the routine functions currently performed by instructors and administrators, freeing them up to solve more complex problems and connect with students at deeper levels. This reinforces the view that the role of the instructor or teacher needs to move away from content presentation, content management, and content comprehension testing, all of which can be done by computers, and to focus on skills development. The good news is that AI used in this way supports the role of teachers and instructors rather than replacing them. The bad news is that many teachers and instructors will need to change the way they teach or they will become redundant.

Are there unintended biases in the algorithms?

One could argue that all AI does is encapsulate the biases already existing in the system. The problem, however, is that such bias is often hard to detect in a specific algorithm, and AI tends to amplify or extend such biases. These are greater problems for institutional uses of AI, but machine-driven bias can discriminate against students in an educational context as well, especially in automated assessment.

Is the application ethical?

There are many potential ethical issues arising from the use of AI in teaching and learning, mainly due to the lack of transparency in AI software, and particularly due to the assumptions built into the algorithms. The literature review by Zawacki-Richter et al. (2019) concluded:

…a surprising result of this review is the dramatic lack of critical reflection on the pedagogical and ethical implications, as well as the risks, of implementing AI applications in higher education.

What data is collected? Who owns or controls it? How is it interpreted? How will it be used? Policies will be needed to protect students and teachers/instructors (see, for example, the US Department of Education’s student data policies for schools, or British Columbia’s digital learning strategy for post-secondary education). Students and teachers/instructors need to be involved in the development of such policies.

Are the results explainable?

The biggest problem with AI in general, and with its use in education in particular, is lack of transparency. Why did I get this score? Why am I recommended this reading instead of another, or redirected to a reading I didn’t understand the first time? Lynch (2017) argues that most data collected on student learning are indirect, inauthentic, lack demonstrable reliability or validity, and reflect unrealistic time horizons for demonstrating learning:

‘Current examples of AI in education often rely on … poor proxies for learning, using data that is easily collectible rather than educationally meaningful.’

9.4.6 Conclusions

Dream on, AI enthusiasts

In terms of what AI is currently doing for teaching and learning, the dream runs far ahead of the reality. What works well in finance, marketing or astronomy does not necessarily translate to teaching and learning contexts. In doing the research for this section, I found it much harder to find compelling examples of AI for education than of serious games or virtual reality. It is always difficult to prove a negative, but the results to date of applying AI to teaching are extremely limited and disappointing (see, for example, Brooks, 2021).

This is primarily due to the difficulty of applying ‘modern’ AI at scale in a highly fragmented system that relies heavily on relatively small classes, programs, and institutions. For modern AI to ‘work’, a totally different educational organizational structure would probably be needed. But be careful what you wish for.

There is a strong affective or emotional influence on learning. Students often learn best when they feel the instructor or teacher cares. In particular, students want to be treated as individuals, with their own interests, ways of learning, and some sense of control over their learning. Although at the mass level human behavior is predictable and to some extent controllable, each student is an individual and will respond slightly differently from other students in the same context. Because of these emotional and personal aspects of learning, students need to relate in some way to their teacher or instructor. Learning is a complex activity in which only a relatively minor part of the process can be effectively automated. It is an intensely human activity that benefits greatly from personal relationships and social interaction. This relational aspect of learning can be handled just as well online as face-to-face, but it means using computing to support communication as well as content delivery and the assessment of content acquisition.

Not fit for purpose

Above all, AI has not yet progressed to the point where it can support the higher levels of learning required in the digital age, or the teaching methods necessary to develop them, although other forms of computing or technology, such as simulations, games, and virtual reality, can.

In particular, AI developers have not recognized that learning is developmental and constructed. Instead, they have imposed an older, less appropriate model of teaching based on behaviorism and an objectivist epistemology. However, developing the skills and knowledge needed in the digital age requires a more constructivist approach to learning. To date there has been no evidence that AI can support this approach to teaching, although it may be possible.

The real AI agenda

AI advocates often argue that they are not trying to replace teachers, but to make their lives easier or more efficient. This must be taken with a grain of suspicion. The key driver of AI applications is cost reduction, which means reducing the number of teachers, as they are the main cost in education. In contrast, the key lesson of all AI developments is that we will have to pay more attention to the affective and emotional aspects of life in a robotic society, so teachers will become even more important.

Another problem with artificial intelligence is that the same old hype keeps coming around. The same arguments for using AI in education date back to the 1980s. Millions of dollars went into AI research at the time, including educational applications, to little benefit.

Since then, there have been some significant developments in AI, in particular in pattern recognition, access to and analysis of large data sets, and powerful algorithms, leading to decision-making within bounded limits. The trick, though, is to recognize exactly what kinds of applications these new AI developments are good for, and what they cannot do well. In other words, the context in which AI is used matters and must be taken into account. Teaching and learning is a particularly challenging environment for AI applications.

Defining the role of AI in teaching and learning

There is nevertheless ample scope for useful applications of AI in education, but only if there is an ongoing dialogue between AI developers and educators as new developments emerge. That will require being very clear about the purpose of AI applications in education, and being wide awake to the unintended consequences.

In education, AI is still a sleeping giant. ‘Innovative’ applications of AI for teaching and learning are likely to come not from major universities and colleges, but from outside the formal post-secondary system, through organizations such as LinkedIn, Amazon, or Coursera, which have access to the large data sets that make AI applications scalable and valuable (to them). However, this would represent an existential threat to public schools, colleges and universities. The question is: which system is better placed to protect and sustain the individual in the digital age: multinational corporations using AI to teach, or a public education system with human teachers who use AI to support students?

The key question is whether technology should aim to replace teachers and instructors through automation, or whether it should be used to empower not only teachers but also students. Above all, who should control AI in education: educators, students, computer scientists, or large corporations? These are indeed existential questions if AI becomes immensely successful in reducing the costs of teaching and learning, but at what cost to us as humans? Fortunately, AI is not yet in a position to pose such a threat; but it may well do so soon.