
Chatbot to improve learning punctuation in Spanish and to enhance open and flexible learning environments

Abstract

The objective of this article is to analyze the didactic functionality of a chatbot for improving the results in the Spanish Language subject of students on the university access course of the National University of Distance Education (UNED, Spain). To this end, a quasi-experimental study was designed using a quantitative pretest–posttest methodology with a control and an experimental group, in which the effectiveness of two teaching models was compared: a more traditional one based on exercises written on paper, and another based on interaction with a chatbot. Subsequently, the experimental group's perception of the educational use of the chatbot, expressed in an academic forum, was analyzed through text mining with Latent Dirichlet Allocation (LDA), a pairwise distance matrix and bigram analysis. The quantitative results showed that the students in the experimental group substantially improved their results compared to the students taught with the more traditional methodology (experimental group mean: 32.1346; control group mean: 28.4706). Punctuation correctness improved mainly in the use of the comma, colon and period in different syntactic patterns. Furthermore, the students in the experimental group showed that they value chatbots positively in their teaching–learning process in three dimensions: greater "support" and companionship in the learning process, as they perceive greater interactivity due to the chatbot's conversational nature; greater "feedback" and interaction compared to the more traditional methodology; and, lastly, the ease of use and the possibility of interacting and learning anywhere and at any time.

Introduction

Chatbots or "conversational bots" are artificial intelligence-based programs that enable person-to-machine interaction through written or spoken language (Bailey, 2019; Colace et al., 2018; Fryer et al., 2019; Ho et al., 2018). In recent years, Microsoft (Cortana) and Apple (Siri) have been the prime movers in developing such programs through their operating systems as virtual oral communication assistants. Banks and companies have also incorporated these bots into their websites to establish a direct connection with the consumer. In education, chatbots are being used in a range of scenarios, albeit tentatively (Thompson et al., 2018; Vijayakumar et al., 2019; Winkler & Soellner, 2018). This makes it all the more important to analyze the environments where chatbots can be applied and developed, and to assess their use in teaching as resources and tools that can improve the teaching–learning process in a mobile, ubiquitous format, while also personalizing and adapting learning itineraries. We therefore hypothesize that the educational use of chatbots, as a resource that promotes more personalized learning and can be integrated into different virtual learning environments, can make them very valuable tools for teaching and for stimulating ubiquitous learning and more flexible digital environments.

Chatbots for learning

A virtual assistant is a set of computer programs that can interact with humans through natural language. Schroeder et al. (2013) define agents as "on-screen characters that facilitate instruction." The technologies that support chatbots are founded mainly on natural language processing, deep-learning technologies and other services such as inference, recommendation and contextualized reasoning (Sheth et al., 2019; Vázquez-Cano et al., 2013). Chatbots are based on natural language processing and algorithmic decision-tree techniques. The first chatbots appeared in the 1960s (Weizenbaum, 1966), although it was not until the start of the twenty-first century that they drew the attention of researchers and companies. Now, companies such as Amazon (Alexa), Apple (Siri) and Microsoft (Cortana) have developed voice assistants to answer personal enquiries and facilitate access to resources or consumer goods. In 2018, it was calculated that Facebook Messenger could have more than 300,000 chatbots functioning at any time. Chatbots differ from traditional intelligent tutoring systems most significantly in that they are speech-based; they must be capable of interpreting the setting and proposing different solutions to problems, or of interpreting our communication and redirecting their own response accordingly (Nikou & Economides, 2018; Paschoal et al., 2019; Shail, 2019).

The educational use of chatbots is an incipient area of experimentation, although several recent studies show that chatbots can support the teaching–learning process across a range of subjects with varying degrees of success. New teaching proposals are being analyzed in Maths (Grossman et al., 2019), English and Sciences (Bailey, 2019; Ruan et al., 2019), Pedagogy (Huang et al., 2019), Educational Technology (Liu et al., 2019), and software testing (Paschoal et al., 2019). These studies focus mainly on chatbot design and functionality, yet there is little research into their didactic potential and their application to student assessment and feedback processes in relation to content and curricular competences. Chatbots feature four central elements (Reyes-Reina et al., 2019, p. 24): (1) they seek to simulate human speech (Ciechanowski et al., 2018); (2) they traditionally interacted via written messages, hence "chat" (Io & Lee, 2018), though subsequent advances enabled the appearance of spoken interaction; (3) as opposed to robots or similar devices, chatbots have no physical presence (disembodied agents) (Araujo, 2018); (4) unlike avatars, they do not represent a human being in a virtual world (Klevjer, 2006). For chatbots to function as part of the educational process, they need to complement the teaching–learning processes that take place outside the classroom so that students can interact with them in a natural, fluid way (Liu et al., 2019; Mohammed & Wakil, 2018; Nikou, 2019). Good chatbot design can make learning a more fluid, automatic process, and can integrate deep-learning didactic proposals. Nevertheless, consideration needs to be given to the ethical aspects of their usage, the issue of data protection and the analysis and assessment not only of how students learn with the support of these artificial intelligence processes, but also of how students' cognitive, social and emotional development is affected when using simulated technology-mediated teaching–learning processes (Hsu et al., 2019; Huang et al., 2019; Jeno et al., 2019).

Currently, the chatbots used at the most basic level of education still lack the capacity to self-learn, because self-learning requires much more complex natural language processing techniques. Although these simple chatbots, which can be programmed by teachers or students without extensive computing knowledge, lack self-learning capacity, they can be a useful support for directing teaching–learning processes in reviewing work, broadening content and personalizing learning itineraries according to academic achievement (Bii, 2013; Farkash, 2018; Ghose & Barua, 2013). In this form, chatbots are related to micro-learning activities that give students more control over the teaching–learning process, letting them decide the speed at which they wish to do each activity. This engages students' self-regulated learning competences, which enable them to understand how they are learning and the difficulties they might find along the way (López-Meneses et al., 2020; Mohammed & Wakil, 2018; Procter et al., 2012). This micro-learning approach helps reduce student fatigue (Cabero & Ruiz-Palmero, 2018; Cabero et al., 2020; Shail, 2019), and can boost information retention by around 20% (Giurgiu, 2017). In many cases, it helps improve students' understanding of certain concepts, strengthens competences and improves academic outcomes (Nikou & Economides, 2017). This type of learning connects with technology-mediated micro-learning processes in "educational pill" format (Cordova & Lepper, 1996; Jomah et al., 2016). Such an approach facilitates the use of small units of content, or educational nano-capsules, to construct environments and learning activities that allow students to practice, learn or interact within a limited timescale. These time-limited processes oblige the student to concentrate more when doing such activities (Bruck et al., 2012; Vázquez-Cano, 2012, 2014). The interaction lasts between 1 and 10 min and can take place, in written or spoken form, on a range of digital devices (Shail, 2019). In this sense, the meta-analysis by Schroeder et al. (2013) found that agents do enhance learning in comparison with learning environments that do not feature agents (Johnson & Lester, 2016, p. 31).

Furthermore, chatbots could be used in a tutorial role to organize questions and answers with feedback for students, and they can facilitate communication with families in support of their children's teaching–learning process (Garcia Brustenga et al., 2018). Education in general is now starting to deploy chatbots in specific settings such as libraries or data administration processes, and this provides new ways for users to interact more closely with these educational services (Bentivoglio et al., 2010; Sheth et al., 2019; Tegos et al., 2014). In particular, language learning is one pedagogical area where chatbots are already being widely used.

The influence of chatbots and virtual agents has generated scientific debate about their didactic utility (Crown et al., 2010; Heidig & Clarebout, 2011; Schroeder et al., 2017). Beyond this possible didactic utility, it has been shown that emotional and affective links can develop between pedagogical agents and learners (Beale & Creed, 2009). Likewise, there are studies showing that the use of these agents or chatbots increases students' motivation and academic performance in virtual learning environments (Liew et al., 2017).

Chatbots for language learning: punctuation

Chatbots for language learning have focused on second language learning, mainly English (Tegos et al., 2014; Winkler & Soellner, 2018). There are chatbots for learning English, such as "BookBuddy" (Ruan et al., 2019), intelligent tutor courses, like "Sammy" (Gupta & Jagannath, 2019), MOOC collaboration activities, "colMOOC" (Tegos et al., 2019), and academic information systems, "StudBot" (Vijayakumar et al., 2019). These studies have been applied to conversational tasks and writing skills with moderate results, which calls for further didactic experiences to analyse their effect on the improvement of language learning. Chatbots' functionalities include practising grammatical structures, correcting grammatical or spelling errors, and generating simulated contexts for the use of certain vocabulary and syntactic structures. For example, Coniam (2014) evaluated five renowned language chatbots, concluding that their level of grammar could be improved (Smutny & Schreiberova, 2020, p. 2). On the learning and improvement of a mother tongue, there is no published literature, but the use of chatbots to support specific aspects of language learning that require continuous training and feedback could represent a significant advance; for example, for enhancing skills associated with writing (punctuation, accentuation, spelling, etc.).

In this sense, punctuation can be defined as "the use of standard symbols, spaces, capitalization and indentation to help the reader understand written text" (Wing Jan, 2009, p. 37). It is an essential element of writing, since it helps to interpret texts and disambiguate meanings, and it operates as an important discourse marker for promoting correct writing (Daffern & Mackenzie, 2015; Scull & Mackenzie, 2018). Likewise, together with accentuation in Spanish, it is one of the paralinguistic elements that determine the discursive, strategic and linguistic competence of speakers. Punctuation errors occur mainly in two textual circumstances: omission and misuse of a punctuation mark. The proper use of punctuation marks is a linguistic and pragmatic requirement of the first order, and nowadays, in writing mediated by digital devices, it acquires a new value through the participation of other paralinguistic elements such as emojis, gifs, icons, etc. It is therefore necessary to consolidate the teaching and use of punctuation as one of the most important linguistic elements of correct writing (Bram, 1995). The correct use of punctuation marks is one of the most decisive indicators of the linguistic and communicative competence of a student and a citizen (Vázquez-Cano et al., 2018, p. 14).

Thus, the teaching of punctuation and its corresponding practice by students is a topic that has been approached from rather traditional didactic postulates, mainly through the correction and completion of texts (Fang & Wang, 2011; Macken-Horarik & Sandiford, 2016). The didactics of punctuation is one of the least developed aspects of first-language teaching, and many teachers limit themselves exclusively to the theoretical teaching of punctuation marks without proposing situated practices that allow students to advance in their correct and contextualized use (Angelillo, 2002). It must be taken into account that the teaching of punctuation in Spanish is a subject still awaiting rigorous research (Polo, 1990), and it has not been sufficiently investigated which discursive practices provide better results among students (Cassany, 1999). In the teaching of Spanish as a first language, one of the most widely used approaches to punctuation combines theoretical and practical teaching based on reputable literary works (Fuente, 1993). There are also more psycholinguistic approaches that take the student, and not the punctuation marks, as the reference point and that establish the cognitive processes of comprehension and production of punctuation as performance parameters (Caddéo, 1998; Ferreiro, 1999; Ferreiro & Teberosky, 1979).

The Spanish language has punctuation marks that, as in other languages, are easier to learn, such as the period (full stop), quotation marks or question marks, but other signs have different uses that, without adequate practice, produce misinterpretations or incorrect meanings, for example the comma, semicolon and colon. With these punctuation marks, students require extensive practice in order to punctuate correctly. Such continuous practice allows students to better understand the use of punctuation marks and to adapt them to the communicative intention and the type of text. On many occasions, these practices cannot be done in the desired number at home because they also involve a high degree of feedback and explanation in order to understand correct and incorrect uses. For this type of specific content, which requires continuous practice outside the classroom, the use of chatbots can be of great pedagogical interest because of the possibility of designing and adapting them to different learning rhythms and itineraries, and because of the feedback provided through learning analytics (Shail, 2019; Subramaniam, 2019).

Different studies on the acquisition of punctuation in primary and secondary education show that students take an active attitude towards this component of writing and build their own 'theories' or explanations about this subsystem, as they have real experiences of understanding and producing meaning with punctuation marks. This learning process is not random, imitative or mechanical, a matter of copying uses or applying rules, but a creative activity in which the learner 'discovers' how to use signs in authentic contexts of communication. When these uses have become established in an adult speaker, the omissions or misuses become fossilized, and changing and correcting them requires even more practice and the combination of different teaching approaches and strategies that promote error detection (Macken-Horarik & Sandiford, 2016; Scull & Mackenzie, 2018). The objective of this study is to verify whether students improve their results in the punctuation part of the university access exam with the use of a chatbot compared to a more traditional didactics based on the correction of written texts. Finally, we analyse the students' perception of the positive and negative aspects of the chatbot experience.

Methods

The teaching–learning experience consisted of the use of a chatbot to teach punctuation in the Spanish Language subject of the access course of the National University of Distance Education (UNED) during two academic years (2018–19 and 2019–20); specifically, the use of the period in two linguistic dimensions: the period in the sentence and the period in abbreviations. The link to operate and interact with the chatbot is: https://links.collect.chat/5e24ddd7c07d8746c2a256d2 (Fig. 1).

Fig. 1

Chatbot home page

The resource used to design and build the chatbot was "Collect.chat" in its free plan. One of the reasons for choosing this tool is that no programming knowledge is needed to use it. It is built on a "drag and drop" builder system that can be easily customized by a teacher and adapted to educational purposes (Fig. 2). This proposal is in line with other incipient proposals that have been developed at UNED with chatbots using a "drag and drop" builder system; for example, "EconBot" (Tamayo et al., 2020).

Fig. 2

Chatbot Script (drag and drop builder system)

For our design, we selected different resources from the workspace: (1) messages, (2) text questions (list, range and multi select), (3) file upload, (4) links to other websites, among others. Figure 3 shows some of the items integrated in the chatbot.

Fig. 3

Chatbot functionalities

The chatbot is designed as a narrative sequence in which the student progressively advances through punctuation tasks and exercises, viewing content, writing assignments and uploading them while interacting with the chatbot in conversation. Likewise, depending on each student's performance, the chatbot offers routes of different difficulty.

The UNED teaching–learning environment corresponds to a blended learning model with face-to-face tutoring one day a week and online teaching through a virtual learning environment. The educational experience consisted of five phases: (1) face-to-face teaching of punctuation through lectures over five sessions (2 weeks), equivalent to five teaching hours, for the 103 students on the access course; this phase was the same for both the control and the experimental group; (2) administration of a punctuation control test, similar to the access exam and with test-type punctuation questions, to the total sample (n = 103); (3) segmentation of the sample into two groups, control (n = 51) and experimental (n = 52), after checking the normality assumptions; (4) assignment of tasks: the control group received a dossier in pdf format with punctuation practice exercises and a self-evaluation at the end, with an estimated practice time of two hours per week, while the experimental group received the link to two punctuation chatbots with an estimated practice time of one and a half hours. Neither group received tutorial support during the practice, but both groups fully carried out the assigned practices (the control group delivered the completed dossier, and the experimental group was checked through the chatbot learning analytics). The theory and the exercises proposed in the pdf dossier and the interactive ones in the chatbot were the same. It should be borne in mind that the university access course at UNED, delivered through a blended learning modality, requires high levels of self-regulated learning. The time allotted to practise with the pdf dossier and the chatbot was two weeks. The distribution of students across the two academic years was as follows: 2018–19, control = 21 and experimental = 20; 2019–20, control = 30 and experimental = 32. Finally, in phase (5), a final punctuation test was administered to both the control and the experimental group, with more complex content than the test carried out in phase 2. This final test consisted of three exercises: (1) correction of a text with punctuation errors; (2) multiple-choice questions; and (3) punctuation of an unpunctuated text. The design of the exercises covered the practice of the following punctuation marks: period, question mark, exclamation mark, comma, semicolon, colon, dash, hyphen, and brackets.

The methodological approach was a quasi-experimental quantitative design with pretest and posttest and convenience sampling. The median age was 31 years (range 25–58 years), and 67% were female students. All of the participants signed a consent form to participate in the study. In order to prevent subject pool contamination, the first 51 students were assigned to the control group and the following 52 students to the experimental group. Normality was checked with the Kolmogorov–Smirnov test and, subsequently, possible differences between the results of the two groups in the pretest and posttest were analyzed with Levene's Test for Equality of Variances and an independent samples test. Furthermore, we implemented the Binomial Effect Size Display (BESD) to show the results by punctuation category (Rosenthal, 1991; Rosenthal & Rubin, 1982). Secondly, text mining methods were applied to the students' perceptions of chatbot use in an academic forum in order to better understand the functionality of chatbots through topic modelling analysis (Budan & Graeme, 2006; Bullinaria & Levy, 2012). Topics were retrieved from the text by identifying clusters of co-occurring words, based on the bag-of-words and skip-gram models (Bruni et al., 2014; Feng et al., 2017; Jones & Mewhort, 2007). To determine the degree of agreement, we applied a pairwise document similarity measure (PDSM) with a Kendall's Tau distance process. For this purpose, we grouped the forum participation into three subsets, DP1 Support, DP2 Mobility and DP3 Feedback, and within each group we grouped the text subsets DP123n. The comparison criterion is established according to the following formula:

$${D}_{i}^{C} \in {D}^{C}$$

where the index i ∈ {1, …, |DC|} and DC is the set of topics discovered using latent Dirichlet allocation (LDA) (Blei et al., 2003) and a pairwise distance matrix. For this purpose, given a discourse text with m sentences (with no sentence repeated), a pairwise distance matrix can be computed by aligning all pairs of sentences. Finally, we calculated the tf-idf of bigrams across the three forum topics.
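For illustration only (not the authors' actual script), the following R sketch shows this kind of workflow: fitting an LDA model with three topics on forum text and deriving a pairwise distance matrix between documents in topic space. The data frame forum_posts and its columns doc_id and text are hypothetical, and the sentence-alignment step used in the study is not reproduced here.

library(dplyr)
library(tidyr)
library(tidytext)
library(topicmodels)

# Document-term matrix of word counts (English stop list used only for illustration;
# a Spanish stop-word list would be substituted in practice)
dtm <- forum_posts %>%
  unnest_tokens(word, text) %>%
  anti_join(stop_words, by = "word") %>%
  count(doc_id, word) %>%
  cast_dtm(doc_id, word, n)

# Fit LDA with k = 3 topics (Support, Mobility and Feedback in the study)
lda_fit <- LDA(dtm, k = 3, control = list(seed = 1234))

# Per-document topic proportions (gamma)
doc_topics <- tidy(lda_fit, matrix = "gamma")

# Pairwise distance matrix between documents in topic space
gamma_wide <- doc_topics %>%
  pivot_wider(names_from = topic, values_from = gamma)
dist_matrix <- dist(as.matrix(gamma_wide[, -1]))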

Results

First, the results of the pretest and posttest are presented. Sample normality was tested to guarantee the reliability of the pretest and posttest. Table 1 (experimental group) and Table 2 (control group) show that the normality assumptions of the sample are fulfilled.

Table 1 Kolmogorov–Smirnov test (experimental group)
Table 2 Kolmogorov–Smirnov test (control group)

The Kolmogorov–Smirnov test results show that the data come from a normal distribution in both groups. The assumption of equal variances was also verified using Levene's test, and equal variances can be assumed in the pretest and posttest groups (Table 4). In Table 3, we present the descriptive statistics of the two participating groups and the differences in the means in the pretest and posttest.

Table 3 Descriptive statistics

The results in Table 4 show that the significance is 0.667 > 0.05 in the pretest of the control and experimental groups, indicating that there are no significant differences in the means (experimental group 22.9038 and control group 23.5686). Therefore, we can verify that the initial punctuation test shows no significant differences between the two groups. On the contrary, after the didactic intervention with the use of chatbots, the posttest results of the experimental group show significant differences compared to the control group (sig. 0.00 < 0.05, with a mean of 32.1346 for the experimental group and 28.4706 for the control group). This is a significant difference of almost four points, which leads us to conclude that the didactic intervention with a chatbot for the practice of punctuation in Spanish substantially improved the results obtained in the final test.
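For reference, a minimal R sketch of the reported sequence of tests (Kolmogorov–Smirnov normality check, Levene's test for equality of variances, independent samples t-test) is shown below. It is run on two short hypothetical score vectors rather than the study data; leveneTest() comes from the car package.

library(car)   # for leveneTest()

post_exp  <- c(31, 33, 35, 30, 32, 34, 29, 36)   # hypothetical posttest scores, experimental
post_ctrl <- c(27, 29, 30, 28, 26, 31, 25, 24)   # hypothetical posttest scores, control

# Kolmogorov–Smirnov test of each group against a normal with the sample mean and sd
ks.test(post_exp,  "pnorm", mean(post_exp),  sd(post_exp))
ks.test(post_ctrl, "pnorm", mean(post_ctrl), sd(post_ctrl))

# Levene's test for equality of variances
scores <- c(post_exp, post_ctrl)
group  <- factor(rep(c("experimental", "control"),
                     times = c(length(post_exp), length(post_ctrl))))
leveneTest(scores ~ group)

# Independent samples t-test assuming equal variances (as in Table 4)
t.test(post_exp, post_ctrl, var.equal = TRUE)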

Table 4 Independent samples test

To identify the magnitude of the effect of the chatbot educational experience, we present the Binomial Effect Size Display (Table 5).

Table 5 Binomial effect size displays of correctness in the usage of punctuation marks
Table 6 Forum bigrams

Table 5 shows that the use of chatbots in the experimental group significantly improved the results in the correct usage of three punctuation marks, the period, colon and comma, in different grammatical structures and uses. In Fig. 4, a boxplot shows the improvement of the experimental group in the punctuation tests using a practice strategy based on didactic and linguistic interaction with a chatbot. This verifies its positive effect as an integrable didactic resource in virtual learning environments, from a didactic approach based on mobility, ubiquity and conversational interaction between human and machine (virtual agent).
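As a reminder of how a BESD such as Table 5 is read, Rosenthal and Rubin's display converts an effect size r into two hypothetical "success rates", 0.50 − r/2 and 0.50 + r/2. The small helper below is only an illustration; the r value is not taken from the study.

# Rosenthal & Rubin's BESD: express an effect size r as two "success rates"
besd <- function(r) {
  c(control_rate = 0.50 - r / 2, experimental_rate = 0.50 + r / 2)
}
besd(0.30)   # illustrative r: control rate 0.35 vs experimental rate 0.65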

Fig. 4

Boxplot graph experimental and control group comparison

To complement the results of the pretest and posttest, we analysed the experimental group's discussion forum and the three threads created for this purpose, "Support", "Mobility" and "Feedback", to determine the students' opinions and perceptions regarding the usefulness, or otherwise, of chatbots in the development of their teaching–learning process. The matrix is computed only for the lower triangular values and then reconstructed to form the full matrix. The values are all normalised between zero and one, so that each can be treated as a probability of semantic match. We used a pairwise document similarity measure (PDSM) with Kendall's Tau distance, applying the following equation:

$$PDSM({d}_{1},{d}_{2},{d}_{3}) =\left(\frac{{d}_{1}\cap {d}_{2}\cap {d}_{3}}{{d}_{1}\cup {d}_{2}\cup {d}_{3}}\right) \times \frac{PF({d}_{1},{d}_{2},{d}_{3})+1}{M-AF({d}_{1},{d}_{2},{d}_{3})+1}$$

The intersection of the forum participation subsets "support", "feedback" and "mobility" is calculated as follows (wji > 0 is the ith weight in document j):

$${d}_{1}\cap {d}_{2}\cap {d}_{3} = {\sum }_{i=1}^{M}\mathrm{Min}({w}_{1i},{w}_{2i},{w}_{3i})$$
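The union in the denominator of the PDSM can be expressed analogously; assuming the standard weighted (min/max) generalization of set operations, it is the sum of element-wise maxima:

$${d}_{1}\cup {d}_{2}\cup {d}_{3} = {\sum }_{i=1}^{M}\mathrm{Max}({w}_{1i},{w}_{2i},{w}_{3i})$$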

The results of the comparisons and similarities found in the full forum are presented in the pairwise distance matrix (Fig. 5).

Fig. 5

Pairwise distance matrix of the incidence of chatbots on the processes of learning

Figure 5 shows that the highest intermediation values among the 42,678 words analyzed in the three forum threads, "Support", "Mobility" and "Feedback", correspond to values above 0.500, shown in deep yellow. The concepts with the highest values across the three threads are: "improve" (0.640), "support" (0.567), "conversational" (0.601), "ubiquitous" (0.638), "mobile" (0.630), "ease to use" (0.599), "feedback" (0.645) and "interactive" (0.611). These concepts allow us to generalize that, for the students in the experimental group, the use of chatbots had a positive impact on their learning experience, allowing them to approach the learning process from a more dynamic, interactive and ubiquitous context and with higher rates of feedback and support. Likewise, to complement the students' general perception, we analyzed the bigrams associated with each of the thematic threads of the forums in order to go further into the relationships between concepts and their impact on learning. To do this, we used the following code:

library(dplyr)
library(tidytext)

# tf-idf of each bigram, treating each forum thread as a document
bigram_tf_idf <- bigrams_united %>%
  count(forum, bigram) %>%                 # bigram frequency per forum thread
  bind_tf_idf(bigram, forum, n) %>%        # add tf, idf and tf_idf columns
  arrange(desc(tf_idf))                    # highest tf-idf first
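For context, a minimal sketch of how a table like bigrams_united could be built with tidytext is shown below. The input data frame forum_posts, with columns forum (thread) and text (message), is hypothetical, and the English stop_words list is used only for illustration (a Spanish stop-word list would be used in practice).

library(dplyr)
library(tidyr)
library(tidytext)

bigrams_united <- forum_posts %>%
  unnest_tokens(bigram, text, token = "ngrams", n = 2) %>%     # split each message into bigrams
  separate(bigram, into = c("word1", "word2"), sep = " ") %>%  # split to filter stop words
  filter(!word1 %in% stop_words$word,
         !word2 %in% stop_words$word) %>%
  unite(bigram, word1, word2, sep = " ")                       # rejoin the remaining word pairs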

In Table 6, we present the "tf_idf" values of the three most representative bigrams in each of the forum threads, in order to determine their educational functionality in the three areas of "Support", "Mobility" and "Feedback".

In Table 6, we can observe that students identify greater "support" in the learning process through the chatbot, due to its conversational nature (tf_idf 0.04349423). They also indicate that the interactive nature of the chatbot allows them greater gains in learning (learning-interaction / tf_idf 0.03849211 and interactive-improve / tf_idf 0.04241470). Regarding the feedback processes, students perceive that the chatbot's feedback and interaction produce improvement in their learning (interaction-feedback / tf_idf 0.04139641 and improve-interaction / tf_idf 0.05349475). Likewise, the chatbot increases students' motivation and interest (interest-feedback / tf_idf 0.04135673). Finally, the mobility afforded by the possibility of using chatbots on laptops, tablets and smartphones is highly valued by students (mobile-design / tf_idf 0.03087341), as is the didactic potential of ubiquity and interaction with the virtual agent (ubiquitous-interaction / tf_idf 0.04139877), along with its ease of use anytime and anyplace (ease to use-mobile / tf_idf 0.04967512).

Discussion

The results of this study, from a quasi-experimental approach (pretest/posttest), have shown that the experimental group, the students who used the chatbot outside the face-to-face classroom, substantially improved their results in the punctuation tests associated with the final exams for university access in the Spanish Language subject. Furthermore, the students' perception of the use of the chatbot has shown that motivation, interaction, feedback, and the ubiquitous possibilities for self-regulated learning in a mobile context were enhanced. Regarding the use of chatbots for the improvement of a mother tongue, there are no scientific studies with which to discuss these results directly, but the chatbot's educational functionalities have been documented in previous studies (Bailey, 2019; Ciechanowski et al., 2018; Grossman et al., 2019; Io & Lee, 2018; Liu et al., 2019; Ruan et al., 2019).

One key aspect of integrating chatbots from a language learning perspective consists of identifying in the curriculum the contents and competences that best support a chatbot narrative. For this purpose, the design of this research focused on a specific content, punctuation, a basic indicator of a person's communicative competence in his or her own language. Different studies indicate that punctuation errors are among the most frequent errors in students' writing. The experimental group significantly improved the correct usage of three punctuation marks, the period, colon and comma, in different grammatical structures and uses. The positive results obtained in this study for a mother tongue have also been highlighted for the learning of writing skills in second languages (Fryer & Carpenter, 2006). In this sense, and in line with the students' perceptions, the effectiveness of the approach seems to be related more to the increase in motivation and interest among students, as well as to the specific type of content and skill programmed (Coniam, 2008; Hasler et al., 2013; Hill et al., 2015).

The results of this didactic experience have demonstrated that the implementation of a chatbot which integrates a diversity of methodological approaches, based on viewing videos, filling gaps, writing texts and providing automatic feedback, generates greater improvement in the correct use of punctuation marks when compared with the results obtained from traditional learning based on correcting punctuation in written texts. One of the main reasons that the scientific literature points out for punctuation errors in students' writing is the lack of corrective feedback (Ali et al., 2020). Chatbots can contribute significantly to designing an enriched learning scenario with automatic assessment and feedback of exercises through learning analytics, which reinforces students' effort, academic performance and interest (Goda et al., 2014; Stickler & Hampel, 2015). This type of educational approach based on "chatbot narratives" enables the use of bite-sized units of content, or educational nano-capsules, to construct environments and learning activities that allow students to learn, practice or interact within an educational framework within a set time (Bruck et al., 2012).

Also, the students who participated in the experimental group perceived this type of interactive activity with the chatbot positively, mainly in the support received, the increase in their interest in the topic, the feedback received, and the mobility and ease of use of the chatbot in the development of their teaching–learning process. Although these results have been identified in some scientific works (Hill et al., 2015; Johnson & Lester, 2016), we have to proceed with caution, because other studies, such as Fryer et al. (2019) and Chen et al. (2016), found that technological novelty can produce positive effects on students' motivation and interest at the beginning of didactic experiences, but these effects later decay.

One of the most remarkable benefits of this didactic experience is based on the open resources and tools available to design a chatbot (Chatfuel, Collect.chat, etc.). With these environments, and other similar ones offered by other platforms, the design and coding of chatbots can be "democratized", as happened a few years ago with the design of web pages. Teachers can create their own narrative in which to integrate the selected content modules of a subject and establish the flow of the conversation and the type of exercises and tasks depending on the results obtained by the student, especially regarding writing and reading comprehension tasks in language learning. As Fryer et al. (2019, p. 463) point out: "Recent advances by Chatbot developers and text-to-speech/speech-to-text software have begun to make spoken Human-Chatbot interaction a growing option, opening up new possibilities for Human-Chatbot interaction and learning." A chatbot for language learning could be adapted and programmed to reinforce and practise different skills (mainly listening and writing, through conversational interaction). It becomes a programmable resource that can be adapted to different learning and communication situations, enabling the creation of educational pathways for students according to the results obtained from interacting with the chatbot (Sheth et al., 2019; Shum et al., 2018; Subramaniam, 2019; Thompson et al., 2018). Likewise, depending on the design of the chatbot, the functionality of the platform from which it operates and the type of programming, chatbots offer a range of learning pathways, a review facility and various options for interacting with their creators, and thus they can provide learning that is deeper and more accurately situated (Colace et al., 2018). Among the various benefits echoed by the scientific literature, chatbots are anonymous, asynchronous, scalable and can be personalized (Klopfenstein et al., 2017; Taraban, 2018; Van Rosmalen et al., 2012). They encourage greater student involvement in academic tasks; for example, the rate of task completion among Computing students was five points higher than with other resources, such as gamification (Benotti et al., 2018). This type of student–machine interaction boosts student autonomy and intrinsic motivation in learning, as it allows interaction to take place independently, with or without teacher or parental control, and establishes feedback procedures with the machine that facilitate interpretation of the content to be developed (García-Valdecasas, 2011; Reyes-Reina et al., 2019; Sha, 2009).

Another of the most notable results has been the reinforcement of self-regulated learning in a mobile context. Along this line, other studies have also shown that chatbots can enhance self-regulation (Labuhn et al., 2010). They can also favor the creation of learning pathways due to their sequentiality (Hattie & Timperley, 2007). It is also true that more essential aspects of human/machine interaction need to be investigated in greater depth, such as: (1) the difference between cognitive and affective feedback; (2) the possibility of generating negative emotions or anxiety; and (3) the type of educational activities most suitable to be converted into a chatbot experience. In this sense, previous studies have shown that there is no substantial improvement in educational processes with the application of virtual agents (Heidig & Clarebout, 2011), or only a small positive effect on learning performance (Schroeder et al., 2013), and that these possible benefits depend on a variety of conditions and on the specific pedagogical features that agents should have (Schroeder et al., 2017). Some studies have shown that this tutor or coach function is more useful with novice learners (Wang et al., 2008), such as the ones included in this research.

Conclusion

Chatbots can become very useful resources for the teaching and learning of first and second languages. Conversational narratives can be designed for the practice and improvement of communicative and linguistic skills: written expression, reading comprehension, speaking and listening. Students can use them as a tutor for mobile learning of those contents and skills that require continuous practice and constant feedback. They are also scalable and adaptable to different learning rhythms and styles, and they can be integrated into virtual learning environments, which generates more adaptable and open educational scenarios. This type of resource connects with the lines of pedagogical research related to microlearning and allows a kind of conversational activity that can motivate students and increase their interest in studying. Nano-contents such as punctuation, which require a continuous learning process based on a diversity of activities, can be addressed with chatbots in a more open and flexible way. Chatbots have therefore emerged as a novel technology with a wide spectrum of commercial, but also social and educational, applications. One of their great potentials is associated with their ubiquitous mobile use from any device and the recent possibility of creating and designing a chatbot with minimal computing knowledge through open tools using "drag and drop" builder systems. Furthermore, for teachers, a chatbot offers great possibilities due to the configuration of its learning analytics, which allows the teacher to obtain a snapshot of each student's academic performance with minimal effort; this feedback can then be offered to the student to monitor their own learning.

This research shows how chatbots can be used as a teaching tool that provides versatile scenarios for promoting self-regulated learning. In addition, their conversational nature allows a more dynamic and participatory experience in virtual learning environments. These environments are highly customizable and scalable, allowing teachers to design educational experiences based on different contents and competences. In this sense, with a little training a teacher can design an educational itinerary with a chatbot and offer students a much richer educational environment in which to study outside the classroom. The key aspect resides in identifying those contents and competences whose translation into a conversational narrative could have the greatest effect on learning.

Limitations

Although our findings are encouraging and useful, we would like to point out the following major limitations of the study. First, our study focused only on a local sample, and more in-depth analysis in other educational contexts is needed. Second, the research was approached from a blended learning model; further analysis of fully online and face-to-face teaching methods should therefore be undertaken. These initial results should be replicated in similar experiences to verify their possible didactic applicability in other educational stages and with students of different ages. Finally, the results could also be affected by contextual factors, such as the professional and personal experience of the students, that were not taken into consideration in this research.

Availability of data and materials

The data that support the findings of this study are available from the corresponding author upon reasonable request.

References

  • Ali, S. S., Amin, T., & Ishtiaq, M. (2020). Punctuation errors in writing: A comparative study of students’ performance from different Pakistani universities. Sir Syed Journal of Education & Social Research, 3(1), 165–177. https://doi.org/10.36902/sjesr-vol3-iss1-2020(165-177)


  • Angelillo, J. (2002). Teaching young writers to use punctuation with precision and purpose. Profile Books.


  • Araujo, T. (2018). Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in Human Behavior, 85, 183–189. https://doi.org/10.1016/j.chb.2018.03.051


  • Bailey, D. (2019). Chatbots as conversational agents in the context of language learning. Proceedings of the Fourth Industrial Revolution and Education, pp 32–41. Dajeon, South Korea.

  • Beale, R., & Creed, C. (2009). Affective interaction: How emotional agents affect users. International Journal of Human-Computer Studies, 67, 775–776. https://doi.org/10.1016/j.ijhcs.2009.05.001


  • Benotti, L., Martinez, M. C., & Schapachnik, F. (2018). A tool for introducing computer science with automatic formative assessment. IEEE Transactions on Learning Technologies, 11(2), 179–192. https://doi.org/10.1109/TLT.2017.2682084


  • Bentivoglio, C. A., Bonura, D., Cannella, V., Carletti, S., Pipitone, A., Pirrone, R., Rossi, P. G., & Russo, G. (2010). Agenti intelligenti supporto dell’interazione con l’utente all’interno di processi di apprendimento. Journal of e-Learning and Knowledge Society, 2(6), 27–36


  • Bii, P. (2013). Chatbot technology: A possible means of unlocking student potential to learn how to learn. Educational Research, 4(2), 218–221


  • Blei, D. M., Andrew, Y. N., & Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3(4–5), 993–1022


  • Bram, B. (1995). Write well improving writing skills. Kanisius.


  • Bruck, P. A., Motiwalla, L., & Foerster, F. (2012). Mobile learning with micro-content: A framework and evaluation. Proceedings of the 25th Bled eConference, 527–543. Bled, Slovenia.

  • Bruni, E., Tran, N. K., & Baroni, M. (2014). Multimodal distributional semantics. Journal of Artificial Intelligence Research, 49, 1–47. https://doi.org/10.1613/jair.4135


  • Budan, I. A., & Graeme, H. (2006). Evaluating WordNet-based measures of semantic distance. Computational Linguistics, 32(1), 13–47. https://doi.org/10.1162/coli.2006.32.1.13


  • Bullinaria, J. A., & Levy, J. P. (2012). Extracting semantic representations from word cooccurrence statistics: Stop-lists, stemming and svd. Behavior Research Methods, 44, 890–907. https://doi.org/10.3758/s13428-011-0183-8


  • Cabero, J., & Ruiz-Palmero, J. (2018). Technologies of information and communication for inclusion: Reformulating the “digital gap.” IJERI: International Journal of Educational Research and Innovation, 9, 16–30


  • Cabero, J., Vázquez-Cano, E., López-Meneses, E., & Jaén-Martínez, A. (2020). Posibilidades formativas de la tecnología aumentada. Un estudio diacrónico en escenarios universitarios. Revista Complutense De Educación, 31(2), 143–154. https://doi.org/10.5209/rced.61934


  • Caddéo, S. (1998). L’usage de la ponctuation chez les enfants. In J.-M. Defays, L. Rosier, & F. Tilkin (Eds.), Actes du colloque international et interdisciplinaire de Liège: A qui appartient la ponctuation? (pp. 255–274). De Boeck.


  • Cassany, D. (1999). Puntuación: Investigaciones, concepciones y didáctica. Letras, 58, 21–54


  • Chen, J. A., Tutwiler, M. S., Metcalf, S. J., Kamarainen, A., Grotzer, T., & Dede, C. (2016). A multi-user virtual environment to support students’ self-efficacy and interest in science: A latent growth model analysis. Learning and Instruction, 41, 11–22. https://doi.org/10.1016/j.learninstruc.2015.09.007


  • Ciechanowski, L., Przegalinska, A., & Wegner, K. (2018). The necessity of new paradigms in measuring human–chatbot interaction. In M. Hoffman (Ed.), Advances in cross-cultural decision making. (pp. 205–214). Springer.


  • Colace, F., Santo, M. D., Lombardi, M., Pascale, F., Pietrosanto, A., & Lemma, S. (2018). Chatbot for e-learning: A case of study. International Journal of Mechanical Engineering and Robotics Research, 7(5), 528–533. https://doi.org/10.18178/ijmerr.7.5.528-533


  • Coniam, D. (2008). Evaluating the language resources of chatbots for their potential in English as a second language. ReCALL, 20(01), 98–116. https://doi.org/10.1017/S0958344008000815


  • Coniam, D. (2014). The linguistic accuracy of chatbots: Usability from an ESL perspective. Text & Talk, 34(5), 545–567. https://doi.org/10.1515/text-2014-0018


  • Cordova, D. I., & Lepper, M. R. (1996). Intrinsic motivation and the process of learning: Beneficial effects of contextualization, personalization, and choice. Journal of Educational Psychology, 88(4), 715–730. https://doi.org/10.1037/0022-0663.88.4.715


  • Crown, S., Fuentes, A., Jones, R., Nambiar, R., & Crown, D. (2010). Ann G. Neering: Interactive chatbot to motivate and engage engineering students. American Society for Engineering Education, 15(1), 1–13


  • Daffern, T., & Mackenzie, N. (2015). Building strong writers: Creating a balance between the authorial and secretarial elements of writing. Literacy Learning: the Middle Years, 23(1), 23–32


  • Fang, Z., & Wang, Z. (2011). Beyond rubrics: Using functional language analysis to evaluate student writing. Australian Journal of Language and Literacy, 34(2), 147–165


  • Farkash, Z. (2018). Education Chatbot: 4 ways chatbots are revolutionizing education. Chatbot Magazine. https://chatbotsmagazine.com/education-chatbot-4-ways-chatbots-arerevolutionizing-education-33f36627964c

  • Feng, Y., Bagheri, E., Ensan, F., & Jovanovic, J. (2017). The state of the art in semantic relatedness: A framework for comparison. Knowledge Engineering Review, 32, 1–30. https://doi.org/10.1017/S0269888917000029


  • Ferreiro, E. (1999). Cultura escrita y educación. Conversaciones con Emilia Ferreiro. Fondo de Cultura Económica.

  • Ferreiro, E., & Teberosky, A. (1979). Los sistemas de escritura en el desarrollo del niño. Siglo XXI.

  • Fryer, L. K., & Carpenter, R. (2006). Bots as language learning tools. Language Learning and Technology, 10(3), 8–14. http://llt.msu.edu/vol10num3/emerging/

  • Fryer, L. K., Nakao, K., & Thompson, A. (2019). Chatbot learning partners: Connecting learning experiences, interest and competence. Computers in Human Behavior, 93, 279–289. https://doi.org/10.1016/j.chb.2018.12.023


  • Fuente, M. (1993). Los signos de puntuación: Normativa y uso. Universidad de Valladolid.


  • Garcia Brustenga, G., Fuertes-Alpiste, M., & Molas-Castells, N. (2018). Briefing paper: Los chatbots en educación. eLearn Center. Universitat Oberta de Catalunya.

  • García-Valdecasas, J. (2011). Agent-based modelling: A new way of exploring social phenomena. Revista Española De Investigaciones Sociológicas, 136, 91–110. https://doi.org/10.5477/cis/reis.136.91


  • Ghose, S., & Barua, J. (2013). Toward the implementation of a topic specific dialogue based natural language chatbot as an undergraduate advisor. Proceedings of the International Conference on Informatics, Electronics and Vision, 1–5. Dhaka, Bangladesh. doi: https://doi.org/10.1109/ICIEV.2013.6572650

  • Giurgiu, L. (2017). Microlearning an evolving elearning trend. Scientific Bulletin, 22(1), 18–23. https://doi.org/10.1515/bsaft-2017-0003


  • Goda, Y., Yamada, M., Matsukawa, H., Hata, K., & Yasunami, S. (2014). Conversation with a chatbot before an online EFL group discussion and the effects on critical thinking. The Journal of Information and Systems in Education, 13(1), 1–7. https://doi.org/10.12937/ejsise.13.1


  • Grossman, J., Lin, Z., Sheng, H., Wei, J. T.-Z., Williams, J. J., & Goel, S. (2019). MathBot: Transforming online resources for learning math into conversational interactions. http://logical.ai/story/papers/mathbot.pdf

  • Gupta, S., & Jagannath, K. (2019). Artificially intelligently (AI) tutors in the classroom: A need assessment study of designing chatbots to support student learning. Proceedings of the Twenty-Third Pacific Asia Conference on Information Systems, 1–8. Chicago, United States.

  • Hasler, B. S., Tuchman, P., & Friedman, D. (2013). Virtual research assistants: Replacing human interviewers by automated avatars in virtual worlds. Computers in Human Behavior, 29, 1608–1616. https://doi.org/10.1016/j.chb.2013.01.004


  • Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77, 81–112


  • Heidig, S., & Clarebout, G. (2011). Do pedagogical agents make a difference to student motivation and learning? Educational Research Review, 6, 27–54. https://doi.org/10.1016/j.edurev.2010.07.004


  • Hill, J., Ford, W. R., & Farreras, I. G. (2015). Real conversations with artificial intelligence: A comparison between humanehuman online conversations and humanechatbot conversations. Computers in Human Behavior, 49, 245–250. https://doi.org/10.1016/j.chb.2015.02.026


  • Ho, A., Hancock, J., & Miner, A. S. (2018). Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot. Journal of Communication, 68(4), 712–733. https://doi.org/10.1093/joc/jqy026


  • Hsu, H.-C.K., Wang, C. V., & Levesque-Bristol, C. (2019). Reexamining the impact of self-determination theory on learning outcomes in the online learning environment. Education and Information Technologies, 24(3), 2159–2174. https://doi.org/10.1007/s10639-019-09863-w


  • Huang, W., Hew, K. F., & Gonda, D. E. (2019). Designing and evaluating three chatbot enhanced activities for a flipped graduate. International Journal of Mechanical Engineering and Robotics Research, 8(5), 813–818. https://doi.org/10.18178/ijmerr.8.5.813-818


  • Io, H. N., & Lee, C. B. (2018). Chatbots and conversational agents: A bibliometric analysis. Proceedings of the IEEE International Conference on Industrial Engineering and Engineering Management, 215–219. Singapore.

  • Jeno, L. M., Adachi, P. J., Grytnes, J. A., Vandvik, V., & Deci, E. L. (2019). The effects of m-learning on motivation, achievement and well-being: A self-determination theory approach. British Journal of Educational Technology, 50(2), 669–683. https://doi.org/10.1111/bjet.12657


  • Johnson, W. L., & Lester, J. C. (2016). Face-to-face interaction with pedagogical agents, twenty years later. International Journal of Artificial Intelligence in Education, 26, 25–36


  • Jomah, O., Masoud, A. K., Kishore, X. P., & Aurelia, S. (2016). Micro learning: A modernized education system. BRAIN Broad Research in Artificial Intelligence and Neuroscience, 7(1), 103–110


  • Jones, M., & Mewhort, D. (2007). Representing word meaning and order information in a composite holographic lexicon. Psychological Review, 114(1), 1–37. https://doi.org/10.1037/0033-295X.114.1.1


  • Klevjer, R. (2006). What is the avatar? Fiction and embodiment in avatar-based single player computer games. Dissertation for the degree doctor rerum politicarum. University of Bergen.

  • Klopfenstein, L. C., Delpriori, S., Malatini, S., & Bogliolo, A. (2017). The rise of bots: A survey of conversational interfaces, patterns, and paradigms. Proceedings of the 2017 Conference on Designing Interactive Systems, DIS '17, 555–565. New York, United States. https://doi.org/10.1145/3064663.3064672

  • Labuhn, A. S., Zimmerman, B. J., & Hasselhorn, M. (2010). Enhancing students’ self-regulation and mathematics performance: The influence of feedback and self-evaluative standards. Metacognition and Learning, 5(2), 173–194. https://doi.org/10.1007/s11409-010-9056-2


  • Liew, T., Mat Zin, N., & Sahari, N. (2017). Exploring the affective, motivational and cognitive effects of pedagogical agent enthusiasm in a multimedia learning environment. Human-Centric Computing and Information Sciences, 7(1), 1–21. https://doi.org/10.1186/s13673-017-0089-2


  • Liu, Q., Huang, J., Wu, L., Zhu, K., & Ba, S. (2019). CBET: Design and evaluation of a domain-specific chatbot for mobile learning. Universal Access in the Information Society. https://doi.org/10.1007/s10209-019-00666-x


  • López-Meneses, E., Sirignano, F. M., Vázquez-Cano, E., & Ramírez-Hurtado, J. M. (2020). University students’ digital competence in three areas of the DigCom 2.1 model: A comparative study at three European universities. Australasian Journal of Educational Technology, 36(3), 69–88. https://doi.org/10.14742/ajet.5583


  • Macken-Horarik, M., & Sandiford, C. (2016). Diagnosing development: A grammatics for tracking student progress in narrative composition. International Journal of Language Studies, 10(3), 61–94


  • Mohammed, G. S., & Wakil, K. (2018). The effectiveness of microlearning to improve students’ learning ability. International Journal of Educational Research Review, 3(3), 32–38


  • Nikou, S. A. (2019). A micro-learning based model to enhance student teachers’ motivation and engagement in blended learning. Proceedings of the SITE 2019, Society for Information Technology and Teacher Education, 255–260. Las Vegas, United States.

  • Nikou, S. A., & Economides, A. A. (2017). Mobile-based assessment: Integrating acceptance and motivational factors into a combined model of self-determination theory and technology acceptance. Computers in Human Behavior, 68, 83–95. https://doi.org/10.1016/j.chb.2016.11.020


  • Nikou, S. A., & Economides, A. A. (2018). Mobile-based micro-learning and assessment: Impact on learning performance and motivation of high school students. Journal of Computer Assisted Learning, 34(3), 269–278. https://doi.org/10.1111/jcal.12240


  • Paschoal, L. N., Turci, L. F., Conte, T. U., & Souza, S. R. S. (2019). Towards a conversational agent to support the software testing education. Proceedings of the XXXIII Brazilian Symposium on Software Engineering, 57–66. Curitiba, Brazil.

  • Polo, J. (1990). Manifiesto ortográfico de la lengua española. Visor.

  • Procter, M., Lin, F., & Heller, B. (2012). Intelligent intervention by conversational agent through chatlog analysis. Smart Learning Environments, 5(30), 1–15. https://doi.org/10.1186/s40561-018-0079-5


  • Reyes-Reina, D., Vilaça, L., Spolidorio, S., & Martins, M. (2019). El desarrollo sociotécnico de un chatbot o ¿Cómo se construye una caja negra? Revista Tecnologia e Sociedade, 16(39), 23–40

  • Rosenthal, R. (1991). Effect sizes: Pearson’s correlation, its display via the BESD, and alternative indices. American Psychologist, 46(10), 1086–1087

  • Rosenthal, R., & Rubin, D. B. (1982). A simple, general purpose display of magnitude of experimental effect. Journal of Educational Psychology, 74(2), 166–169. https://doi.org/10.1037/0022-0663.74.2.166

  • Ruan, S., Willis, A., Xu, Q., Davis, G. M., Jiang, L., Brunskill, E., & Landay, J. A. (2019). BookBuddy. Proceedings of the Sixth ACM Conference on Learning @ Scale - L@S '19, 1–4. New York, United States. https://doi.org/10.1145/3330430.3333643

  • Schroeder, N., Adesope, O., & Gilbert, R. (2013). How effective are pedagogical agents for learning? A meta-analytic review. Journal of Educational Computing Research, 49(1), 1–39. https://doi.org/10.2190/ec.49.1.a

  • Schroeder, N. L., Romine, W. L., & Craig, S. D. (2017). Measuring pedagogical agent persona and the influence of agent persona on learning. Computers & Education, 109, 176–186. https://doi.org/10.1016/j.compedu.2017.02.015

  • Scull, J., & Mackenzie, N. M. (2018). Developing authorial skills: Child language leading to text construction, sentence construction and vocabulary development. In N. M. Mackenzie & J. Scull (Eds.), Understanding and supporting young writers from birth to 8. (pp. 89–115). Routledge.

  • Sha, G. (2009). AI-based chatterbots and spoken English teaching: A critical analysis. Computer Assisted Language Learning, 22(3), 269–281. https://doi.org/10.1080/09588220902920284

  • Shail, M. S. (2019). Using micro-learning on mobile applications to increase knowledge retention and work performance: A review of literature. Cureus, 11(8), e5307. https://doi.org/10.7759/cureus.5307

  • Sheth, A., Yip, H. Y., Iyengar, A., & Tepper, P. (2019). Cognitive services and intelligent chatbots: Current perspectives and special issue introduction. IEEE Internet Computing, 23(2), 6–12

  • Shum, H.-Y., He, X., & Li, D. (2018). From Eliza to XiaoIce: Challenges and opportunities with social chatbots. Frontiers of Information Technology & Electronic Engineering, 19(1), 10–16. https://doi.org/10.1631/FITEE.1700826

  • Smutny, P., & Schreiberova, P. (2020). Chatbots for learning: A review of educational chatbots for the Facebook Messenger. Computers & Education, 151, 103862. https://doi.org/10.1016/j.compedu.2020.103862

  • Stickler, U., & Hampel, R. (2015). Transforming teaching: New skills for online language learning spaces. Palgrave Macmillan.

  • Subramaniam, N. K. (2019). Teaching & learning via chatbots with immersive and machine learning capabilities. Proceedings of the ICE 2019 Conference, 145–156. Jyväskylä, Finland.

  • Tamayo, P. A., Herrero, A., Martín, J., Navarro, C., & Tránchez, J. M. (2020). Design of a chatbot as a distance learning assistant. Open Praxis, 12(1), 145–153. https://doi.org/10.5944/openpraxis.12.1.1063

  • Taraban, R. (2018). Practicing metacognition on a chatbot. Improve with Metacognition. http://www.improvewithmetacognition.com/2035-2/

  • Tegos, S., Demetriadis, S., & Tsiatsos, T. (2014). A configurable conversational agent to trigger students’ productive dialogue: A pilot study in the CALL domain. International Journal of Artificial Intelligence in Education, 24(1), 62–91. https://doi.org/10.1007/s40593-013-0007-3

  • Tegos, S., Psathas, G., Tsiatsos, T., & Demetriadis, S. (2019). Designing conversational agent interventions that support collaborative chat activities in MOOCs. Proceedings of EMOOCs 2019: Work in Progress Papers of the Research, Experience and Business Tracks, 66–71. Naples, Italy.

  • Thompson, A., Gallacher, A., & Howarth, M. (2018). Stimulating task interest: Human partners or chatbots? Proceedings of the Future-proof CALL: language learning as exploration and encounters, 302–306. Jyväskylä, Finland.

  • Van Rosmalen, P., Eikelboom, P., Bloemers, E., Van Winzum, K., & Spronck, P. (2012). Towards a game-chatbot: Extending the interaction in serious games. Proceedings of 6th European Conference on Games Based Learning, 1–8. Cork, Ireland.

  • Vázquez-Cano, E. (2012). Mobile learning with Twitter to improve linguistic competence at secondary schools. The New Educational Review, 29(3), 134–147

  • Vázquez-Cano, E. (2014). Mobile distance learning with smartphones and apps in higher education. Educational Sciences: Theory & Practice, 14(4), 1–16. https://doi.org/10.12738/est.2014.4.2012

  • Vázquez-Cano, E., Fombona, J., & Fernández, A. (2013). Virtual attendance: Analysis of an audiovisual over IP system for distance learning in the Spanish Open University (UNED). The International Review of Research in Open and Distance Learning (IRRODL), 14(3), 402–426. https://doi.org/10.19173/irrodl.v14i3.1430

  • Vázquez-Cano, E., Holgueras, A. I., & Sáez-López, J. M. (2018). An analysis of the orthographic errors found in university students’ asynchronous digital writing. Journal of Computing in Higher Education, 31(1), 1–20. https://doi.org/10.1007/s12528-018-9189-x

  • Vijayakumar, R., Bhuvaneshwari, B., Adith, S., & Deepika, M. (2019). AI based student bot for academic information system using machine learning. International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 5(2), 590–596. https://doi.org/10.32628/CSEIT1952171

  • Wang, N., Johnson, W. L., Mayer, R. E., Rizzo, P., Shaw, E., & Collins, H. (2008). The politeness effect: Pedagogical agents and learning outcomes. International Journal of Human Computer Studies, 66, 96–112. https://doi.org/10.1016/j.ijhcs.2007.09.003

  • Weizenbaum, J. (1966). ELIZA—A computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45

  • Wing Jan, L. (2009). Write ways: Modelling writing forms. Oxford University Press.

  • Winkler, R., & Soellner, M. (2018). Unleashing the potential of chatbots in education: A state-of-the-art analysis. Proceedings of the 78th Academy of Management Annual Meeting, 1–40. Chicago, Illinois.

Acknowledgements

Not applicable.

Funding

This research was developed with the support of the R&D&I project “Gamification and ubiquitous learning in Primary Education. Development of a map of teaching, learning and parental competences and resources (GAUBI)” (RTI2018-099764-B-100) (MICINN/FEDER), funded by the ERDF (European Regional Development Fund) and the Ministry of Science, Innovation and Universities of Spain.

Author information

Contributions

Conceptualization: EV-C; Methodology: EV-C and SM-A; Analysis: SM-A, EV-C and EL-M; Writing – original draft preparation: EV-C; Writing – review and editing: SM-A, EV-C and EL-M. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Esteban Vázquez-Cano.

Ethics declarations

Competing interests

The authors declare that they have no competing interests. All authors have approved the manuscript and agree with its submission to the International Journal of Educational Technology in Higher Education. This manuscript has not been published and is not under consideration for publication elsewhere.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Vázquez-Cano, E., Mengual-Andrés, S. & López-Meneses, E. Chatbot to improve learning punctuation in Spanish and to enhance open and flexible learning environments. Int J Educ Technol High Educ 18, 33 (2021). https://doi.org/10.1186/s41239-021-00269-8

Keywords