1 Introduction

In the field of education, advances in computing have been driving the research and development of learning analytics (Gašević et al., 2022; Kaliisa et al., 2021; Rose, 2018). Learning analytics refers to “the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs” (Siemens & Baker, 2012, pp. 252–253). These data comprise a range of measurements of participant activity recorded in digital learning environments. Learning analytics integrates different methods, techniques, and algorithms to analyze these measurements (Kew & Tasir, 2022), making it easier to understand and evaluate teaching and learning processes in such settings. In this way, educational agents can reflect and make sound educational decisions to improve learning environments and processes (Ifenthaler, 2017; Karaoglan et al., 2022; Larusson & White, 2014; Mu et al., 2019; Siemens & Baker, 2012).

Learning analytics based on multidimensional data (Mangaroska et al., 2021) can be summarized in a dashboard tailored to the type of user, which simplifies the execution and interpretation of analytics on activity in an educational context (Aguilar et al., 2021; Govaerts, 2012; Steiner et al., 2014; Verbert et al., 2014). Much as cockpit instruments guide a pilot through fog, these dashboards provide summarized, relevant information on the activity of educational agents at a glance, amid a “data fog”. The aim is to streamline and optimize educational agents’ decision-making to improve learning and teaching processes. To effectively catalyze educational processes, it is recommended that learning analytics dashboards (LADs) be designed according to a learning theory that guides pedagogical action (e.g., Buckingham & Deakin, 2012; Zheng et al., 2021).

Different educational theoretical frameworks have shown interest in the benefits and promise that this technology offers to teaching, evaluation, and learning processes (Diez-Gutiérrez & Gajardo, 2021; Fernández-Miranda, 2022). One of these frameworks is Knowledge Building (Scardamalia & Bereiter, 1991), a theory that aims to empower students through the collective improvement of ideas. From the perspective of Knowledge Building theory, analytics can be understood as a tool that provides concurrent information supporting the collective creation of knowledge by teachers and students (Chen & Zhang, 2016). A wide range of valuable technological tools have been developed so far (see Zhu & Kim, 2017). In this article, we present a dashboard developed within the conceptual framework of Knowledge Building that provides an original and complementary perspective to previous software. Based on previous studies, we consider that this dashboard helps researchers, teachers, and students understand key information about the activity associated with knowledge construction in a Knowledge Building community (Gutiérrez-Braojos et al., 2019, 2021).

2 Knowledge Building

The theory of Knowledge Building was created in the late twentieth century (Bereiter & Scardamalia, 2014) within a socio-constructivist framework. Since then, studies on Knowledge Building have grown exponentially, especially in the educational field (Gutiérrez-Braojos et al., 2020, 2022). This has generated continuous reflection aimed at improving the pedagogy and technology associated with Knowledge Building, in order to efficiently support its implementation in the classroom, as well as the theory itself. We therefore understand Knowledge Building as an academic and professional proposal, associated with the fields of education and computing, that encompasses the improvement of theory, pedagogy, and educational technology to generate Knowledge Building communities.

KB communities consist of members who collaborate with their peers to improve ideas about authentic knowledge problems (Bereiter & Scardamalia, 2014; Chuy et al., 2011; Zhang et al., 2011). An idea is defined as “a unit of thought that can be a question, an explanation, an observation, or an opinion” (Lee & Tan, 2020, p. 173). Knowledge Building theory advocates that ideas be shared and accessible to the community through contributions in a public space, usually supported by a virtual environment. To this end, Scardamalia (2004) developed the Knowledge Forum (based on CSILE; Scardamalia et al., 1989), a multimedia community knowledge space designed to facilitate the shared construction of knowledge. This software provides a space to upload contributions and read them at any time. In addition, it makes it possible to visualize how contributions are linked to each other and to select the ideas most relevant to the development of constructive discourse.

Knowledge Building theory distinguishes between two ways of approaching knowledge: a belief mode and a design mode (Bereiter & Scardamalia, 2003). On the one hand, the belief mode can be subdivided into an uncritical and a critical approach, i.e., classrooms that push students to consider knowledge as ideas that they must know, reproduce, or apply depending on the circumstances; in the best-case scenario, students are allowed to decide whether to accept or reject these ideas. On the other hand, the design mode focuses on the usefulness, adequacy, improvement, and potential development of ideas. These ways of approaching knowledge portray two different types of educational centers: those that encourage students to learn to improve ideas, and those that seek to teach knowledge beliefs presented to students as immutable truths (see Bereiter & Scardamalia, 2014).

According to Knowledge Building theory, in a knowledge-based society the educational system must empower students so that, in addition to a belief mode, they adopt a design mode that allows them to propose and reflect on ideas, question the value of those ideas with regard to the subject’s purpose, and consider how they can be improved. In Knowledge Building, students can adopt a belief-based approach from which to search for and select authoritative sources and use evidence in building their knowledge, but a belief mode alone is not enough. Learners must participate in communities whose knowledge is supported by progressive discourse, so that they have the option to share, negotiate, and argue about how to improve ideas. As Bereiter and Scardamalia (2003, p. 6) point out, “Knowledge Building, offers the possibility of integrating all the approaches into an overarching learning environment that provides fuller and more authentic immersion in the actual life of a knowledge society”. In this way, working with knowledge in the classroom can be performed in a way analogous to the creative activity that scientists carry out with scientific knowledge. As Bereiter (1994, p. 10) points out, “Classroom discourse can be progressive in the same sense that science as a whole is progressive […]”.

Knowledge Building requires students to assume collective epistemic agency (Damsa et al., 2010; Scardamalia, 2002; Zhang et al., 2011). Students are conceived as proactive members of a community who must accept the responsibility to collaborate and participate in collective decision-making to improve ideas about a conceptual artifact. Instead of meeting rigid knowledge goals established by a teacher, students assume responsibility for identifying knowledge needs, proposing problems to work on, monitoring their actions, evaluating the progress of their ideas, and reflecting on new knowledge needs (Bereiter & Scardamalia, 2003; Chen & Zhang, 2016; Scardamalia, 2002; Yang et al., 2020). As Scardamalia and Bereiter (1999) state, “the nature of the work is essentially the same as that of a professional research group, with the students being the principal doers of the work”. Literature summarizing supranational efforts to establish the ideal goals of knowledge-based education is clear in affirming the relationship between agency and educational benefits (e.g., see OECD, 2019).

Different studies have found that Knowledge Building pedagogy generates high levels of collective epistemic agency in most students, regardless of educational level (e.g., Ma et al., 2016; Messina & Reeve, 2006; Sigin et al., 2015). Efforts to coordinate knowledge and individual skills contribute to building more effective communities that help advance collective and individual ideas and also improve transversal and specific skills (e.g., Oshima et al., 2018; Yang, 2019). There are also studies of student networks in Knowledge Building communities showing how small groups of students delegate cognitive responsibilities to their more committed peers (e.g., Lax et al., 2016; Mylläri et al., 2010; Author). Measurements of student activity in the online environment show that less engaged students are characterized by low and discontinuous activity patterns. In other words, they show a certain resistance to active work that is evidenced in the activity records. These students do not read notes written by their peers (Peters & Hewitt, 2010); do not select ideas and are therefore less aware of the value of shared ideas for advancing knowledge; write too few notes, or many non-productive ones, i.e., notes with repeated content or with low complexity and sophistication (Wise et al., 2013; Gutiérrez-Braojos et al., 2019); and show discontinuous activity when accessing the platform, reading notes, and contributing ideas (Cacciamani, 2012; Gutiérrez-Braojos et al., 2022).

Some authors indicate that changes in leadership reflect collective cognitive responsibility (Ma et al., 2016). However, it is important to understand that this should not mean an absence of collaborative activity aimed at improving the knowledge of the other members, or losing track of the progress made by peers. One important precondition for productive interactivity and knowledge building is engagement with the posts contributed by others (Wise et al., 2013). A lack of continuous engagement in the Knowledge Forum (reading, contributing and building on, and selecting ideas) can cause students to lose track of the progressive discourse of ideas. This translates into greater difficulty in building on previous knowledge, thus delegating the responsibility of improving ideas to the most committed peers (Gutiérrez-Braojos & Salmerón-Pérez, 2015).

3 Reflective Evaluation in KB to Foster Collective Agency

Recent studies underline that collective epistemic agency can benefit considerably from students and teachers collaborating in iterative evaluative sessions of collective reflection, concurrent with the inquiry process (Cacciamani et al., 2021; Yang et al., 2020). Knowledge Building recognizes the value of evaluations carried out from a participatory, concurrent, reflective, and transformative perspective. That is, evaluation must be nested in the process of building knowledge and must engage students themselves in order to realize its transformative power (van Aalst et al., 2015). As Scardamalia (2002) points out, evaluation in Knowledge Building is part of the community’s effort to advance knowledge. Students are encouraged to reflect on a set of working principles, the process of inquiry, and the achievement of knowledge, so that they are aware of the status of their learning and can create community feedback (Yang et al., 2016). Evaluative sessions can be valuable for establishing collective areas of knowledge construction, strengthening the identity of belonging to the community, and understanding non-productive community dynamics (lack of reading of others’ contributions, repetition of ideas, insufficient build-on, discontinuous and insufficient participation in the construction of notes, low effort to reflect on and select promising ideas, etc.). In other words, these reflective sessions allow students and teachers to learn to function more efficiently together in the collective building of knowledge (Scardamalia, 2002).

To support this evaluation, it is helpful for students and teachers to have access to appropriate information about the collective work with ideas. In addition, this information should be tailored to the level of understanding of those concerned. The intention is to turn evaluation into a tool at the service of the construction of shared knowledge. Although technologies are not a requirement in Knowledge Building, they can make it easier to extract data on students’ activity and ideas in the Knowledge Forum, facilitating reflections that lead to improvements according to the principles of Knowledge Building (Scardamalia & Bereiter, 2021).

4 Analytics for Knowledge Building Forum

Since the beginnings of Knowledge Building theory, teams of researchers and teachers have designed and built diverse technologies to support Knowledge Building in educational contexts (Scardamalia & Bereiter, 2006). A central technological artifact in the implementation of Knowledge Building classrooms has been the Knowledge Forum platform (Scardamalia, 2004), based on a pioneering Computer-Supported Collaborative Learning (CSCL) environment, the Computer-Supported Intentional Learning Environment (CSILE; Scardamalia et al., 1994). The Knowledge Forum offers an interface that supports progressive discourse through different tools. The software offers multiple ways to externalize one’s ideas: students write their ideas in notes or build-ons, but they can also attach ideas in other formats (visual, auditory), as well as links and documents that support their texts. These notes stand out (compared with those created on other platforms) because they allow the creation, configuration, and association of interaction scaffolds that express the intention of the note’s author (for example, my theory, this theory cannot explain, a better theory, need to understand, rise above). Notes can be read and responded to by other members (build-on) asynchronously. The Knowledge Forum interface makes it possible to view the connections between the notes that discuss a topic, or to organize them according to the interaction scaffolds used in their preparation. In addition, the Knowledge Forum allows flexible organization of notes in the online workspace, and several notes can be nested in a meta-note called a rise-above.

Furthermore, researchers and education professionals have collaborated on the development of learning analytics associated with the Knowledge Forum. Although some of these tools work with other environments, the Knowledge Forum is the environment most used when implementing Knowledge Building (Gutiérrez-Braojos et al., 2021). In this article, we do not intend to review such tools, because recent reviews have been carried out by members of the Institute for Knowledge Innovation and Technology (IKIT). Instead, we want to acknowledge some ideas summarized by these members. Chen and Zhang (2016) analyze tools associated with the Knowledge Forum that favor collective epistemic agency (e.g., the Promising Ideas tool, the Epistemic Discourse Moves tool, the Idea Thread Mapper) and explain their value in facilitating decision-making and progressive discourse. Zhu and Kim (2017) provide an up-to-date and detailed description of each learning analytics tool associated with the Knowledge Forum (authors, features, measurements, and functions). These authors identified a total of 13 publications that used analytical tools for Knowledge Building theory. These analytical technologies are or have been integrated into the Knowledge Forum (e.g., “Activity Dashboard”, “Analytic Toolkit for Knowledge Forum”, “Idea Thread Mapper”, “Promising Ideas tool”, “Semantic Overlap Tool”, “Social Network Tool”, and “Vocabulary Analyzer”), or are external but associated, or modular, in the sense that they are not currently incorporated into the Knowledge Forum software but use the data registered in it (e.g., “Knowledge Building Discourse Explorer (KBDeX)”, “Knowledge Community Analysis (KCA)”). The authors group these tools into four categories: activity monitoring tools, social network tools, discourse-level analysis, and meta-discourse-level analysis. Another classification could be based on the intended recipients (Knowledge Building researchers or experts, and students at different educational levels); under this criterion, a greater number of tools target research activity.

The work done so far on Knowledge Building analytics has considerable merit and value for the educational field, but Knowledge Building analytics are still being developed; in other words, there is still room for improvement (e.g., Chen et al., 2015; Lee & Tan, 2020; Oshima et al., 2012; Zhu & Kim, 2017). In particular, there are few analytical tools that facilitate the implementation of Knowledge Building pedagogy in classrooms. This study considers the development of a dashboard with simple visualizations that can help students and teachers know the activity and knowledge developed in the Knowledge Forum in a non-invasive way, avoiding long learning times. In the context of this study, patterns of Knowledge Building participation in which students resist exercising their epistemic agency and collective responsibility have repeatedly been found. In other words, some students, within an activity (e.g., reading, notes, […], selection of ideas), produce notes whose content is of little value to the community. In this study we set out to evaluate a dashboard by answering the following questions:

  1. Does analytic technology based on nominal information help students become more aware of the collective activity of knowledge building, without negatively interfering with individual motivation?

  2. How can the current version of the analytical technology be improved?

  3. Does implementing a dashboard that makes it easier for users to be aware of their activity on the platform interfere with collective epistemic agency and with the perception of skills development?

Consequently, in this study we evaluated a Knowledge Building Analytics Dashboard designed to facilitate reflective evaluation of Knowledge Building communities supported by the Knowledge Forum platform, with two objectives:

O1: Assess the students’ perception of the Dashboard and collect suggestions for improvement.

O2: Assess the effects of implementing this experience in terms of students’ perception of their improvement in collective epistemic agency and educational research skills.

5 Material and Method

This study follows a design-based research methodology (Brown, 1992), in which researchers approach educational research much as an engineer approaches the development of a new product (Scott et al., 2020). Collins et al. (2004) state that this method involves implementing a design in the context for which it was designed in order to assess its effects and revise the initial version in search of improvements. In other words, design-based research involves an iterative process in which the performance of an artifact is assessed against the purpose for which it was built, identifying promising ideas for its improvement. Mixed procedures and techniques of educational data collection and analysis are used for this purpose.

5.1 Description of the Software Prototype to be Evaluated

In a community, students are expected to actively contribute notes, read those made by their peers, and select ideas to learn, discuss, and build on their shared knowledge. This dashboard analyzes the activity of users of the Knowledge Forum (KF) platform through reading logs, notes, and note selections, and offers visualizations that make it easier for students to be aware of their activity in the KF.

5.2 Data Extraction, Transformation and Load

The Knowledge Forum is able to produce tabular data in which each row is a published note. Columns contain information about the author, citations, reads, promising ideas, etc. Even though this format is intuitive and easy to read for the learning community, it is difficult to use for complex, automated analyses.

An extraction, transformation, and load (ETL) process (Awiti et al., 2020) was implemented to automatically retrieve the tabular data contained in the KF and adapt its format and structure for efficient analysis through multi-dimensional Online Analytical Processing (OLAP) (Jensen et al., 2010). OLAP enables data analysis by intersecting different dimensions (who, when, where, how, etc.) and summarizing (sum, average, count, etc.) the data related to those dimensions. For instance, it allows counting the number of selected notes made at a certain time, by a subset of the KB community, and/or regarding a certain topic.
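To make this idea concrete, the following is a minimal sketch, in Python with pandas, of the kind of multidimensional summary OLAP enables. The column names (author, topic, week, reads, selected) are hypothetical and do not reproduce the actual KF export schema.

```python
# Minimal sketch of an OLAP-style summary over a hypothetical flat KF export:
# one row per published note, with author, topic, week, reads, and selection.
import pandas as pd

# Hypothetical columns; the real Knowledge Forum export schema may differ.
notes = pd.DataFrame({
    "note_id":  [1, 2, 3, 4],
    "author":   ["ana", "ben", "ana", "eva"],
    "topic":    ["assessment", "assessment", "sampling", "sampling"],
    "week":     [6, 6, 7, 7],
    "reads":    [12, 3, 8, 5],
    "selected": [True, False, True, True],
})

# OLAP-style slice: count selected notes per (week, topic, author),
# i.e., intersecting the "when", "where", and "who" dimensions.
cube = (notes[notes["selected"]]
        .groupby(["week", "topic", "author"])
        .size()
        .rename("selected_notes"))
print(cube)
```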

OLAP requires the information to be structured through a relational model, in which the analysis dimensions are tables related to the data to be summarized. Consequently, the implemented ETL process automatically structures the KF data as shown in Fig. 1. This kind of model is known as a star model (Golfarelli et al., 2009), given that the data to be analyzed are surrounded by the different analysis dimensions (Table 1).

Fig. 1 Extraction, transformation, and load

Table 1 Knowledge building activity analytics dashboard measures

Finally, the information is automatically loaded into a relational database, which serves as the information infrastructure for an OLAP cube implemented through PowerBI (Ferrari et al., 2017).
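As an illustration of this target structure, the following sketch creates a star model in an in-memory SQLite database. All table and column names are hypothetical and only approximate the schema shown in Fig. 1, not the actual implementation.

```python
# Sketch of a star model: a central fact table of activity events referencing
# "who", "when", and "where" dimension tables (hence the "star" shape).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_member (member_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_time   (time_id   INTEGER PRIMARY KEY, week INTEGER, date TEXT);
CREATE TABLE dim_topic  (topic_id  INTEGER PRIMARY KEY, topic TEXT, keyword TEXT);

-- Fact table: one row per activity event (read, note, build-on, selection).
CREATE TABLE fact_activity (
    member_id INTEGER REFERENCES dim_member(member_id),
    time_id   INTEGER REFERENCES dim_time(time_id),
    topic_id  INTEGER REFERENCES dim_topic(topic_id),
    activity  TEXT,     -- 'read' | 'note' | 'build_on' | 'selection'
    amount    INTEGER   -- measure to be summarized
);
""")
con.execute("INSERT INTO dim_time VALUES (1, 6, '2022-03-14')")
con.execute("INSERT INTO fact_activity VALUES (1, 1, 1, 'read', 12)")

# Example roll-up: total activity per week and activity type.
rows = con.execute("""
    SELECT t.week, f.activity, SUM(f.amount)
    FROM fact_activity f JOIN dim_time t USING (time_id)
    GROUP BY t.week, f.activity
""").fetchall()
print(rows)
```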

5.3 Dashboard Measures and KPI

The OLAP cube implemented in PowerBI allows the development of an interactive dashboard that displays information for each section, based on activity measures by topic and week.

The dashboard presents a set of Key Performance Indicators (KPIs), that is, summarized information resulting from certain calculations, used to check at a glance, through a chart, whether or not a certain learning goal has been accomplished. The set of KPIs and related charts considered in this work is presented in Table 2; an illustrative sketch of a KPI check follows the table.

Table 2 Key performance indicators
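Since Table 2 lists the actual KPIs, the following is only an illustrative sketch of how such an indicator works: a hypothetical reading target per member per week (READS_TARGET is an assumption, not a value from the study) is turned into an at-a-glance status flag.

```python
# Illustrative KPI check, assuming a hypothetical target: each member is
# expected to read at least READS_TARGET notes per week.
READS_TARGET = 10

def reads_kpi(weekly_reads: dict) -> dict:
    """Map each member's weekly read count to an at-a-glance status flag."""
    return {
        member: "on track" if n >= READS_TARGET else "needs attention"
        for member, n in weekly_reads.items()
    }

print(reads_kpi({"ana": 14, "ben": 4, "eva": 10}))
# {'ana': 'on track', 'ben': 'needs attention', 'eva': 'on track'}
```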

5.4 Dashboard Interface

The interface consists of 5 sections with 9 graphs that make it easy for members to be aware of the collective activity in the KF.

  (i) Sect. 1. Are we participating in the KF? It comprises two graphs. Graph 1 represents the readings made to understand the contributions of peers. Graph 2 represents build-on activity and note selection. Both graphs help students become aware of their efforts to get to know others’ ideas, to incorporate improvements or move the discourse forward, and to identify significant contributions to collective knowledge. In addition, the time axis makes it easier to see the amount of weekly activity as well as the continuity of students’ work.

  (ii) Sect. 2. Is everyone collaborating with each other? It comprises three graphs. Graph 3 provides a Lorenz curve that makes it easier to see how each activity measure is distributed among the members. Graph 4 provides an objective value of the level of inequality among members for each measure. Together, they make it possible to see whether the activities described in the previous section are evenly distributed among members or concentrated in a few students, which is relevant for understanding to what extent the community is collaborating (a computational sketch of the Lorenz curve and Gini index follows this list). Graph 5 identifies four levels of roles (from less to more activity) and reports the weekly transitions between them. These roles are: (i) peripheral participant, students who contribute little or nothing according to the KF participation records; (ii) casual participant, students with a low contribution in those records; (iii) continuous participant, students with an acceptable contribution; and (iv) student leaders, those at the top of the participation records in the KF.

  (iii) Sect. 3. Are we interacting with each other? Graph 6 shows patterns of interaction between students for each activity (whom I read, whom I build on, whom I select) and reveals whether there are isolated students in the community who need help. The purpose is to make students aware of the interaction network so they can discuss how to correct it if isolated students are found.

  (iv) Sect. 4. Who are the recognized members? It comprises two graphs. Graph 7 reports the impact or leadership of the members, based on the ratio between the number of notes they contribute and those selected by their peers. Graph 8 indicates whom each student, or working group, has impacted with their contributions.

  (v) Sect. 5. What notes have value to improve ideas? What is the value of our notes? Graph 9 classifies the notes according to the value given by the community, for each discussion topic and time period.
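As referenced in Sect. 2 above, the following sketch shows the statistics behind Graphs 3 and 4, computing Lorenz curve points and a Gini index from per-member activity counts. It mirrors the underlying idea, not the dashboard’s actual PowerBI implementation, and the toy data are invented.

```python
# Lorenz curve points and Gini index for one activity measure
# (e.g., notes per member), as visualized in Graphs 3 and 4.
import numpy as np

def lorenz_points(counts):
    """Cumulative member share vs. cumulative activity share."""
    x = np.sort(np.asarray(counts, dtype=float))
    cum = np.cumsum(x) / x.sum()             # cumulative activity share
    pop = np.arange(1, len(x) + 1) / len(x)  # cumulative member share
    return pop, cum

def gini(counts):
    """0 = perfectly even activity; values near 1 = highly concentrated."""
    x = np.sort(np.asarray(counts, dtype=float))
    n = len(x)
    return (2 * np.arange(1, n + 1) - n - 1).dot(x) / (n * x.sum())

notes_per_member = [0, 1, 1, 2, 3, 8, 15]    # toy data
pop, cum = lorenz_points(notes_per_member)
print(gini(notes_per_member))                # high value: activity concentrated
```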

The user can customize the analysis according to their interests by selecting the members, weeks, topics, subtopics (keywords), and types of ideas on which they want to run the data analysis in the KF. Figure 2 shows the dashboard graphics implemented through the PowerBI software.

5.5 Environments and Participants

A total of 126 undergraduate students (mean age = 20.57, SD = 2.27; 85.7% female), enrolled in an educational research course at the University of Granada (Spain) and distributed across two class groups, participated in this study. Participants worked for 16 weeks online, supported by the Knowledge Forum. The professor used the KB Activity Dashboard together with the students to create awareness of their participation habits in the KF on two occasions, 6 and 11 weeks after the beginning of the course. It was used only twice to avoid being overly invasive (new pedagogy, new KF technology, new dashboard, and new subject, all in a short period of time). Each of the charts was discussed in class with the purpose of creating awareness of the current situation and looking for ways to improve participation habits on the platform.

5.6 Data Collection and Instruments

5.6.1 Dashboard Rating

Several studies have used indicators to assess learning analytics from the perspective of students (Kokoç & Kara, 2021; Scheffel et al., 2014). To find out the students’ perception of the LAD, we applied a mixed questionnaire with 31 items organized into 5 sections (Gutiérrez-Braojos et al., 2021, Annex 1) during week 15 of the course. The first section addresses the participants’ perception of the degree to which the LAD facilitates greater awareness of user activity on the KF platform. The second section asks whether the LAD interferes with users’ motivation. The third section asks whether the LAD complies with ethical research principles. The fourth section requests an assessment of functional aspects of the technology. The first four sections use a Likert scale (“1” means “does not represent me at all”; “5” means “represents me totally”). The model was tested using the Amos software (Fig. 3), yielding acceptable fit values (Table 3). The dimensions presented significant correlations (p < 0.05) and the reliability values were acceptable (ωA = 0.92; ωM = 0.88; ωT = 0.85; ωE = 0.83). Additionally, a fifth section invites users to suggest and justify improvements to the LAD in each of the four previous sections (Annex 1).

Fig. 2 Dashboard: sections and charts. Example of the first reflective session

Table 3 Good fit indices

5.6.2 Collective Epistemic Agency

To understand students’ perceptions of their level of collective epistemic agency, we adopted the knowledge and collective epistemic agency (CEA) instrument developed by Zhang and colleagues (Zhang et al., 2019, 2021). This questionnaire consists of three dimensions: “Collaboration and sharing” (CS), “Team awareness” (TA), and “Collective efficacy” (EC). It was applied during weeks 6 and 15. The model was tested twice, once for each moment, using the Amos software (Fig. 4). The results in Table 4 show acceptable fit values (chi-square p-value, SRMR, RMSEA, and CFI) for each time, Moment 1 (CEAM1) and Moment 2 (CEAM2). The dimensions presented significant correlations (p < 0.05) regardless of the moment. In addition, the reliability values were acceptable for each dimension of the questionnaire at Moment 1 (ωCS1 = 0.82; ωTA1 = 0.76; ωEC1 = 0.72) and Moment 2 (ωCS2 = 0.94; ωTA2 = 0.95; ωEC2 = 0.88).

Fig. 3 Tested model of the dashboard validation questionnaire

Table 4 Good fit indices

5.6.3 Educational Research Skills

To find out the students’ perception of their level of educational research skills, the one-dimensional questionnaire of Holden et al. (1999) was applied during weeks 1 and 16. Reliability values were acceptable at both times (ω1 = 0.74; ω2 = 0.98).

6 Data Analysis

Various analyses were conducted in this study: (i) descriptive analyses; (ii) non-parametric comparative analyses; (iii) tests of the measurement structure through structural equation modeling; and (iv) path analysis. RStudio and Amos 22 were used to analyze the data.

7 Results

7.1 Student Assessment of the Tool

Likert responses to the questionnaire showed consensus. The students assessed the dashboard positively in its different dimensions (Fig. 5). According to the students, the dashboard facilitates awareness of their activity on the KF platform, positively affects their motivation to learn, respects users, and is technically sound. The items of the ethical dimension received the best evaluations. Additionally, a comparative analysis between both class groups found no significant differences.

Fig. 4 Tested model of the collective epistemic agency questionnaire for each moment

The answers to the first open question show that the students considered the dashboard an innovative artifact for this context that provided value: it allowed them to verify that their continuous work can be valued and used to understand more objectively how they are working collaboratively on the KF platform.

Excerpts:

S1: “we are not used to being told how our online participation is, we send in our work, and we only know that it has been received, and at the end of the course they give us the grade and the exam counts a lot. Like, if you think about it, in reality we are usually a bit lost, without knowing for sure if we are doing it right. This dashboard and the class reflection sessions have been key to situating ourselves”.

S2: “So far, we have not received any feedback like in class, that's new, they don't even tell us how we're working as a group, but rather individually. I think that evaluations of the class as a community or as a whole are original, and it also helps you think and reflect to improve, without singling anyone out”.

S3: “It is important to stop, think and discuss based on data and not impressions, about what things we have to improve both individually and as a group, and this is possible with the dashboard and the reflective sessions”.

The answers to the second open question fall into two groups of opinions regarding dashboard improvements. On the one hand, most students were satisfied and considered that the dashboard did not need any improvements. On the other hand, 49 students suggested improvements; of these, 38.8% suggested improving the graphs, except for Graphs 4 and 8. The open answers were categorized and included answers that differed according to the type of suggestion (insert or delete) and various justifications (complexity, functionality, or graphical representation). The answers were organized according to the dashboard’s sections and charts (Fig. 6).

Fig. 5 Students’ perceptions about the KF activity dashboard

As shown in Fig. 2, 10.2% of the 49 students proposed merging Graphs 1 and 2 in Sect. 1. This 10.2% breaks down into 6.13% who suggested merging the readings and selected-ideas graphs, and 4.09% who proposed merging the readings and contributions graphs. In either case, the proposal aims to show all this information summarized in a single graph.

Fig. 6 Dashboard improvements

In Sect. 2, of the 49 students, 85.71% suggested eliminating the Lorenz curve graph (Fig. 2). A total of 67.34% of these students justified this by referring to the complexity of the content (59.18% thought that it was difficult to interpret the relationships between the XY axes; 16.32% said that it was too much information in a single graph; and 2.04% indicated that it was too abstract). Of the 49 students, 57.14% considered that the graph is unnecessary and could be eliminated by having another one that provided similar information, but in a simpler way (i.e., Gini thermometer graph). In addition, 24.48% highlighted that they disliked the graph’s aesthetics and therefore suggested its removal. On the other hand, 4.09% suggested that incorporating text explaining the Lorenz curve could be helpful because it is especially difficult to understand.

In addition, regarding Graph 5 of Sect. 2, 61.22% of the 49 students suggested eliminating some element of the roles-and-transitions graph, for different reasons. A total of 51.02% considered that the information was of little use for improving their future activity, the role information being sufficient without data on transitions. In addition, 32.65% considered it appropriate to eliminate the arrows that connect and explain the transitions between moments, because the transition percentages are complex to understand and pack too much information into a single graph. On the other hand, 22.44% of the 49 students suggested improving graphic representation elements: 20.4% suggested improving the representation of the transition data, observing that the clarity and visualization of data size could be improved, and 2.04% considered that the color combination was fine but could be changed to a more attractive one.

In Sect. 3, 38.78% of the 49 students suggested including a feature that allows interacting with Graph 6 to explore information on particular cases. These students considered that nominal information is not enough; they reasoned that individual information is also relevant to regulate collaborative activity, for example, to identify members of the community who are not working.

Regarding Graph 7 of Sect. 4, 14.29% considered it interesting to be able to export a table or ranking of the impact of each student on a topic or during a period of time, to have a clearer idea of whom they can turn to, or whom they can help, in the KF.

In Sect. 5, one of the 49 students suggested incorporating a feature to export a list of promising ideas from Graph 9, organized by topic or keyword and time, that would allow seeing how ideas or concepts improve according to their value.

Finally, some students even suggested a new section: 6.13% of the 49 suggested including a graph that allows them to see more clearly how their individual activity is evolving or improving compared to the class group.

7.2 Effects of the Intervention on Educational Research Skills

First, to study the effects of using the dashboard, variables (collective epistemic agency and educational research skills) were compared at two moments, one before using the dashboard, and a second after using the dashboard.

The non-parametric Mann–Whitney U test showed no significant differences between the two class groups at either moment (Table 5). Furthermore, the results revealed statistically significant differences between moments in educational research skills and collective epistemic agency, with the values of Moment 2 being higher in both variables. For each variable, the Vargha-Delaney A effect size (bounded between 0 and 1) departs from 0.5 toward 0 (Table 5), indicating a large effect in favor of Moment 2.

Table 5 Students’ perceptions of collective epistemic agency and research skills
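For transparency about the reported statistics, the following sketch runs this kind of non-parametric comparison on simulated scores (not the study’s data), computing the Mann–Whitney U test with SciPy and deriving the Vargha-Delaney A effect size as U divided by the product of the sample sizes.

```python
# Mann-Whitney U plus the Vargha-Delaney A effect size on simulated scores.
# A = U / (n1 * n2) and can be read as P(Moment 1 score > Moment 2 score),
# so values near 0 indicate a large effect in favor of Moment 2.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
moment1 = rng.normal(3.2, 0.6, 126)   # simulated Likert-style scores
moment2 = rng.normal(3.9, 0.5, 126)

u, p = mannwhitneyu(moment1, moment2, alternative="two-sided")
vd_a = u / (len(moment1) * len(moment2))   # A near 0 favors Moment 2 here
print(f"U = {u:.1f}, p = {p:.4f}, Vargha-Delaney A = {vd_a:.3f}")
```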

A descriptive visual representation of the values for each variable at each moment can be found in Figs. 7 and 8. These paired-measurement graphs detail the evolution of individuals’ scores between the two moments. Although a large effect size is observed, cases with room for improvement are also visible at the second moment, i.e., participants still showing low scores in both variables.

Fig. 7 Paired differences between moments for collective epistemic agency

Fig. 8 Paired differences between moments for research skills self-efficacy

Subsequently, to study whether the information in the dashboard facilitated educational achievement, a path analysis was conducted relating awareness of the activity in the KF to epistemic agency, participation, and the development of educational research skills (Fig. 9); the model showed an adequate fit (Table 6).

Fig. 9 Relationships between main variables

Table 6 Good fit indices

The path analysis shows that being more aware of one’s knowledge-construction habits in the Knowledge Forum is positively related to greater collective epistemic agency, participation in the Knowledge Forum (reading other notes and building on previous knowledge), and educational research skills. All model relationships were significant (p < 0.001), except the one between Reading and Research Skills, which was retained to test the possible significance of the indirect effect. The effects can be seen in Table 7.

Table 7 Indirect, direct, and total effects
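The study estimated the path model in Amos; as a hedged sketch of the same logic, the following estimates each structural equation by ordinary least squares and obtains the indirect effect as the product of path coefficients. The data are simulated and the variable names (awareness, reading, skills) are hypothetical simplifications of the model in Fig. 9.

```python
# Sketch of path-analytic decomposition of effects via OLS regressions:
# indirect effect = a * b, total effect = direct + indirect.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 126
awareness = rng.normal(0, 1, n)
reading   = 0.5 * awareness + rng.normal(0, 1, n)                 # simulated path a
skills    = 0.1 * reading + 0.4 * awareness + rng.normal(0, 1, n) # paths b and c'
df = pd.DataFrame({"awareness": awareness, "reading": reading, "skills": skills})

# Path a: awareness -> reading
a = sm.OLS(df["reading"], sm.add_constant(df["awareness"])).fit()
# Paths b and c': reading -> skills, awareness -> skills
b = sm.OLS(df["skills"], sm.add_constant(df[["reading", "awareness"]])).fit()

indirect = a.params["awareness"] * b.params["reading"]   # a * b
direct = b.params["awareness"]                           # c'
print(f"indirect = {indirect:.3f}, direct = {direct:.3f}, "
      f"total = {indirect + direct:.3f}")
```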

8 Conclusions and Discussion

This design-based research study was conducted to evaluate a dashboard from the students’ perspective, with the aim of generating educational benefits. The dashboard was built to assess student activity on the Knowledge Forum platform. Both technologies were used with Knowledge Building pedagogy in an educational research subject taught to two class groups. In this case, the students (KB novices) did not handle the dashboard independently; it was facilitated by the professor in two reflective sessions during the course, with the purpose of helping them become aware of and transform their habits associated with the construction of knowledge in the Knowledge Forum.

The students answered several instruments to assess both the collaborative construction of knowledge and the development of skills for conducting educational research. The students also evaluated the tool and provided written suggestions for improving the dashboard. These suggestions are considered for the design of a new version of the assessment technology. We discuss the results of the study below.

On the one hand, the results of the assessment of the dashboard show general satisfaction. Students positively evaluated the use of the dashboard in the reflective sessions. The dashboard helped them to be objectively aware of the collective activity, making it possible to transform their habits of collaborative knowledge construction in the Knowledge Forum. These results confirm that, although students are not accustomed to such feedback, using technology for reflective and concurrent assessment in online environments generates positive educational effects when implementing Knowledge Building (e.g., Scardamalia, 2002; Siemens & Baker, 2012; Yang et al., 2020).

In addition, given that the purpose of this study was to improve the current version of the dashboard, the students were asked about possible improvements. Most students considered that the current version did not require any changes. However, approximately a third of the students suggested at least one improvement. In their opinion, all sections have room for improvement, especially Sect. 2 (Lorenz curve and role transitions), which received the majority of suggestions.

Improvements involve removing or modifying some element of the dashboard sections, due to complexity, functionality, and/or graphical representation (Table 8). In some sections or visualizations, the students report excessive complexity and request eliminating the graph altogether, especially when another, easier-to-use graph is available. In others, the students consider the information provided insufficient and request merging sections or graphs. According to the students, some graphs and sections offer repeated or irrelevant features or information that may not be needed to transform knowledge-construction habits, while some students consider it appropriate to incorporate more features. Finally, some students believe that the size and proportions of some graphs could lead to comprehension difficulties, while in other graphs they simply suggest modifying superficial aspects, such as the range of colors, for aesthetic rather than functional reasons. Table 8 summarizes the improvements according to each graph.

Table 8 Dashboard section improvements for a new design-based research cycle

Regarding the Knowledge Building experience, the results on the effects of implementing the assessment technology show notable benefits for collective epistemic agency and for the skills of the subject studied. This confirms the results of scientific work on Knowledge Building (e.g., Ma et al., 2016; Oshima et al., 2018; Sigin et al., 2015; Yang, 2019). However, we observe that there is still a very small group of students with room for improvement (Lax et al., 2016). Furthermore, a path analysis verified that the students who considered that the dashboard had improved their awareness of their activity did in fact report high collective epistemic agency, participation, reading, and educational research skills. In short, the results indicate that implementing Knowledge Building improved the students’ transversal transformative competences, such as collaborating to build knowledge, while also developing competences specific to a higher education subject, in this case educational research. Therefore, these results are in line with the scientific literature, which argues that spending time conducting reflective sessions supported by the dashboard, applied in such conditions, is educationally beneficial (e.g., Scardamalia, 2002; van Aalst et al., 2015).

In short, these results show that even though the technology presented in this study has some room for improvement, it positively contributes to implementing Knowledge Building, which in turn positively influences the development of research skills. Furthermore, we hope to improve the present version. The students’ perceptions of the dashboard provide a perspective that will help balance its complexity, functionalities, and aesthetic design for a hypothetically more efficient future version. Based on these conclusions, we consider it relevant to begin a new design research cycle to make decisions about improving the current version of the dashboard. In particular, we welcome the comments that requested: (i) merging the graphs in Sect. 1; (ii) exclusively using the Gini thermometer graph, eliminating the rest of the graphs in Sect. 2, to make the dashboard easier for students to understand; (iii) providing a list of selected or promising notes according to criteria of concepts, topics, and time (possibly editable); and (iv) providing a graph that allows a user to privately compare their individual activity with the collective average. In addition to these contributions, future studies will focus on other visualizations and features that must be subjected to a new design cycle to develop a version of the dashboard that makes it easier for students to improve collective ideas.

Certain issues arose in this study that could be addressed in future research. First, the duration of this experience was 16 weeks. This duration is an obligatory condition of our higher education context, and it could be insufficient for some students who, due to their characteristics, require more time to adapt to Knowledge Building as novices. In any case, and although no students were lost, the lower results of some disadvantaged students could be addressed in future studies by taking into account predictor variables that are relevant in the scientific literature, for example, prior knowledge, resistance towards active and collaborative learning, or the use of technology (Hew & Cheung, 2012). In addition, carrying out preventive and more specific measures (see Finelli et al., 2018) could perform an “enzymatic” function in these particular cases. Second, although each Knowledge Building experience has its own identity and flexible evolution, which means that comparative interpretations across contexts must be made with caution, future studies may consider quasi-experimental designs that provide internal validity, for example, including a control group without assessment technology to support reflective evaluation. Third, given the prototype nature of this technology and the duration of the experience, it was considered appropriate for the students not to manage the dashboard independently, but together with the professor at the two aforementioned moments of the course, to help them reflect on their knowledge-construction habits in the Knowledge Forum. Future studies could research the adaptation of these dashboards to independent, integrated use.