
1 Introduction

Starting with the London Communiqué, ministers recognised the need for a transition towards a student-centred approach to learning and teaching, acknowledging the role of students in the educational process. Their stated aim was for governments to ensure that higher education institutions (HEIs) have adequate resources to fulfil a complex range of purposes: preparing students for their future role in society, at work and at a personal level, while ensuring an advanced, knowledge-based educational system and stimulating research and innovation (London 2007). The Paris Communiqué also highlighted the importance of collaboration between states in enhancing innovation in learning and teaching (Paris 2018).

Defining student-centred learning (SCL) goes beyond agreeing on a single exhaustive definition. In attempting an overarching definition, one can only note the main novelties that SCL brought to the educational system. Besides shifting the perspective towards the student, it introduces the concept of students’ choice in their education and turns passive learning into active learning, while describing the shift in the power relationship between student and teacher (O’Neill and McMahon 2005).

As the importance of SCL has constantly grown, student satisfaction surveys have become a common reality in many universities across the European Higher Education Area (EHEA). These surveys are among the most efficient ways to assess students’ perspective on teaching and learning, as well as their perception of other elements of a higher education institution (Montserrat and Gummesson 2012). Starting from a point where only a small number of universities had implemented this kind of survey, several countries now conduct the exercise at national level. As the student experience is expected to follow the guiding principles of SCL from the day students enter campus (ESU and EI 2010; ESU 2018), more research is needed to assess their university experience (Taylor 2013).

Firstly, the present paper provides insight into the usefulness of a national student survey for the further development of the EHEA since, for the moment, such surveys are not common practice in the majority of member states. In order to see how these national student surveys could be extended to a larger number of EHEA countries, it was important to assess their relevance to the Bologna Process. Secondly, the paper analyses the connection between several ministerial communiqués and the content of the surveys. We compared focus points mentioned in the Paris Communiqué, some of which continue themes from previous communiqués, with the questions and topics that make up the selected student surveys. We also examined how these student surveys were developed and what their dimensionality is. The latter aspect matters because it shows how similar topics, such as learning and teaching, were condensed into a different number of questions from country to country, as a hallmark of the national perspective at that moment. We also identified some of the strengths and weaknesses that could guide improvement, especially in teaching and learning. For this, it was important to understand why and how some EHEA members developed a national-level student survey.

The current Standards and Guidelines for Quality Assurance in the European Higher Education Area (ESG) provide the framework for developing instruments that enhance quality assurance (QA), such as student surveys. As HEIs should publish their quality assurance policies (ESG 1.1), it is important to highlight that students should be involved in designing study programmes (ESG 1.2). Student-centred learning, teaching and assessment are also at the core of the ESG (1.3), and there are standards dedicated to teaching staff (ESG 1.5) and to learning resources and student support (ESG 1.6). Moreover, ESG 1.9 states that monitoring, reviewing or revising a study programme should include the evaluation of ‘student expectations, needs and satisfaction in relation to the programme’. The guidelines in the second part of the ESG, on external quality assurance, can also be related to a national student survey.

2 Methodology

In this study, we mostly used qualitative methods to address two major research questions:

  1. What are the particularities of developing and implementing a national student survey?

     a. How is a national student survey implemented?

     b. Who is in charge of the implementation, review and improvement process?

     c. Which categories of students are eligible?

     d. What is the period of implementation?

  2. Can such a national student survey be integrated throughout the Bologna Process in order to gather further data from EHEA member countries?

Our approach is innovative compared with previous research, which focused on a single survey rather than comparing several national student surveys (Callender et al. 2014; Damen and Hamberg 2015; Maskell and Collins 2017; Bótas and Brown 2013, among others). We used several research instruments:

  • Review of the scientific literature.

  • Desk research on the surveys’ public websites (including those of the organisations/institutions in charge of implementing the surveys).

  • Interviews with representatives of the organisations/institutions in charge of conducting and developing the student surveys (especially where information was not available, or not available in English).

In this regard, we analysed three national student surveys: the National Student Survey (United Kingdom) (NSS-UK), Studiebarometeret (Norway) and the National Sociological Research about Students’ Satisfaction (Romania) (NSRSS-RO). We briefly researched how these surveys are implemented and who is in charge of the implementation, review and improvement process. In addition, we looked at the categories of eligible students and the period of implementation. These dimensions are relevant for establishing the reliability and usefulness of the surveys, as well as for noting several aspects of their dynamics. For the analysis of each survey, we chose the latest form that was implemented or, in the Romanian case, the latest available form (Footnote 1).

There are several reasons for choosing these surveys. First of all, NSS-UK and Studiebarometeret are among the best-known examples in the EHEA of consulting students through a questionnaire. There is a limited number of surveys of this kind, and their implementation is largely unknown at European level; the Bologna Process Implementation Report, for instance, mentions only EUROSTUDENT. Graduate tracking surveys are also mentioned there, but their purpose would be more suitably analysed in another paper. Secondly, we sought diversity among the surveys in terms of implementation and maturity:

  • NSS-UK was first implemented in 2005, and its questions have remained unchanged since 2017.

  • Studiebarometeret was first implemented in 2013, and only minor changes have occurred since then.

  • NSRSS-RO is to be launched in 2020, after one year of development.

In order to reflect the connection between the Bologna Process and the national student surveys, we selected some of the topics present in the Paris Communiqué that are connected to learning and teaching. Some of these topics were also mentioned in previous communiqués.

3 Setting the Background

3.1 Conceptual Background

In the late 1980s and early 1990s, different instruments for students’ evaluation of teaching effectiveness were developed, such as the Students’ Evaluations of Educational Quality (SEEQ), perceiving students as customers rather than partners (Guolla 1999). The developers’ work was undermined by several myths about the unreliability and invalidity of such evaluations, including doubts about students’ capacity to make consistent judgements and the view of students as “unexperienced” and “capricious” (Footnote 2). Nevertheless, these myths were systematically deconstructed (Aleamoni 1999).

Student surveys tend to provide more accurate information about issues of great importance for teachers and students, such as teaching and learning (Harvey 1995). Measuring student engagement on several key themes from a survey can prompt HEIs and other stakeholders to make evidence-based decisions to improve different aspects of the educational process (Maskell and Collins 2017). Some of the earliest studies on this subject were conducted by Harvey (1995) and Hill (1995) (Table 1).

Table 1 Examples of topics in a students’ satisfaction survey (Harvey 1995; Hill 1995)

Looking at the scientific literature, a clear need arises for a more comprehensive approach that goes beyond teaching effectiveness to encompass the whole student experience. Indeed, there is an impressive number of surveys in HEIs. These questionnaires aim to collect information on student satisfaction, which is afterwards used to improve the services offered by higher education institutions so as to meet the expectations of current and prospective students (Solinas et al. 2012). Student surveys are also commonly used to evaluate teaching performance, which represents one of the starting points for further debate on this process (Gaertner 2014).

3.2 National Student Survey (United Kingdom)

The National Student Survey is a questionnaire designed by the Higher Education Funding Council for England (HEFCE) and implemented since 2005 with the aim of collecting data on student satisfaction and on students’ perception of the quality of the courses provided by universities in the UK. It represents an important component of the external quality assurance process in the United Kingdom and serves several purposes, such as ‘informing prospective student choice’, ‘enhancing the student academic experience within HE institutions’ or ‘ensuring public accountability’ (Institute of Education 2010).

NSS-UK is addressed to students enrolled in the final year of their undergraduate studies at public universities and some private colleges (Bótas and Brown 2013; Burgess et al. 2018). The survey is taken annually, between January and April. The questionnaire has evolved over time, but its latest format comprises 27 questions rated on a 5-grade scale (definitely agree, mostly agree, neither agree nor disagree, mostly disagree and definitely disagree) plus a ‘not applicable’ option. One question assesses overall student satisfaction, while the remaining 26 questions cover other aspects (Footnote 3). Students can also answer some open-ended questions, but these are not compulsory. The results are published online on the Office for Students website.

In order to maintain its relevance and keep it updated, HEFCE, on behalf of the UK HE funding bodies, periodically conducts reviews of the survey (Callender et al. 2014). HEFCE commissions different educational bodies to evaluate NSS-UK and to propose improvements. This process is not standardised, and we could not identify any indication of a future timeframe.

The 2013–2014 review acknowledged that NSS-UK had several shortcomings at the conceptual and methodological level, as well as some unintended consequences, such as the inappropriate use of its results in league tables and in university marketing (Callender et al. 2014). The importance of the NSS also increased after it was included in the Teaching Excellence Framework (TEF), as ‘HE markets mechanism seek to control the quality of teaching learning and assessment through competitive ranking systems’ (Walker et al. 2019). TEF rates universities by quality of teaching, and three of its six indicators are measured through the National Student Survey (‘Teaching on my course’, ‘Assessment and feedback’, ‘Academic support’). Even though TEF has no consequences for the public financing of HEIs, it determines the maximum tuition fee that publicly funded universities and colleges in England can charge (Spooren et al. 2017).

The NSS was criticised for being a survey of ‘satisfaction’ rather than one focused on learning outcomes or on ‘students’ commitment to the academic and social environment’ (Gibbs 2010). Other issues identified in the scientific literature were:

  • NSS-UK provides little information about factors not directly linked to teaching and learning.

  • NSS-UK neglects students’ perception of the course’s relevance to employability.

  • Part-time students cannot submit relevant information about their status (Buckley 2012).

In the United Kingdom, the NSS has gained such recognition and importance at national level that universities are virtually obliged to react to the feedback received from students in order to improve the perceived quality of their services, as this impacts their ability to attract future students (Thiel 2019). Moreover, it generates wide debates involving all stakeholders, often reflected in the major daily newspapers (David et al. 2013).

NSS-UK has also become an increasingly useful tool for prospective students when choosing a university for their desired subject. Even though the differences between institutions are relatively small, they are ‘statistically reliable’ (Burgess et al. 2018). Still, some have argued that certain questions might disadvantage particular types of programmes, such as those in Art and Design (Gibbs 2010).

3.3 Studiebarometeret (Norway)

Studiebarometeret was developed by the Ministry of Education and Research and has been carried out by the Norwegian Agency for Quality Assurance in Education (NOKUT) since 2013. The aim of the survey is to provide ‘concise and user-friendly information about students’ opinion of the quality of education offered at Norwegian higher education institutions’. Some of the topics approached by Studiebarometeret are teaching, the extent of feedback and academic counselling, the academic and social environment, the study environment and infrastructure, the organisation of the study programme, student assessment and participation, and learning outcomes.

The survey is conducted in October/November among second-year bachelor and master’s students and fifth-year students of professional degrees and integrated master’s programmes. The results are published on the Studiebarometeret web portal (Damen and Hamberg 2015). Studiebarometeret comprises questions or statements rated on a 5-grade scale (from 1, do not agree, to 5, completely agree, or from 1, not satisfied, to 5, very satisfied) when assessing satisfaction, and 5 options when referring to the recurrence of a statement (never, 1–2 times, 3–5 times, 6–10 times and more than 10 times). Additionally, every question or statement has ‘do not know’ and ‘not relevant’ options. Moreover, some questions include open comment sections where students can add further relevant information.
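To make this response format concrete, the minimal sketch below (our illustration in Python, with hypothetical answer values; it is not NOKUT’s actual processing pipeline) shows how the non-substantive options would be set aside before aggregating the 5-grade responses into a mean satisfaction score.

```python
# Illustrative sketch only: averaging 5-grade responses while excluding
# the non-substantive options described above ('do not know'/'not relevant').
from statistics import mean

# Hypothetical answers to one Studiebarometeret-style statement;
# integers are scale points (1 = not satisfied ... 5 = very satisfied).
answers = [5, 4, "do not know", 3, 5, "not relevant", 4]

# Keep only substantive scale points before aggregating.
scores = [a for a in answers if isinstance(a, int)]

print(f"mean satisfaction: {mean(scores):.2f} (n = {len(scores)})")
# -> mean satisfaction: 4.20 (n = 5)
```

Excluding the ‘do not know’ and ‘not relevant’ answers from the denominator keeps them from distorting the reported satisfaction level.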

The Norwegian case represents an example of good practice in linking the measurement of student satisfaction with quality assurance processes. As NOKUT is the national QA agency, Studiebarometeret becomes an important instrument for measuring the quality of higher education, and NOKUT can propose institutional measures to improve the student experience. The collected data can help educational providers identify best practices and take appropriate measures (Bakken and Øygarden 2018).

Studiebarometeret has a dynamic component, amounting to approximately 20% of all questions, which approaches topics other than the standard, constant ones. This part treats a different subject each year: in 2017 it approached internationalisation, in 2018 the transition from upper secondary education into higher education, and in 2019 practice training. The Studiebarometeret website offers information in three languages (Bokmål, Nynorsk and English), making it highly accessible, including for international students.

A first draft of the questionnaire was piloted with students from a few different study programmes at three HEIs, totalling approximately 1,000 students. This was followed by several focus-group interviews to gather qualitative data. The process was important for the developers, who integrated the feedback and conducted the first round of the Norwegian student survey in the autumn of 2013. According to the researchers responsible for Studiebarometeret, there is a constant review and improvement process, overseen by a reference group of representatives of different stakeholders which meets twice a year (in January and in May/June). The group is mainly formed of representatives of higher education institutions. There is also permanent contact between NOKUT and all educational institutions, whether universities or university colleges, to coordinate activities related mainly to the data-gathering process.

As Studiebarometeret is in continuous evolution, new topics and questions are added as part of a common effort between the reference group and NOKUT. These are piloted through both qualitative and quantitative testing. Respondents for the test surveys are recruited a year before the potential new questions or topics are piloted, as students completing the survey can opt to take part in later follow-up studies.

3.4 National Sociological Research About Students’ Satisfaction (Romania)

Romania’s National Sociological Research about Students’ Satisfaction is part of the ‘Quality in higher education: internationalisation and databases to enhance the Romanian education system’ project, implemented jointly by the Executive Agency for Higher Education, Research, Development and Innovation Funding (UEFISCDI) and the Ministry of Education and Research (MER), and is set to be launched in April 2020.

The project, financed by the European Social Fund through the Operational Programme ‘Human Capital’, aims to develop and implement measuring instruments at tertiary education level that will provide stakeholders with reliable data on the higher education system, thus leading to evidence-based decisions for improving higher education quality (Footnote 4). The questionnaire is set to be applied between March and May 2020. Students will receive an email via the National Student Registry inviting them to register to complete the survey, but they can also opt to register on the NSRSS-RO website.

The purpose of the student survey is to help both MER and HEIs to ground future policies aimed at improving the quality of the student experience.

The questionnaire is the result of a series of consultations with all relevant stakeholders, ranging from university representatives and students to the Ministry, its consultative councils, and national and international experts. Ultimately, it will contribute to the creation of a database on students’ satisfaction with the quality of services offered by higher education institutions, which, in turn, will support evidence-based policy-making at national and institutional level. The survey is set to be applied periodically by UEFISCDI/MER in close collaboration with the Romanian Agency for Quality Assurance in Higher Education (ARACIS).

NSRSS-RO includes 61 questions distributed among 10 sections, one of which contains 6 questions on the general level of satisfaction. The remaining 9 sections cover: (1) social services (9 questions); (2) students’ representatives (2 questions); (3) university infrastructure (8 questions); (4) learning resources (4 questions); (5) academic support (5 questions); (6) teaching activity (9 questions); (7) learning opportunities (7 questions); (8) assessment, communication and feedback (5 questions); (9) organisation of the educational process (6 questions). The questionnaire also includes a dynamic part, which will change from year to year in order to assess how students perceive different policies adopted by the Ministry of Education and Research.

The format of the questionnaire envisages a 5-grade scale (definitely agree, mostly agree, neither agree nor disagree, mostly disagree and definitely disagree) plus a ‘not applicable’ option. Every student at a Romanian HEI can take the survey for at least one study programme in which he or she is enrolled. A comprehensive analysis, together with part of the collected data, will be publicly available starting in the autumn of 2020. Each university will also receive an individual analysis of its results, in order to maintain or improve different aspects of the educational process.
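To summarise the layout described above, the short sketch below encodes the section structure and checks that the counts add up to the reported 61 questions; the section labels are our paraphrases, not the official wording of the instrument.

```python
# Illustrative encoding of the NSRSS-RO structure described above;
# labels are paraphrased, not the official wording of the questionnaire.
sections = {
    "general satisfaction": 6,
    "social services": 9,
    "students' representatives": 2,
    "university infrastructure": 8,
    "learning resources": 4,
    "academic support": 5,
    "teaching activity": 9,
    "learning opportunities": 7,
    "assessment, communication and feedback": 5,
    "organisation of the educational process": 6,
}

assert sum(sections.values()) == 61  # matches the reported question count
print(f"{len(sections)} sections, {sum(sections.values())} questions")
# -> 10 sections, 61 questions
```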

4 Developing a Student Survey at National Level

Student surveys are one of the most popular methods of assessing teaching and learning from the students’ perspective, as they represent an instrument that can easily be applied to many undergraduates (Tucker 2015). Nevertheless, some argue that ‘in this epoch of managerialism and instrumentality’ there is a need ‘to show progress to justify consistency and funding’, and some authors advocate that quality in higher education ‘should extend beyond satisfaction’ (Dean and Gibbs 2015). Still, as students’ opinions became more and more important for improving learning and teaching, student surveys became common ground in quality assurance processes across the EHEA, especially within higher education institutions. They are essentially an efficient tool for implementing several ESG guidelines, such as ESG 1.9.

Another important aspect is that the implementation period should be set in strict correlation with the structure of the academic year. For instance, even though there are studies that ‘prove the grades or marks students receive in the course are not highly correlated with their ratings of the course and the instructor’ (Aleamoni 1999), none of the student surveys we considered overlaps with assessment periods.

A notable difference between the analysed surveys concerns which students are eligible to participate. One survey targets students in the final year of undergraduate studies, another second-year bachelor and master’s students, and the third all students in bachelor studies. Since the more students take the survey, the more accurate the results will be, it is relevant to point out that even though the UK Government is trying to increase the number of eligible students, it faces harsh opposition from different stakeholders, including universities (Havergal 2019) (Table 2). At the same time, studies have shown that students are asked to fill in a high number of questionnaires every year, which can decrease the number of respondents due to ‘survey fatigue’.

Table 2 Comparison between selected national student surveys

In all three cases, we identified an important input from the governmental structure that oversees higher education affairs. In Norway and Romania, the national QA agency is also involved in developing the student survey. Students participate in this process through their national unions, which play a significant role in developing and promoting these surveys, especially in the United Kingdom.

In order to analyse the topics approached by the selected student surveys, we use the typology identified by Hill (1995), as shown in Table 3. We find that NSS-UK includes questions from 8 topics (40%), Studiebarometeret from 13 topics (70%) and NSRSS-RO from 16 topics (84%). Travel agency and university bookshop are the only topics that cannot be identified in any of the surveys chosen for this paper. Items connected to health services can help both HEIs and national authorities to obtain, for instance, a better picture of how aware students are of those support services (Storrie et al. 2010).

Table 3 Comparison of topics in a students’ satisfaction survey (Hill 1995) with selected national student surveys

As mentioned, NSS-UK plays an important role in the Teaching Excellence Framework and represents an example of how such a national student survey can be integrated into national policy-making. TEF is supposed to enhance student-centred learning in British universities. Its metrics come from three data sources: the National Student Survey, data from the Higher Education Statistics Agency and the Destination of Leavers from Higher Education survey (Gunn 2018). Even though students have strongly criticised this use, NSS-UK exemplifies how the results of such a survey can be integrated into HE policies.

5 Student Surveys as Tools to Assess Learning and Teaching in the Context of the Bologna Process

As previously stated, the Bologna Process has promoted learning and teaching (L&T) as a key part of the European Higher Education Area. As such, it is important to see how well the national student surveys are able to monitor the main areas connected to L&T.

Looking at the main topics in the selected student surveys, one would expect teaching and learning to be covered extensively. The NSS-UK includes three categories designed for monitoring L&T: ‘The teaching of my course’, ‘Learning opportunities’ and ‘Learning resources’. Some of these questions invite students to evaluate, for instance, whether staff have made the subject interesting or whether the courses are intellectually stimulating; IT and library resources are also assessed. Studiebarometeret likewise includes several categories on L&T, such as ‘Teaching’, ‘The study environment and infrastructure’, ‘Your learning outcome’, ‘Time spent on academic activities’, ‘Teaching and learning methods—usage’ and ‘Teaching and learning methods—contribution’. NSRSS-RO, in turn, has categories such as ‘Teaching’, ‘Academic infrastructure’, ‘Learning opportunities’ and ‘Evaluation, communication and feedback’.

From a comparative point of view, all three surveys include common topics such as:

  • Availability of adequate spaces and proper equipment for classes and laboratories.

  • Staff/teacher support for students when needed.

  • Availability of individualized learning paths.

  • Teaching and counselling sessions to reduce the learning gap.

  • Staff/teacher engagement in teaching activities.

  • Conducting class hours.

  • Group work with other students.

  • Learning outcomes.

At the same time, it is important to see whether new dimensions can also be monitored through national student surveys. In this respect, we selected the main topics included in the latest ministerial communiqué. The 2018 Paris Communiqué is highly relevant to this subject, as it dedicates an entire chapter to innovation in teaching and learning (Table 4).

Table 4 Paris Communiqué references related to learning and teaching in the selected national student surveys

Largely, all three selected student surveys approach several topics mentioned in the Paris Communiqué. However, NSRSS-RO is the only questionnaire that tackles interdisciplinary programmes.

6 Conclusions

A national student survey is an important tool for assessing teaching and learning in HEIs. Even though we expected to identify a larger percentage of questions directly linked to these two areas, NSS-UK has 37.03% of its items connected to L&T, while Studiebarometeret has 28.57% and NSRSS-RO 29.51%. Several categories of questions are common to all three questionnaires: availability of adequate spaces and proper equipment for classes and laboratories, staff/teacher support for students when needed, availability of individualised learning paths, teaching and counselling sessions to reduce the learning gap, staff/teacher engagement in teaching activities, conducting class hours, group work with other students, and learning outcomes.
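These shares are simple item ratios. As a minimal illustration, the sketch below reproduces the computation for NSRSS-RO, where a count of 18 L&T-linked items out of 61 (back-calculated from the reported percentage, and thus our assumption rather than an official figure) yields the quoted 29.51%.

```python
# Minimal sketch of the share computation behind the percentages above.
def lt_share(lt_items: int, total_items: int) -> float:
    """Percentage of questionnaire items linked to learning and teaching."""
    return round(lt_items / total_items * 100, 2)

# 18 of 61 items is our back-calculated assumption for NSRSS-RO.
print(lt_share(18, 61))  # -> 29.51
```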

Both in the United Kingdom and in Norway, the results tend to improve as higher education institutions push for changes to increase student satisfaction. Even though there are some risks, such as student fatigue when asked to take part in several surveys, the data coming from these national surveys is important for a broad range of audiences, including prospective students, who are interested especially in student satisfaction and graduate employment (Loukkola and Zhang 2010).

National student surveys can play an important role in gathering data from HEIs at country level based on a single methodology. As the importance of enhancing data collection was mentioned both in the Leuven/Louvain-la-Neuve (2009) and in the Bucharest (2012) communiqués, it is worth considering whether such an instrument could become a general one for the European Higher Education Area. If so, besides the part common to all countries, every state could add several questions responding to its national priorities. This could lead to more in-depth research on the aspects influencing student satisfaction and on where universities need to do more to improve their services.

Also, the subjects approached by student surveys are highly relevant both for stakeholders and for individuals. HEIs can use the results in a benchmarking process, which is promoted through the Standards and Guidelines for Quality Assurance in the European Higher Education Area (ESG). Constantly measuring student satisfaction on these items can show to what degree a university has improved from year to year. Governing bodies of higher education can improve their evidence-based decisions and periodically evaluate how students’ perception is evolving. We still do not have enough data to determine precisely the impact of Studiebarometeret, or of the National Student Survey after TEF was implemented, on enhancing student-centred learning, for instance.

Since the surveys we analysed cover a significant share of the teaching-and-learning topics approached by the Paris Communiqué, we consider that a student survey applicable in all EHEA countries is a desirable goal for the future and should be discussed in the Bologna Follow-Up Group. It is also the most plausible and most effective action that EHEA member states could take to capture students’ perception of the changes triggered by the Bologna reforms and of the educational realities at grassroots level.

As policy-makers are starting to adopt educational policies based on research in the field rather than on various Lisbon Strategy indicators (Ion and Iucu 2015), a national student survey represents a middle way between the two perspectives: it has an important public impact, and it also yields relevant results that can lead to substantially improved policies.

Additionally, adding a dynamic part to the questionnaire, as in the case of Studiebarometeret and NSRSS-RO, can be extremely useful for the ministries responsible for higher education and for other national stakeholders when they are developing or revising public policies.

National student surveys can become an important instrument for monitoring the enhancement of teaching and learning in particular EHEA countries, and one that could be extended to the whole EHEA. They cover a significant number of the topics assumed in the ministerial communiqués.

Also, the compliance of national student surveys with several ESG items is remarkable. Such questionnaires should definitely be used to support the standards and guidelines to which they are suited. As both HEIs and QA agencies strive to obtain a picture as close to reality as possible, these types of questionnaire represent a robust solution.

Based on the examined good practices, a set of guiding principles can be drawn up for such endeavours, both for countries that would like to develop their own national survey and for a future survey that could be jointly implemented throughout the EHEA. Stakeholders should be involved from the design/development stage through to promotion, implementation and review, as this gives greater consistency to the whole process. The frequency of application also needs to be carefully planned, taking into account the other reporting processes students are asked to complete, in order to avoid ‘survey fatigue’.

These questionnaires should include clear review processes, which should be predictable and should pursue specific goals for improving the surveys. The intended use of the results should be clear, because improper use (e.g. in funding mechanisms) can lead to unintended consequences for the most critical students while moving away from an improvement approach. It is very important to know from the beginning, for instance, which audience the results target and what the purpose of designing the questionnaire was. There is also a need to set out clearly how the results will be integrated into decision-making or policy-making processes, where applicable.

Furthermore, student surveys should aim to provide universities with information that can be used reflexively, as it is a valuable source for improving the quality of learning and teaching and of other related services. Elements concerning diverse learning methods, flexible learning and open education, as well as items on students’ encounters with research or with activities linked to research and innovation, should be considered by both established and new national student surveys.

For an efficient learning and teaching process, student support services also need to be of high quality, including proper accommodation, access to counselling services, and university recreation and intramural activities. Health services should be part of any national student survey, as the number of students dealing with such problems, especially mental health issues, is increasing. Moreover, as observed from reviewing the three surveys and the literature on this topic, student surveys should include topics such as the availability of adequate spaces and proper equipment for classes and laboratories, staff and/or teacher support for students when needed, the availability of individualised learning paths, teaching and counselling sessions to reduce the learning gap, and staff and/or teacher engagement in teaching activities such as conducting class hours, group work with other students, and learning outcomes.