Article

Research Perceived Competency Scale: A New Psychometric Adaptation for University Students’ Research Learning

by César Merino-Soto 1, Manuel Fernández-Arata 1, Jaime Fuentes-Balderrama 2, Guillermo M. Chans 3 and Filiberto Toledano-Toledano 4,5,6,*

1 Instituto de Investigación de Psicología, Universidad de San Martín de Porres, Surquillo 15036, Peru
2 Steve Hicks School of Social Work, The University of Texas, Austin, TX 78712, USA
3 Tecnologico de Monterrey, School of Engineering and Sciences, Mexico City 01389, Mexico
4 Unidad de Investigación en Medicina Basada en Evidencias, Hospital Infantil de México Federico Gómez, National Institute of Health, Márquez 162, Doctores, Cuauhtémoc, Mexico City 06720, Mexico
5 Unidad de Investigación Sociomédica, Instituto Nacional de Rehabilitación Luis Guillermo Ibarra Ibarra, Calzada México-Xochimilco 289, Arenal de Guadalupe, Tlalpan, Mexico City 14389, Mexico
6 Dirección de Investigación y Diseminación del Conocimiento, Instituto Nacional de Ciencias e Innovación para la Formación de Comunidad Científica, INDEHUS, Periférico Sur 4860, Arenal de Guadalupe, Tlalpan, Mexico City 14389, Mexico
* Author to whom correspondence should be addressed.
Sustainability 2022, 14(19), 12036; https://doi.org/10.3390/su141912036
Submission received: 20 July 2022 / Revised: 13 August 2022 / Accepted: 15 September 2022 / Published: 23 September 2022

Abstract: This research aimed to adapt and validate a scale measuring perceived research competencies among undergraduate students. Perceived research competencies in undergraduate learning can be measured with a new scale adapted from self-determination theory. We assessed the validity of this new measure applied to 307 participating undergraduates from Lima (Peru). The survey items of the perceived competencies scale were first translated from English to Spanish and then adapted to focus on participation in research activities. We obtained evidence for (a) content validity (through item analysis), (b) internal structure, using Mokken Scaling Analysis and structural equation modeling to examine the item–construct relationship, differential item functioning, and reliability, and (c) association with external variables. The items were found to function one-dimensionally, with strong item–construct relationships and no differential functioning (across academic semester and general self-esteem groups). Theoretically consistent associations were found with study satisfaction and anxiety symptoms (controlling for gender, semester, and social support). We also discuss the theoretical and practical implications of this newly adapted measurement instrument.

1. Introduction

Learning and developing research skills implies approaching scientific knowledge and managing research methodology through university studies [1]. Although participation in research is rare among undergraduate learning activities [2], academic environments provide the best opportunities to approach reading and writing strategies, codes and meanings associated with research, and elements related to positive changes in self-efficacy, research interest, and perceived competency to complete research [3]. Various methodologies oriented to developing students’ research skills have been proposed. One is research-based learning (RBL), a didactic strategy that connects teaching with research, allowing students to be active researchers, develop competencies, and prepare to be lifelong inquirers [4,5]. Among its most important advantages are stimulated reading and critical thinking through self-directed learning, problem-solving, and greater interest and curiosity for learning.
The motivation to become involved in research activities seems to be associated with the educational institution’s culture, teaching strategies, or psychosocial aspects [6,7,8,9]. Achieving good research skills and knowledge requires an approach that facilitates understanding how the environment and motivation interact. In this sense, self-determination theory postulates that motivated behavior varies depending on the level of autonomy or control a person has regarding their tasks [10]. Unlike contextually controlled behaviors, which appear due to interpersonal pressures or demands, autonomous behaviors are intrinsically motivated. These arise out of self-interest and are accompanied by spontaneous thoughts and feelings [10,11]. When students enter a lecture with high autonomous motivation, they report more positive experiences, greater perceived competency and interest, and less anxiety at the end of the class [10]. Research on self-determination has found that approaches that influence autonomy yield better educational results than controlled approaches. Considering that the satisfaction of basic psychological needs of students, such as competency, autonomy, and affinity, promotes greater participation, better learning processes, and the well-being of students, more practical applications of self-determination theory in education are needed [12].
Competency, in general, refers to the cognitive, motivational, and social conditions necessary for successful learning [13]. Competencies involving effective interaction with others, teamwork, self-efficacy, and decision-making comprise some valuable soft learning skills [14]. Acquired in the learning process, these stabilize over time; they are precursors of possible behavioral actions. Evidence indicates that the focused perception of these competencies also influences the intention to engage in related activities [15]. Therefore, student perception of their competency to carry out research activities is complex and is associated with several possible components, such as the practical ability to complete research with minimal help and the knowledge of what is expected of them during evaluation processes [6]. In an applied context, the perception of high levels of competency and trust in developing research projects is a precursor to the effort invested in the quality of the project [6].
The concept of competency in skill development is related to self-efficacy, the student’s belief that they are capable of performing a task or achieving an objective [16]. There is an established relationship between academic self-efficacy and educational results, which, together with autonomy and competency, maintain intrinsic motivation in learning [17,18]. To improve self-efficacy and competency, students require opportunities to experience academic achievement in various tasks, since the experience of success enhances beliefs of self-efficacy [17]. Moreover, perceived competency can subsume the student’s sense of self-confidence, developed from accumulated experiences of achievement and effective coping with problems, and their monitoring of these processes [19,20].
In self-determination theory, academic self-efficacy and constructive feedback help students develop research skills and the confidence to use them, while also fostering feelings of well-being [12]: in students’ perception, competency, autonomy, and affinity are fundamental elements for this task, promoting independence in self-evaluation and improvement. The development of academic skills, particularly those required for research, is also linked to interpersonal goals that facilitate the career path [21]. These domains may be involved in the perception of competencies to execute them. Moreover, self-efficacy beliefs partially mediate the effects of research skills [22]. One example is course-based undergraduate research experiences (CUREs), learning experiences in which students address a research problem with an unknown solution. These large-scale, original, hands-on research practices are primarily used in laboratory courses [23]. They can be offered as early as the first years of the student’s major [24,25], providing advantages such as enhanced confidence and skill development in doing research [26,27], significantly increased retention among science, technology, engineering, and mathematics (STEM) majors [28,29], and greater inclusion of underrepresented populations in the sciences [30].
Regarding approaches to measuring research competency, the predominant method has been self-report. Due to the possible complexity of the construct, instrumental studies have pointed to specific content of low factorial order, possibly sensitive to the interaction between individual variability and the demands of the educational environment, in the USA [7], Germany [9], and Malaysia [31]. Although the constructs evaluated in these instruments converge in identifying the structure of research competencies, the psychometric methodology used is highly heterogeneous, as are the sample sizes (across these studies, the sample size ratio was 3.8). Moreover, the instruments mentioned are oriented to studying research competencies from a cognitive and pedagogical approach, formulated to identify academic strengths and weaknesses in the teaching–learning process. Motivational aspects, such as perceived competency or interest in research, were not considered, which represents a significant metric limitation.
Perceived self-competency is established in self-determination theory as one of the three crucial psychological needs for a person to maintain their behavior towards adaptive goals. Consequently, the Perceived Competency Scale (PCS) was developed in educational and health research and intervention [32,33] to quantify an essential mediating component within the self-determination model. PCS items were developed for a particular behavioral domain. The high psychometric consistency usually found in these domains indicates that sampled behaviors are strongly correlated but with content that is not necessarily redundant. Accordingly, applying the PCS in specific thematic contexts requires relevant content modifications. For example, the content has been modified for randomized interventions in dental health [34], glucose control with diabetic patients [35,36,37,38], weight loss strategies [39], and tobacco dependence [40]. In general, the empirical evidence of the mediating role of perceived competency has been corroborated in understanding the effects of interventions based on the self-determination model.
The objective of this study was to evaluate the psychometric properties of the PCS applied to research activities during university education through item validity, analysis of its internal structure, and its convergent association with other constructs. This objective is fundamental because (a) it establishes the first psychometric evidence for the interpretation and use of its scores and (b) to date, no adaptation of the PCS has been made to assess self-observed competency to conduct research activities, and how adapting its content would work in this regard is still unknown. As part of the assessment of the internal structure, this objective also focuses on the assessment of item similarity and the items’ relationship to their construct (i.e., tau-equivalence) and item-level reliability. Content adaptations of the PCS have essentially occurred in health interventions, a field of application and research usually distinct from education. An additional motivation that reinforces the objective of this study is that in Latin American countries, scientific research activity needs to be strengthened in the classroom. There is a growing interest in its inclusion, as shown in the cases of Chile [41], Mexico [4,42], and Peru [43,44]. Furthermore, assessment of perceived research competency can be incorporated into initiatives to increase research participation and monitor the change in undergraduate [45,46] and graduate [47,48] students’ skills in academic courses.
The application of the PCS in other contexts of behavioral functioning, such as the subject’s participation in scientific research, can be a way to associate motivation and persistence in research activities with efficiency and scientific productivity. In higher education, perceived research competency can be conceived within a general perception of competency related to learning because research skills usually develop during university studies. The university’s contextual activities and opportunities affect perceptions about conducting research effectively. Examples include individual or group work on research projects and presentation of results or projects at intramural or external events, as highlighted in the literature (e.g., [21,49]). Consequently, educational strategies can arise from evaluating the competencies to complete research. Accordingly, we hypothesize that perceived research competency has a linear, positive effect on satisfaction with studies, the latter understood as the perception of satisfaction with performance, with the set of activities and results of the study, and with the way of approaching one’s studies [50].
Due to the corroborated covariation of social support, persistent emotional reactions (e.g., anxiety symptoms), and self-esteem with perceived personal competencies and academic performance [51,52,53,54,55], we explored their effects on the construct developed in this study (i.e., perceived research competencies). The aim was to accumulate evidence on the construct’s conceptual network and evaluate whether it is relevant for student intervention, monitoring, and promotion strategies.

2. Materials and Methods

2.1. Participants

The population for this study comprised Peruvian students at private universities in Metropolitan Lima (Peru), predominantly of medium socioeconomic status. The study used non-probabilistic sampling; the participating university was selected based on access opportunity and the exploratory nature of the study. Participants who did not sign the informed consent form or who were enrolled in the first five semesters of their academic program were excluded from the sample because research methodology courses and intramural or extramural experiences occur around the 6th semester in Peruvian universities.
The effective sample comprised 307 psychology undergraduate students with a mean age of 23.08 years (S.D. = 3.8), ranging between 19 and 44 years. The sample was predominantly female (n = 222, 72.3%); a little more than a quarter were men (n = 84, 27.4%), and a single participant chose not to disclose their gender (n = 1, 0.3%). The males were older (t = 2.29, df = 301, p < 0.05, d = 0.296). Moreover, 155 participants (50.5%) were employed. The students’ current semester varied between the 6th (n = 29, 9.7%), 7th (n = 74, 24.1%), 8th (n = 44, 14.3%), 9th (n = 84, 27.4%), 10th (n = 63, 20.5%), and 11th (n = 7, 2.3%). Only 6 participants (2%) did not report their semester. Participants were predominantly born in Lima (n = 267, 87%) and single (n = 280, 91.2%), while the rest were married or cohabiting (n = 16, 5.2%). Eight participants (2.6%) did not provide information in this regard.

2.2. Instruments

Research Perceived Competencies Scale (RPCS). This measuring instrument assessed student perceptions of competency for research activities. It was composed of four items derived from the generic content model proposed by Williams and Deci [32]: confidence perception, ability, goal achievement, and overcoming challenges. The response format was scaled with seven options, from “Not at all true” to “Very true”, grouped under three labels (the first two options, the next three, and the last two). The answer instructions asked the examinee to consider their perception of research activities. The score is obtained by summing or averaging the responses to all items. The interpretation is linear: an increase in the score indicates a greater intensity of perceived competency. In reported studies of adaptations for health interventions, internal consistency has tended to be high (α > 0.80) [34,35,36,37,38,40].
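As a minimal illustration of this scoring rule, the following Python sketch sums and averages the four items; the responses shown are invented for the example (the scale itself is administered in Spanish):

```python
import numpy as np

# Hypothetical responses of three students to the four RPCS items (1-7 scale).
responses = np.array([
    [5, 6, 5, 6],
    [3, 4, 4, 3],
    [7, 7, 6, 7],
])

total_score = responses.sum(axis=1)   # summed score, possible range 4-28
mean_score = responses.mean(axis=1)   # averaged score, possible range 1-7
```

Either form preserves the linear interpretation: higher values indicate greater perceived competency.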
Study Satisfaction Scale—Brief (SSS-B) [50]. This scale identifies the students’ degree of general satisfaction with participation in academic activities at their universities. It comprises three items that quantify satisfaction with general performance, with studying, and with their studies. Previous research has expanded the construct validity of the SSS-B with procrastination measures and its content equivalence among men and women [56]. In the present study, the internal consistency for the total sample was α = 0.71 (Bootstrap 95% CI = 0.63, 0.78).
Single-Item Social Support—Revised (SSS). This single-item measure, used to quantify tangible social support, derives from the proposal by Blake and McKay [57]. It refers to the social network or structural support identified as the number of people available in problematic situations [58]. Initially, the item was constructed for application in epidemiological studies, with the following content: “How many close people do you have, upon whom you can really count, if you need help (for example, taking care of children or pets, being taken to the hospital or shopping, providing help if sick)?” However, the original item was modified to reduce the possible differential functioning between men and women detected in the childcare situation [57]. To express generic examples with little effect due to differential functioning, we modified the item to: “How many people close to you do you have, upon whom you can really count, if you need help (for example, if you are sick, shopping, etc.)?” The response options were not modified and consisted of the four original options: 1 person, 2 to 5 people, 6 to 9 people, and 10 or more people. Although for descriptive analyses responses to the SSS can be dichotomized (low social network: 1 person; high social network: 2 or more people) [57], the full range of responses was used for this study.
Generalized Anxiety Disorder-2 (GAD-2) [59]. A self-report scale included in a brief four-item measure of psychological distress. It was designed to identify the frequency of the main symptoms of generalized anxiety disorder, with two items questioning the presence of anxiety and worry during the last two weeks. It uses an ordinal format with four response options, from “not at all” to “almost every day.” In the present study, reliability was acceptable (α = 0.72; Bootstrap 95% CI = 0.62, 0.79).
Single-Item Self-Esteem Scale (SISE) [60]. This scale evaluates global self-esteem through a single item (i.e., “I have high self-esteem”), scaled ordinally according to the degree of agreement (Strongly disagree, Disagree, Between one and the other, Agree, and Strongly agree). It was created as an alternative to more extensive self-esteem measurements. There is strong evidence of its convergent and divergent validity with more than 20 behavioral and personality criteria [60,61,62], showing that the SISE is an indicator of general self-esteem sufficient for research and group-descriptive purposes. In Peruvian adults, there is evidence corroborating the validity of its scores [63,64].
Sociodemographic questionnaire. A form was developed with questions on gender, chronological age, place of birth, and other characteristics whose percentages are reported above.

2.3. Procedure

Instrument development. The content base of the RPCS originated from a rational evaluation of previous research using the PCS [32,35], in which the pattern of content changes was adapted to the contextual theme under study. Therefore, no new items were created; rather, the PCS content was adapted to the perceived competency of participating in scientific research. As the objective of the instrument was to identify the level of perceived competencies in research activities in general, the modification consisted of (1) emphasizing a general perspective on the tasks that the scientific research process usually involves, (2) maintaining the content initially sampled by the relevant theory on the perception of self-confidence, capacity, goal orientation, and overcoming challenges, (3) introducing changes in the specific content of the items in line with the objective, and (4) preserving the number of items of the original scales.
For the elaboration of the content of the RPCS, we considered: (a) that developing sampled content for specific research tasks would create an extensive instrument not recommended for massive evaluations; (b) using a parsimonious instrument with a general perspective on perceived competency, which subsumes all the specific tasks; and (c) a general content that links all the research tasks involved, aligned with the general approach of PCS applications in other areas, as in earlier studies. These criteria should be maximized along with the practical value of their use, i.e., reduced completion time, low cost, and comprehensibility [65]. After a review of the literature on good translation practices [65,66,67,68], several general steps were deduced to start adapting the PCS to the context of research activities, starting with translation from English to Spanish.
After an independent search by two authors of the present study (C.M.-S. and M.F.-A.), which turned up no translation of the items, the adaptation of the PCS began with the translation from English to Spanish. First, all authors identified and agreed upon the new context of the PCS (i.e., scientific research activities). Second, the items were translated into Spanish by one of the authors, and in this translation, content changes were incorporated to adapt them to the new context. Third, two research psychologists independently reviewed the translation and were instructed to focus on non-regional phrasing and non-literally interpreted content. During this review, the translators’ questions were minor and resolved in one meeting; all were considered independently. Both translators indicated that the translated content could be interpreted without direct reference to a specific Hispanic population and directly into the new use context.
As a result of the preceding, the first modification was aimed at changing the RPCS response instructions, with the following content: “Please, respond to each of the following statements according to the degree to which they are true for you, regarding performing scientific research activities.” The second modification consisted of adding explicit references to the sampled contents of the PCS, obtaining the following items: “I feel confident in my ability to carry out research activities” (Spanish: “Siento confianza en mi habilidad para hacer las actividades de investigación”), “I feel capable of carrying out necessary research activities” (Spanish: “Me siento capaz de realizar las actividades de investigación necesarias”), “I have ability to achieve goals that are set when doing research” (Spanish: “Tengo habilidad para lograr las metas que se plantean al hacer una investigación”), and “I feel that I can face the challenge of doing research activities well” (Spanish: “Siento que puedo enfrentar el desafío de hacer bien las actividades de investigación”). Other structural aspects were maintained to facilitate comparability with the line of research with the PCS, for example, the scaling of the answers (seven options), their grouping (three labels, Not at all true, Somewhat true, Very true), and the ordering of the content sampled in the items (starting from confidence to facing the challenges).
Data collection. After coordinating with the relevant university directors (which included requesting authorization), the tests were administered in their classrooms. The authors and application collaborators (e.g., collaborating researchers) gave standardized instructions regarding the form of response, the purpose of the research, the confidential nature of the results, and voluntary and anonymous participation. The instrument package was kept in the same order, and the first document was the informed consent form, whose response conditioned the students’ participation.
Ethical Considerations. This study is a part of a research project (HIM/2015/017/SSA.1207; “Effects of mindfulness training on the psychological distress and quality of life of the family caregiver”) approved by the Research, Ethics, and Biosafety Commissions of the Hospital Infantil de México Federico Gómez, National Institute of Health, in Mexico City. The ethical rules and considerations regarding research with humans currently enforced in Mexico [69] and those outlined by the American Psychological Association [70] were followed. All participants were informed of the research’s objectives, scope, and rights under the Declaration of Helsinki [71]. The participants who agreed to participate in the study signed an informed consent letter. Participation in this study was voluntary and did not involve payment.
Analysis. The quantitative study focused on obtaining evidence supporting content validity (the univariate properties of the items), the internal structure, the differential functioning of the items, internal consistency, and validity with respect to other constructs. The general analytic strategy was to apply several approaches to reduce the dependence of the conclusions on a single analytical procedure [72,73].
Content-irrelevant responses. Potential careless responses were evaluated, as surveys applied in person or via web platforms have generally been associated with this unrelated response pattern [74,75], commonly expressed as multivariate outliers [74,76]. To identify this problem, we used the D2 distance [77], and to corroborate this identification, we calculated the intra-individual response variability (IRV) [74]. Both are efficient techniques for this problem [76]. This analysis was made with the R package careless [78].
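The careless R package implements these indices; as a rough Python sketch of what they compute (synthetic data, simplified formulas), the squared Mahalanobis distance D2 flags multivariate outliers, and the IRV is the within-person standard deviation of responses:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 7-point responses: 50 consistent respondents plus one erratic one.
data = rng.integers(4, 7, size=(50, 4)).astype(float)
data = np.vstack([data, [1.0, 7.0, 1.0, 7.0]])  # inconsistent response pattern

# Squared Mahalanobis distance of each response vector from the centroid.
mu = data.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
diff = data - mu
d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

# Intra-individual response variability: within-person standard deviation.
irv = data.std(axis=1, ddof=1)
```

The erratic final respondent stands out on both indices, mirroring how D2 flags cases that the IRV then corroborates.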
Item analysis. A descriptive analysis of the items’ distributional and correlational characteristics and response trends was conducted using non-parametric procedures, due to the ordinal level of the responses [79]. The analyses were conducted with the Langtest [80] and MVN [81] R packages.
Non-parametric analysis. Before using latent variable modeling, we evaluated the RPCS’ fundamental properties with the Mokken Scaling Analysis (MSA) [82,83], a non-parametric framework for analyzing measurement properties based on direct scoring. It does not require the substantial restrictions of SEM modeling [82,84]. Four essential characteristics were explored [85]. The first three are fundamental for the score to work with monotone homogeneity (MHM): scalability (using the H coefficient), local independence (item responses are not mutually influenced), and monotonicity (incremental function between item and latent attribute, evaluated by comparing the actual and expected number of violations to the monotonic model). The fourth characteristic, linked to the invariant item order (IIO) model, was the differentiated response function of the item response options [85]. The analysis was performed with the Mokken package [84,86].
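For intuition, the scalability coefficient H can be sketched in Python as the ratio of observed inter-item covariances to the maximum covariances attainable given the item marginals; the data below are synthetic, and the mokken R package additionally computes standard errors and the monotonicity and IIO checks:

```python
import numpy as np

def scalability_H(items):
    """Loevinger's H for a matrix of ordinal items (respondents x items).
    Observed inter-item covariances are divided by the maximum covariance
    attainable given the marginals (pairing the sorted item scores)."""
    _, k = items.shape
    num = den = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            x, y = items[:, i], items[:, j]
            num += np.cov(x, y, ddof=0)[0, 1]
            den += np.cov(np.sort(x), np.sort(y), ddof=0)[0, 1]
    return num / den

# Synthetic items driven by a common latent trait plus small ordinal noise.
rng = np.random.default_rng(1)
latent = rng.integers(1, 8, size=300)
noise = rng.integers(-1, 2, size=(300, 4))
items = np.clip(latent[:, None] + noise, 1, 7)

H = scalability_H(items)  # high H indicates a strongly scalable item set
```

H reaches 1 only when every inter-item relationship is as strong as the marginals allow, which is why values above 0.5 (as reported later for the RPCS) indicate a strong scale.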
SEM analysis. After the non-parametric MSA, dimensionality was evaluated parametrically with confirmatory factor analysis for categorical data, using the weighted least square mean and variance adjusted estimator (WLSMV) [87]. Dimensional fit was assessed with approximate fit indices: CFI (≥0.95), TLI (≥0.95), RMSEA (≤0.05), and SRMR (≤0.05). The R package used was lavaan [88].
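As a reference for how such approximate indices relate to the model chi-square, a simplified sketch follows (standard maximum-likelihood formulas with illustrative numbers; WLSMV reports scaled statistics, so software output will differ):

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation from a model chi-square."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative fit index: target model versus baseline (null) model."""
    d_m = max(chi2_m - df_m, 0.0)
    d_b = max(chi2_b - df_b, d_m)
    return 1.0 - d_m / d_b

# Illustrative values only: chi2 = 50 on df = 20 with n = 201,
# against a baseline model with chi2 = 1000 on df = 28.
example_rmsea = rmsea(50, 20, 201)
example_cfi = cfi(50, 20, 1000, 28)
```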
Differential item functioning. To verify differential item functioning (DIF), which is equivalent to measurement invariance from a non-parametric approach for categorical variables, we applied the partial gamma coefficient (γp) [89], using magnitude levels of weak (0.00 to 0.15), moderate (0.16 to 0.30), and strong (>0.30). There are also general interpretation suggestions for this coefficient (e.g., >0.60: strong; 0.30 to 0.60: moderate; ≤0.30: weak) [90], but the former tend to be more commonly applied in the study of DIF. The R package used was iarm [91].
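Conceptually, γp is the Goodman–Kruskal gamma between item score and group membership, pooled over strata of the total score. A minimal Python sketch with toy data (the iarm R package handles the stratification and standard errors properly):

```python
import numpy as np

def concordant_discordant(x, y):
    """Count concordant and discordant pairs between two ordinal vectors."""
    c = d = 0
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                c += 1
            elif s < 0:
                d += 1
    return c, d

def partial_gamma(item, group, strata):
    """Goodman-Kruskal gamma between item and group, pooled within strata."""
    C = D = 0
    for s in np.unique(strata):
        mask = strata == s
        c, d = concordant_discordant(item[mask], group[mask])
        C += c
        D += d
    return (C - D) / (C + D)
```

A γp near zero indicates that, once the total score is controlled, the item orders respondents the same way in both groups, i.e., no DIF.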
Reliability. Reliability was estimated at the item level and the score level. At the item level, we used the attenuation-corrected coefficient [92], given its lower bias and computational ease [93]; the minimum acceptable value is around 0.30 [94]. For score-level reliability, we used the MS-rho coefficient [95], derived from non-parametric MSA modeling, and, from linear SEM modeling, the ω coefficient for categorical variables [96]; the α coefficient was also estimated.
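Of these coefficients, α is the simplest to illustrate; a minimal Python sketch with invented response data follows (ω and MS-rho require the factor model and the MSA machinery, respectively):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Invented 7-point responses of five students to four items.
data = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [6, 6, 5, 6],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
], dtype=float)

alpha = cronbach_alpha(data)
```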
Association with other variables. To obtain validity evidence regarding students’ satisfaction with their academic studies (SSS-B score) and anxiety symptoms (GAD-2 score), we applied hierarchical multiple linear regression, in which the semi-partial correlation (rsp) was used as the effect size [97] of the unique RPCS contribution, adjusted for the effects of gender, semester, and tangible social support (SSS score). Here, the Bodner proposal [98] was followed to qualify the semi-partial correlations as trivial (<0.14), small (≥0.14), moderate (≥0.42), or large (≥0.71). The analysis used the lm function in base R (R Core Team, 2021).
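The semi-partial correlation of a predictor entered last equals the square root of the R² increment between the reduced and full models. A Python sketch with simulated data (variable names and effect sizes are invented; the study itself used R):

```python
import numpy as np

def r_squared(X, y):
    """R-squared of an OLS regression of y on X (intercept added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(2)
n = 200
covariates = rng.normal(size=(n, 3))  # stand-ins for gender, semester, support
rpcs = rng.normal(size=n)             # simulated RPCS score
y = 0.5 * rpcs + covariates @ np.array([0.2, 0.1, 0.3]) + rng.normal(size=n)

r2_reduced = r_squared(covariates, y)                        # step 1: covariates only
r2_full = r_squared(np.column_stack([covariates, rpcs]), y)  # step 2: add the RPCS
r_sp = np.sqrt(r2_full - r2_reduced)  # semi-partial correlation of the RPCS
```

The resulting r_sp can then be compared against the Bodner thresholds described in the text.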

3. Results

3.1. Descriptive Analysis

Preliminary analysis. Three cases were detected with D2 values (74.73, 57.18, and 44.72, respectively) higher than the critical value at the nominal 0.01 level (i.e., F(4,305) = 25.05), and one case (D2 = 22.78) was detected at the nominal 0.05 level (F(4,305) = 21.81). Inspection of the first three cases showed an inconsistent response pattern (e.g., some items were answered with response option 1 while the rest had responses around option 5), but the last case did not seem to fit an inadequate pattern. These results coincided with the intra-individual response variability (IRV) results; therefore, the first three subjects were removed from the database.

3.2. Psychometric Analysis

Item analysis. Results are shown in Table 1. Item response trends, according to the reported measures, clustered around response point 4, but they were statistically different: Friedman-χ2(3) = 45.96, p < 0.01, with a discrepancy between small and moderate (rtotal = 0.35, 95% CI [0.25, 0.45]). Post-hoc differences (Wilcoxon test, two dependent samples) occurred between items 2 and 3 (Wilcoxon test = 2956, z = 2.63, r = 0.11, 95% CI [−0.01, 0.22]) and items 3 and 4 (Wilcoxon test = 1737.5, z = 0.990, r = 0.04, 95% CI [−0.07, 0.15]), which can be considered small differences. Regarding the distribution, the items showed varying magnitudes of skewness and kurtosis but were similar in their distributional trend. Only items 3 and 4 did not fit the theoretical normal distribution (K2 > 9.0).
Regarding the correlations, the covariation between the items was high, varying between 0.81 and 0.89, indicating approximately 71.6% common variance. This inter-item correlation matrix was statistically different between its items (Lawley-χ2(5) = 65.05), although the difference between the minimum (z = 1.12) and maximum (z = 1.41) correlations was relatively small (q = 0.292; Cohen, 1992). The test by gender (Jennrich-χ2(6) = 9.35, p > 0.10) showed the similarity of this correlation matrix across groups.
As for sociodemographic variables, all items displayed a correlational pattern similar in magnitude and direction; with age, the magnitudes were positive but essentially small. Regarding gender, the negative correlations were due to coding effects, in that males tended to score slightly higher than females. Regarding the academic semester, the covariation was essentially zero.

3.3. Internal Structure Evidence

Non-parametric modeling (Mokken Scaling Analysis). Table 2 displays the results of this non-parametric modeling. The items consistently maintained high scalability magnitudes (H > 0.82), and the inter-item scalability coefficients ranged from 0.82 to 0.91 (not shown). For the total score, H was also higher than 0.82 (s.e. = 0.022; 95% CI [0.81, 0.90]). In the test of local independence, the W1 index varied between 0.59 and 2.55, W2 between 9.99 and 12.05, and W3 between 0.54 and 5.59; consistently, the results did not indicate meaningful violations of local independence between the items. No violations were detected when the monotonicity and invariant item ordering (IIO) models were examined (see Table 2). The reliability estimated with the MS coefficient was 0.95. In summary, the RPCS items and score adequately satisfied scalability, local independence, a monotonic item–score relationship, and invariant ordering across scores. Additionally, the item–test correlations were high (>0.80).
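The high item–test correlations mentioned above are typically computed in corrected form (each item against the sum of the remaining items, so the item does not correlate with itself). A minimal sketch, using simulated one-factor data with loadings similar in size to those reported for the RPCS:

```python
import numpy as np

def item_rest_correlations(X):
    """Correlation of each item with the sum of the remaining items."""
    total = X.sum(axis=1)
    return np.array([np.corrcoef(X[:, j], total - X[:, j])[0, 1]
                     for j in range(X.shape[1])])

rng = np.random.default_rng(2)
n = 304
factor = rng.normal(size=n)
# four hypothetical items loading ~0.9 on a single factor, with small residuals
X = 0.9 * factor[:, None] + 0.3 * rng.normal(size=(n, 4))

r = item_rest_correlations(X)   # all values well above 0.80 for such data
```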
Parametric modeling. The fit of the one-dimensional RPCS model was satisfactory: WLSMV-χ2(2) = 31.28, p < 0.01, CFI = 0.999, SRMR = 0.024. The item parameters (Table 2), namely the factor loadings (λ > 0.89) and explained variances (h2 > 0.81), were statistically significant (z > 75.00) and high, indicating a strong relationship between the items and their construct. Given the magnitude of these parameters and the fit indices, no modifications were introduced to improve the model. The tau-equivalent model (equal factor loadings, estimated at 0.94) fit slightly worse than the unrestricted (congeneric) model, WLSMV-χ2(5) = 42.98, p < 0.01, CFI = 0.99, SRMR = 0.029; the scaled difference between the two models was statistically significant (∆WLSMV-χ2(3) = 20.33, p < 0.01).
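The reported nested-model comparison can be checked by converting the scaled difference statistic to a p-value. Note that this sketch only evaluates the already-corrected statistic against the chi-square distribution; with WLSMV estimation, the scaled difference itself must be computed with the estimator's correction (e.g., a DIFFTEST-type procedure), not by simple subtraction of the two model chi-squares.

```python
from scipy import stats

# Constraining four free loadings to a single value gains 3 degrees of freedom.
delta_chi2, delta_df = 20.33, 3
p = stats.chi2.sf(delta_chi2, delta_df)   # upper-tail probability, well below .01
```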

3.4. Internal Consistency

With the total sample, internal consistency was ω = 0.96 (BCa bootstrap 95% CI [0.95, 0.96], s.e. = 0.005) and α = 0.96 (BCa bootstrap 95% CI [0.95, 0.96], s.e. = 0.004). Both coefficients yielded indistinguishable values and were high in population terms. At the item level, the reliability of each item was greater than 0.65 and similar across items; all items can be considered units with adequate individual consistency.
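The near-equality of α and ω follows from the near-tau-equivalence of the items: when the loadings are (close to) equal, the two coefficients coincide. A minimal sketch with four equal loadings of 0.94, the value estimated for the tau-equivalent model (simulated data, not the study data):

```python
import numpy as np

def cronbach_alpha(X):
    """Alpha from raw item scores (rows = respondents, columns = items)."""
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum()
                          / X.sum(axis=1).var(ddof=1))

def mcdonald_omega(loadings, residual_vars):
    """Omega for a one-factor model: (sum lambda)^2 / ((sum lambda)^2 + sum theta)."""
    s = loadings.sum()
    return s ** 2 / (s ** 2 + residual_vars.sum())

lam = np.full(4, 0.94)            # four equal standardized loadings
theta = 1 - lam ** 2              # standardized residual variances
omega = mcdonald_omega(lam, theta)

# simulate tau-equivalent data; sample alpha converges to the same value
rng = np.random.default_rng(3)
f = rng.normal(size=(5000, 1))
X = lam * f + np.sqrt(theta) * rng.normal(size=(5000, 4))
alpha = cronbach_alpha(X)
```

With these loadings, both coefficients are approximately 0.97, close to the 0.96 reported for the RPCS.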

3.5. Item Differential Functioning

Results are shown in Table 3. With the academic semester as the grouping variable, the homogeneity of the γp coefficients across the score strata was established (χ2 < 9.0, p > 0.10). The point estimates of γp were trivial in magnitude, and their confidence intervals included 0, indicating that DIF was predominantly trivial and not statistically significant. With the general self-esteem variable (SISE), homogeneity was achieved for items one through three but not for item four, where score stratum 3 showed a strong γp (0.71, 95% CI [0.50, 0.93], p < 0.01), very discrepant from the γp of the remaining strata (between −0.42 and 0.33), none of which were statistically significant. For items one and three, γp was trivial in magnitude and not statistically significant, although the confidence intervals indicate that the population variation might be substantial; their negative orientation suggests that the proportion of responses differs between the semesters. In contrast, items two and four showed statistically significant, strong coefficients (between 0.35 and 0.45), whose population variation could produce small coefficients. The γp for item four should be interpreted with caution due to the slight heterogeneity of γp across the RPCS score strata.
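The γ coefficient underlying this DIF analysis is the Goodman–Kruskal gamma; the γp reported in the text is its partial version, obtained by computing gamma between item response and group within each total-score stratum. A minimal sketch of the plain gamma (the within-stratum computation would simply apply this function to each stratum's subsample):

```python
import numpy as np

def goodman_kruskal_gamma(x, y):
    """Gamma = (C - D) / (C + D) over concordant/discordant pairs (ties dropped)."""
    x, y = np.asarray(x), np.asarray(y)
    conc = disc = 0
    for i in range(len(x) - 1):
        s = np.sign(x[i + 1:] - x[i]) * np.sign(y[i + 1:] - y[i])
        conc += int(np.sum(s > 0))
        disc += int(np.sum(s < 0))
    return (conc - disc) / (conc + disc)

# small illustrative pair: 4 concordant pairs, 1 discordant, 1 tied -> 3/5
g = goodman_kruskal_gamma([1, 1, 2, 3], [2, 1, 4, 3])
```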

3.6. Association with Other Variables Evidence

Results are shown in Table 4. Regarding satisfaction with studies (SSS-B score), the baseline model, with gender, semester, and tangible social support (SSS score) as predictors, was not statistically significant: F(3, 249) = 0.66, f2 = 0.1 (small effect) [99]. Including the RPCS increased the explained variance beyond sampling error, F(4, 248) = 2.843, p < 0.01, with a moderate effect size (f2 = 0.26). The raw difference from the baseline variance (∆R2 = 0.04) was statistically significant, F(1, 248) = 9.31, p < 0.01, and the magnitude of this localized difference, relative to the baseline model [100], was approximately moderate (f2 = 0.15).
Regarding anxiety symptoms (GAD-2 score), with gender, semester, and tangible support (SSS score) as the block 1 predictors, the model including the RPCS (F(4, 247) = 5.86, p < 0.01) presented a large effect size (f2 = 0.42), whereas without the RPCS the effect was approximately moderate (f2 = 0.30; F(3, 247) = 4.73, p < 0.01). The difference from the baseline variance (∆R2 = 0.033) [100] was statistically significant, F(1, 246) = 8.83, p < 0.01, with a small local magnitude (f2 = 0.087). According to the semi-partial correlation (rsp = 0.19), the unique contribution of perceived research competency to general anxiety symptoms (GAD-2) was 3.5%.
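The effect sizes in these hierarchical regressions follow directly from the R2 values. Cohen's f2 for the increment of an added block is (R2_full − R2_reduced)/(1 − R2_full), and the unique contribution of a single predictor is the squared semi-partial correlation; squaring the reported rsp reproduces the unique-variance figure up to the rounding of rsp itself.

```python
def f_squared_increment(r2_full, r2_reduced):
    """Cohen's f^2 for the R^2 increment contributed by an added predictor block."""
    return (r2_full - r2_reduced) / (1 - r2_full)

rsp = 0.19                   # reported semi-partial correlation of the RPCS
unique_variance = rsp ** 2   # squared semi-partial r = unique variance share (~0.036)
```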

4. Discussion

The study’s objective was to evaluate the psychometric properties of the Perceived Research Competencies Scale, a construct developed to conceptually approximate undergraduate students’ motivation for research. Regarding the internal structure, a high linear relationship was found between the items (>60% shared variance). This finding could indicate content redundancy, because the items may describe the same behaviors phrased differently. However, the content was derived from the original version with few modifications, and the items expressed concurrent variations of behavior. Since the instrument has been adapted similarly in other situations (see cited literature), the variant made here is one extension of the possible versions across behavioral areas. The changes to the instrument served to contextualize students’ self-reporting appropriately.
Given the high statistical similarity between items on almost all the parameters (e.g., factor loadings, distributions, and variability), obtaining a measure based on a single item could be considered. Single-item evaluation of a construct is recommended when the construct is unidimensional and the items have similar psychometric properties in the complete measure [101]. As these characteristics appear to be fulfilled by the RPCS, it seems feasible to choose an item that is psychometrically interchangeable with the rest but sensitive to criteria of interest, such as research self-efficacy and involvement in research activities.
The high inter-item correlations may not seem surprising for several reasons: the small number of items, the tendency to high inter-item covariation in other studies, and the high specificity of the measured construct. However, empirical verification is required not only to ensure the dimensionality of the measure but also to identify other psychometrically relevant characteristics. One explored in the present study was the psychometric similarity of the items (tau-equivalence), in which a discrepancy was found between statistical identification and its practical consequence. Although the tau-equivalent model was statistically inferior to the congeneric model (i.e., unconstrained factor loadings), this did not impact the internal consistency estimate made by the α coefficient in any serious manner. Usually, the discrepancies in the magnitude of the factor loadings between the items decrease the alpha coefficient’s size [96]. Nevertheless, in the present study, the alpha was not different from the omega coefficient. Therefore, the α coefficient can be calculated with little risk of underestimating it because the items similarly represent its construct.
Regarding item differential functioning, this did not seem to be associated with psychometric differences caused by the semester of study because the relationship between the two was trivial or around zero. This conclusion implies that if there are variations in perceived competency among students, these variations are mainly due to variations in the latent construct. On the other hand, general self-esteem did produce differential functioning in two items (“I feel capable of carrying out the necessary research activities” and “I feel that I can face the challenge of doing the research activities well”) at levels that varied between small and large; this variation occurred among those with higher self-esteem. This finding indicates that both items can co-vary with self-esteem even by keeping the research competencies constant. Presently, it is not clear how these specific contents of the new instrument function differentially concerning generally perceived self-esteem. Still, self-esteem is an identifiable moderator that should be considered in future studies to investigate the differential functioning of items. Other studies [102,103] have verified its moderating impact, and it may be a variable that requires further research.
While estimating the RPCS’ validity concerning students’ satisfaction with undergraduate studies, we found a statistically significant positive contribution after controlling for the variability of students’ gender, semester of study, and tangible support. This finding suggests that satisfaction with undergraduate studies is linked to at least two things: (a) students’ perceived effectiveness when participating in research activities and (b) the experiences associated with that participation. If one characteristic that describes an educational institution is students’ satisfaction with their own behavior, then the experiences of effective participation apparently play an explanatory role in this emotional experience. On the other hand, the RPCS also contributed independently to the intensity (rsp = −0.18, 3.2%) and direction (i.e., negative correlation) of anxious symptomatology, and positively to students’ satisfaction with undergraduate studies. Both results suggest that perceived research competency contributes, to a small extent, to reducing general anxiety, possibly because of the link between anxiety and academic performance [52].
Given these validity findings, the perceived competency to undertake research, as estimated by the new instrument (RPCS), maintained theoretically consistent relationships with university students’ behaviors, in line with literature that directly or indirectly links perceived research competency with anxiety [12]. Due to the link between motivation and perceived competency [10], the latter can promote students’ enjoyment of research courses, increased effort to complete assigned tasks, and possibly a persistent orientation toward achieving academic goals.
One association found was the linear link between perceived self-efficacy scores for research and tangible social support. It may indicate that students perceive the importance of others’ assistance and its (quantitative) availability at the time of need, potentially including people involved in academics on and off campus as well as the associated social experiences and their benefits. Indeed, there is evidence that available tangible support is linearly related to health outcomes [104].
The present study implies that evaluating motivational factors in research teaching, such as perceived competency, can help monitor students’ acquisition of research competencies in developing research projects and writing publishable manuscripts [2,9]. In this way, the RPCS can be used to measure change between training periods in research skills, to establish a baseline, and to assess the variability of perceived research competence in multilevel groupings. Another implication is that this study adds theoretical content that frames perceived research competency as a fertile construct for research and for teaching research skills. Indeed, the initial results of this study define a construct directly derived from a motivational approach that can be added to the intra-individual variables used to explore research interest and participation. Because this construct is derived from a model with apparent cross-cultural validity, perceived research competency may also be cross-cultural. Finally, these competencies can be integrated into the objectives of high-impact experiences aimed at stimulating engagement with individual and institutional goals, depth and breadth of learning, and collaboration [49,105,106]. This scope of activities is an opportunity possibly not exclusive to the context of the present study sample (Peru) but rather universal in the university setting.
The results of this research may lead the user to consider the adapted instrument feasible, i.e., it meets several characteristics of a practical measurement: reduced completion time, low implementation costs, and comprehensibility [65]. The latter is maximized because (a) the new context of using the PCS for student research activities is not foreign to students, who can connect directly with the content of the RPCS, and (b) during construction (translation and adaptation) and administration to the students in the sample, questions and concerns about interpretability and clarity were few and insubstantial. On the other hand, the RPCS maintains high factor loadings (>0.70) [107], an indicator of the strength of item validity for the measured construct.
As a corollary, the measurement instrument developed here is brief. One can reasonably infer that significant changes in the magnitude of the factor loadings are unlikely because they are high (>0.70) [107], and the small number of items may reduce capitalization on chance. However, these results need to be evaluated in new samples for safe generalization [108,109].
The findings obtained must be interpreted in light of their limitations. First, the sampling of participants does not ensure population representativeness; therefore, the generalization of the descriptive and correlational information can only indicate the variability of the constructs in the sample. Second, a larger sample size is needed to strengthen the stability of the results. Third, a measure of social desirability was not included; consequently, the extent to which this attribute added irrelevant systematic variance cannot be estimated.

5. Conclusions

This study developed a new adaptation of a perceived competencies measurement instrument focused on research skills: the Research Perceived Competencies Scale (RPCS). The RPCS was applied to a sample of undergraduate students, and good internal structure properties were obtained, evaluated by both non-parametric and parametric methodologies. Specifically, the RPCS showed strong item relationships with its latent construct (high factor loadings); a high level of internal consistency reliability at the score level (greater than 0.90) and at the item level (greater than 0.50); theoretically coherent associations, with near-zero associations for demographic variables (gender, age, and semester of study); and low but statistically significant correlations with general self-esteem. Differential item functioning was non-existent or of low magnitude for some items, influenced by perceived general self-esteem. The items in this new version are statistically similar in distribution and function as a unit, yet their content is not redundant. The RPCS score is associated with study satisfaction and general anxiety symptoms. The brevity of this instrument and the satisfactory validity evidence obtained indicate that this new adaptation can contribute substantially to the teaching of research and to the efficiency of student participation in research activities.

Author Contributions

Conceptualization: C.M.-S. and M.F.-A.; methodology: C.M.-S.; software: C.M.-S.; validation: C.M.-S. and J.F.-B.; formal analysis: C.M.-S.; investigation: C.M.-S., M.F.-A. and F.T.-T.; resources: F.T.-T.; data curation: C.M.-S.; writing—original draft preparation: C.M.-S. and M.F.-A.; writing—review and editing: C.M.-S., M.F.-A., J.F.-B., G.M.C. and F.T.-T.; visualization: C.M.-S., J.F.-B. and F.T.-T.; supervision: C.M.-S., M.F.-A., G.M.C. and F.T.-T.; project administration: M.F.-A., G.M.C. and F.T.-T.; funding acquisition: F.T.-T. All authors have read and agreed to the published version of the manuscript.

Funding

This work is one of the results of the HIM/2015/017/SSA.1207 research project: “Effects of mindfulness training on the psychological distress and quality of life of the family caregiver”; Main researcher: Filiberto Toledano-Toledano. Federal funds for health research supported the present research. It was approved by the Research, Ethics and Biosafety Commissions (Comisiones de Investigación, Ética y Bioseguridad), Hospital Infantil de México Federico Gómez National Institute of Health. The source of federal funds did not influence the study design, data collection, analysis, interpretation, or decisions regarding publication.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Research, Ethics and Biosafety Commissions (Comisiones de Investigación, Ética y Bioseguridad), Hospital Infantil de México Federico Gómez National Institute of Health.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors without undue reservation.

Acknowledgments

The authors acknowledge the financial and technical support of the Writing Lab, Institute for the Future of Education, Tecnologico de Monterrey, Mexico, in the production of this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Murtonen, M.; Olkinuora, E.; Tynjälä, P.; Lehtinen, E. “Do I need research skills in working life?”: University students’ motivation and difficulties in quantitative methods courses. High. Educ. 2008, 56, 599–612. [Google Scholar] [CrossRef]
  2. Merino-Soto, C.; Chávez-Ventura, G.; López-Fernández, V.; Chans, G.M.; Toledano-Toledano, F. Learning Self-Regulation Questionnaire (SRQ-L): Psychometric and Measurement Invariance Evidence in Peruvian Undergraduate Students. Sustainability 2022, 14, 11239. [Google Scholar] [CrossRef]
  3. Kahn, J.H. Research training environment changes: Impacts on research self-efficacy and interest. In Proceedings of the Annual Conference of the American Psychological Association, Washington, DC, USA, 4–8 August 2000. [Google Scholar]
  4. Noguez, J.; Neri, L. Research-based learning: A case study for engineering students. Int. J. Interact. Des. Manuf. 2019, 13, 1283–1295. [Google Scholar] [CrossRef]
  5. Tecnológico de Monterrey. Aprendizaje Basado en la Investigación; Tecnológico de Monterrey: Monterrey, Mexico, 2020. [Google Scholar]
  6. McCarthy, G. Motivating and Enabling Adult Learners to Develop Research Skills. Aust. J. Adult Learn. 2015, 55, 309–330. [Google Scholar]
  7. Swank, J.M.; Lambie, G.W. Development of the Research Competencies Scale. Meas. Eval. Couns. Dev. 2016, 49, 91–108. [Google Scholar] [CrossRef]
  8. Wenger, E. Communities of Practice and Social Learning Systems: The Career of a Concept. In Social Learning Systems and Communities of Practice; Blackmore, C., Ed.; Springer: London, UK, 2010; pp. 179–198. [Google Scholar]
  9. Böttcher, F.; Thiel, F. Evaluating research-oriented teaching: A new instrument to assess university students’ research competencies. High. Educ. 2018, 75, 91–110. [Google Scholar] [CrossRef]
  10. Black, A.E.; Deci, E.L. The effects of instructors’ autonomy support and students’ autonomous motivation on learning organic chemistry: A self-determination theory perspective. Sci. Educ. 2000, 84, 740–756. [Google Scholar] [CrossRef]
  11. Ryan, R.M. Control and information in the intrapersonal sphere: An extension of cognitive evaluation theory. J. Pers. Soc. Psychol. 1982, 43, 450–461. [Google Scholar] [CrossRef]
  12. Deci, E.L.; Ryan, R.M. Optimizing Students’ Motivation in the Era of Testing and Pressure: A Self-Determination Theory Perspective. In Building Autonomous Learners: Perspectives from Research and Practice Using Self-Determination Theory; Liu, W.C., Wang, J.C.K., Ryan, R.M., Eds.; Springer: Singapore, 2016; pp. 9–29. [Google Scholar]
  13. Weinert, F.E. Concept of competence: A conceptual clarification. In Definition and Selection of Competencies-Theoretical and Conceptual Foundations; Rychen, D.S., Sagalnik, L.H., Eds.; Hogrefe & Hube: Kirkland, WA, USA, 2001; pp. 45–65. [Google Scholar]
  14. Rodríguez Martínez, A.; Sierra Sánchez, V.; Falcón Linares, C.; Latorre Cosculluela, C. Key Soft Skills in the Orientation Process and Level of Employability. Sustainability 2021, 13, 3554. [Google Scholar] [CrossRef]
  15. Shen, B.; Centeio, E.; Garn, A.; Martin, J.; Kulik, N.; Somers, C.; McCaughtry, N. Parental social support, perceived competence and enjoyment in school physical activity. J. Sport Health Sci. 2018, 7, 346–352. [Google Scholar] [CrossRef]
  16. Bandura, A. Self-efficacy mechanism in human agency. Am. Psychol. 1982, 37, 122–147. [Google Scholar] [CrossRef]
  17. Deci, E.L.; Ryan, R.M. The “What” and “Why” of Goal Pursuits: Human Needs and the Self-Determination of Behavior. Psychol. Inq. 2000, 11, 227–268. [Google Scholar] [CrossRef]
  18. Rowell, L.; Hong, E. Academic Motivation: Concepts, Strategies, and Counseling Approaches. Prof. Sch. Couns. 2013, 16, 158–171. [Google Scholar] [CrossRef]
  19. Jackson, S.A.; Kleitman, S.; Howie, P.; Stankov, L. Cognitive Abilities, Monitoring Confidence, and Control Thresholds Explain Individual Differences in Heuristics and Biases. Front. Psychol. 2016, 7, 1559. [Google Scholar] [CrossRef]
  20. Stankov, L.; Lee, J.; Luo, W.; Hogan, D.J. Confidence: A better predictor of academic achievement than self-efficacy, self-concept and anxiety? Learn. Individ. Differ. 2012, 22, 747–758. [Google Scholar] [CrossRef]
  21. Landrum, R.E.; Nelsen, L.R. The Undergraduate Research Assistantship: An Analysis of the Benefits. Teach. Psychol. 2002, 29, 15–19. [Google Scholar] [CrossRef]
  22. Adedokun, O.A.; Bessenbacher, A.B.; Parker, L.C.; Kirkham, L.L.; Burgess, W.D. Research skills and STEM undergraduate research students’ aspirations for research careers: Mediating effects of research self-efficacy. J. Res. Sci. Teach. 2013, 50, 940–951. [Google Scholar] [CrossRef]
  23. Indorf, J.L.; Weremijewicz, J.; Janos, D.P.; Gaines, M.S. Adding Authenticity to Inquiry in a First-Year, Research-Based, Biology Laboratory Course. CBE Life Sci. Educ. 2019, 18, ar38. [Google Scholar] [CrossRef]
  24. Sandquist, E.J.; Cervato, C.; Ogilvie, C. Positive Affective and Behavioral Gains of First-Year Students in Course-Based Research across Disciplines. Scholarsh. Pract. Undergrad. Res. 2019, 2, 45–57. [Google Scholar] [CrossRef]
  25. Wolkow, T.D.; Jenkins, J.; Durrenberger, L.; Swanson-Hoyle, K.; Hines, L.M. One Early Course-Based Undergraduate Research Experience Produces Sustainable Knowledge Gains, but only Transient Perception Gains. J. Microbiol. Biol. Educ. 2019, 20, 10. [Google Scholar] [CrossRef]
  26. Szteinberg, G.A.; Weaver, G.C. Participants’ reflections two and three years after an introductory chemistry course-embedded research experience. Chem. Educ. Res. Pract. 2013, 14, 23–35. [Google Scholar] [CrossRef]
  27. Winkelmann, K.; Baloga, M.; Marcinkowski, T.; Giannoulis, C.; Anquandah, G.; Cohen, P. Improving Students’ Inquiry Skills and Self-Efficacy through Research-Inspired Modules in the General Chemistry Laboratory. J. Chem. Educ. 2015, 92, 247–255. [Google Scholar] [CrossRef]
  28. Rodenbusch, S.E.; Hernandez, P.R.; Simmons, S.L.; Dolan, E.L. Early Engagement in Course-Based Research Increases Graduation Rates and Completion of Science, Engineering, and Mathematics Degrees. CBE Life Sci. Educ. 2016, 15, ar20. [Google Scholar] [CrossRef] [PubMed]
  29. Nagda, B.A.; Gregerman, S.R.; Jonides, J.; von Hippel, W.; Lerner, J.S. Undergraduate Student-Faculty Research Partnerships Affect Student Retention. Rev. High. Educ. 1998, 22, 55–72. [Google Scholar] [CrossRef]
  30. Bangera, G.; Brownell, S.E. Course-Based Undergraduate Research Experiences Can Make Scientific Research More Inclusive. CBE Life Sci. Educ. 2014, 13, 602–606. [Google Scholar] [CrossRef]
  31. Meerah, T.S.M.; Osman, K.; Zakaria, E.; Ikhsan, Z.H.; Krish, P.; Lian, D.K.C.; Mahmod, D. Developing an Instrument to Measure Research Skills. Procedia-Soc. Behav. Sci. 2012, 60, 630–636. [Google Scholar] [CrossRef]
  32. Williams, G.C.; Deci, E.L. Internalization of biopsychosocial values by medical students: A test of self-determination theory. J. Pers. Soc. Psychol. 1996, 70, 767–779. [Google Scholar] [CrossRef]
  33. Williams, G.C.; Grow, V.M.; Freedman, Z.R.; Ryan, R.M.; Deci, E.L. Motivational predictors of weight loss and weight-loss maintenance. J. Pers. Soc. Psychol. 1996, 70, 115–126. [Google Scholar] [CrossRef]
  34. Halvari, A.E.M.; Halvari, H. Motivational Predictors of Change in Oral Health: An Experimental Test of Self-Determination Theory. Motiv. Emot. 2006, 30, 294. [Google Scholar] [CrossRef]
  35. Williams, G.C.; Freedman, Z.R.; Deci, E.L. Supporting Autonomy to Motivate Patients With Diabetes for Glucose Control. Diabetes Care 1998, 21, 1644–1651. [Google Scholar] [CrossRef]
  36. Williams, G.C.; McGregor, H.A.; Zeldman, A.; Freedman, Z.R.; Deci, E.L. Testing a Self-Determination Theory Process Model for Promoting Glycemic Control Through Diabetes Self-Management. Health Psychol. 2004, 23, 58–66. [Google Scholar] [CrossRef]
  37. Williams, G.C.; McGregor, H.A.; King, D.; Nelson, C.C.; Glasgow, R.E. Variation in perceived competence, glycemic control, and patient satisfaction: Relationship to autonomy support from physicians. Patient Educ. Couns. 2005, 57, 39–45. [Google Scholar] [CrossRef]
  38. Williams, G.C.; Lynch, M.; Glasgow, R.E. Computer-assisted intervention improves patient-centered diabetes care by increasing autonomy support. Health Psychol. 2007, 26, 728–734. [Google Scholar] [CrossRef]
  39. Silva, M.N.; Vieira, P.N.; Coutinho, S.R.; Minderico, C.S.; Matos, M.G.; Sardinha, L.B.; Teixeira, P.J. Using self-determination theory to promote physical activity and weight control: A randomized controlled trial in women. J. Behav. Med. 2010, 33, 110–122. [Google Scholar] [CrossRef]
  40. Williams, G.C.; McGregor, H.A.; Sharp, D.; Levesque, C.; Kouides, R.W.; Ryan, R.M.; Deci, E.L. Testing a self-determination theory intervention for motivating tobacco cessation: Supporting autonomy and competence in a clinical trial. Health Psychol. 2006, 25, 91–101. [Google Scholar] [CrossRef]
  41. Bonilla, H.; Ortiz-Llorens, M.; Barger, M.K.; Rodríguez, C.; Cabrera, M. Implementation of a programme to develop research projects in a school of midwifery in Santiago, Chile. Midwifery 2018, 64, 60–62. [Google Scholar] [CrossRef]
  42. Cáceres, M.G. Neither boring nor difficult … just unattractive. Challenges of training in research methodology at higher level. Rev. Latinoam. Metodol. Investig. Soc. 2021, 39–53. Available online: http://relmis.com.ar/ojs/index.php/relmis/issue/view/paradigmas_teorias_metodologias_abordajes (accessed on 17 June 2022).
  43. Chávez Vera, K.J.; Calanchez Urribarri, Á.d.V.; Tuesta Panduro, J.A.; Valladolid Benavides, A.M. Formación de competencias investigativas en los estudiantes universitarios. Rev. Univ. Soc. 2022, 14, 426–434. [Google Scholar]
  44. Rueda Milachay, L.J.; Torres Anaya, L.; Córdova García, U. Desarrollo de habilidades investigativas en estudiantes de una universidad peruana. Conrado 2022, 18, 66–72. [Google Scholar]
  45. Hosein, A.; Rao, N. Students’ reflective essays as insights into student centred-pedagogies within the undergraduate research methods curriculum. Teach. High. Educ. 2017, 22, 109–125. [Google Scholar] [CrossRef]
  46. Pavlova, I.V.; Remington, D.L.; Horton, M.; Tomlin, E.; Hens, M.D.; Chen, D.; Willse, J.; Schug, M.D. An introductory biology research-rich laboratory course shows improvements in students’ research skills, confidence, and attitudes. PLoS ONE 2021, 16, e0261278. [Google Scholar] [CrossRef]
  47. Lachance, K.; Heustis, R.J.; Loparo, J.J.; Venkatesh, M.J. Self-Efficacy and Performance of Research Skills among First-Semester Bioscience Doctoral Students. CBE Life Sci. Educ. 2020, 19, ar28. [Google Scholar] [CrossRef]
  48. Sebastian, M.; Robinson, M.A.; Dumeny, L.; Dyson, K.A.; Fantone, J.C.; McCormack, W.T.; Stratford May, W. Training methods that improve MD–PhD student self-efficacy for clinical research skills. J. Clin. Transl. Sci. 2019, 3, 316–324. [Google Scholar] [CrossRef]
  49. Elgren, T.; Hensel, N. Undergraduate research experiences: Synergies between scholarship and teaching. Peer Rev. 2006, 8, 4. [Google Scholar]
  50. Merino-Soto, C.; Dominguez-Lara, S.; Fernández-Arata, M. Validación inicial de una Escala Breve de Satisfacción con los Estudios en estudiantes universitarios de Lima. Educ. Medica 2017, 18, 74–77. [Google Scholar] [CrossRef]
  51. Arshad, M.; Zaidi, S.M.I.H.; Mahmood, K. Self-Esteem & Academic Performance among University Students. J. Educ. Pract. 2015, 6, 156–162. [Google Scholar]
  52. Eisenberg, D.; Golberstein, E.; Hunt, J.B. Mental Health and Academic Success in College. B.E. J. Econ. Anal. Policy 2009, 9, 40. [Google Scholar] [CrossRef]
  53. Mahmoud, J.S.R.; Staten, R.T.; Lennie, T.A.; Hall, L.A. The Relationships of Coping, Negative Thinking, Life Satisfaction, Social Support, and Selected Demographics With Anxiety of Young Adult College Students. J. Child Adolesc. Psychiatr. Nurs. 2015, 28, 97–108.
  54. Salmela-Aro, K.; Nurmi, J.-E. Self-esteem during university studies predicts career characteristics 10 years later. J. Vocat. Behav. 2007, 70, 463–477.
  55. Stallman, H.M. Psychological distress in university students: A comparison with general population data. Aust. Psychol. 2010, 45, 249–257.
  56. Dominguez-Lara, S.A.; Campos-Uscanga, Y. Influencia de la satisfacción con los estudios sobre la procrastinación académica en estudiantes de psicología: Un estudio preliminar. Liberabit 2017, 23, 123–135.
  57. Blake, R.L., Jr.; McKay, D.A. A single-item measure of social supports as a predictor of morbidity. J. Fam. Pract. 1986, 22, 82–84.
  58. Menéndez Villalva, C.; Montes Martínez, A.; Gamarra Mondelo, T.; Núñez Losada, C.; Alonso Fachado, A.; Bujan Garmendia, S. Influencia del apoyo social en pacientes con hipertensión arterial esencial. Aten. Primaria 2003, 31, 506–513.
  59. Kroenke, K.; Spitzer, R.L.; Williams, J.B.W.; Monahan, P.O.; Löwe, B. Anxiety Disorders in Primary Care: Prevalence, Impairment, Comorbidity, and Detection. Ann. Intern. Med. 2007, 146, 317–325.
  60. Robins, R.W.; Hendin, H.M.; Trzesniewski, K.H. Measuring Global Self-Esteem: Construct Validation of a Single-Item Measure and the Rosenberg Self-Esteem Scale. Pers. Soc. Psychol. Bull. 2001, 27, 151–161.
  61. Bagley, C. Robustness of Two Single-Item Self-Esteem Measures: Cross-Validation with a Measure of Stigma in a Sample of Psychiatric Patients. Percept. Mot. Skills 2005, 101, 335–338.
  62. Robins, R.W.; Trzesniewski, K.H.; Tracy, J.L.; Gosling, S.D.; Potter, J. Global self-esteem across the life span. Psychol. Aging 2002, 17, 423–434.
  63. Dominguez-Lara, S. Primeras evidencias de validez y confiabilidad de la Single-Item Self-Esteem Scale (SISE) en universitarios peruanos. Educ. Medica 2020, 21, 63–64.
  64. Dominguez-Lara, S.A.; Merino-Soto, C.; Gutiérrez-Torres, A. Estudio Estructural de una Medida Breve de Inteligencia Emocional en Adultos: El EQ-i-M20. Rev. Iberoam. de Diagnostico y Evaluacion Psicol. 2018, 49, 5–21.
  65. Hall, D.A.; Zaragoza Domingo, S.; Hamdache, L.Z.; Manchaiah, V.; Thammaiah, S.; Evans, C.; Wong, L.L.N. A good practice guide for translating and adapting hearing-related questionnaires for different languages and cultures. Int. J. Audiol. 2018, 57, 161–175.
  66. Beaton, D.E.; Bombardier, C.; Guillemin, F.; Ferraz, M.B. Guidelines for the process of cross-cultural adaptation of self-report measures. Spine (Phila Pa 1976) 2000, 25, 3186–3191.
  67. Hawkins, M.; Cheng, C.; Elsworth, G.R.; Osborne, R.H. Translation method is validity evidence for construct equivalence: Analysis of secondary data routinely collected during translations of the Health Literacy Questionnaire (HLQ). BMC Med. Res. Methodol. 2020, 20, 130.
  68. Wild, D.; Grove, A.; Martin, M.; Eremenco, S.; McElroy, S.; Verjee-Lorenz, A.; Erikson, P. Principles of Good Practice for the Translation and Cultural Adaptation Process for Patient-Reported Outcomes (PRO) Measures: Report of the ISPOR Task Force for Translation and Cultural Adaptation. Value Health 2005, 8, 94–104.
  69. Sociedad Mexicana de Psicología. Código ético del psicólogo, 5th ed.; Trillas: Mexico City, Mexico, 2010.
  70. American Psychological Association. Ethical Principles of Psychologists and Code of Conduct. Available online: https://www.apa.org/ethics/code (accessed on 18 May 2019).
  71. World Medical Association. World Medical Association Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects. JAMA 2013, 310, 2191–2194.
  72. Silberzahn, R.; Uhlmann, E.L.; Martin, D.P.; Anselmi, P.; Aust, F.; Awtrey, E.; Bahník, Š.; Bai, F.; Bannard, C.; Bonnier, E.; et al. Many Analysts, One Data Set: Making Transparent How Variations in Analytic Choices Affect Results. Adv. Meth. Pract. Psychol. Sci. 2018, 1, 337–356.
  73. Del Giudice, M.; Gangestad, S.W. A Traveler’s Guide to the Multiverse: Promises, Pitfalls, and a Framework for the Evaluation of Analytic Decisions. Adv. Meth. Pract. Psychol. Sci. 2021, 4, 1–15.
  74. Dunn, A.M.; Heggestad, E.D.; Shanock, L.R.; Theilgard, N. Intra-individual Response Variability as an Indicator of Insufficient Effort Responding: Comparison to Other Indicators and Relationships with Individual Differences. J. Bus. Psychol. 2018, 33, 105–121.
  75. Johnson, J.A. Ascertaining the validity of individual protocols from Web-based personality inventories. J. Res. Pers. 2005, 39, 103–129.
  76. Meade, A.W.; Craig, S.B. Identifying careless responses in survey data. Psychol. Methods 2012, 17, 437–455.
  77. Mahalanobis, P.C. On the generalized distance in statistics. Proc. Natl. Inst. Sci. 1936, 2, 49–55.
  78. Yentes, R.D.; Wilhelm, F. Careless: Procedures for Computing Indices of Careless Responding. R Package, version 1.1.0. Available online: https://github.com/ryentes/careless (accessed on 17 June 2022).
  79. Tomczak, M.; Tomczak, E. The need to report effect size estimates revisited. An overview of some recommended measures of effect size. Trends Sport Sci. 2014, 1, 19–25.
  80. Mizumoto, A. Langtest, version 1.0; 2015; Available online: https://langtest.jp/shiny/npt/ (accessed on 17 June 2022).
  81. Korkmaz, S.; Goksuluk, D.; Zararsiz, G. MVN: An R Package for Assessing Multivariate Normality. R J. 2014, 6, 151–162.
  82. Mokken, R.J. A Theory and Procedure of Scale Analysis: With Applications in Political Research; De Gruyter Mouton: Berlin, Germany; New York, NY, USA, 2011.
  83. Brodin, U.B. A ‘3 Step’ IRT Strategy for Evaluation of the Use of Sum Scores in Small Studies with Questionnaires Using Items with Ordered Response Levels; Karolinska Institutet: Stockholm, Sweden, 2014.
  84. van der Ark, L.A. New Developments in Mokken Scale Analysis in R. J. Stat. Softw. 2012, 48, 1–27.
  85. Sijtsma, K.; van der Ark, L.A. A tutorial on how to do a Mokken scale analysis on your test and questionnaire data. Br. J. Math. Stat. Psychol. 2017, 70, 137–158.
  86. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2021; Available online: https://www.R-project.org/ (accessed on 17 June 2022).
  87. Muthén, B. Goodness of Fit with Categorical and Other Non-Normal Variables. In Testing Structural Equation Models; Bollen, K.A., Long, J.S., Eds.; Sage Publications: Newbury Park, CA, USA, 1993; pp. 205–243.
  88. Rosseel, Y. lavaan: An R Package for Structural Equation Modeling. J. Stat. Softw. 2012, 48, 1–36.
  89. Schnohr, C.W.; Kreiner, S.; Due, E.P.; Currie, C.; Boyce, W.; Diderichsen, F. Differential Item Functioning of a Family Affluence Scale: Validation Study on Data from HBSC 2001/02. Soc. Indic. Res. 2008, 89, 79–95.
  90. Healey, J.F. Statistics: A Tool for Social Research, 9th ed.; Wadsworth: Belmont, CA, USA, 2012.
  91. Mueller, M. IARM: Item Analysis in Rasch Models. R Package, version 0.4.2; 2020; Available online: https://CRAN.R-project.org/package=iarm (accessed on 17 June 2022).
  92. Wanous, J.P.; Reichers, A.E. Estimating the Reliability of a Single-Item Measure. Psychol. Rep. 1996, 78, 631–634.
  93. Zijlmans, E.A.O.; van der Ark, L.A.; Tijmstra, J.; Sijtsma, K. Methods for Estimating Item-Score Reliability. Appl. Psychol. Meas. 2018, 42, 553–570.
  94. Zijlmans, E.A.O.; Tijmstra, J.; van der Ark, L.A.; Sijtsma, K. Item-Score Reliability in Empirical-Data Sets and Its Relationship With Other Item Indices. Educ. Psychol. Meas. 2018, 78, 998–1020.
  95. Molenaar, I.W.; Sijtsma, K. Mokken’s approach to reliability estimation extended to multicategory items. Kwantitatieve Methoden 1988, 9, 115–126.
  96. Green, S.B.; Yang, Y. Reliability of Summed Item Scores Using Structural Equation Modeling: An Alternative to Coefficient Alpha. Psychometrika 2009, 74, 155–167.
  97. Aloe, A.M.; Becker, B.J. An Effect Size for Regression Predictors in Meta-Analysis. J. Educ. Behav. Stat. 2012, 37, 278–297.
  98. Bodner, T.E. Standardized Effect Sizes for Moderated Conditional Fixed Effects with Continuous Moderator Variables. Front. Psychol. 2017, 8, 562.
  99. Cohen, J. Statistical Power Analysis for the Behavioral Sciences, 2nd ed.; Lawrence Erlbaum Associates: Hillsdale, NJ, USA, 1988.
  100. Selya, A.; Rose, J.; Dierker, L.; Hedeker, D.; Mermelstein, R. A Practical Guide to Calculating Cohen’s f2, a Measure of Local Effect Size, from PROC MIXED. Front. Psychol. 2012, 3, 111.
  101. Fisher, G.G.; Matthews, R.A.; Gibbons, A.M. Developing and investigating the use of single-item measures in organizational research. J. Occup. Health Psychol. 2016, 21, 3–23.
  102. Corning, A.F. Self-esteem as a moderator between perceived discrimination and psychological distress among women. J. Couns. Psychol. 2002, 49, 117–126.
  103. Mirjalili, R.S.; Farahani, H.A.; Akbari, Z. Self-esteem as moderator of the relationship between self-estimated general intelligence and psychometric intelligence. Procedia-Soc. Behav. Sci. 2011, 30, 649–653.
  104. Williamson, H.A.; LeFevre, M.L. Tangible assistance: A simple measure of social support predicts pregnancy outcome. Fam. Pract. Res. J. 1992, 12, 289–295.
  105. Bielecki, C.; Wingenbach, G.; Koswatta, T. Undergraduates’ perceived interest and factors affecting participation in selected high-impact experiences. Res. High. Educ. 2018, 34.
  106. Giuliano, T.A.; Kimbell, I.E.; Olson, E.S.; Howell, J.L. High impact: Examining predictors of faculty-undergraduate coauthored publication and presentation in psychology. PLoS ONE 2022, 17, e0265074.
  107. Ximénez, C. A Monte Carlo Study of Recovery of Weak Factor Loadings in Confirmatory Factor Analysis. Struct. Equ. Modeling 2006, 13, 587–614.
  108. Coppock, A.; Leeper, T.J.; Mullinix, K.J. Generalizability of heterogeneous treatment effect estimates across samples. Proc. Natl. Acad. Sci. USA 2018, 115, 12441–12446.
  109. Mullinix, K.J.; Leeper, T.J.; Druckman, J.N.; Freese, J. The Generalizability of Survey Experiments. J. Exp. Political Sci. 2015, 2, 109–138.
Table 1. Item descriptive and correlation statistics.

Descriptives

|       | RPCS1 | RPCS2   | RPCS3   | RPCS4   | Total |
|-------|-------|---------|---------|---------|-------|
| M     | 4.62  | 4.75    | 4.87    | 4.91    | 19.15 |
| SD    | 1.40  | 1.37    | 1.37    | 1.38    | 5.18  |
| Skew. | −0.21 | −0.29 * | −0.26   | −0.47 * | −0.25 |
| Kurt. | −0.26 | −0.27   | −0.55 * | −0.14   | −0.31 |
| K2    | 3.18  | 5.26    | 10.14   | 11.15   | 4.54  |

Correlations

|             | RPCS1   | RPCS2   | RPCS3   | RPCS4 | Age     | Gender   | Semester | SISE    |
|-------------|---------|---------|---------|-------|---------|----------|----------|---------|
| RPCS1       | 1       |         |         |       | 0.14 *  | −0.20 ** | 0.02     | 0.23 ** |
| RPCS2       | 0.89 ** | 1       |         |       | 0.17 ** | −0.20 ** | 0.001    | 0.29 ** |
| RPCS3       | 0.82 ** | 0.84 ** | 1       |       | 0.18 ** | −0.15 ** | 0.005    | 0.25 ** |
| RPCS4       | 0.81 ** | 0.83 ** | 0.88 ** | 1     | 0.15 ** | −0.11 ** | 0.01     | 0.24 ** |
| Total score |         |         |         |       | 0.18 ** | −0.20 ** | 0.003    | 0.21 ** |

Note. Skew.: skewness. Kurt.: kurtosis. K2: D’Agostino normality test. SISE: Single-Item Self-Esteem scale. RPCS1–RPCS4: items of the Research Perceived Competencies Scale. * p < 0.05. ** p < 0.01.
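As a reading aid for the descriptives in Table 1, the following is a minimal Python sketch of the moment-based statistics reported there (mean, sample SD, skewness g1, excess kurtosis g2). D’Agostino’s K² additionally combines standardized skewness and kurtosis statistics and is not reproduced here; the function name and structure are illustrative, not taken from the authors’ analysis code.

```python
import math

def descriptives(xs):
    """Mean, sample SD, and moment-based skewness / excess kurtosis."""
    n = len(xs)
    m = sum(xs) / n
    dev = [x - m for x in xs]
    m2 = sum(d ** 2 for d in dev) / n   # 2nd central moment
    m3 = sum(d ** 3 for d in dev) / n   # 3rd central moment
    m4 = sum(d ** 4 for d in dev) / n   # 4th central moment
    sd = math.sqrt(n * m2 / (n - 1))    # sample standard deviation
    skew = m3 / m2 ** 1.5               # g1: 0 for a symmetric sample
    kurt = m4 / m2 ** 2 - 3.0           # g2: excess kurtosis (normal = 0)
    return m, sd, skew, kurt
```

For example, the symmetric sample [1, 2, 3, 4, 5] yields skewness 0 and a platykurtic (negative) excess kurtosis, mirroring the mildly non-normal item distributions in Table 1.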
Table 2. Non-parametric (Mokken Scaling Analysis) and linear model results.

|       | H    | Mon. #vi | Mon. #zsig | Mon. crit | IIO #vi | IIO #zsig | IIO crit | Ritc | F    | rii  |
|-------|------|----------|------------|-----------|---------|-----------|----------|------|------|------|
| RPCS1 | 0.86 | 0        | 0          | 0         | 0       | 0         | 0        | 0.81 | 0.95 | 0.68 |
| RPCS2 | 0.86 | 0        | 0          | 0         | 0       | 0         | 0        | 0.84 | 0.94 | 0.73 |
| RPCS3 | 0.86 | 0        | 0          | 0         | 0       | 0         | 0        | 0.82 | 0.94 | 0.70 |
| RPCS4 | 0.83 | 0        | 0          | 0         | 0       | 0         | 0        | 0.81 | 0.91 | 0.68 |
| Total | 0.85 |          |            |           |         |           |          |      |      |      |

Note. H: scalability coefficient. Mon.: monotonicity. IIO: item invariant ordering. #vi: number of model violations. #zsig: number of statistically significant violations. crit: combined count of #vi and #zsig. Ritc: item–test correlation. F: factor loading. rii: item reliability. RPCS1–RPCS4: items of the Research Perceived Competencies Scale.
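The scalability coefficient H in Table 2 was obtained with the R mokken package for polytomous items; as a conceptual illustration only, the sketch below implements pairwise Loevinger H for the simpler dichotomous case, defined as 1 minus the ratio of observed to expected Guttman errors. The function and data names are hypothetical and do not reproduce the authors’ analysis.

```python
def pairwise_h(responses, i, j):
    """Loevinger's H_ij for two dichotomous (0/1) items.

    H_ij = 1 - F_ij / E_ij, where F_ij counts observed Guttman errors
    (failing the easier item while passing the harder one) and E_ij is
    the error count expected under marginal independence.
    """
    n = len(responses)
    p_i = sum(r[i] for r in responses) / n
    p_j = sum(r[j] for r in responses) / n
    easy, hard = (i, j) if p_i >= p_j else (j, i)  # easier = more often endorsed
    p_easy, p_hard = max(p_i, p_j), min(p_i, p_j)
    f = sum(1 for r in responses if r[easy] == 0 and r[hard] == 1)  # Guttman errors
    e = n * (1 - p_easy) * p_hard  # expected errors under independence
    return 1.0 - f / e
```

A perfect Guttman pattern (nobody passes the harder item without the easier one) gives H = 1, the upper bound approached by the very high values (0.83–0.86) in Table 2.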
Table 3. Non-parametric analysis of differential item functioning (partial gamma coefficient).

|       | Semester γp | Semester 95% CI | Semester homogeneity χ² (df) | SISE γp | SISE 95% CI | SISE homogeneity χ² (df) |
|-------|-------------|-----------------|------------------------------|---------|-------------|--------------------------|
| RPCS1 | 0.12        | −0.04, 0.28     | 7.52 (8)                     | −0.18   | −0.37, 0.01 | 34.86 (7)                |
| RPCS2 | −0.08       | −0.27, 0.10     | 5.25 (8)                     | 0.44 ** | 0.27, 0.61  | 11.73 (8)                |
| RPCS3 | 0.02        | −0.18, 0.21     | 3.72 (6)                     | −0.10   | −0.32, 0.12 | 4.90 (7)                 |
| RPCS4 | −0.10       | −0.28, 0.07     | 8.86 (7)                     | 0.36 ** | 0.20, 0.51  | 26.13 (7) **             |

Note. γp: partial gamma coefficient. SISE: Single-Item Self-Esteem scale. df: degrees of freedom. RPCS1–RPCS4: items of the Research Perceived Competencies Scale. ** p < 0.01.
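The partial gamma coefficient in Table 3 is a Goodman–Kruskal gamma between item score and group membership, pooled within strata of a conditioning variable. A minimal Python sketch of both the ordinary and the partial version follows (function names are illustrative; the published analysis used the R iarm package):

```python
def gamma(x, y):
    """Goodman-Kruskal gamma: (C - D) / (C + D) over all pairs."""
    conc = disc = 0
    for a in range(len(x)):
        for b in range(a + 1, len(x)):
            s = (x[a] - x[b]) * (y[a] - y[b])
            if s > 0:
                conc += 1  # concordant pair
            elif s < 0:
                disc += 1  # discordant pair
    return (conc - disc) / (conc + disc)

def partial_gamma(x, y, strata):
    """Partial gamma: pool concordant/discordant pairs within each stratum."""
    conc = disc = 0
    for a in range(len(x)):
        for b in range(a + 1, len(x)):
            if strata[a] != strata[b]:
                continue  # only compare respondents in the same stratum
            s = (x[a] - x[b]) * (y[a] - y[b])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    return (conc - disc) / (conc + disc)
```

Values near 0 (as for the semester comparisons in Table 3) indicate that, within strata, item responses carry no extra information about group membership, i.e., no differential item functioning.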
Table 4. Hierarchical regression to estimate validity.

|                        | BSSS β (Step 1) | BSSS β (Step 2) | BSSS rsp | GAD-2 β (Step 1) | GAD-2 β (Step 2) | GAD-2 rsp |
|------------------------|-----------------|-----------------|----------|------------------|------------------|-----------|
| R²                     | 0.09            | 0.21            |          | 0.23             | 0.30             |           |
| Gender                 | −0.03           | 0.006           | 0.006    | 0.12             | 0.08             | 0.08      |
| Semester               | 0.08            | 0.08            | 0.08     | −0.21 **         | −0.20 **         | −0.20     |
| Tangible support (SSS) | 0.03            | 0.03            | 0.03     | 0.01             | 0.02             | 0.02      |
| RPCS                   |                 | 0.19 **         | 0.19     |                  | −0.19 **         | −0.18     |

Note. BSSS: Brief Study Satisfaction Scale (study satisfaction). GAD-2: Generalized Anxiety Disorder-2 (anxiety symptoms score). SSS: Single Item Support Scale. RPCS: Research Perceived Competencies Scale. rsp: semi-partial correlation. ** p < 0.01.
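Table 4 reports semi-partial correlations (rsp), which quantify a predictor’s unique contribution after partialling the remaining covariates out of that predictor only. For the single-covariate case the semi-partial correlation has a closed form in terms of pairwise Pearson correlations, sketched below (names are illustrative; the published model partials out three covariates, not one):

```python
import math

def pearson(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def semipartial(y, x1, x2):
    """r of y with x1 after removing x2 from x1 only (one-covariate case)."""
    r_y1, r_y2, r_12 = pearson(y, x1), pearson(y, x2), pearson(x1, x2)
    return (r_y1 - r_y2 * r_12) / math.sqrt(1 - r_12 ** 2)
```

When the predictor and the covariate are uncorrelated, the semi-partial correlation reduces to the zero-order correlation, which is why the β and rsp columns in Table 4 are nearly identical.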
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Merino-Soto, C.; Fernández-Arata, M.; Fuentes-Balderrama, J.; Chans, G.M.; Toledano-Toledano, F. Research Perceived Competency Scale: A New Psychometric Adaptation for University Students’ Research Learning. Sustainability 2022, 14, 12036. https://doi.org/10.3390/su141912036

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.

