Development and Validation of the ‘Mentoring for Effective Teaching Practicum Instrument’

In the context of improving the quality of teacher education, the focus of the present work was to adapt the Mentoring for Effective Primary Science Teaching instrument so that it becomes more universal and can be used beyond the elementary science mentoring context. The adapted instrument was renamed the Mentoring for Effective Teaching Practicum Instrument. The new, validated instrument enables the assessment of trainee teachers' perceived experiences with their mentors during their two-week annual teaching practicum at elementary and high schools. In the first phase, the original 34-item Mentoring for Effective Primary Science Teaching instrument was expanded to 62 items by adding new items and items from previous works. All items were rephrased to refer to contexts beyond primary science teaching. Based on responses to the expanded instrument from 105 pre-service teachers, of whom 94 were females in their fourth year of study (approx. age 22–23 years), and on the outcomes of Principal Component and Confirmatory Factor analyses, the instrument was reviewed and shortened to 36 items classified into six dimensions: personal attributes, system requirements, pedagogical knowledge, modelling, feedback, and Information and Communication Technology. All six dimensions of the revised instrument are unidimensional, with Cronbach's alphas above 0.8 and item factor loadings above 0.6. Such an instrument could be used in follow-up studies and to improve the learning outcomes of teaching practice. Specific and general recommendations for mentees, mentors, university lecturers, and other stakeholders could thus be derived from the findings to encourage reflection and offer suggestions for the future.


Introduction
The practices and curricula of pre-service teacher education in the global context differ in almost all practical aspects, from the length of study and admission criteria to the ratio between subject content knowledge and pedagogy, the time spent in the practicum, and other factors. However, almost all curricula have similar basic blocks of subjects in common. The first block consists of subjects covering the content knowledge of a subject or subjects to be taught in the future professional career. The second block includes pedagogical subjects and professional courses accompanied by faculty-based exercises and practical work under the guidance of teachers and teaching assistants. The faculty-based subjects are sometimes accompanied by short visits to schools and educational institutions to observe a variety of teaching practices and to conduct initial teaching experiments under the supervision of teacher educators. In the third block, long-term visits to schools are led by institutional mentors and supervised by faculty members (Kundu & Basu, 2022; Nikocevig-Kurti & Saqipi, 2022; Ploj Virtič et al., 2021a).
To become a primary or secondary school teacher in Slovenia (Dolenc et al., 2021), a master's degree is required. A constituent part of educational programmes, amounting to at least 60 ECTS credits (European Credit Transfer and Accumulation System), is devoted to pedagogical subjects and teaching practice. A teaching practicum accompanied by institutional mentors is compulsory and usually lasts four weeks over the course of study. After working in schools for about a year, prospective teachers can take a state exam that grants them a lifetime teaching licence.
The primary intention of the paper was to find a way to assess complex school-based teaching practices. An authentic school-based learning experience that involves 'learning by doing' with support from institutional mentors enables pre-service teachers not only to gain authentic classroom experience but also to test how the theory and practices they have learned about at the faculties relate to actual practice. While cooperating with experienced teachers in and out of the classroom, they can learn in a variety of ways in an authentic school environment and improve their pedagogical (technological) content knowledge (Ambrosetti & Dekkers, 2010; Hobson, 2016; Mishra & Koehler, 2006; Shulman, 1987) and identity (Izadinia, 2016).
Mentees, by working in a school, have the opportunity to: a) participate in school life outside the classroom in the school they visit; b) observe the work of mentors at all stages, from preparing a lecture or lab, to teaching in a classroom, to assessment and grading; and c) deliver their own lessons while being observed by mentors, who can provide feedback and advice on a lesson or similar activity. In this process, it is important to establish a trusted relationship with a mentor based on 'encouragement and support, an open line of communication, and feedback as the most significant elements' (Izadinia, 2016, p. 387), who can help them by providing feedback in various ways (Hobson, 2016). Simultaneously, they can experience or recognise in a classroom (1) critical incidents or warning signs regarding what skills and attributes seem useful and what they should be wary of when alone in their classroom; (2) a sense of their abilities, including self-efficacy; (3) recognition of their current limitations in a mentoring context, as they receive feedback on what went well and what areas need attention; (4) first-hand experience of classroom management in all its diversity; (5) insight into the school as a professional community, a hidden aspect to which they were not exposed as learners; and (6) working with a diversity of students with their own interests and abilities (Jobling & Moni, 2004). Therefore, the benefits of regular mandatory institution-based practice mark such school-based learning engagements as an essential part of teacher education programmes worldwide (Shanks et al., 2020; Zuljan Valenčič & Marentič Požarnik, 2014).
Mentoring and the mentor-mentee relationship should not be left to chance but should be carefully planned by the faculty because 'a positive mentor-mentee relationship is essential for the mentee's development of teaching practices' (Hudson, 2016, p. 30) and to prevent harmful practices (Hudson, 2016). Part of this experience is honest and trusted feedback in both directions, from mentor to mentee and from mentee to mentor, as well as self-evaluation (Ferk Savec & Wissiak Grm, 2017; Hobson, 2016; Van Ginkel et al., 2018; Stîngu et al., 2016; Vršnik Perše et al., 2015). Since good mentors are in the interest not only of the mentees but also of the faculty, feedback from both the mentors and the mentees to the faculty educators is necessary. From a wider perspective, and in order to provide feedback that allows for comparison between different practices and experiences not only horizontally but also longitudinally and even internationally and across disciplines, one needs reliable and validated instruments that enable qualitative and summative assessment of practice. Such instruments can not only identify the strengths and weaknesses of an individual mentorship, enabling interventions to improve practice, but also help overcome the deep-rooted problem of reproducibility and replicability of studies in the social sciences (e.g., Baker, 2016; Laraway et al., 2019; LeBeau et al., 2021).
Mentoring is an undoubtedly important but not necessarily adequately addressed issue. For example, Chen et al. (2016, citing Crisp & Cruz, 2010, and Jacobi, 1991) note that there does not seem to be a theoretical framework for mentoring, including different contexts, such as mentoring teachers, student teachers, and postgraduates. These authors also point out the need to develop tools to assess and evaluate mentoring in educational contexts. In line with their conclusions, Da Rocha (2014) highlighted that concepts like mentoring must also be considered on a regional or even local level, in contrast to the wider perspective. She states, 'It is necessary to keep an eye on cultural contexts and fitting when transferring one model to another European nation' (Da Rocha, 2014, p. 115). Kram (1983, 1985) developed a mentoring theory that provides the conceptual framework for Hudson's work and assumes that mentors perform career (professional) and psychosocial functions. Career functions mean becoming familiar with certain behaviours within an organisation, such as 'coaching protégés, promoting their advancement, increasing their positive attention and visibility, and providing protection and challenging tasks', while psychosocial functions mean 'providing acceptance and affirmation and offering guidance, friendship, and role modelling' (Ragins & Kram, 2007, p. 5). She further divides mentoring into four phases: the initiation phase, the cultivation phase, the separation phase, and the redefinition phase (Ploj Virtič et al., 2021a). It is argued that Hudson's (2004a, 2004b, 2005) and Hudson et al.'s (2005) Mentoring for Effective Primary Science Teaching (MEPST) model for mentoring can be integrated as follows, albeit with differences within the mentor-mentee relationship in the school context: the student teacher as a mentee is in an initiation phase when interacting with the mentor, but the mentee is also exposed to cultivation within the classroom and school system through interaction with the mentor. In addition, there are periods of separation; as the mentee builds their skills, the mentor takes more of a back seat. The redefinition aspect in our context could mean that the mentee reconsiders his or her position vis-à-vis the mentor, meaning whether or not he or she wants to continue to be in contact with the mentor and vice versa. Given the findings from international studies (e.g., Abed & Abd-El-Khalick, 2015; Hudson et al., 2009; Tarekegn et al., 2020), MEPST can be used by prospective teachers in different cultural contexts and disciplines, addressing five major factors essential to pre-service teachers: personal attributes (PA), system requirements (SR), pedagogical knowledge (PK), modelling (MOD), and feedback (FB). The personal attributes in the MEPST framework (Hudson, 2004a, 2004b, 2005; Hudson et al., 2005) refer to professional relationships and include aspects such as supporting the mentee and building trust. System requirements, for example, refer to curriculum-related requirements and school policies. Pedagogical knowledge refers to aspects such as planning, scheduling, teaching strategies, classroom management, and knowledge about teaching. Modelling refers to pedagogical knowledge and relates to authentic experiences observed by the mentee. Feedback, in contrast, refers to constructive feedback from the mentor to the mentee regarding the mentee's practice, both verbal and written.
To reflect the complex interplay of content, pedagogy, and technology in education, Mishra and Koehler (2006) extended Shulman's (1987) model of pedagogical content knowledge into the technological pedagogical content knowledge (TPACK) framework, which reflects the importance of technology, more specifically digital technology, in education. In the MEPST, technology is not considered to be a dimension. The authors therefore added an Information and Communication Technology (ICT) dimension to Hudson's model (Ploj Virtič et al., 2021b), based on empirical evidence that digital technologies have become a ubiquitous part of almost every type of school work (Van't Hooft & Swan, 2007).
Hudson's framework comprises the five dimensions included in the MEPST instrument. The sixth, ICT, dimension (included in the METPI) was added by the authors. In his works, Hudson uses the word 'factor'; however, we renamed this to 'dimension' to prevent confusion with its usage in reports of exploratory factor analysis.

Personal Attributes (PA)
Mentors must possess several personal attributes in order to promote their mentees' progress in acquiring the skills necessary for teaching and classroom management. The mentoring process can be strengthened by recognising that learning takes place in a social context and that a mentor's personal attributes facilitate this learning. According to Hudson et al. (2005), mentors must be (1) supportive, (2) attentive, and (3) willing to discuss specific teaching practices, and should (4) instil in their mentees a positive attitude toward teaching key learning areas, (5) give their mentees confidence in teaching, and (6) support the mentee in thinking constructively about improving instructional practices.

System Requirements (SR)
The work of each school and of all stakeholders within a school system is influenced by a network of interconnected levers, from the macro-level (e.g., legislation, curriculum) to the micro-level (e.g., teacher accessibility to parents), that form school policies balancing normative measures with teacher autonomy. Knowledge and understanding of system requirements can be identified as an important part of the mentee's career development in their generic and subject (specialist) track. University teachers do not necessarily have all the most recent information from the field, so it is essential that teaching mentors provide information about system requirements.

Pedagogical Knowledge (PK)
Pedagogical knowledge developed at university and tested and developed in the school environment is essential to support effective teaching. Mentors must have pedagogical knowledge to guide their mentees in a range of generic and specific instructional practices. Eleven mentoring attributes and practices can be associated with pedagogical knowledge to develop specific instructional practices: (1) planning for teaching, (2) timetabling, (3) preparation, (4) teaching strategies, (5) classroom management, (6) questioning skills, (7) assisting with problem-solving, (8) content knowledge, (9) implementation, (10) assessment, and (11) providing viewpoints.

Modelling (MOD)
Mentees' teaching skills are learned more effectively when they observe and try out for themselves the teaching practices and models applied by their mentors. Just as important as observing the art of teaching is observing questionable, outdated, and flawed classroom practices and incidents, which can help student teachers to avoid these in their own practice. Eight attributes and practices can be associated with modelling instruction: (1) enthusiasm, (2) teaching, (3) effective teaching, (4) rapport with students, (5) hands-on lessons, (6) well-designed lessons, (7) classroom management, and (8) syllabus language.

Feedback (FB)
Mentors who provide honest feedback enable pre-service teachers to reflect on and improve their instructional practices and behaviour. Six characteristics and practices associated with the feedback factor support mentees' instructional development and require a mentor (Hudson et al., 2005): (1) to set expectations, (2) to review lesson plans, (3) to observe and reflect on practice, (4) to provide verbal feedback, (5) to provide written feedback, and (6) to assist the mentee in evaluating teaching practice.

Information and Communication Technologies (ICT) (New)
ICT is a new dimension, consisting of seven statements that were not part of the Hudson framework. Four statements related to pedagogical ICT knowledge and three to ICT modelling. The reason for inclusion was the importance of ICT as a ubiquitous tool in education, affecting all aspects of school life.

Aims and scope
It is suggested that more attention should be paid to preparing students and mentors for their roles in the practicum (Leshem, 2012), which is the responsibility of faculties and other teacher education institutions. To do this, they need feedback from mentors on all issues related to the practicum and feedback from mentees on their experiences of visiting schools. Teacher educators can often gain insight into the quality of practice from written reports or interviews. However, it is often beneficial to have a standardised instrument to quickly identify missing and weak parts of the practicum and bring them to the mentors' attention.
Since school-based practice is rapidly evolving and there is no adequate original or translated instrument in the Slovenian language and context to monitor mentoring in an educational setting, we set ourselves the challenge of compiling such an instrument. Such an instrument provides the opportunity to test the teaching practicum within an international constructivist mentoring framework (Hudson, 2004a, 2004b, 2005; Hudson et al., 2005). However, the associated instrument, called Mentoring for Effective Primary Science Teaching (MEPST), which serves as a foundation and specifically addresses five major factors essential to pre-service teachers, namely personal attributes (PA), system requirements (SR), pedagogical knowledge (PK), modelling (MOD), and feedback (FB), was not adequate for our pre-service teacher population. The reasons for the adaptation and validation of Hudson's original 34-item MEPST instrument (Hudson, 2004a, 2004b; Hudson et al., 2005) can be summarised as follows: (1) Hudson's original instrument aimed to assess various dimensions of mentoring for elementary school science education, and our goal was to extend it so that it could be used to assess secondary and non-science placements; (2) the original instrument does not include ICT, which is ubiquitous in education today; and (3) the existing instrument needed to be re-validated to test whether it was still valid 15 years after its development.
Following the above-stated reasons, the present work's main goal was to validate the existing MEPST instrument, adapt it, and validate the adapted instrument, tentatively named the Mentoring for Effective Teaching Practicum Instrument (METPI). The importance of having such an updated instrument for pre-service teacher educators is twofold. The first aspect is descriptive, allowing recognition of actual sources of problems in the mentee-mentor relationship; the second is prescriptive. Thus, specific and general recommendations for the mentee, mentors, university lecturers, and other stakeholders could be derived from the findings to encourage reflection and suggestions for the future. Additionally, following Lawson et al.'s (2015, p. 392) suggestion, 'more large-scale studies are needed in the field in order to provide greater insight into teaching practicum.'

Method
To obtain answers to the question of interest, a quantitative, non-experimental methodology based on pre-service teachers' self-reports of their teaching practice was used to validate the instrument. No respondent names or school names were requested, in order to ensure anonymity.

Sample and sampling
The research was conducted among 4th-year pre-service teachers of various subjects at the University of Maribor in Slovenia, who are required to visit primary and secondary schools for two weeks annually. The whole population of such students at the University of Maribor is approximately 250; however, we would also like to apply the instrument to the assessment of the practicum for future generations. At the schools, the students are accompanied by a teacher-mentor in the subject they are studying. We distributed an anonymous paper-and-pencil questionnaire to approximately 200 students, mostly between the ages of 22 and 23, from the three teacher preparation faculties at the University of Maribor after their return from the teaching practicum. The names and syllabi of the teacher preparation courses, including mentorship, differ between faculties, but their aims are for the greatest part similar. However, not all 200 students returned the questionnaires; of those that were returned, only 105 questionnaires (94 from females) were completed. Thus, the response rate was 53%, which is well above the acceptable numbers reported in refereed journals (Johnson & Owens, 2003). Nevertheless, self-selection and convenience sampling can be regarded as the biggest weaknesses of the study.

Structure of the questionnaire
The questionnaire as a data collection instrument consisted of three parts. The first part asked for demographic information and information about the mentor teacher and feedback from the classroom; it contains 10 items asking, for example, about the subject of mentoring, gender, and similar. The second, 36-item part asks about the student's experience with the mentor. The order of the items covering all six dimensions was random. The second part comes in two copies, allowing each two-stream student to answer about two mentors of different subjects, if applicable. The third section consisted of an item asking respondents to indicate whether they would choose the same mentor again, including an explanation or rationale. Only the second, central part is considered in this paper. All items included in the questionnaire are presented in Tables 2, 3, 4, 5, 6, and 7. The data sets are available online under a CC licence in the ZENODO database (Ploj Virtič et al., 2021b).

Creation of the 62-item initial questionnaire about the students' experiences with the mentors
The framework includes six dimensions: five applied from the MEPST instrument and included in adapted form in the METPI, plus the sixth, ICT, dimension added by the authors. As noted in the Introduction, Hudson's term 'factor' is referred to here as 'dimension'.
In the first phase, the original 34-item MEPST instrument by Hudson et al. (2005) was revised and expanded to 62 items. The new items were included after discussion among the experts (the authors of the paper), all employed as university teachers with previous mentoring experience, who evaluated the content of the items before they were used. The dimensions of the MEPST instrument were used as organising concepts, which were first expanded and later shortened following the procedures of descriptive, principal component, and confirmatory factor analysis. The adaptation process, shown in detail in Appendix A, was performed by taking the following steps: (1) deletion of three items from the 34-item Hudson et al. (2005) instrument, based on the redundancy of items; (2) addition of twenty-four items from Hudson (2004a) that were not included in Hudson et al. (2005); (3) addition of 17 new items created by the authors within Hudson's existing five dimensions; (4) addition of 7 items in a new ICT domain created by the authors; (5) rephrasing the items by removing the word 'science' from them to make the instrument more universal; and (6) changing a five-point Likert scale with 'strongly disagree-strongly agree' anchors to a six-point scale reflecting the frequency of an experience. A random order of items was used in all studies considered.
A six-point scale was used with the ranks of no opinion (0), never (1), rarely (2), sometimes (3), often (4), and always (5). The scale differed from the 'strongly disagree-strongly agree' format used by Hudson et al. (2005). The reason for this change was the desire to record not only agreement or disagreement with the statements but also the frequency of occurrence so that possible future interventions could be made to the practicum as needed based on the findings.

Procedure and data analysis used in the transition from MEPST to METPI
The data analysis adhered to the following procedure.

Data collection and cleaning of data
Responses collected with the paper-and-pencil 62-item instrument were transferred to a spreadsheet manually. After the initial inspection of the data matrix, all data were analysed to identify respondents with large portions of missing data, outliers, and respondents who answered mechanically by following the same response pattern.
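To make this screening step concrete, the following is a minimal sketch (not the authors' spreadsheet/SPSS procedure) of how such checks could be run in Python with pandas; the file name, item-column prefixes, and the 20% missing-data threshold are illustrative assumptions.

```python
import pandas as pd

# Hypothetical file and item-column prefixes; the actual data were entered into a spreadsheet manually.
df = pd.read_csv("responses_62items.csv")  # one row per respondent
item_cols = [c for c in df.columns if c.startswith(("PA", "SR", "PK", "MOD", "FB", "ICT"))]

# Flag respondents with a large share of missing item responses (the 20% threshold is an assumption).
missing_share = df[item_cols].isna().mean(axis=1)
too_many_missing = missing_share > 0.20

# Flag 'straight-liners' who gave the same answer to (nearly) every item.
straight_lining = df[item_cols].nunique(axis=1) <= 1

flagged = df[too_many_missing | straight_lining]
print(f"Respondents flagged for inspection: {len(flagged)} of {len(df)}")
```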

Calculation of descriptive statistics
Based on the frequency of the responses, the means (M), standard deviations (SD), modes (Mod), and medians (Med) were calculated and are reported in Tables 2, 3, 4, 5, 6, and 7. The calculated measures of central tendency were interpreted in terms of the main heading, which stated 'How often do you think your mentor...', followed by the response to each statement. The interpretations were therefore banded as follows: (1) below 2.00, not at all or at a very low level; (2) from 2.00 to 2.59, rarely or at a low level; (3) from 2.60 to 2.99, sometimes or at a medium level; (4) from 3.00 to 3.74, often or at a high level; and (5) 3.75 and above, always or at a very high level.
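As an illustration of how the reported descriptives and interpretation bands could be reproduced, here is a short pandas sketch; the data file and item-column naming are hypothetical (as is the handling of the 'no opinion' response, which is not specified here), while the band boundaries follow the text above.

```python
import pandas as pd

def interpret(mean: float) -> str:
    """Map an item mean to the interpretation bands quoted in the text."""
    if mean < 2.00:
        return "not at all / very low"
    if mean < 2.60:
        return "rarely / low"
    if mean < 3.00:
        return "sometimes / medium"
    if mean < 3.75:
        return "often / high"
    return "always / very high"

df = pd.read_csv("responses_62items.csv")  # hypothetical data file
items = [c for c in df.columns if c.startswith(("PA", "SR", "PK", "MOD", "FB", "ICT"))]
# Note: how 'no opinion' (0) responses were treated is not shown in this sketch.

summary = pd.DataFrame({
    "M": df[items].mean(),
    "SD": df[items].std(),
    "Mod": df[items].mode().iloc[0],   # first mode per item
    "Med": df[items].median(),
})
summary["interpretation"] = summary["M"].apply(interpret)
print(summary.round(2))
```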

Validity of the scales, Principal Component Analysis (PCA), Confirmatory Factor Analysis (CFA) and Reliability Analysis
The content validity of the scales was assured by the use of previously tested items and consultations with experts from the field during the formation of the 62-item questionnaire.
PCA, CFA, and reliability analysis were used to assess each of the six organising dimensions, dimension by dimension. In the exploratory phase, PCA was used to extract component loadings and, in combination with the 'Cronbach's alpha if item deleted' procedure offered by SPSS, to shorten the questionnaire by excluding redundant items. CFA then followed, with procedures to confirm the theoretically predicted dimensions (latent constructs).
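The 'Cronbach's alpha if item deleted' heuristic mentioned above can be sketched as follows. This is a plain numpy/pandas illustration rather than the SPSS output used by the authors, and the Feedback item selection and file name are placeholders.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

def alpha_if_item_deleted(items: pd.DataFrame) -> pd.Series:
    """Recompute alpha with each item left out in turn (analogous to the SPSS procedure)."""
    return pd.Series({col: cronbach_alpha(items.drop(columns=col)) for col in items.columns})

# Hypothetical usage for one dimension, e.g. the Feedback items.
df = pd.read_csv("responses_62items.csv")
fb_items = df[[c for c in df.columns if c.startswith("FB")]]
print("alpha =", round(cronbach_alpha(fb_items), 3))
# Items whose removal raises alpha are candidates for exclusion as redundant.
print(alpha_if_item_deleted(fb_items).round(3))
```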
The analysis of the collected data followed the traditions of Exploratory Factor Analysis (EFA) (Field, 2013). Each of the six theoretically predicted constructs was explored and tested separately for uni-dimensionality and reliability. Principal Component Analysis was used to test the uni-dimensionality, and Cronbach's alpha was calculated as a measure of reliability.
Correlations between the components potentially extracted from each of the dimensions by PCA were reasonably expected; therefore, Direct Oblimin rotation was chosen. Component loadings below the threshold of .5 and significant loadings on two or more components were considered exclusion criteria for an item. The initial criterion for retaining a component, in cases where two or more components were extracted within an explored dimension, was an eigenvalue above one. All eigenvalues were later compared to values generated by the Parallel Analysis Engine, following Patil et al. (2008), as a criterion for retaining a component. Several methods exist to determine how many components to retain after PCA (e.g., eigenvalue > 1, scree plot review); recently, parallel analysis has been preferred. The computer program creates random data sets with the same number of observations and variables as the original data and calculates their eigenvalues. If the eigenvalues calculated from the random data are larger than the eigenvalues of the PCA, the corresponding components mainly reflect random noise and should not be retained. The reliability of the components was calculated in terms of Cronbach's alpha (see Table 1).
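For readers unfamiliar with parallel analysis, the retention rule described above amounts to the following sketch: a plain numpy illustration of Horn's procedure, not the Patil et al. (2008) Parallel Analysis Engine itself. The number of simulated data sets and the use of the mean random eigenvalue (rather than a percentile) as the benchmark are assumptions.

```python
import numpy as np

def parallel_analysis(data: np.ndarray, n_sims: int = 1000, seed: int = 0):
    """Compare observed PCA eigenvalues (from the correlation matrix) with the mean
    eigenvalues of random data of the same shape; retain components, in order,
    whose observed eigenvalue exceeds the random benchmark."""
    rng = np.random.default_rng(seed)
    n_obs, n_vars = data.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]

    random_eigs = np.empty((n_sims, n_vars))
    for i in range(n_sims):
        random_data = rng.normal(size=(n_obs, n_vars))
        random_eigs[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(random_data, rowvar=False)))[::-1]

    benchmark = random_eigs.mean(axis=0)  # mean random eigenvalue per rank
    n_retain = 0
    for obs, bench in zip(observed, benchmark):
        if obs > bench:
            n_retain += 1
        else:
            break
    return observed, benchmark, n_retain

# Hypothetical usage for one dimension (e.g. the Personal Attributes items as a numpy array):
# observed, benchmark, k = parallel_analysis(pa_items.to_numpy())
# print(f"Retain {k} component(s)")
```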
Confirmatory Factor Analysis (CFA) using AMOS 27 software was chosen to test the fit of the data to the hypothesised dimensions (e.g., personal attributes represented as latent variables). Measurement models in which all items from the questionnaire would be subjected to EFA, in search of unidimensional latent constructs, or to PCA, in search of a combination of items explaining maximal variance, were not estimated. The reasons were twofold: first, the sample (N = 105) was too small to allow conclusions within reasonable confidence intervals, and second, we wanted to follow Hudson's dimensions as well as established theoretical frameworks.

Results
In the first step, measures of central tendency were calculated. This was followed by PCA with Direct Oblimin rotation. Means, standard deviations, modes, and medians are reported, along with factor loadings, eigenvalues, and the percentage of explained variance, in Tables 2, 3, 4, 5, 6, and 7. Descriptive statistics for experiences with mentors on all items are presented in Table 3. Each dimension was initially tested by including all of its items in the PCA. After excluding items that did not meet the thresholds, we were left with 36 items in six constructs, as presented in Table 3. Fit indices were calculated both for the sample with random missing data and with all respondents with missing values deleted (Kline, 2011). Among the available fit measures and indices for CFA (Byrne, 2016), our choices were as follows: (1) the likelihood-ratio Chi-square index (a basic absolute fit measure) and the Chi-square to degrees of freedom ratio (CMIN/DF or χ2/df < 3); (2) the Comparative Fit Index (CFI), with values closer to one indicating a better-fitting model; and (3) the Standardised Root Mean Square Residual (SRMR) and Root Mean Square Error of Approximation (RMSEA), with an acceptable range of .08 or less.
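The CFA models themselves were estimated in AMOS 27. As a rough open-source analogue of a one-factor measurement model and the fit indices listed above, the sketch below assumes the Python semopy package; the item names, data file, and the exact labels of the fit-statistic columns are assumptions and may differ between semopy versions.

```python
import pandas as pd
import semopy  # assumption: semopy is available; the authors used AMOS 27

# Hypothetical one-factor model for the Feedback dimension (item names are placeholders).
desc = "Feedback =~ FB1 + FB2 + FB3 + FB4"

df = pd.read_csv("responses_36items.csv")  # hypothetical cleaned data file
model = semopy.Model(desc)
model.fit(df)

stats = semopy.calc_stats(model)  # one-row table of fit statistics (column labels assumed)
chi2 = stats["chi2"].iloc[0]
dof = stats["DoF"].iloc[0]
print("chi2/df =", round(chi2 / dof, 3))              # rule of thumb from the text: < 3
print("CFI     =", round(stats["CFI"].iloc[0], 3))    # closer to 1 indicates better fit
print("RMSEA   =", round(stats["RMSEA"].iloc[0], 3))  # <= .08 considered acceptable
```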
For improvement of the unidimensional models, two procedures proposed by Byrne (2016) were examined: (1) inspection of the standardised residual covariance matrix and (2) application of the modification indices. Based on the examination of these values, error terms were allowed to covary (connected) within some of the constructs.

Feedback
Two components were extracted for feedback (see Table 2). The first comprises statements on immediate feedback and the second on delayed, written feedback. Only the first component was considered for CFA, because the two items forming the second component also cross-loaded on the first component. The outcome of the one-factor model test of the first component (with FB5 excluded) resulted in excellent goodness-of-fit indices (χ2/df = 1.419; CFI = .987; RMSEA = .065; SRMR = .032).

Personal attributes
Two principal components were extracted for personal attributes (see Table 3). According to the parallel analysis, ten items form one principal component (positive attitudes), while the second (flexibility) component could not be retained. The outcome of the one-factor model test of the first component resulted in appropriate goodness-of-fit for most indices (χ2/df = 1.936; CFI = .946; RMSEA = .101; SRMR = .051).

System requirements
Regarding system requirements (see Table 5), two components were extracted. According to the parallel analysis, ten items form one principal component, while the second component could not be retained. The outcome of the one-factor model test of the first component, with the exclusion of item SR38, resulted in appropriate goodness-of-fit for most indices (χ2/df = 3.256; CFI = .922; RMSEA = .151; SRMR = .052).

Pedagogical knowledge
Three components were extracted related to pedagogical knowledge (see Table 6); however, only the first component could be retained after the parallel analysis. The outcome of the one-factor model test of the first component, with the exclusion of items PK28 and PK51, resulted in excellent goodness-of-fit indices (χ2/df = 1.433; CFI = .971; RMSEA = .067; SRMR = .043).

ICT
Concerning ICT (see Table 7), only one component was extracted, showing uni-dimensionality and a high proportion of explained variance. The outcome of the one-factor model test resulted in excellent goodness-of-fit indices (χ2/df = 1.513; CFI = .989; RMSEA = .0205; SRMR = .0216). The new questionnaire is presented in Appendix B.

Discussion and conclusions
Following the general aim of constructing an instrument that allows assessment of feedback on the teaching practicum regardless of the study stream of the pre-service teachers, the work on this task and its outcomes are discussed. After reviewing the literature on mentoring in teacher education and preliminary testing of our adapted 62-item questionnaire on the population of pre-service teachers who had completed their teaching practicum, it became clear that the instrument needed adaptation. Through the application of PCA, it was found that four of the six theoretically predicted dimensions were not unidimensional and that some of the items did not load exclusively on one component or did not load above the threshold value of .6. After cleaning up the instrument by deleting redundant items, 36 items remained in six dimensions, five of which were from Hudson's work, and one (ICT) was added. The theoretical background and rationale for including these dimensions in the instrument are provided in the Introduction. It should be mentioned that we changed the term 'factor' to the term 'dimension' to avoid confusion between the names of the latent variables (constructs) and the results of factorial analyses. Based on the changes to Hudson et al.'s (2005) MEPST instrument and the removal of the word 'science', the revised instrument was renamed the Mentoring for Effective Teaching Practicum Instrument (METPI). With this change, the instrument appears to have the potential to be used beyond the elementary science mentoring context. Therefore, the instrument can be used to evaluate a teaching practicum as part of different teaching programmes, which differ not only between universities but also among faculties within a single university. Beyond quantitative comparisons, such an instrument can be used to improve the practicum at the individual level, showing satisfactory and unsatisfactory aspects of a practicum as a shared experience of a student and a mentor.
All six constructs have Cronbach's alphas above .80, and three of them are equal to or higher than .90. These alphas can be considered good or even very good and show adequate reliability of the revised instrument (Field, 2013). When comparing Cronbach's alphas with those of the items of the original Hudson instrument after PCA, the reliability of the newly adapted questionnaire is greater in four of the five constructs, and in the remaining construct, the difference is a negligible .01. According to the findings, Hudson's factors (dimensions in our text) can be recognised as valid organising concepts. As such, items can be adapted to different contexts, for example, asking about experiences in one particular subject or at different school levels. What is noteworthy is that any variation of the items should reflect the same core idea of a dimension. Moreover, because the dimensions are self-contained entities, it would be possible to administer only one or another dimension and omit the others.
According to the descriptive statistics, the most positively reported experience relates to personal attributes, while the least positive relates to ICT. Even if descriptive values do not directly indicate effects, it can be argued that the personal attributes of a mentor seem to be crucial for a positive experience for the mentee (see Table 3). The results show that mentees' experiences as participants were positive on five of the six constructs: personal attributes, system requirements, pedagogical knowledge, modelling, and feedback. ICT was the only construct that did not receive the attention it could have. The implications of this finding, given the ubiquitous use of digital technologies, should be the subject of follow-up research. It is suggested that the METPI questionnaire, based on the MEPST model and tested with EFA and CFA, can be used in follow-up studies. The questionnaire and its quality could be improved with evidence based on real data from international studies and feedback from mentors (Hudson, 2010). Another aspect that should be considered is the possible difference between the establishment of short- and long-term relationships between mentors and mentees (Kram, 1983; Kram, 1988; Lynn & Nguyen, 2020). In the context of the study, only a snapshot of relatively short-term experiences was explored, with the possibility that mentors were providing only a survival course for their mentees.
Two kinds of METPI use can be suggested. The first is in large-scale studies, as an anonymous instrument to explore and find general patterns in mentoring. The second is as part of a student's portfolio. In the second case, university tutors could intervene to add missing dimensions to the student's pedagogical content knowledge and to identify mentors working in a 'laissez-faire' mode.
The limitations of the study are diverse. The first involves the opinions of the invisible majority, that is, those who did not respond to the questionnaire. The second concerns whether the instrument has adequate qualities for each subject field and whether the findings are generalisable in the international arena. At this point, it is important to point out that some potentially important factors were unintentionally not considered in the study (Kline, 2011). Common method bias (Podsakoff et al., 2003; Podsakoff et al., 2012) can hamper the results of this type of study; therefore, all measures were taken to prevent it (Kline, 2011; Wolf et al., 2013). Due to the low number of respondents, some analyses were omitted; for example, PCA and CFA on the whole data set and a search for covariances between dimensions were not performed. An additional drawback of using either type of scale is that it is difficult to infer the quality of the mentor-mentee relationship from agreement with a statement or the frequency of an incident; in both cases, we obtain information only about missing parts of the practice. Therefore, we suggest that a follow-up interview be conducted to address issues identified through the initial screening.

ICT (4 items)
Cronbach's Alpha = .87, Variance explained = 71.74%, Eigenvalue = 2.87
ICT10 … discuss with you how to use ICT for teaching and learning in your lessons?
ICT13 … assist you with using ICT in non-traditional (innovative) ways for teaching and learning in your lessons?
ICT16 … show you how to use ICT for teaching and learning?
ICT31 … develop your strategies for teaching with ICT?

Biographical note
Mateja Ploj Virtič, PhD, is an associate professor in the field of didactics of technics and technology at the Faculty of Natural Sciences and Mathematics, University of Maribor, Slovenia. She works in the area of teacher education, specifically in the engineering and technology field; accordingly, she prepares pre-service teachers of engineering and technology to teach in secondary schools. Her research interests are teaching strategies in science and technology, the role of a teacher in contemporary education, distance education, and mentoring pre-service teachers.
André du Plessis, PhD, is an associate professor at the Faculty of Education, Intermediate Phase, at the Nelson Mandela University in South Africa. His expertise is in Information Communication Technologies in Education. He also teaches didactical approaches and content related to Intermediate Phase Mathematics (grades 4 to 6) and has converted all his modules to online learning. In addition, his passion is the development of school-based learning practical modules and mentoring pre-service student teachers and in-service teachers. His research interests include ICT in Education, Behavioural Intention and Actual Usage of innovations and school-based mentoring of pre-service student teachers and in-service teachers.
Andrej Šorgo, PhD, is a full professor in the field of didactics of biology at the Faculty of Natural Sciences and Mathematics, University of Maribor. Recently, he has been teaching didactics of biology and various related subjects in the field of biology and environmental education at the University. As a part-time researcher, he is employed at the Faculty of Electrical Engineering and Computer Science, UM, and is a visiting professor at Charles University in Prague. His research interests include the utility of technology in science and environmental education and the factors that influence intentions and actual behaviour in numerous areas of human activity not necessarily directly related to education.