The construction of a quick TPACK evaluation tool and a comparison of integrative and transformational models

The Technological Pedagogical Content Knowledge (TPACK) model is widely recognized as one of the most prominent frameworks for describing the knowledge teachers need to teach successfully with technology. TPACK is most often assessed with self-report questionnaires, which limits the measures' validity, reliability, and practical applicability. The framework's underlying structure has also caused confusion among researchers: the internal links between its knowledge domains can be read as supporting either an integrative or a transformative view of how they interact. Methods and findings: A 42-item pilot questionnaire was administered to 117 pre-service elementary school teachers. Reliability analysis and confirmatory factor analysis were used to reduce the number of items per subscale and to verify model fit, and structural equation modeling was used to examine the internal relations between the components. In conclusion, the 28-item final TPACK questionnaire is a feasible and reliable instrument for assessing pre-service teachers' TPACK, and the internal relations between the knowledge components support a transformative interpretation of the framework.


Introduction
Pedagogical content knowledge, Shulman (1986, 1987) claims, has dominated the discussion for the last three decades: it is the combination of pedagogical and content knowledge that makes a good teacher. Mishra and Koehler (2006) expanded this approach to add technological knowledge as a third crucial component of successful teaching in the digital world. TPACK comprises three fundamental components, pedagogical knowledge (PK), content knowledge (CK), and technological knowledge (TK), together with four hybrid components formed at their intersections: pedagogical content knowledge (PCK), technological content knowledge (TCK), technological pedagogical knowledge (TPK), and technological pedagogical content knowledge (TPCK). Even though numerous adaptations (Lee & Tsai, 2010) and extensions (Porras-Hernández & Salinas-Amescua, 2013) have been proposed, the original framework remains the consistent core for representing teacher knowledge today, and it has inspired a tremendous amount of research. At the same time, debate and concern persist about how the various knowledge components are linked and how they can be measured. Each of these themes is explored in further depth in the following sections.

Contrasting integrative and transformative views on TPACK
The TPACK framework has been described as one of the most widely used concepts in educational technology research (e.g., Angeli & Valanides, 2009; Brantley-Dias & Ertmer, 2013), yet it has also been criticized for having "fuzzy boundaries" between its components (Graham, 2011; Kimmons, 2015). A wide range of TPACK definitions and interpretations has emerged as a result (e.g., Voogt, Fisser, Pareja Roblin, Tondeur, & van Braak, 2013; Petko, 2020). In particular, the components have been linked in different ways, resulting in two opposing viewpoints (e.g., Angeli & Valanides, 2009; Graham, 2011). In the first, referred to as the integrative view, the central component, TPCK, emerges through the integration of all the other components of teacher knowledge and is thus tied to each domain: high levels of TK, PK, CK, PCK, TPK, and TCK make high levels of TPCK likely.
The transformative view, on the other hand, holds that the knowledge components interact to produce a unique body of knowledge that is greater than the sum of its parts. In other words, TPCK cannot simply be described as a combination of its components; it is a distinct kind of knowledge that transcends the components that provide its basis. According to this view, TK, PK, and CK have no direct effect on TPCK, whereas TPK, TCK, and PCK do. Although Mishra and Koehler (2006) theoretically framed TPACK from the transformative perspective, only a few researchers have attempted to test this assumption empirically. A small number of studies have examined the question using structural equation modeling, but the results have been inconclusive. Rather than establishing the fundamental TPACK model, these ambiguities have led to multiple extensions of the original model, further obscuring the fundamental difficulties (Barbar & Abourjeili, 2012). Empirical data are needed to reconcile the competing readings of TPACK with the paradigm proposed by Mishra and Koehler (2006). For researchers in the field, it is imperative that assessment methodologies be both credible and easy to administer; a practical instrument would allow TPACK to be evaluated across many settings and in conjunction with other critical constructs (e.g., beliefs, self-efficacy).

Developing valid, reliable, and economical TPACK measures
When thinking about the integration of technology into educational institutions, it is critical to establish a theoretical framework that can be traced back to, and makes sense of, the real world (Frigg & Hartmann, 2018; Grønfeldt Winther, 2015). An important part of TPACK research is therefore establishing the validity and reliability of the instruments that connect theory and practice (Koehler, Shin, & Mishra, 2012; Niess, 2012). Numerous theoretical perspectives on TPACK are available, and empirical evidence from TPACK data is critical for developing consensus and bridging the gaps between them (Fisser, Voogt, van Braak, & Tondeur, 2015).
Self-report and performance-based instruments are the two main categories of instruments used so far to evaluate teachers' TPACK (Fisser et al., 2015). Self-report measures comprise surveys and interviews (with open-ended or closed-ended questions), while performance-based assessments cover lesson planning, classroom performance, and specific task performance. Self-report approaches are currently the most frequently used methodology for TPACK assessment (Koehler et al., 2012; Willermark, 2018). When properly constructed, self-report instruments are an effective way to collect large amounts of quantitative data from which generalizations can be drawn (Demetriou et al., 2015). The approach also has well-known drawbacks, however, and current self-report methods must be improved if TPACK is to be measured accurately (Abbitt, 2011).
The proliferation of self-report instruments is a cause for concern because of the lack of standard criteria and the imprecise boundaries between components; most instruments lack proof of reliability and validity. Koehler et al. (2012) revealed that roughly two-thirds of TPACK studies using self-report measures reported no evidence of validity or reliability. Partly for this reason, empirical support for a seven-factor structure is inconsistent in the literature: while several studies have successfully confirmed the seven-factor structure of TPACK (e.g., Schmidt et al., 2009; Sahin, 2011; Jang & Tsai, 2012), others have found the components to be highly correlated and have extracted different factor structures (e.g., Deng, Chai, So, Qian, & Chen, 2017; Pamuk et al., 2015). These findings have raised concerns about the framework's construct and discriminant validity. Consequently, many current self-report tools do not evaluate TPACK holistically but measure only TK or the T-dimensions (Scherer, Tondeur, & Siddiq, 2017; Archambault & Barnett, 2010).
One of the most widely used self-report instruments in teacher training is Schmidt et al.'s (2009) TPACK knowledge assessment survey. Several authors have validated the survey, either in its original form or in adaptation, and reported high reliability (Cronbach's alpha > .80; see, for example, Chai, Koh, & Tsai, 2010; Chai, Koh, Tsai, & Tan, 2011). The survey is unique in that it evaluates all seven components. Like other TPACK self-report instruments, however, it has three drawbacks. First, TPACK self-report measures can be lengthy and therefore inconvenient for use in the field (Valtonen et al., 2017). Second, because the items are distributed asymmetrically across subscales, these instruments measure the subscales with differing degrees of accuracy (for an overview, see Pamuk et al., 2015). Third, some of these tools can only be used by specific types of teachers, which limits their usefulness as general instruments: the Schmidt et al. (2009) questionnaire can only be completed by (pre-service) teachers who teach all four of its reference subjects (mathematics, literacy, social studies, and science), and other instruments are tied to more specific contexts (Jimoyiannis, 2010; Doering & Veletsianos, 2008), such as Archambault and Barnett's (2010) "online habitats". Addressing these three issues would greatly increase the utility of the Schmidt et al. survey and provide the TPACK research community with a long-needed valid, reliable, and practical tool.
The debate between integrative and transformative views also has implications for how assessment tools are constructed (Graham, 2011). A few studies have aimed to validate Mishra and Koehler's (2006) transformative model of TPACK (Angeli & Valanides, 2009; Jang & Chen, 2010; Jin, 2019). Although their approaches were diverse, they all produced supportive results. Nevertheless, none of this research examined whether a transformative model fits the data better than its integrative counterpart.

The present study
TPACK is one of the most well-known models of teachers' knowledge for teaching with digital technology in the classroom, and it is used by a wide range of researchers and organizations. At the same time, TPACK research faces a range of theoretical and methodological challenges. In light of these considerations, the first aim of this study is to develop and validate a succinct TPACK self-report questionnaire that accurately measures all seven components while taking parsimony and practicality into account. A shorter scale that maintains acceptable accuracy and reliability makes it easier to integrate TPACK into large-scale research while decreasing respondent fatigue (Rammstedt & Beierlein, 2014; Schweizer, 2011). The second aim of this research is to use the new instrument to examine the internal structure of the TPACK framework and the relationships between its components.

Method

Sample
In this study, pre-service elementary school teachers taking a compulsory teaching methods course at Esa Unggul University participated with their consent; participation was only possible after informed consent had been given. The final sample comprised 117 respondents from two cohorts (63 women, 52 men, and two who did not report their gender), a participation rate of 54.2%. The mean age of the sample was 31.8 years (standard deviation: 14.3; age range: 22-56). Before being accepted into the teacher training program, all prospective teachers had to hold (or be completing) a bachelor's or master's degree in the subject area in which they wanted to specialize. In all, 17 different educational disciplines were represented in the sample. Regarding prior teaching experience, 70 (59.8%) of the pre-service teachers had none, 31 (26.5%) had one to two years, 11 (9.4%) had three to six years, and 5 (4.3%) had more than six years. Only seven pre-service teachers (6.0%) had completed an optional educational technology module.


Data analysis
To answer the first research question, an initial reliability analysis was carried out, followed by a confirmatory factor analysis (CFA) to test whether the data matched the theoretically predicted structure and to build the short-scale questionnaire (Schmitt, 2011). In the first phase, the reliability of the full set of items was calculated for each of the seven subscales. In addition to Cronbach's alpha, the most commonly used reliability measure, McDonald's omega (McDonald, 1999) was computed as a complement, because alpha has been criticized for underestimating internal consistency in certain circumstances (Deng & Chan, 2017). In the next step, a CFA was run on the complete scale to examine its structure and internal coherence. Based on item discrimination and factor loadings, items were then removed until each subscale comprised only the minimum number of items required to reflect all significant facets of each knowledge component (i.e., face validity). Finally, the reliability of the final model was tested again with Cronbach's alpha and McDonald's omega, along with a CFA that allowed certain items within subscales to correlate where reasonable (Schmitt, 2011).
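The first phase of this procedure, item-level reliability, can be illustrated with a minimal sketch. The authors worked in R (with the psych package); this hypothetical NumPy version implements the standard Cronbach's alpha formula on synthetic data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of scale totals
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Four perfectly parallel items: alpha reaches its maximum of 1.0
base = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
scores = np.column_stack([base, base, base, base])
print(round(cronbach_alpha(scores), 3))  # -> 1.0
```

McDonald's omega additionally requires factor loadings from a one-factor model, which is why the authors relied on dedicated software rather than a closed-form spreadsheet formula.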
To address the second research question, structural equation modeling (SEM) was used to examine the interrelations between the TPACK components. Using likelihood ratio tests, we built and compared models reflecting both an integrative view (core components and first-level hybrids jointly predicting TPCK) and a transformative view (only the first-level hybrids predicting TPCK). A mediation analysis was then carried out to determine the indirect effects of the core components on TPCK via their respective first-level hybrids. All analyses were conducted in the R software environment (version 3.6.0; R Core Team, 2019) using the packages psych (version 1.8.12; Revelle, 2018), lavaan (version 0.6-3; Rosseel, 2012), and semPower.
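The logic of the mediation analysis, in which an indirect effect is estimated as the product of the path from a core component to a hybrid (a) and the path from the hybrid to TPCK controlling for the core component (b), can be sketched on synthetic data. This is only an illustration of the product-of-coefficients idea, not the authors' lavaan models; the variable names and true path values below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical mediation chain: TK -> TPK -> TPCK (labels are illustrative only)
tk = rng.normal(size=n)
tpk = 0.6 * tk + rng.normal(scale=0.5, size=n)                 # true a = 0.6
tpck = 0.5 * tpk + 0.1 * tk + rng.normal(scale=0.5, size=n)    # true b = 0.5, c' = 0.1

def slopes(X, y):
    """OLS coefficients for y ~ X (intercept included); returns predictor slopes."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1:]

a = slopes(tk.reshape(-1, 1), tpk)[0]                  # TK -> TPK
b, c_prime = slopes(np.column_stack([tpk, tk]), tpck)  # TPK -> TPCK, controlling TK
indirect = a * b
print(f"a={a:.2f}, b={b:.2f}, indirect={indirect:.2f}, direct={c_prime:.2f}")
```

In the SEM framework the same quantity is obtained from the latent-path estimates, with significance assessed via bootstrap or delta-method standard errors rather than the plain regressions used here.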
Goodness of fit for the CFA and SEM models was assessed using the chi-square statistic (χ2), the Bentler Comparative Fit Index (CFI), the Tucker-Lewis Index (TLI), the Steiger-Lind Root Mean Square Error of Approximation (RMSEA), and, given our modest sample size (N < 150), the Standardized Root Mean Square Residual (SRMR) (Hooper, Coughlan, & Mullen, 2012). The cut-off criteria were CFI > 0.95, TLI > 0.95, RMSEA ≤ 0.05 (evaluated together with its 90% confidence interval), and SRMR ≤ 0.08 (Schreiber et al., 2006; Hooper et al., 2012). All analyses were conducted at a 0.05 level of statistical significance. Recent CFA recommendations indicate that even small samples can yield adequate models provided the number of variables per factor is not too small and internal consistency is strong (Wolf, Harrington, Clark, & Miller, 2013). In addition, we performed a post-hoc power analysis on each structural equation model.
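As a small illustration, the cut-off criteria above can be expressed as a reusable check. This sketch applies only the point-estimate thresholds; the RMSEA confidence-interval criterion is omitted for simplicity:

```python
def acceptable_fit(cfi: float, tli: float, rmsea: float, srmr: float):
    """Apply the study's cut-offs: CFI > .95, TLI > .95, RMSEA <= .05, SRMR <= .08."""
    checks = {
        "CFI": cfi > 0.95,
        "TLI": tli > 0.95,
        "RMSEA": rmsea <= 0.05,
        "SRMR": srmr <= 0.08,
    }
    return all(checks.values()), checks

# The initial 42-item model reported in the Results section fails every criterion
ok, detail = acceptable_fit(cfi=0.832, tli=0.819, rmsea=0.068, srmr=0.084)
print(ok, detail)  # -> False {...all four False...}
```

Encoding the thresholds once avoids the inconsistencies that arise when cut-offs are applied by hand across many candidate models.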

Results and Discussion
The first goal of this study was to develop a brief questionnaire for the economical assessment of TPACK. All scales proved reliable, but a CFA using the full set of 42 items in seven factors did not result in a well-fitting model (χ2(798) = 1223.8, p < .001; TLI = 0.819; CFI = 0.832; RMSEA = 0.068, 90% confidence interval [0.060; 0.074]; SRMR = 0.084). We therefore eliminated items from each subscale based on item discrimination and factor loadings, as well as theoretical considerations about the coverage of facets (wordings leading to item redundancy or limiting generalizability). For example, although item pck4 had a lower factor loading than item pck6, the authors considered addressing student assessment (pck4) more comprehensive than identifying student errors (pck6), so pck4 was retained in preference to pck6. Ultimately, the model comprised seven factors with four items per subscale, for a total of 28 items.
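The item-reduction step, keeping within each subscale the items that discriminate best, can be sketched as follows. This hypothetical example scores item discrimination by corrected item-total correlation on simulated data; the authors additionally weighed factor loadings and face validity, which a purely statistical rule cannot capture:

```python
import numpy as np

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of the remaining items in its subscale."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], total - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])

def keep_best(items: np.ndarray, k: int = 4) -> np.ndarray:
    """Indices of the k items with the highest corrected item-total correlation."""
    disc = corrected_item_total(items)
    return np.sort(np.argsort(disc)[::-1][:k])

rng = np.random.default_rng(1)
latent = rng.normal(size=300)
# Six hypothetical subscale items; the last two load only weakly on the latent trait
loadings = np.array([0.9, 0.85, 0.8, 0.75, 0.2, 0.1])
subscale = latent[:, None] * loadings + rng.normal(scale=0.6, size=(300, 6))
print(keep_best(subscale))  # -> [0 1 2 3]
```

On this simulated subscale the four high-loading items are retained, mirroring the reduction from six to four items per subscale described above.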
Acceptable fit indices for the final model were obtained by allowing five residuals from independent subscales to correlate with one another (χ2(324)). In Model 2, both first-level hybrid components were shown to carry indirect effects of the core components on TPCK: the effect of PK was mediated by both PCK (0.30, p = .04) and TPK (0.57, p = .02). For TK, there was just one substantial mediation, via TPK (0.38, p = .00), whereas TCK did not mediate in any relevant manner (0.04, p = .30). As for CK, neither PCK (0.09, p = .15) nor TCK (0.15, p = .33) emerged as a relevant mediator.
To recap, the first goal of this study was to develop a simple questionnaire that measures TPACK economically and realistically. Across the reliability analysis, the CFA, and the SEM models, the seven TPACK knowledge components could be assessed with four items per subscale. Cronbach's alphas ranged from .77 to .91 and McDonald's omegas from .79 to .92 across the seven subscales. The CFA showed sufficient differentiation between the subscales, although large correlations between PK and PCK, PCK and TPCK, and TPK and TPCK indicated substantial links between them. These patterns agree with earlier results and can be explained conceptually (Valtonen et al., 2019). The short scale thus offers a valid and reliable way to assess teachers' knowledge and makes it easier to include TPACK measures in studies with limited questionnaire space. It is also a generic scale, with vocabulary appropriate to a wide range of fields rather than subject-specific wording, which will help bring together study data from different subjects and grade levels.
The second goal, investigating the internal connections between the TPACK components, yielded a clear outcome: the integrative and transformative models became structurally indistinguishable. Although the integrative model characterizes TPCK as the point where core and first-level hybrid components meet, the paths from the core components proved non-significant and were removed, so the structure of the integrative model came to mirror the transformative model automatically. These results are congruent with Mishra and Koehler's (2006) original notion of TPACK as well as with the existing body of evidence supporting a transformative view (Jin, 2019). They suggest that, when TPCK is assessed in its current form, the hybrid components TPK and PCK have the most influence, with a particularly strong path from TPK to TPCK. Contrary to the theoretical model, we did not see a significant effect of TCK on TPCK. Comparing our results with previous TPACK structural equation models reveals differences. Pamuk et al. (2015) found TPK, TCK, and PCK to be significant predictors of TPCK. According to Dong et al. (2015) and Koh et al. (2013), TPK and TCK predicted TPCK, whereas PCK did not. Celik et al. (2014) found that PCK and TCK, but not TPK, had a positive influence on TPCK. In our study, TPK and PCK had significant effects on TPCK, whereas TCK did not. How can these outcomes be explained? One possibility is that the TCK items in the present study were reworded to be more distinguishable from TPCK; another is that the TPACK knowledge components interact differently in different contexts.

The construction of a quick TPACK evaluation tool | Indonesian Counselor Association (IKI) | DOI: 10.23916/0020210635020

Conclusion
Teacher education and professional development might benefit from the outcomes of this study. If TPCK is transformative, an increase in TK or PK does not automatically imply an increase in TPCK (Angeli, Valanides, & Christodoulou, 2016). As a result, teacher preparation programs focusing on TK alone will likely not translate directly into TPCK; instead, the transformation of knowledge must be deliberate. Teacher preparation programs must therefore provide students with many opportunities to learn and practice not only the various components of knowledge but, more importantly, the combinations of these components. As this study's findings confirm, TPCK development relies heavily on high-quality technology experiences throughout the preparation for the teaching profession.

Limitations and future research
Future research will need to overcome some of the shortcomings of this study, the most prominent of which concern the sample, the survey instrument, and the cross-sectional design. The sample consists of pre-service teachers in the initial stages of teacher training, which suggests that, apart from content knowledge, these teachers have only a limited grasp of TPACK's many components (Koehler et al., 2014). Further research is needed in larger samples with better statistical power, across cultures and teacher demographics, to establish the questionnaire's overall validity and the validity of its specific subscales. Increasing the sample size would also benefit ongoing research on the structure of TPACK.
Other limitations concern the survey instrument. As in many other TPACK investigations, contextual knowledge was not examined in this research. Few empirical studies have tried to examine context as part of teachers' knowledge, despite it having been considered an important body of knowledge (Mishra et al., 2019). According to Porras-Hernández and Salinas-Amescua (2013), the context and its levels (micro, meso, macro) should be included in future studies to better understand the structure and practical application of this knowledge. Besides the lack of contextual references, a further issue with the instrument's content is that evaluating teacher knowledge at the subject level might be overly broad, as knowledge can differ between topics within a discipline. Future research should therefore explore more fine-grained ways to measure TPACK.
The assessment relies solely on pre-service teachers' self-reported knowledge, which raises the question of how reliably they can describe their own knowledge (Drummond & Sweeney, 2017). Future research could compare self-reports with other TPACK measures, such as lesson observations or performance assessments, to reduce these biases (Koehler et al., 2012) and, by providing evidence of convergent validity, demonstrate the validity of self-reported TPACK (Jung & Baser, 2014). It will also be necessary to investigate the interaction between TPACK and other important constructs such as self-efficacy, self-regulation, beliefs, or attitudes toward educational technology; Krauskopf and Forssell (2018) showed that beliefs influence the links between self-reported TPACK and other factors. Understanding how these factors interact may help increase the effectiveness of technology integration in teacher education and the classroom. Finally, although our findings support a transformative interpretation of TPACK, the reported effects are correlational rather than causal; the precise interaction of the TPACK components and their reciprocal effects on one another will require longitudinal examination.
Even with these limitations, this study may represent a first step toward a simpler but still trustworthy survey-based assessment of TPACK that takes the transformative character of the model's combination of knowledge domains into account. The speed and efficiency of such assessments should make it easier to include TPACK evaluations in larger studies with diverse teacher populations.