Students' Perspectives on the Use of PutraPacer as a Differentiated Assessment Tool
Christye Majuddin
Universiti Putra Malaysia, Malaysia
ORCID: 0000-0002-9974-7977
Mas Nida Md. Khambari
Universiti Putra Malaysia, Malaysia
ORCID: 0000-0002-7517-9442
Su Luan Wong
Universiti Putra Malaysia, Malaysia
ORCID: 0000-0001-7824-314X
Norliza Ghazali
Universiti Putra Malaysia, Malaysia
ORCID: 0000-0002-4735-1209
Noris Mohd. Norowi
Universiti Putra Malaysia, Malaysia
ORCID: 0000-0001-7420-5867
Abstract
While educators worldwide are moving towards the universal design for learning, it is also essential to
ensure learners are suitably assessed. Assessment and learning are deeply reciprocal: assessment informs
learning practices, and vice versa. In line with this view, PutraPacer was developed as a customizable tool
to empower instructors to embrace differentiated assessment. The objective of this pilot study is to elicit
feedback on the use of PutraPacer as a differentiated assessment tool among a group of undergraduate
students at a higher education institution. Drawing on the UTAUT model, this study employs an explanatory
sequential mixed-methods design to gather both quantitative and qualitative data. The quantitative findings
show that the mean values for performance expectancy, effort expectancy, social influence, and behavioural
intention to use PutraPacer range between 3.56 and 3.67. Based on Pearson's correlation coefficients, there
are strong associations between performance expectancy, effort expectancy, and social influence and the
behavioural intention to use PutraPacer as a differentiated assessment tool. The qualitative findings reveal
that the students perceived PutraPacer as (i) user-friendly, (ii) a learning tool that promotes an
individualised learning experience and supports students with different abilities, and (iii) a good platform
for practices, quizzes, and revision.
Copyright © 2022 by the authors; licensee CEDTECH by Bastas. This article is published under the terms of the
Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/).
INTRODUCTION
Ensuring rock solid clarity about where we want students to end up as a result of a sequence of learning is
fundamental to educational success. Remembering that we cannot reach the mind we do not engage ought
to be a daily compass for educational planning (Tomlinson, 2001, p. 1).
Assessment is deemed important as proof that learning happens, regardless of students' backgrounds,
characteristics, and abilities. The quote above can be read to mean that assessment is a powerful tool for
determining whether an instructor has engaged the students' minds and whether students are actually
learning. Beyond providing evidence that students have learned, assessment can benefit the assessor
(Nasri et al., 2010), namely the instructor, by improving his/her pedagogical qualities and enhancing his/her
scholarship of teaching and learning. Most importantly, assessment allows instructors to form a better
picture of their learners' abilities, as it allows continuous interaction between assessment and instruction
(Al-Mahrooqi & Denman, 2018).
Alternative assessment is a student-centered approach that focuses on the application of knowledge and
skills to real life while taking the individual features of students into consideration (Caliskan & Kasikci,
2010). Differentiated assessment, one such approach, is put forth in this research to address mixed abilities
and diverse learning styles. Differentiation stems from beliefs about differences among learners (Algozzine
& Anderson, 2007; Lawrence-Brown, 2004; Tomlinson, 2001), such as background, characteristics, learning
style, needs, preferences, interests, and abilities. The instructor's role has therefore expanded in a multitude
of ways to address this diversity. As for students, differentiated assessment gives them the opportunity to
choose how they want to be assessed and to prove that they have learned (NSW Education Standards
Authority, n.d.; Tomlinson, 2001). It celebrates students' diversity and acknowledges their mixed learning
abilities.
To ensure the needs of diverse learners are met, educators bear the responsibility of planning strategically
to achieve targeted standards (Suprayogi et al., 2017; Tomlinson, 2015). One such responsibility is to apply
the principle of differentiation in teaching and learning (Gregory & Chapman, 2013). Differentiation in
assessment is an approach to alternative assessment that attempts to address differences among learners.
Time and again, studies have shown that learners differ not only in characteristics and background, but also
in learning abilities, styles, preferences, needs, adult support, experience, and interests (Algozzine &
Anderson, 2007; Kaur et al., 2018; Lawrence-Brown, 2004; Moon et al., 2020; Tomlinson, 2001).
Differentiated assessment therefore provides these learners with flexibility in skills development, levels of
knowledge acquisition, and the types of assessment they undertake (Varsavsky & Rayner, 2013). Recent
studies sharing this notion also emphasize the need to transform teaching and learning methods toward
innovation based on information and technology (Anggraeni, 2018; Gulicheva et al., 2017; Lawrence et al.,
2019). When teaching and learning methods are transformed, the way learners are assessed should be
transformed in tandem.
However, Malaysian education predominantly uses standardized testing as a form of assessment, although
this practice has been subjected to heavy criticism (Chin et al., 2019; John, 2018; Loh & Teo, 2017; Wilson &
Narasuman, 2020). Standardized tests, widely practiced in examination-oriented education, cause excessive
learning fatigue and prevent learners from growing within their capabilities and educators from being
creative (Chan et al., 2018). Most importantly, standardized tests fail to address learners' diverse and
individual needs (Noman & Kaur, 2014; Tomlinson, 2015), as such assessments “are not designed to
address variance in readiness, interest, or learning profile” (Tomlinson & Moon, 2013, p. 76). Strikingly,
when education is examination-driven, not only does learners' passion for new knowledge fade, but
educators' passion for teaching is also diluted: one study found that teachers did not know how to let their
students learn if there were no examinations (Ho et al., 2012).
Realizing the disadvantages of standardized tests, the practice of high-stakes standardized examinations
has been abolished through the national education reform. The Ministry of Education currently advocates a
holistic approach to assessment and highlights the practice of assessment for learning, or formative
assessment, at all levels (Ministry of Education, 2013, 2015). The implementation of assessment for learning requires teachers
to assess their students using alternative methods that go beyond worksheets and written assignments (Chin
et al., 2019). In turn, this opens more opportunities for teachers to exercise differentiation in the
classroom, since effective differentiation of instruction is inseparable from the practice of formative
assessment (Tomlinson & Moon, 2013).
Previous research on alternative assessments in Malaysia shows that educators have employed various tools
for formative assessment, for example rubrics, portfolios, online games, and concept maps (Alias & Osman,
2015; Ghani et al., 2017; Swaran Singh & Abdul Samad, 2012). However, no generic, systematic, and
dedicated tool is yet available for educators to employ differentiated assessment in Malaysian classrooms.
Therefore, the main objective of this study is to elicit students' feedback on the use of a differentiated
assessment tool named PutraPacer.
PutraPacer offers an alternative to examination-driven education, as it enables educators who do not have
the pedagogical know-how of differentiated assessment to implement this approach with their students. As
a generic web-based tool, PutraPacer can be applied to any subject and education level.
PutraPacer borrows the elements of differentiated instruction, which provides choice and opportunities for
students to receive an appropriate education in general education classrooms (Lawrence-Brown, 2004).
Compared to traditional assessments or tests, differentiated assessment can benefit students ranging from
the gifted to those with significant disabilities by providing a tiered, multi-level assessment system that
adapts to students' answers and responses. For instance, instructors could create a quiz with tiered levels of
assessment based on different levels of difficulty, such as easy, medium, and hard (Figure 1). Students can
only advance to the next level of difficulty if they accomplish the goal of the level they are currently at. The
goal refers to a pass mark determined by the instructor. For example, if students achieve the pass mark for
the 'easy' level, they proceed to answer questions at the 'medium' level. Otherwise, if they are not able to
advance to the next level, they continue to answer subsequent questions offered at their current level.
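To make the progression rule concrete, the following is a minimal sketch of the level-advancement logic just described. It is illustrative only: the level names, the pass-mark handling, and the next_level() function are hypothetical and are not drawn from PutraPacer's actual implementation.

```python
# Illustrative sketch of the tiered-progression rule described above.
# Level names, the pass mark, and next_level() are hypothetical; they
# are not taken from PutraPacer's code base.

LEVELS = ["easy", "medium", "hard"]

def next_level(current: str, score: float, pass_mark: float) -> str:
    """Return the difficulty level a student should attempt next.

    A student advances only after reaching the instructor-set pass mark;
    otherwise they continue with further questions at the current level.
    """
    if score >= pass_mark and current != LEVELS[-1]:
        return LEVELS[LEVELS.index(current) + 1]
    return current

# A student scoring 75% at 'easy' with a 70% pass mark moves to 'medium';
# a student scoring 60% stays at 'easy'.
print(next_level("easy", score=75.0, pass_mark=70.0))  # medium
print(next_level("easy", score=60.0, pass_mark=70.0))  # easy
```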
Another distinct feature of PutraPacer as a differentiated assessment tool is its capacity to provide a platform
for students to demonstrate what they have learned according to their niche abilities and interests.
Instructors could create questions that prompt students to submit or present their answers in various ways,
such as mind maps, audio recordings, or video recordings (Figure 2). This feature was developed based on
Gardner's (1983) theory of multiple intelligences. He asserts that people process the world and demonstrate
their strengths in multiple ways, and that intelligence can be constructed and achieved through
non-conventional methods (Crim et al., 2013). Furthermore, in the context of assessment, Gardner (1992)
believes that assessments that fail to address differences among individuals are outdated.
Vygotsky's (1999) theory of the Zone of Proximal Development (ZPD) is fundamental in foregrounding the
concept of differentiation in the development of PutraPacer. According to Tomlinson and Moon (2013),
within the ZPD, learning happens “on a novice-to-expert continuum that builds over time rather than being
constrained by a specific set of grade-level standards” (p. 72). The ZPD concept, which is also operationalized
as scaffolding, enables students to accomplish tasks that are at first beyond their capabilities (Wood et al.,
1976). Scaffolding is found to help develop students' abilities by increasing the complexity levels of a task and
revealing their hidden potential (Ajideh & Nourdad, 2012; Shabani et al., 2010).
Figure 2. Students could attach a file to present their answer in an alternative form
PutraPacer aims to provide flexibility that eases the pressure on middle- to low-ability learners, allowing
them to genuinely learn and engage in depth with their learning instead of memorizing information
(Llewellyn, 2002), and that benefits advanced learners with opportunities to engage in more challenging,
higher-order thinking tasks. This will, in turn, provide a much more accurate insight into students' skills and
abilities (Dikli, 2003). The prototype of the assessment tool was employed with a group of undergraduate
students at Universiti Putra Malaysia to assess its usability and functionality. The central focus of this study
is therefore to elicit learners' feedback and perspectives on the use of PutraPacer as a differentiated
assessment tool.
THEORETICAL FRAMEWORK
The Unified Theory of Acceptance and Use of Technology (UTAUT) was adopted as the lens of this study.
Students' feedback on the use of PutraPacer as a differentiated assessment tool was elicited both
quantitatively and qualitatively through survey questionnaires and one-to-one semi-structured interviews.
Many studies on computer-based assessment utilize UTAUT, as it is regarded as a comprehensive framework
with high explanatory power. It has also been successfully applied in various educational contexts
(Lawson-Body et al., 2018; Suki & Suki, 2017). Based on the UTAUT model, intention or usage is significantly
determined by four constructs, namely performance expectancy, effort expectancy, social influence, and
facilitating conditions (Venkatesh et al., 2003). Generally, in this study, 'performance expectancy' refers to
the degree to which the students believe that PutraPacer helps them in their learning and eventually
improves their performance in their studies. 'Effort expectancy' is the degree of ease related to the use of
PutraPacer as a differentiated assessment tool. Meanwhile, 'social influence' means the degree to which the
students believe they should use PutraPacer based on the influence of people such as classmates and
lecturers, and also of the environment, such as the university itself. 'Behavioural intention to use' reflects
the degree to which the students intend to use PutraPacer for their studies in the future. The construct
'facilitating conditions' was not measured, as PutraPacer was employed as an obligatory assessment in a
course the students were enrolled in at the time of the study.
METHODS
This study employs an explanatory sequential mixed-methods design: in the first phase, quantitative data
were collected through questionnaire surveys; in the second phase, qualitative data were gathered through
observation and interviews. With this design, the results of the quantitative phase provide general insights
into the research problem, while the qualitative data help explain the quantitative results (Creswell, 2012).
Procedure
Prior to data collection, course instructors were recruited through a series of training workshops on how to
utilise PutraPacer for differentiated assessment. To supplement the training, video tutorials and a 'how-to'
module were also provided, and the course instructors could contact the researchers or the developer of
PutraPacer for assistance. After the training, two course instructors volunteered to participate in the pilot
study. These course instructors are experts in Human-Computer Interaction and Economic Education,
respectively, and have more than five years of teaching experience. Both managed to conduct two
taxonomy-based assessments using PutraPacer before data collection for the pilot study began.
Data Collection
In the first phase of data collection, a questionnaire with 30 items was distributed and collected online
using Google Forms. The items were adopted from the work of Ibrahim et al. (2016), which is based on the
UTAUT model of Venkatesh et al. (2003). Based on the Cronbach's alpha coefficient test, the reliability of all
constructs in their study exceeded .70; thus, all constructs were acceptable (Hair et al., 2020). The
questionnaire comprises four constructs, namely performance expectancy, effort expectancy, social
influence, and behavioural intention to use. Each construct was measured on a five-point Likert scale
ranging from 1 (strongly disagree) to 5 (strongly agree).
The second phase of data collection involved observation and interviews. First, the researcher attended a
Multimedia Laboratory session, a class of eighteen first-year undergraduate students with a course
instructor. During this session, a non-participatory observation was conducted, focusing on how the students
responded to quiz questions administered online via PutraPacer. Field notes were taken during this
observation. After the quiz session was over, semi-structured interviews were carried out with two
volunteering students. The interviews, which lasted about 40 minutes each, were conducted to draw out the
students' feedback on their experiences of using PutraPacer. To demonstrate the validity and credibility of
the qualitative research methods, the researchers adopted approaches such as trustworthiness and
triangulation. The use of an expert-validated interview protocol (Appendix A) adds trustworthiness to this
study (Yin, 2016). Meanwhile, triangulation of data sources, where data were collected from multiple sources
such as observation and interviews, adds credibility to the study (Patton, 2015).
Table 1. Cronbach's alpha coefficients for the questionnaire constructs
Constructs Number of items Cronbach’s alpha
Performance expectancy 9 .987
Effort expectancy 10 .975
Social influence 6 .977
Behavioural intention to use 5 .979
Data Analysis
For the quantitative component, reliability analysis, descriptive statistical analysis, and correlation analysis
were used to analyse the data with the Statistical Package for the Social Sciences (SPSS) Statistics version 22
software. The analyses aim to obtain a richer understanding of the students' perspectives based on the
UTAUT model.
A reliability test was conducted using the Cronbach's alpha coefficient to measure the consistency and
stability of the instrument used for the quantitative data (Cronbach, 1951). The questionnaire items were
analysed using SPSS software. As summarized in Table 1, the Cronbach's alpha values for all constructs were
above 0.90, which is preferable (Pallant, 2016). The findings of the reliability analysis reveal that all four
constructs are valid and reliable measures of students' perceptions of PutraPacer. This is likely because the
questionnaire items were taken from a previously conducted study in which they had been empirically
tested and conceptualised.
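For readers who wish to verify such reliability figures outside SPSS, the following is a minimal sketch of what the Cronbach's alpha computation does, following Cronbach's (1951) formula. The data here are simulated, not the study's actual responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)    # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated 5-point Likert responses for a 9-item construct: a latent
# agreement level per respondent plus item-level noise, clipped to 1-5.
rng = np.random.default_rng(0)
trait = rng.normal(3.5, 0.8, size=(32, 1))
noise = rng.normal(0.0, 0.4, size=(32, 9))
responses = np.clip(np.rint(trait + noise), 1, 5)
print(round(cronbach_alpha(responses), 3))       # high alpha, by construction
```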
Meanwhile, the two-cycle data analysis adopted from Saldaña (2009) was employed for the qualitative
component. In the first cycle, after the field notes and interview transcripts were read and re-read to increase
the researcher's familiarity with the data, the data were coded using In Vivo Coding and Descriptive Coding
with memos. In the second cycle, Pattern Coding was used with memos-on-memos to develop major
categories from the data. The qualitative data analysis was finalized by regrouping the categories to form
coherent themes. To retain both the emic perspective of the participants and the etic perspective of the
researchers and theory, a constant comparative method was employed for the qualitative data analysis
(Charmaz, 2014, p. 53).
FINDINGS
Quantitative Data
This section discusses the descriptive statistical analysis of the four constructs (performance expectancy,
effort expectancy, social influence, and behavioural intention to use) and the relationships between the
constructs used in this study.
Performance expectancy
Table 2 summarizes the students' perspectives on performance expectancy. There are nine items in this
construct, with an overall mean of 3.59 (SD=1.26). The highest mean refers to students' perception that using PutraPacer is
helpful for their learning (Mean=3.72, SD=1.32), while the lowest mean in this construct refers to the
perception that PutraPacer helps the students become autonomous learners (Mean=3.44, SD=1.24).
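As a brief illustration of the descriptive statistics reported for each construct, the sketch below computes a mean and standard deviation from simulated item responses. The aggregation shown (pooling all item responses for a construct) is one plausible reading; the paper does not state the exact SPSS aggregation used, and the data are illustrative only.

```python
import numpy as np

# Simulated 5-point Likert responses for a nine-item construct
# (rows = respondents, columns = items); values are illustrative only.
rng = np.random.default_rng(7)
items = rng.integers(1, 6, size=(32, 9)).astype(float)

# Pool all item responses for the construct, then report Mean and SD.
print(f"Mean={items.mean():.2f}, SD={items.std(ddof=1):.2f}")
```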
Effort expectancy
Findings in Table 3 indicate the students' perspectives on effort expectancy. This construct consists of ten
items, with an overall mean of 3.67 (SD=1.34). The highest mean in this construct refers to how instructions
given by the lecturers are helpful in using PutraPacer (Mean=3.91, SD=1.35). The lowest mean indicates that
respondents needed more time than expected to get familiar with PutraPacer (Mean=3.13, SD=1.56).
Social influence
Table 4 shows the students' perspectives on social influence. Six items make up this construct, with an
overall mean of 3.59 (SD=1.27). More than 50% of the respondents either agree or strongly agree with all
the statements except for one item, where only 40.6% of the respondents either agree or strongly agree that
their classmates influenced their use of PutraPacer for revision. The highest mean in this construct refers to
the lecturer's support in using PutraPacer for learning (Mean=3.88, SD=1.21), while the lowest mean, 3.19
(SD=1.23), refers to the perceived influence of the respondents' classmates with regard to using PutraPacer
for revision.
Table 6. Pearson correlations between performance expectancy, effort expectancy, social influence, and
behavioural intention to use
                                     1        2        3        4
1. Performance expectancy            -
2. Effort expectancy               .955**     -
3. Social influence                .951**   .937**     -
4. Behavioural intention to use    .958**   .956**   .953**     -
**Correlation is significant at the 0.01 level (2-tailed)
Figure 3. Linear relationship between performance expectancy and behavioural intention to use PutraPacer
Pearson correlations
Table 6 provides a summary of the Pearson correlation analysis used to test the relationships between
performance expectancy (PE), effort expectancy (EE), social influence (SI), and behavioural intention to
use (BI).
Based on the findings, the Pearson's correlation values of r=0.958 (PE and BI), r=0.956 (EE and BI), and
r=0.953 (SI and BI) show that there are strong positive relationships between performance expectancy,
effort expectancy, and social influence and the behavioural intention to use PutraPacer. The higher the
performance expectancy, effort expectancy, and social influence among the students, the higher the
behavioural intention to use PutraPacer will be.
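As a sketch of how such a coefficient is obtained, the snippet below computes Pearson's r and its two-tailed p-value for one pair of constructs. The per-student scores here are invented for illustration; only the variable names mirror the paper's abbreviations.

```python
import numpy as np
from scipy import stats

# Hypothetical per-student construct scores (e.g., the mean of each
# construct's Likert items); the values below are illustrative only.
pe = np.array([3.1, 4.2, 3.6, 2.7, 4.5, 3.9, 2.9, 4.0])  # performance expectancy
bi = np.array([3.0, 4.3, 3.4, 2.8, 4.6, 3.8, 3.1, 4.1])  # behavioural intention

r, p = stats.pearsonr(pe, bi)  # two-tailed test, as in Table 6
print(f"PE-BI: r = {r:.3f}, p = {p:.4f}")
```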
Figure 3, Figure 4, and Figure 5 show the linear relationships of performance expectancy, effort expectancy,
and social influence with behavioural intention to use PutraPacer, respectively.
Figure 4. Linear relationship between effort expectancy and behavioural intention to use PutraPacer
Figure 5. Linear relationship between social influence and behavioural intention to use PutraPacer
Qualitative Data
Three themes emerged from the analysis of the interviews and observation, namely, PutraPacer:
1. is user-friendly,
2. promotes individualised learning experience and supports students with different abilities, and
3. functions as a platform for practices, quizzes, and revision.
PutraPacer is user-friendly
The interviewed participants, Ahmad and Nur, gave positive feedback on their experience of using
PutraPacer to answer quiz questions. Both indicated that using PutraPacer for assessment is enjoyable.
It felt good. It felt very natural and intuitive for the most part…I would say just the general
layout felt very clean very nice. It didn’t have many...it felt very easy to go through from
question to question and it was laid out really pleasingly (Ahmad).
I actually really enjoyed the experience because, well to compare with my previous
experience with PutraBlast quiz, I prefer PutraPacer more because it’s easier, it’s very
user-friendly and I really like the design of it (Nur).
Based on the observation, the researcher noticed that the quiz session was conducted in a laid-back manner.
The session was smooth, though occasionally some students asked for help with the technical settings of
PutraPacer, such as logging in and navigating the pages. In a casual discussion, when the instructor asked
what the students thought of the assessment, a couple of students responded that they had been competitive
with each other because the quiz felt like an online game, as they needed to achieve a certain score to
advance to the next level.
PutraPacer promotes individualised learning experience and supports students with different abilities
Based on her previous experience of sitting for assessments in written and digital forms, Nur believed that
PutraPacer is different as it offers an individualised learning experience. In addition, Ahmad and Nur agreed
that PutraPacer can support students with different abilities in a way that makes students more aware of
their own ability.
…It’s more individual and you can test your ability to know how much you know about
the subject…we can know which level we are at (Nur).
From my thoughts, it seems to allow different types of people to be able to engage with
the same quiz whereas in the standard quiz, we have both hard and easy questions at
once which isn’t really suited for people who aren’t as advance (Ahmad).
As observed by the researcher during the quiz session, the students did the quiz at their own pace. Although
the questions given were the same for all the students, they did not appear in the same order, as they were
randomly ordered. This feature encourages students to think on their own, as each had to focus on their
own set of questions. Some of the students also made the effort to self-review the questions they had
answered wrongly and discuss them with the instructor.
PutraPacer functions as a platform for practices, quizzes, and revision
When asked whether they would support the use of PutraPacer as an assessment tool (see Appendix A),
both students responded positively, although Nur qualified the contexts in which she would use it.
I would [support]. It seems like a very good platform from what I have used of it. I think
its use could be expanded as it is a good tool for students and teachers alike (Ahmad).
For test, maybe not. For quizzes, yes. For in class, for practice and revision… Because, it
has levels, right? (Nur).
These findings suggest that the students are open to other types of assessment and that PutraPacer is well
received as a tool for differentiated assessment. They also appeared ready to move away from the practice
of standardized testing and to embrace alternative assessments.
DISCUSSION
This pilot study aims to explore students' perspectives on the use of PutraPacer as a differentiated
assessment tool based on the UTAUT constructs, namely performance expectancy, effort expectancy, social
influence, and behavioural intention to use. One UTAUT construct, facilitating conditions, was not measured
because PutraPacer was used as an obligatory assessment in the course.
Results from the quantitative data reveal that, in terms of performance expectancy, most of the students
perceived that PutraPacer brings advantages for learning purposes. The Pearson's correlation coefficient of
r=0.958 (Table 6) shows that there is a strong positive relationship between performance expectancy and
behavioural intention to use. In other words, the higher the performance expectancy among students, the higher the
behavioural intention to use PutraPacer will be. As claimed by Lin and Lai (2019), performance expectancy
demonstrates the degree to which students believe that a system can assist them in improving their academic
performance. This finding therefore indicates that PutraPacer has the potential to be used to enhance
students' learning. Findings from the qualitative data further support this statement, whereby positive
learning experiences were gained by the students. In the interviews, the students suggested that the quiz
they did was individualised and that, at the end of the quiz, they were aware of their own level of ability.
This shows that PutraPacer managed to integrate the element of differentiation, where students are provided
with opportunities to receive a proper education in the context of general education classrooms by learning
according to their own abilities (Lawrence-Brown, 2004). Through such differentiation, there will be less
pressure on learning, especially among low- to middle-ability learners, as they are able to focus on the
learning instead of memorizing the information. Meanwhile, advanced learners will have the opportunity to
assume more challenging tasks (Llewellyn, 2002). This finding resonates with the idea that giving achievable
tasks appropriate to students' level of ability, not bounded by a rigid timeline, could build students' mastery
and prepare them for higher-level tasks (Moon et al., 2020). As Moon et al. (2020) asserted, tasks that are
too difficult for students are detrimental to their learning progress, while giving them tasks that are not
challenging is deemed “a lost learning opportunity” (p. 34). As implied by one of the students during the
interview, using PutraPacer gave her a sense of individualised learning. Individualised learning, such as
completing a given task at the student's own pace, enables students to improve their level of ability (Ajideh
& Nourdad, 2012; Ali, 2015).
In terms of effort expectancy, it was found that most of the students agree or strongly agree that PutraPacer
is easy to use. User-friendly technology is closely related to the term 'effort expectancy', which Venkatesh et
al. (2003) defined as the degree of ease associated with the use of a system. Effort expectancy is also believed
to significantly influence users' intention to use a technology (Venkatesh et al., 2003). This finding is
important for determining the effectiveness of PutraPacer as a computer-based assessment tool and whether
it can easily be accepted and adopted by users (Catherine et al., 2017; Lin & Lai, 2019). In the interviews,
both students agreed that PutraPacer is easy to use, which consolidates the quantitative finding. For a
formative assessment tool, being user-friendly is a trait that could support effective feedback (Tomlinson &
Moon, 2013). According to Moon et al. (2020), feedback helps students understand the purpose of learning
goals, which consequently gives them opportunities to reach those goals. On the same note, while agreeing
that PutraPacer is user-friendly, these students also expressed their enjoyment of using PutraPacer during
the quiz. During the observed lab session in which the quiz was conducted, the students seemed unpressured
by the fact that they were doing an assessment. This denotes a positive finding consistent with Hashemian
and Azadi (2011), who emphasise that learning without pressure could lead to creativity, besides creating
awareness and giving learning satisfaction among students. An earlier study by Isen et al. (1987) also
suggests that positive affect plays a role in facilitating creative problem-solving, which is now considered
one of the essential skills in 21st century education (Alias & Osman, 2015; Burke, 2005).
In terms of social influence, the quantitative results revealed that the majority of the students perceived
that, in one way or another, their classmates, lecturer, and the university environment had influenced their
use of PutraPacer. Based on the observation, the researcher noted some attributes of competitiveness among
the students. Although Moon et al. (2020) caution against emphasizing competition among students,
especially when striving for a “socially relevant classroom” (p. 37), some studies suggest that competitive
behaviour is a trait of people who are most likely to succeed (Baumann & Harvey, 2018) and that the
capability of an instructor has a more significant effect on learners' motivation and performance (Nguyen &
Nguyen, 2010).
Based on the quantitative data, the mean values for performance expectancy, effort expectancy, social
influence, and behavioural intention to use ranged between 3.56 and 3.67. The highest-scoring construct is
effort expectancy (Mean=3.67), while performance expectancy and social influence generated similar mean
values (Mean=3.59). Findings from the interviews also indicated that the students support the use of
PutraPacer to some extent. This implies that most of the students have the intention to use PutraPacer in the future,
especially if it were used for practice, quizzes, and revision, owing to its level-based feature. This view echoes
Noguera et al. (2015), who believe that students benefit more when they are given sufficient opportunities
to advance to the next level based on demonstrations of their content knowledge and skills; a one-off graded
test might not be helpful in this context. These arguments are also consistent with the idea of conducting
formative assessments in a differentiated classroom (Moon et al., 2020). Moon et al. (2020) believe that
formative assessments should not be graded, since this is when students learn to master the content of a
topic or a lesson, and data from these assessments will be used by instructors for instructional planning.
Nevertheless, PutraPacer can potentially be used to administer both formative and summative assessments.
These findings are important for guiding instructors on the direction of their instructional plans, especially
when they are considering using PutraPacer as an assessment tool.
CONCLUSIONS
In sum, the findings from the pilot study suggest that most of the students who took part in this study
perceived that the use of PutraPacer has positive effects on their learning. Besides enriching their individual
learning experience, PutraPacer also shows its capacity to address learners' diverse abilities. PutraPacer is
easy to use, and the students enjoyed using it. This pilot study has some limitations. First, the number of
interview participants is very small; nevertheless, the data from the interviews provide insights into how the
features of PutraPacer can be improved. Second, since part of this study is qualitative in nature, the findings
are not meant for generalization; they can only be applied to classes with similar characteristics. Although
PutraPacer is deemed a good platform for differentiated assessment, there is still room for improvement,
especially in realising equal opportunity in learning. This study welcomes further research, whether
quantitative or qualitative in nature, to explore how a differentiated assessment tool could address learners'
individual needs while at the same time providing them with unlimited access to learning resources
regardless of their proficiency levels.
Author contributions: All authors were involved in the concept, design, collection of data, interpretation, writing, and critical
revision of the article. All authors approved the final version of the article.
Funding: This work was supported by the Centre for Academic Development, Universiti Putra Malaysia via the Incentive for
Teaching and Learning Grant (GIPP 9323753).
Declaration of interest: Authors declare no competing interest.
Data availability: Data generated or analysed during this study are available from the authors on request.
REFERENCES
Adnan, N. L., Mohd Sallem, N. R., Muda, R., & Wan Abdullah, W. K. (2019). Is current formative assessment
still relevant in turning students into deep learners? TEM Journal, 8(1), 298-304. https://doi.org/10.
18421/TEM81-41
Ajideh, P., & Nourdad, N. (2012). The effect of dynamic assessment on EFL reading comprehension in
different proficiency levels. Language Testing in Asia, 2(4), 101-122. https://doi.org/10.1186/2229-
0443-2-4-101
Algozzine, B., & Anderson, K. M. (2007). Tips for teaching: Differentiating instruction to include all students.
Preventing School Failure: Alternative Education for Children and Youth, 51(3), 49-54. https://doi.org/
10.3200/psfl.51.3.49-54
Ali, H. I. H. (2015). Toward differentiated assessment in a public college in Oman. English Language Teaching,
8(12), 27-36. https://doi.org/10.5539/elt.v8n12p27
Alias, A., & Osman, K. (2015). Assessing oral communication skills in science: A rubric development. Asia
Pacific Journal of Educators and Education, 30, 107-122. http://eprints.usm.my/34769/1/
APJEE_30_Art_7_(105_-_122).pdf
Al‐Mahrooqi, R., & Denman, C. (2018). Alternative assessment. In J. I. Liontas & M. DelliCarpini (Eds.), The
TESOL encyclopedia of English language teaching (pp. 1-6). John Wiley & Sons, Inc. https://doi.org/
10.1002/9781118784235.eelt0325
Anggraeni, C. W. (2018). Promoting education 4.0 in English for survival class: What are the challenges? The
Journal of English Language and Literature, 2(1), 12-24. https://doi.org/10.31002/metathesis.v2i1.676
Baumann, C., & Harvey, M. (2018). Competitiveness vis-à-vis motivation and personality as drivers of
academic performance. International Journal of Educational Management, 32(1), 185-202.
https://doi.org/10.1108/IJEM-10-2017-0263
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy
& Practice, 5(1), 7-74. https://doi.org/10.1080/0969595980050102
Burke, K. (2005). How to assess authentic learning (4th ed.). Corwin Press.
Caliskan, H., & Kasikci, Y. (2010). The application of traditional and alternative assessment and evaluation
tools by teachers in social studies. Procedia Social and Behavioural Sciences, 2(2010), 4152-4156.
https://doi.org/10.1016/j.sbspro.2010.03.656
Catherine, N., Geofrey, K. M., Moya, M. B., & Aballo, G. (2017). Effort expectancy, performance expectancy,
social influence and facilitating conditions as predictors of behavioural intentions to use ATMs with
fingerprint authentication in Ugandan banks. Global Journal of Computer Science and Technology: E
Network, Web & Security, 17(5), 5-22. https://computerresearch.org/index.php/computer/article/
view/1622
Chan, T. W., Looi, C. K., Chen, W., Wong, L. H., Chang, B., Liao, C. C. Y., Cheng, H., Chen, Z. H., Liu, C.C., Kong,
S.C., Jeong, H., Mason, J., So, H. J., Murthy, S., Yu, F. Y., Wong, S.L., King, R. B., Gu, X., Wang, M., …
Ogata, H. (2018). Interest-driven creator theory: Towards a theory of learning design for Asia in the
twenty-first century. Journal of Computers in Education, 5, 435-461. https://doi.org/10.1007/s40692-
018-0122-0
Chan, Y. F., & Sidhu, G. K. (2010). Authentic assessment and pedagogical strategies in higher education.
Journal of Social Sciences, 6(2), 153-161. https://doi.org/10.3844/jssp.2010.153.161
Charmaz, K. (2014). Constructing grounded theory. SAGE.
Chin, H., Thien, L. M., & Chew, C. M. (2019). The reforms of national assessments in Malaysian education
system. Journal of Nusantara Studies, 4(1), 93-111. https://doi.org/10.24200/jonus.vol4iss1pp93-111
Creswell, J. W. (2012). Educational research: Planning, conducting, and evaluating quantitative and
qualitative research (4th ed.). Pearson Education, Inc.
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297-334.
https://doi.org/10.1007/BF02310555
Dikli, S. (2003). Assessment at a distance: Traditional vs. alternative assessments. The Turkish Online Journal
of Educational Technology, 2(3), 13-19. https://files.eric.ed.gov/fulltext/EJ1101956.pdf
Fisk, P. (2017). Education 4.0 … the future of learning will be dramatically different, in school and throughout
life. https://www.peterfisk.com/2017/01/future-education-young-everyone-taught-together/
Gardner, H. (1992). Assessment in context: The alternative to standardized testing. In B. R. Gifford & M. C.
O’Connor (Eds.), Changing assessments: Alternative views of aptitude, achievement, and instruction
(pp. 77-119). Springer. https://doi.org/10.1007/978-94-011-2968-8_4
Ghani, I. B. A., Ibrahim, N. H., Yahaya, N. A., & Surif, J. (2017). Enhancing students’ HOTS in laboratory
educational activity by using concept map as an alternative assessment tool. Chemistry Education
Research and Practice, 18(4), 849-874. https://doi.org/10.1039/C7RP00120G
Gozuyesil, E., & Tanriseven, I. (2017). A meta-analysis of the effectiveness of alternative assessment
techniques. Eurasian Journal of Educational Research, 70, 37-56. https://doi.org/10.14689/ejer.2017.
70.3
Gregory, G. H., & Chapman, C. (2013). Differentiated instructional strategies: One size doesn’t fit all. Corwin
Press.
Gulicheva, E., Lisin, E., Osipova, M., & Khabdullin, A. (2017). Leading factors in the formation of innovative
education environment. Journal of International Studies, 10(2), 129-137. https://doi.org/10.14254/
2071-8330.2017/10-2/9
Hariharasudan, A., & Kot, S. (2018). A scoping review on digital English and education 4.0 for industry 4.0.
Social Sciences, 7(11), 227. https://doi.org/10.3390/socsci7110227
Hashemian, M., & Azadi, G. (2011). Arguing for the use of portfolio in L2 classrooms. Theory and Practice in
Language Studies, 1(5), 501-506. https://doi.org/10.4304/tpls.1.5.501-506
Ho, E., Bin, J., & Chang, J. (2012). Survey of middle school student learning: Saving the generation of
unmotivated. http://topic.parenting.com.tw/issue/2013/futurelearning/article2-1-2.aspx
Hoogland, K., & Tout, D. (2018). Computer-based assessment of mathematics into the twenty-first century:
Pressures and tensions. ZDM, 50(4), 675-686. https://doi.org/10.1007/s11858-018-0944-2
Ibrahim, N., Mohd Ayub, A. F., & Md. Khambari, M. N. (2016, December 17). Students' perspectives on the use
of mobile phone in learning activities [Paper presentation]. Graduate Research in Education (GREduc)
2016 Seminar, Selangor, Malaysia. http://spel3.upm.edu.my/max/dokumen/
GREDUC_GREduc2016_E-proceedings.pdf
Isen, A. M., Daubman, K. A., & Nowicki, G. P. (1987). Positive affect facilitates creative problem solving.
Journal of Personality and Social Psychology, 52(6), 1122-1131. https://doi.org/10.1037//0022-
3514.52.6.1122
Janisch, C., Liu, X., & Akrofi, A. (2007). Implementing alternative assessment: Opportunities and obstacles.
Educational Forum, 71(3), 221-230. https://doi.org/10.1080/00131720709335007
Johanson, G. A., & Brooks, G. P. (2009). Initial scale development: Sample size for pilot studies. Educational
and Psychological Measurement, 70(3), 394-400. https://doi.org/10.1177/0013164409355692
John, M. (2018). Assessment reform in Malaysia: Policy into practice in primary schools [Doctoral dissertation,
University of Stirling]. STORRE: Stirling Online Research Repository. http://hdl.handle.net/1893/29915
Kaur, A., Noman, M., & Awang-Hashim, R. (2018). Exploring and evaluating differentiated assessment
practices of in-service teachers for components of differentiation. Teaching Education, 30(2), 160-176.
https://doi.org/10.1080/10476210.2018.1455084
Koshy, S. (2013). Differentiated assessment activities: Customising to support learning. In P. Bartholomew,
N. Courtney, & C. Nygaard (Eds.), Quality enhancement of university teaching and learning (pp. 37).
Libri Publishing.
Lawrence, R., Ching, L. F., & Abdullah, H. (2019). Strengths and weaknesses of education 4.0 in the higher
education institution. International Journal of Innovative Technology and Exploring Engineering,
9(2S3), 511-519. https://doi.org/10.35940/ijitee.b1122.1292s319
Lawrence-Brown, D. (2004). Differentiated instruction: Inclusive strategies for standards-based learning that
benefit the whole class. American Secondary Education, 32(3), 34-62. https://www.jstor.org/stable/
41064522
Lawson-Body, A., Willoughby, L., Lawson-Body, L., & Tamandja, E. M. (2018). Students’ acceptance of e-
books: An application of UTAUT. Journal of Computer Information Systems, 60(3), 256-267.
https://doi.org/10.1080/08874417.2018.1463577
Letina, A. (2015). Primjena tradicionalnih i alternativnih oblika vrednovanja učeničkih postignuća u nastavi
prirode i društva. [Application of traditional and alternative assessment in science and social studies
teaching]. Croatian Journal of Education, 17(1), 137-152. https://doi.org/10.15516/cje.v17i0.1496
Lin, J. W., & Lai, Y. C. (2019). User acceptance model of computer-based assessment: Moderating effect and
intention-behaviour effect. Australasian Journal of Educational Technology, 35(1). https://doi.org/
10.14742/ajet.4684
Llewellyn, D. (2002). Inquiry within: Implementing inquiry-based science standards. Corwin Press.
Loh, C. Y. R., & Teo, T. C. (2017). Understanding Asian students learning styles, cultural influence and learning
strategies. Journal of Education & Social Policy, 7(1), 194-210.
http://jespnet.com/journals/Vol_4_No_1_March_2017/23.pdf
Ministry of Education. (2013). Malaysia education blueprint 2013-2025 (Preschool to post-secondary
education). Ministry of Education Malaysia. https://www.moe.gov.my/menumedia/media-
cetak/penerbitan/dasar/1207-malaysia-education-blueprint-2013-2025/file
Ministry of Education. (2015). Malaysia education blueprint 2015-2025: Higher education. Kementerian
Pendidikan Malaysia. https://www.kooperation-international.de/uploads/media/3._Malaysia_Educa
tion_Blueprint_2015-2025__Higher_Education__.pdf
Mohtar, T. M. T. (2010). The use of alternative assessment to sustain teaching and learning. Penerbit UPSI.
Moon, R. T., Brighton, C. M., & Tomlinson, C. A. (2020). Using differentiated classroom assessment to enhance
student learning [eBook edition]. Routledge. https://doi.org/10.4324/9780429452994
Nasri, N., Roslan, S. N., Sekuan, M. I., Bakar, K. A., & Puteh, S. N. (2010). Teachers’ perception on alternative
assessment. Procedia-Social and Behavioural Sciences, 7(C), 37-42. https://doi.org/10.1016/j.sbspro.
2010.10.006
Newstead, S. E., & Findlay, K. (1997). Some problems with using examination performance as a measure of
teaching ability. Psychology Teaching Review, 6, 23-30.
Nguyen, T. T. M., & Nguyen, T. D. (2010). Determinants of learning performance of business students in a
transitional market. Quality Assurance in Education, 18(4), 304-316. https://doi.org/10.1108/
09684881011079152
Noguera, P., Darling-Hammond, L., & Friedlaender, D. (2015). Equal opportunity for deeper learning. Jobs for
the Future. https://files.eric.ed.gov/fulltext/ED560802.pdf
Noman, M., & Kaur, A. (2014). Differentiated assessment: A new paradigm in assessment practices for diverse
learners. International Journal of Education and Applied Sciences, 1(4), 167-174.
Norazilawati, A., Noorzeliana, I., Mohd Sahandri, G. H., & Saniah, S. (2015). Planning and implementation of
school-based assessment (SBA) among teachers. Procedia-Social and Behavioural Sciences, 211, 247-
254. https://doi.org/10.1016/j.sbspro.2015.11.031
NSW Education Standards Authority. (n.d.). Differentiated assessment. https://syllabus.nesa.nsw.edu.au/
support-materials/differentiated-assessment/
Pallant, J. (2016). SPSS survival manual: A step by step guide to data analysis using SPSS program. McGraw-
Hill Education.
Saldaña, J. (2009). The coding manual for qualitative researchers. SAGE.
Shute, V. J., & Rahimi, S. (2017). Review of computer‐based assessment for learning in elementary and
secondary education. Journal of Computer Assisted Learning, 33(1), 1-19. https://doi.org/10.1111/jcal.
12172
Suki, N. M., & Suki, N. M. (2017). Determining students’ behavioural intention to use animation and
storytelling applying the UTAUT model: The moderating roles of gender and experience level. The
International Journal of Management Education, 15(3), 528-538. https://doi.org/10.1016/j.ijme.2017.
10.002
Suprayogi, M. N., Valcke, M., & Godwin, R. (2017). Teachers and their implementation of differentiated
instruction in the classroom. Teaching and Teacher Education, 67, 291-301. https://doi.org/10.1016/
j.tate.2017.06.020
Swaran Singh, C. K., & Abdul Samad, A. (2012). The use of portfolio as an assessment tool in the Malaysian
L2 classroom. International Journal of English Language Education, 1(1), 94-108. https://doi.org/10.
5296/ijele.v1i1.2851
Tomlinson, C. A. (2001). How to differentiate instruction in mixed-ability classrooms. Association for
Supervision and Curriculum Development.
Tomlinson, C. A. (2015). Teaching for excellence in academically diverse classrooms. Society, 52(3), 203-209.
https://doi.org/10.1007/s12115-015-9888-0
Tomlinson, C. A., & Moon, T. R. (2013). Assessment and student success in a differentiated classroom.
Association for Supervision and Curriculum Development.
Varsavsky, C., & Rayner, G. (2013). Strategies that challenge: Exploring the use of differentiated assessment
to challenge high-achieving students in large enrolment undergraduate cohorts. Assessment and
Evaluation in Higher Education, 38(7), 789-802. https://doi.org/10.1080/02602938.2012.714739
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology:
Toward a unified view. MIS Quarterly, 27(3), 425-478. https://doi.org/10.2307/30036540
Whitehead, A. L., Julious, S. A., Cooper, C. L., & Campbell, M. J. (2016). Estimating the sample size for a pilot
randomised trial to minimise the overall trial sample size for the external pilot and main trial for a
continuous outcome variable. Statistical Methods in Medical Research, 25(3), 1057-1073.
https://doi.org/10.1177/0962280215588241
Wilson, D. M., & Narasuman, S. (2020). Investigating teachers’ implementation and strategies on higher order
thinking skills in school based assessment instruments. Asian Journal of University Education, 16(1),
70-84. https://doi.org/10.24191/ajue.v16i1.8991
Zitlow, C. S., & Kohn, A. (2001). The case against standardized testing: Raising the scores, ruining the schools.
The English Journal, 91(1), 112-114. https://doi.org/10.2307/821673
Correspondence: Mas Nida Md. Khambari, Faculty of Educational Studies, Universiti Putra Malaysia,
Malaysia. E-mail: [email protected]
APPENDIX A
Interview Protocol
Study title: Students’ perspectives on the use of PutraPacer as a differentiated assessment tool
Time of interview:
Place:
Interviewer:
Interviewee:
Position of interviewee: Undergraduate student
Sample interview questions:
1. Tell me briefly about the activity you did just now?
2. How do you feel about using PutraPacer during the quiz?
3. Can you recall the kind of assessments that you have experienced before? E.g., Test 1 using MCQ, essay,
etc.
4. How is PutraPacer different from the other assessment that you mentioned just now?
5. In your opinion, how can PutraPacer support learning?
6. How do you think PutraPacer may benefit you?
7. How do you think it may benefit other learners with different learning styles?
8. What do you like about PutraPacer? Any feature in particular?
9. What can be improved about PutraPacer?
10. Would you support the use of PutraPacer as an assessment tool?