MAT Test
USER’S GUIDE
The Profiling for Success series is published by Team Focus Limited, Heritage
House, 13 Bridge Street, Maidenhead, Berkshire, SL6 8LR, England, tel: +44
(0)1628 637338.
As the respondent progresses through the shapes, the complexity of the instructions increases, requiring the respondent to hold a relatively large amount of information in memory in order to respond correctly. The respondent may refer back to the current instruction set at any time during the test, but each reference counts against them in the assessment of the memory component of the task. The MAT is a speeded test and respondents are asked to complete it as quickly as they can. In addition to the memory score, which is based on the number of times they referred to the instructions, the test is scored both on the accuracy with which they clicked on the correct shapes and on the time taken to complete the test.
The test is available in online format only and normally takes less than 20 minutes to complete.
Scoring is carried out by the online administration system and reports are available for both
respondent and test administrator.
The MAT simulates one of the most important aspects of the workplace: the need to quickly
memorise and retain information in order to apply rules or procedures in a timely and accurate
manner. It is a multi-faceted test that generates a rich profile of performance as individuals respond
to increasingly complex instructions and screens of information. The facets measured are:
• Memory – the number of times the test-taker needs to check the relevant instructions;
• Baseline response – a control for fluency with the computer mouse and speed of responding;
• Decisiveness – the number of times the respondent changes their mind regarding a given test shape;
• Decision efficiency – an overall measure of the number of correct decisions per minute.
In the modern workplace, the acquisition of knowledge and skills and the application of these are increasingly important. One of the main reasons for this is that work has become more cognitively demanding over the past few decades, particularly with the advent of the information technology age, meaning that for organisations to be successful they require an increasingly skilled workforce (Bresnahan, Brynjolfsson and Hitt, 2002). The acquisition of the necessary skills for the modern workplace can be studied from a number of perspectives, including: the education that workers receive as children and young adults; the opportunities for specific skills training, both prior to commencing work and on the job; and how governments and societies are seen to value skills training, embed this in their educational policies and make resources available to support it. All of these perspectives, however, focus on the high-level provision of opportunities for the development of skills and ignore the cognitive abilities that underlie the acquisition of skills.
Aptitudes describe the fundamental cognitive processes that are important in the acquisition of higher-level abilities or skills. An individual’s level of aptitude in a particular area predicts their likely level of learning in the corresponding area, and therefore the extent to which they will benefit from exposure to education (e.g. Ehrman, 1994) or training (e.g. Schmidt and Hunter, 1988). Aptitudes indicate the potential for learning experiences to be translated into more permanent retention of knowledge or skill. This relationship between memory and learning is captured by Squire (1987), who describes it as follows: “Learning is the process of acquiring new information, while memory refers to the persistence of learning in a state that can be revealed at a later time”.
The ability to learn job-relevant information is important for jobs at all levels. In 2008, the CIPD reported that 70% of organisations across the public and private sector identified ‘lack of necessary specialist skills’ as their major recruitment difficulty, an increase of 5% from the previous year (CIPD, 2008). The most commonly used strategy to overcome this skills shortage is for organisations to identify people who they see as having the potential to grow into the role and develop the skills required. However, this strategy was identified as having a positive impact by only 65% of organisations. Both the initial shortage of skills and the fact that not all candidates recruited for their potential develop as hoped mean that skills shortages place a major burden on organisations.
As well as helping to explain differences in outcome from educational and training opportunities, the fundamental processes connected with memory and attention are important in their own right in the workplace. In work that requires employees to be focussed and to remember facts, procedures or other information, there is clear benefit in having an effective memory. Memory capacity is also highly relevant to work environments requiring multi-tasking for their safe and effective performance. As experimental evidence shows that individuals with higher memory capacity outperform those with lower capacity, memory capacity is relevant to understanding likely employee performance across a wide range of tasks (Hilkemann, 2011).
In the multi-store model of Atkinson and Shiffrin (1968), information must pass through short-term memory in order to be consolidated into a form that resembles long-term memories.
The distinction between short-term and long-term memory is intuitive to many and provides a useful way of thinking about different types of memory. However, it is primarily a structural description, whereas the working memory model of Baddeley and Hitch (1974) focuses more on the
processes involved in memory. Working memory is a system with limited capacity for the
temporary storage of information, consisting of visual and phonological stores controlled by the
‘central executive’. It facilitates higher level skills such as verbal comprehension, reasoning and the
ability to learn and encode new information (Baddeley, 2000).
The central executive system is particularly important in explaining the link between attention and
memory, as its functions include the control of attentional resources and the flow of information.
Kahneman (1973) likened attention to an energy resource: tasks that place requirements on attention draw on this resource for their successful completion. Within Kahneman’s work the idea of 'attentional capacity' is introduced. Attentional capacity has similarities with a number of models of memory, but also clearly introduces the idea that not everyone has equal attentional capacity.
Research into Baddeley’s model of working memory supports the idea that capacity is not fixed or
unlimited, but that working memory has a finite capacity and that this capacity differs between
individuals. Central executive functioning has been related to performance on complex span tasks,
requiring both the processing and storage of information, which have been used to indicate the
capacity and limits of the central executive. Span tasks, in turn, are associated with more complex
tasks requiring higher-order cognitive processing including reasoning abilities and verbal
comprehension (Gathercole, 2008). Associations between psychometric assessments of abilities and measures of working memory capacity further support the argument that working memory is a vital element of both fluid and crystallised abilities. Kyllonen and Christal (1990) found, across four separate studies, that performance on psychometric tests of ability and on tests based on Baddeley’s working memory model correlated in the order of 0.8.
Assessing an individual’s attention and memory through an assessment such as the MAT therefore provides valuable information about that individual’s attentional capacity. This, in turn, allows us to better understand their likely learning potential and performance on cognitively demanding, work-relevant tasks.
The MAT has been designed to assess fundamental aspects of memory and attention in a user-
friendly and engaging format. The basic requirement is for respondents to memorise instructions
and then apply these quickly and accurately. An abstract format was chosen as being best for the
MAT, as this removes the effect of any prior experience or knowledge that test takers may have and
which would lead to construct-irrelevant variance in the test results.
The original version of the MAT was organised into 10 sets of 10 test screens, giving 100 screens in
total. Since the publication of the original version, the test was modified to use only 5 screens per
set, thus giving a total of 50 screens in the test. Each set of screens introduces a new instruction
(e.g. ‘click on yellow circles’) and respondents are then asked to apply this instruction to each of the
following set of screens as quickly and as accurately as they can. After the first set of screens has
been completed, a further instruction is added (e.g. ‘click on yellow circles’, ‘click on stars if they are above a square’).
Each of the instructions involves at least two elements; in the example given above these elements
are a shape (circle) and colour (yellow). This was done to ensure that conscious processing of the
information on each test screen was needed, so ensuring that the task required the use of
controlled attention and working memory. Research on attention has shown that when searching
for objects with single distinguishing features (e.g. circles amongst squares and triangles or yellow
shapes amongst blue and red shapes), targets can be identified automatically, without the use of
focussed attention (Treisman, 1988). When targets have two or more features that need to be
combined (e.g. yellow circles), their identification requires focussed attention which is one of the
key capacities being assessed by the MAT.
The four constructs assessed by the original version of the MAT are as follows.

1. Memory: Memory reflects the number of times the respondent refers back to the instructions while working through the test screens. Lower scores indicate less reliance on the instructions and therefore better memory for the current instruction set.

2. Attention (now called 'Accuracy'): Attention indicates the number of correct responses given by the candidate. Attention scores result from candidates having correctly memorised and applied the rules relating to each set of screens, and showing this by clicking on the appropriate shapes. Higher scores on this construct indicate higher levels of attention.

3. Speed of working: The time taken to complete the MAT, minus the time spent on the screens used to establish ‘click speed’ (see below), is recorded as ‘speed of working’. This construct includes the time taken to complete all screens and any time the respondent spends reminding themselves of the instructions. Lower scores therefore indicate a faster speed of working.

4. Click speed: The time taken on the screens used to establish a baseline click speed is recorded as ‘click speed’. This acts as a control for fluency with the computer mouse and basic speed of responding.
Since the publication of the original version of the test, a further two constructs have been added as
follows.
5. Decisiveness: Decisiveness is a measure of how infrequently the respondent changed their mind regarding any particular shape. If the respondent clicks on a shape to select it and then clicks on it again to deselect it, this counts towards the number of changes of mind recorded.
6. Decision efficiency: Decision Efficiency is an overall measure which combines both accuracy and
speed and is computed from the number of correct items the respondent completed per minute.
It should be noted that for the variables Memory, Speed of Working, Click Speed and Decisiveness, lower raw scores indicate better performance and are therefore translated to high percentile scores, with high raw scores translating to low percentiles.
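
To make the scoring model concrete, the following sketch shows how the construct scores described above might be computed from a respondent's interaction log, and how a 'lower is better' raw score maps to a reversed percentile. This is a minimal illustration only: the data structure and function names are hypothetical and are not the PfS system's own code.

    # Illustrative sketch only: the data structure and names are hypothetical,
    # not those of the PfS scoring system.
    from dataclasses import dataclass

    @dataclass
    class ResponseLog:
        correct_clicks: int         # shapes correctly selected across the 50 screens
        total_time_s: float         # time over all test screens, in seconds
        click_speed_time_s: float   # time on the baseline 'click speed' screens
        help_clicks: int            # times the instructions were re-opened
        swaps: int                  # times a shape was deselected after selection

    def construct_scores(log: ResponseLog) -> dict:
        """Raw scores for the six MAT-style constructs described above."""
        speed_of_working = log.total_time_s - log.click_speed_time_s
        return {
            "accuracy": log.correct_clicks,           # higher raw = better
            "speed_of_working": speed_of_working,     # lower raw = better
            "click_speed": log.click_speed_time_s,    # lower raw = better
            "memory": log.help_clicks,                # lower raw = better
            "decisiveness": log.swaps,                # lower raw = better
            "decision_efficiency":                    # correct items per minute
                log.correct_clicks / (speed_of_working / 60.0),
        }

    # 'Lower is better' raw scores are reported as reversed percentiles:
    def reversed_percentile(percentile_of_raw: float) -> float:
        return 100.0 - percentile_of_raw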
Screen shots of two instruction pages from the test, and two item panels corresponding to these, are shown below.

[Figures: an instruction page and a test item from the 5th set of test panels in the first part of the test; an instruction page and a test item from the 3rd set of test panels in the second part of the test.]
Development of the MAT began in 2001 and the test was first published in 2003. The initial version
of the MAT was designed to contain 10 sets of instructions, each of which would be associated with
10 sets of item screens, thus making a total of 100 item screens. In the first part of the test (5
instruction sets associated with 50 item screens), the item screens were to contain only shapes. In
the second part of the test, the item screens were to contain shapes together with numbers and letters.
For the first part of the test, the first set of instructions would contain only a single instruction (for example, ‘Click on yellow circles’). The second set of instructions would contain the instruction from the previous set plus a new instruction (for example, ‘Click on yellow circles’; ‘Click on stars if they are above a square’), and later sets of instructions would similarly add one further instruction. For the second part of the test, a similar principle held, except that instructions relating to the numbers and letters in the item screens would also be given.
For the item screens, a grid template was constructed for each item, and shapes were assigned to positions on the grid, with number and letter sequences also assigned at the top of the panel in the case of items for the second part of the test. For any given item screen, shapes, numbers and letters were assigned in accordance with the specific instructions which would hold for that screen. For most screens there would be shapes which would need to be clicked on for each rule in the current instruction set, but for some screens there would be no shape which fitted a given rule (and hence no need for the respondent to click on any shape for that rule).
Shapes were placed on the grid in such a manner as to maximise the attention which would be
required of the respondent in order to respond correctly. For example, if one of the rules was "Click
on all yellow shapes which are higher up the screen than at least one square", then a yellow shape
might be placed at the same vertical position as a square so as not to fit the rule, but to induce a
less attentive respondent to click on it.
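
As an illustration of the kind of rule the item screens embody, the sketch below evaluates the example rule quoted above against a hypothetical grid of shapes. The Shape representation and function name are inventions for this example, not part of the test software.

    # Hypothetical representation of an item screen; the real test presents
    # graphical panels rather than data structures.
    from dataclasses import dataclass

    @dataclass
    class Shape:
        kind: str    # e.g. 'circle', 'square', 'star'
        colour: str  # e.g. 'yellow', 'blue', 'red'
        row: int     # vertical grid position, 0 = top of the screen

    def rule_targets(shapes: list[Shape]) -> list[Shape]:
        """Shapes satisfying 'Click on all yellow shapes which are higher up
        the screen than at least one square'."""
        square_rows = [s.row for s in shapes if s.kind == "square"]
        return [s for s in shapes
                if s.colour == "yellow"
                and any(s.row < r for r in square_rows)]

    # A yellow shape placed on the same row as the lowest square does NOT
    # satisfy the rule - exactly the distractor placement described above.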
After internal trialling and review of the items, the test was administered online to several trial
samples and respondents were asked to note down comments on the test screens. The items were
then reviewed with the assistance of the written comments and item screens and test instructions
were modified to remove ambiguities or to avoid untoward difficulties or complexities in the items.
A second version of the test was then administered and data from this was subjected to item
analysis to examine specific item properties and further adjustments to the items were made. The
final set of items for the first version of the test was then prepared and was administered to a
sample of 170 respondents. This sample was used for the development of norms for this version of
the test and further updated norms were provided in 2005 based on a sample of 1047 respondents.
Subsequent to the development of the first version, a second version of the test was developed along similar lines. This was identical to the original version except that it consisted of only five item panels for each set of instructions (50 in total). In 2007, the decision was taken to withdraw the original version of the test and use only the second version. This was due primarily to a perceived need to reduce the administration time of the test and the fact that the shorter version appeared to have broadly equivalent psychometric properties to the longer version.
Introduction
For any test to play a valuable role in the decision-making process, it has to be matched to the
abilities and competencies required by the job role. The first part of this section provides an
overview of how to identify whether the MAT is an appropriate test. Good administration is the key to
achieving reliable and valid test results. When administering the test in person, a well-defined
procedure is to be followed. However, computer administration offers test takers the opportunity
to complete tests in their own time, at a location of their choosing, without an administrator being
present. Since the MAT is only available online, it is important to recognise that this does not obviate
the need for considering the conditions under which it will be taken and preparing the test taker for
the experience such that they can genuinely show their ability without extraneous factors
interfering. There is also the question of whether the procedure needs to be supervised or whether
there are situations when it could be administered unsupervised.
As described in the rationale, the MAT is based on research into memory and attention. It requires
an ability to attend to a range of stimuli and to follow simple instructions. It is therefore a measure
of information processing speed and accuracy together with the ability to follow simple
instructions efficiently without becoming overloaded or confused. This simulates many busy
environments where the challenge is less to do with abstract thinking and more to do with efficient
sequencing of tasks. There are many such environments where this might be appropriate such as
call centres, busy offices, warehouses or any situation where the challenge is likely to involve
multiple instructions and multi-tasking. It is the administrator's responsibility to have a clear
rationale for how these skills are required in any job a person may be applying for.
The test room needs to be suitably heated and ventilated (with blinds if glaring sunlight is likely to
be a problem) for the number of people taking the tests and for the length of the test session. All
the computer screens need to be clear and easy to read. The room should be free from noise and
interruption, as any disturbances can affect test takers’ performance. There should be space
between each test taker’s screen so that test takers cannot see others’ progress or answers and the
administrator should be able to walk around to keep an eye on progress or difficulties – especially
during the examples where misunderstandings can be ironed out.
If the tests are to be taken as part of an assessment day, remember that performance tends to
deteriorate towards the end of a long day. If a number of test sessions are being planned, those
who take the tests towards the end of the day may be disadvantaged. If there are other mental
challenges remember to organise appropriate breaks.
Test takers should be notified of the date, time and location of the test session and told which
test(s) they will be taking. When test takers are notified about the session, it is essential that they
are also asked to contact the administrator or other appropriate person, if they have any disabilities
that will affect their ability to complete the tests and to specify what accommodation needs to be
made for them to complete the tests. Under the Disability Discrimination Act (1995; 2005), test
users are obliged to make changes to assessment procedures so that people with disabilities are
not disadvantaged at any stage of the selection process. By obtaining information about any
special needs well in advance of the test session, organisations can make the necessary adaptations
to the testing session and have time to seek further advice if necessary. Further information on
assessing people with disabilities can be found on the PfS website.
Before the testing session, ensure that you have the correct Client Code, Access Code, Password and an ID number (optional), and check that the codes are active (they may have a date after which they become inactive). Also check that the account contains sufficient credits to run the session. Make sure that all the computers are turned on and that the appropriate screen is ready. You should also keep a Test Log, which reminds you of the materials needed and the process involved, and which allows administrators to record the room layout and any unusual occurrences during the test session and to summarise the test scores of a group of test takers. It is a useful document to keep for later review sessions or if any challenges are made to the test results or decisions that the results feed into.
A notice to the effect of ‘Testing in progress – Do not disturb’ should be displayed on the door of
the test room. Ensure that chairs and desks are correctly positioned. There is no need to provide pens and rough paper for the MAT.
If ID numbers are being used but have not already been allocated to test takers, allocate these
outside the test room, then ask test takers to enter the room and find the corresponding desk.
Otherwise, invite test takers into the test room and direct them where to sit.
When all test takers are seated,
the administrator should give the informal introduction to the test session. This needs to be
prepared in advance to include the points given below, but should be delivered informally, in the
administrator’s own words. The aim here is to explain clearly to the test takers what to expect and
to give them some background information about the tests and why they are being used. This will
help to reduce anxiety levels and create a calm test setting. The administrator should aim for a
relaxed, personable, efficient tone, beginning by thanking the test takers for attending.
• Ask test takers not to touch the computers until they are told to do so.
• Give an informal introduction and tell the test takers that they will be taking the test on
computer.
• At the end of the informal introduction, ask if there are any questions.
• Direct test takers to the PfS website and follow the appropriate link to take a test, then give
them the Client code, Access code and Password to enter when prompted (or Licence number
and Password if the project facility is used). Alternatively, prior to the beginning of the testing
session, ensure that the PfS website has already been accessed on each computer and the entry
codes entered in order that the PfS assessment facility is already displayed on screen when
candidates take their places at their computers.
• Tell test takers that the computer will prompt them to enter their personal information before
giving them the test instructions and practice and example items.
• Test takers should be allowed to work through the instructions at their own pace. In the case of
an unsupervised computer administration they should begin the test when they are ready. For a
supervised computer administration test takers should be told either to start the test when they
are individually ready, or to wait until everyone is ready to begin.
• Explain that if they have any questions or experience any difficulties during the test, they
should raise their hand.
Where test takers begin the test as soon as they are individually ready, they will finish at slightly different times, as not everyone will work through the instructions at the same pace. If this approach is taken, administrators should judge which is less disruptive: asking test takers to remain seated until everyone has completed the test, or allowing them to leave the room as they finish. This is likely to depend on the number of people being tested and the room set-up (i.e. how easily people can leave the room without disturbing others).
Alternatively, test takers can be asked to work through the instructions, practice and example items,
and then wait until everyone is ready to begin. When everyone is ready, the administrator should
ask test takers to start. Everyone is more likely to finish the testing session at a similar time if this
approach is used, thus reducing the possibility of test takers who have been slower to work through
the instructions being disturbed by others leaving the room.
Finally, it should be noted that the tests which will be displayed on the screen when test-takers
enter the PfS assessment area on the PfS web site will depend on the 'Access Code' which has been
used to log in to the system. Administrators should therefore ensure that they have set up an
Access Code which includes only the appropriate tests and test levels which they wish to be
presented. A discussion of access codes is beyond the scope of this manual, though detailed
information will be provided by Team Focus to users of the PfS online assessment system.
The internet offers the potential to exploit the benefits of testing in new ways, but takes users into the less familiar territory of unsupervised assessment. Unsupervised assessment raises many issues, with access to technology, fairness and the authenticity of test results being paramount.
Despite the need to address these issues, the benefits of internet-based testing are many.
Particularly notable are its efficiency and the opportunity to gather additional information to feed
into the early stages of the decision-making process.
When planning an unsupervised testing session, administrators need to consider the target group
and their likely access to technology. Certain groups (e.g. university students or those already
working for an organisation) may have greater access to the necessary technology than others (e.g.
people returning to work). Where it is anticipated that a number of potential test takers may not
have access to the necessary technology, it may be advisable not to use internet testing unless
other appropriate arrangements can be made. For example, it may be possible to direct test takers
to places such as libraries, careers centres or an organisation’s regional offices where they can take the test under appropriate conditions.
Access to the necessary technology is also related to issues of fairness. If completing internet-
based assessments is made a compulsory part of an application process, this may bias the process
against those who do not have easy access to the necessary technology. In some cases it could also
constitute deliberate discrimination and so be unlawful. Although many organisations use online
application procedures, alternatives to these should be put in place (e.g. a paper-based test session
available on request). Organisations may have to accept that, in some cases, test results will not be
available for all applicants.
One significant advantage of internet-based testing, as mentioned above, is that psychometric tests
can be used early in a selection procedure, possibly at the same time application forms are
completed. If used as part of a selection decision, it is essential to be confident that the test results
are indeed the work of the applicant.
Ensuring the validity of test results requires that test takers are monitored during the test session.
This removes many of the advantages of internet-based testing, so it is important to encourage
honesty in test takers. One way in which this can be done is to position the tests as offering
potential applicants valid feedback on their abilities and the demands of the job. This would imply
on the one hand, suggesting to low scorers that the job may not be well matched to their abilities,
and so would be unsatisfying for them and, on the other hand, confirming to higher scorers that
they appear to have the necessary basic abilities required by the job. If test scores are used to
make decisions at an early stage of an application process, it may be prudent to give them a lower
weighting than normal and to set lower standards of performance.
The validity of test scores is more of an issue with high scorers. One approach to dissuade people from obtaining assistance with the tests is to present them as a ‘taster’ for the next stage of selection
where further testing will take place under more controlled conditions. If test takers know that they
will have to take a similar test under supervised conditions if they proceed to the next stage of the
selection process, they may be less inclined to seek assistance with the unsupervised tests. In
these circumstances it may be appropriate to initially use the open versions of the Reasoning Tests,
then follow these up with the closed versions under supervised conditions if it is deemed necessary
to verify results.
All the issues discussed above need to be considered when undertaking unsupervised, internet
assessment. Despite this, in many ways, the actual test procedure is not that different from
supervised administration. The main stages of the test process remain the same, although as it is
not possible to give an informal introduction to the test session, the initial contact with test takers
is very important. The contact letter, email or telephone conversation should include:
Particularly important under unsupervised test conditions will be the information on why the tests
are being used. As discussed above, positioning the tests as providing applicants with an insight
into their own suitability for the job can help to encourage honesty and acceptance of the remote
testing experience when used for selection. If applicants who proceed to the next stage will have
to take further tests, this should also be stated, again to encourage honesty.
If remote internet testing is being considered, the issue of access to technology needs to be
addressed. Although the majority of people now have access to computers, it should not be
assumed that this is the case for everyone. It also needs to be recognised that conditions should be conducive to completing a timed test; some computers that are accessible to the public may be in noisy environments where test takers are liable to disruption.
To make the PfS Tests widely accessible, the system has been designed to make minimal demands
on technology. The system will work on any internet-ready computer. The preferred browser is
Internet Explorer with Adobe Flash® installed. The minimum screen resolution needed is 800 x 600, though a resolution of 1024 x 768 is recommended. Virtually all modern desktop computers
and most modern laptop computers will meet the specifications needed to run the tests. Tests are
accessed over the internet. As the whole test is downloaded before the test begins, timing for the
test is unaffected by the speed of the internet connection.
It is not necessary for the internet connection to be maintained once a test has been downloaded.
However, the internet connection does have to be active when the test results are submitted.
Information about the need for test takers to be actively connected to the internet for their test
results to be recorded is displayed at the end of the test.
Explain that from this point, the administration of the test will follow a set procedure and all
instructions are to be followed on screen. Say:
“The following screen will ask you for some personal information. Your name and
email address will be used to identify you and to generate a report of your
assessment results. Other personal information, for example sex and ethnicity,
will be used as part of our ongoing research and development to ensure the
assessments used on this site are fair to all people. This personal information in
no way affects your assessment results. All information will be stored in
accordance with the Data Protection Act. If you agree to your personal
information being used for these purposes please click on continue. Otherwise,
close this window to exit the testing session.”
Does anyone have any questions or objections? When everyone is happy to click on ‘Continue’, read
the instructions on the next six screens as the test-takers complete them:
1. “Please enter the details requested and then click on the ‘Continue button’.
Fields marked with an asterisk (*) are compulsory.”
2. “Please indicate your ethnic background (used for monitoring purposes
only) and continue.”
3. Click on ‘Memory and Attention Test’ “This test looks at your ability to
memorise and follow instructions. Click on the 'Continue' button below to
take this test/questionnaire.”
4. Click on ‘Memory and Attention Test Version 1t’.
5. Click on ‘Begin’.
6. Click on ‘Continue’. “This test looks at your ability to memorise and follow
instructions. Click on the ‘Continue’ button below to see the instructions for
this test.”
When you click on a shape, it will become surrounded by a border to show you have
selected it. If you click on it again, the border will disappear. Try clicking on some of
the shapes in the illustration to see how this works.
Before each set of screens is shown, you will see some instructions telling you which
of the shapes you should select. The purpose of the test is to follow these
instructions as quickly and as accurately as possible. The instructions will become
harder as you progress through the test.
Sometimes more than one instruction will apply to a particular shape. Where this is
the case, you only need to click on the shape once to select it.
In the bottom left of the screens there will be an ‘instructions’ button. You can click on this button at any time to remind you of the instructions. Click on the ‘Continue’ button to have a go at some practice screens.
Please work through the practice screens and put your hand up if you have any questions about
the practice screens. Do not click on “Begin Test” until I say so.
When you have answered any remaining questions and everyone can see the ‘Begin Test’ button on their
screen say:
This test is timed. You will have 6 minutes followed by 11 minutes. There is a timer
at the bottom right hand corner of the screen. Work through from start to finish
including the practice session in the middle.
I will be walking around to see that you are all working OK. Is everybody ready?
Any final questions? Please click on the ‘Begin test’ button.
Reliability
The internal consistency reliability of the MAT was assessed on a sample of 259 respondents assessed in a formal training environment and was found to be 0.892 (Cronbach's alpha). The internal consistency estimated from an unselected sample of 818 respondents from the PfS data records was 0.835. Both these estimates are based on the MAT Accuracy score. The mean of the accuracy score from the latter sample was 29.323 (maximum score = 50) and the standard deviation was 7.0.
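
For readers unfamiliar with the statistic, the sketch below shows a standard Cronbach's alpha computation over a respondents-by-items matrix of item accuracy scores. It is a generic illustration, not the code used to produce the figures above.

    # Generic Cronbach's alpha computation, shown to make the reliability
    # statistic concrete; not the code used to produce the figures above.
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: respondents x items matrix (e.g. 0/1 accuracy per screen)."""
        k = items.shape[1]                         # number of items
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)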
Validity
Evidence for the validity of the MAT comes from the correlation of the test with the PfS Verbal, Numerical and Abstract Reasoning tests. The following tables show the correlations of total scores on these three tests with various MAT indices and are based on a sample of 100 respondents undergoing a course of professional training. Marked coefficients are significant at p<0.05.
From this same sample, ratings of 'intellectual potential' were also obtained. These correlated 0.27 (p<0.05) with the MAT Total Raw score and 0.28 (p<0.05) with Correct Items per Minute.
                   Total Raw   Total B Shape Raw   Total B Lett Raw   Total Help Time    N
Mean: Pass           44.24          26.17               28.95              49.76         42
Mean: Risk Pass      39.60          22.20               24.92              53.60         25
p                  0.036389       0.020345            0.014086           0.709869
F                    4.566         5.6559               6.366              0.1396
It can be seen that significant differences in line with prediction were found on six out of the eight
variables. For those variables where there was not expected to be a significant difference between
the groups, the following results were obtained.
It can be seen that contrary to expectation, those subjects predicted to pass the course had
significantly longer total responding times (time to complete the test) than those classified in the
'Risk Pass' category.
The intercorrelations between the MAT scores were calculated from the training sample referred to above and are shown in the following table. Marked cells indicate correlations significant at p<0.05.

                           Total    Total     Total    Total    First    N Help    N Swaps
                           Raw      Screen    Respond  Help     Set      Clicks    (Indecision)
                                    Time      Time     Time     Time
Total Screen Time           0.07
Total Respond Time          0.11     0.91
Total Help Time            -0.07     0.34     -0.09
First Set Time             -0.08     0.37      0.31     0.20
N Help Clicks              -0.00     0.15     -0.24     0.88     0.11
N Swaps (Indecision)       -0.18     0.08      0.05     0.09     0.17     0.06
Correct Decisions
Per Minute                  0.60    -0.62     -0.54    -0.25    -0.37    -0.09     -0.16
The MAT score variables referred to in the above table are as shown below.
Note that the Help screens referred to in the table above are those which allow the person to see
the current instruction set once again.
A factor analysis of the individual MAT scores was carried out on this same sample. This suggested
a three factor structure accounting for 76.61% of common variance. The three factors were
interpreted as Speed, Memory (i.e. reliance on the instructions) and Accuracy and are illustrated by
the following factor loadings.
Factor 3: Accuracy

Total Raw                       0.929229
Correct Decisions Per Minute    0.678794
N Swaps (Indecision)           -0.472959
Total Responding Time           0.029098
N Help Clicks                   0.012822
Total Screen Time              -0.003876
Total Help Time                -0.073767
First Set Time                 -0.239035
% Variance                        20.14%
Norms
The norms currently available for the MAT are based on a sample of 675 candidates being assessed
for selection for a variety of positions. The means and standard deviations of the MAT scores from
this sample are as follows.
                             Mean       SD
Total Raw                    29.292     7.041
Total Screen Time           688.727   282.724
N Help Clicks                22.991    26.929
First Set Time               23.480     7.812
N Swaps                       6.547     8.895
Correct Items Per Minute      2.777     0.966
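
As a worked illustration of how a raw score might be referenced against these norms, the sketch below converts a raw score to a percentile under a normality assumption; the PfS system may instead use empirically derived percentile tables. The reversal flag implements the 'lower raw score = higher percentile' convention noted earlier.

    # Illustrative norm-referencing under a normality assumption; the PfS
    # system may instead use empirically derived percentile tables.
    from statistics import NormalDist

    NORMS = {  # (mean, sd) taken from the table above
        "total_raw": (29.292, 7.041),
        "n_swaps": (6.547, 8.895),
        "correct_items_per_minute": (2.777, 0.966),
    }

    def percentile(scale: str, raw: float, lower_is_better: bool = False) -> float:
        mean, sd = NORMS[scale]
        pct = 100.0 * NormalDist(mu=mean, sigma=sd).cdf(raw)
        # 'Lower is better' scales are reported as reversed percentiles.
        return 100.0 - pct if lower_is_better else pct

    # e.g. percentile("total_raw", 36.3) is roughly 84 (about 1 SD above the mean);
    #      percentile("n_swaps", 2.0, lower_is_better=True) is roughly 70.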
The reports currently available for the MAT consist of the Feedback Report and the Administrator's Report.
The Feedback Report presents an introduction to the test followed by a graphic of results on each of
the 6 principal scores generated by the test. This is followed by a description of each scale and a
brief explanation of how the respondent's percentile score places them in respect of the
comparison group. Finally a number of general points are made in relation to the interpretation of
scores from psychometric tests.
The Administrator's Report provides the same graphic of results as the Feedback report but also
provides raw scores on each scale along with more detailed results output.
Atkinson, R. C., & Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes. In K. W. Spence & J. T. Spence (Eds.), The psychology of learning and motivation (Vol. 2, pp. 89-195). New York: Academic Press.
Baddeley, A. D. (2000). The episodic buffer: a new component of working memory? Trends in Cognitive Sciences, 4, 417-423.
Baddeley, A. D., & Hitch, G. J. (1974). Working memory. In G. H. Bower (Ed.), The psychology of learning and motivation: advances in research and theory (Vol. 8, pp. 47-89). New York: Academic Press.
Binet, A., & Simon, T. (1905). Méthode nouvelle pour le diagnostic du niveau intellectuel des anormaux. L'Année Psychologique, 11, 191-244.
Carpenter, P. A., Just, M. A., & Shell, P. (1990). What one intelligence test measures: A theoretical account of the processing in the Raven Progressive Matrices test. Psychological Review, 97, 404-431.
CIPD (2008). Recruitment, Retention and Turnover: Annual Survey Report 2008. London: CIPD.
Ehrman, M. (1994). The Modern Language Aptitude Test for predicting learning success and advising students. Applied Language Learning, 9, 31-70.
Gathercole, S. (2008). Working memory. In H. L. Roediger III (Ed.), Cognitive Psychology of Memory (Vol. 2 of Learning and Memory: A Comprehensive Reference, pp. 33-52). Oxford: Elsevier.
Kyllonen, P. C., & Christal, R. E. (1990). Reasoning ability is (little more than) working-memory capacity?! Intelligence, 14, 389-433.
Treisman, A. (1988). Features and objects: the fourteenth Bartlett Memorial Lecture. Quarterly Journal of Experimental Psychology, 40A, 201-236.