2:15-CV-05811 Response To MSJ
Attorneys at Law
814 W. Roosevelt
Phoenix, Arizona 85007
(602) 258-1000 Fax (602) 523-9000
WESTERN DIVISION
Plaintiff,
vs.
Defendant.
1. TABLE OF CONTENTS
2. TABLE OF AUTHORITIES
3.
///
TABLE OF CONTENTS
I. INTRODUCTION ................................................................................................ 5
II. PROCEDURAL HISTORY ................................................................................. 8
III. LEGAL ARGUMENT .......................................................................................... 9
B. ASSUMING NO DISPUTE OF MATERIAL FACTS, THE FAA IS NOT ENTITLED TO
SUMMARY JUDGMENT AS A MATTER OF LAW ........................................................... 12
1. The validation study and summary show no merit of being privileged ......... 12
(a) The validation study and the summary do not meet the elements of
privilege ...................................................................................................... 12
(i) The Validation Study and the Summary merely reveal facts, which
are not protected under privilege ........................................................... 13
(ii) There is a lack of litigation needed for the FAA to anticipate in
relation to the study and the summary .................................................... 14
(iii) The Study and the Summary were not prepared in anticipation of
litigation .................................................................................................. 15
(b) Substantial need/undue hardship and balancing of interests overcome
privilege ...................................................................................................... 18
(c) Even assuming the study and summary are covered by privilege, the
FAA waived that privilege ......................................................................... 20
2. The Validation Study and Summary Are Not Privileged ............................... 22
(a) APT Metrics is not an attorney capable of providing legal advice ..... 22
IV. CONCLUSION ................................................................................................... 22
TABLE OF AUTHORITIES
CASES
Bairnco Corp. Sec. Litig. v. Keene Corp., 148 F.R.D. 91 (S.D.N.Y. 1993) ............... 16
California Sportfishing Protection Alliance v. Chico Scrap Metal, Inc., 299 F.R.D.
638 (E.D. Cal. 2014) ................................................................................................ 13
Coastal Corp. v. Duncan, 86 F.R.D. 514 (D. Del. 1980) ............................................ 19
Columbia Pictures Television, Inc. v. Krypton Broadcasting of Birmingham, 259 F.3d
1186 (9th Cir. 2001) ................................................................................................. 21
Exxon Corp. v. FTC, 466 F. Supp. 1088 (D.D.C. 1978), aff'd, 663 F.2d 120 (D.C. Cir.
1980) ......................................................................................................................... 13
Garcia v. City of El Centro, 214 F.R.D. 587 (S.D. Cal. 2003) ................................... 13
Hamdan v. U.S. Dept. of Justice, 797 F.3d 759 (9th Cir. 2015) ................................. 11
Harper v. Auto-Owners Ins. Co., 138 F.R.D. 655 (S.D. Ind. 1991)............................ 15
Hickman v. Taylor, 329 U.S. 495 (1947)..................................................................... 19
In re Grand Jury Investigation, 599 F.2d 1224 (3d Cir. 1979).................. 15
In re Grand Jury Subpoena (Mark Torf/Torf Envtl. Mgmt.), 357 F.3d 900 (9th Cir.
2003) ................................................................................................................... 12, 16
In re Green Grand Jury Proceedings, 492 F.3d 976 (8th Cir. 2007).......................... 12
In re Grand Jury Subpoenas, 318 F.3d 379 (2d Cir. 2003) ....................................... 18
Kintera, Inc. v. Convio, Inc., 219 F.R.D. 503 (S.D. Cal. 2003) ................................. 21
Moody v. I.R.S., 654 F.2d 795 (D.C. Cir. 1981) .......................................................... 19
Nat'l Council of La Raza v. DOJ, 411 F.3d 350 (2d Cir. 2005) .................................. 21
Parrott v. Wilson, 707 F.2d 1262 (11th Cir. 1983) ...................................................... 19
Ramsey v. NYP Holdings, Inc., 2002 U.S. Dist. LEXIS 11728 (S.D.N.Y. 2002) ....... 13
S. Union Co. v. Southwest Gas Corp., 205 F.R.D. 542 (D. Ariz. 2002) ..................... 12
Tayler v. Travelers Ins. Co., 183 F.R.D. 67 (N.D.N.Y. 1998) .................................... 17
Texas Puerto Rico, Inc. v. Department of Consumer Affairs, 60 F.3d 867 (1st Cir.
1995) ......................................................................................................................... 19
U.S. Department of State v. Ray, 502 U.S. 164 (1991) ................................................. 5
U.S. v. Christensen, 801 F.3d 970 (9th Cir. 2015) ...................................................... 19
U.S. v. Fort, 472 F.3d 1106 (9th Cir. 2007) ................................................................ 18
U.S. v. Nobles, 422 U.S. 225 (1975) ............................................................................ 19
U.S. v. Richey, 632 F.3d 559 (9th Cir. 2011) .................................................. 12, 15, 16
U.S. v. Textron Inc. and Subsidiaries, 577 F.3d 21 (1st Cir. 2009) ............................ 18
United States v. Adlman, 68 F.3d 1495 (2d Cir. 1995) ......................................... 12, 16
Upjohn Co. v. U.S., 449 U.S. 383 (1981) .................................................................... 19
Verizon California Inc. v. Ronald A. Katz Technology Licensing, L.P., 266 F.Supp.2d
1144 (C.D. Cal. 2003) .............................................................................................. 20
Yurick v. Liberty Mut. Ins. Co., 201 F.R.D. 465 (D. Ariz. 2001) ................................ 17
Zemansky v. EPA, 767 F.2d 569 (9th Cir. 1985) ......................................................... 11
STATUTES
41 CFR § 60-3.5 .............................................................................................................. 6
41 CFR § 60-3.7 .............................................................................................................. 6
42 U.S.C. § 2000e-2(h) ............................................................................................. 6, 17
RULES
Fed. R. Civ. P. 26(b)(3) ................................................................................................. 19
Fed. R. Civ. P. 56(a) ........................................................................................................ 9
REGULATIONS
29 CFR § 1607.1 ............................................................................................................ 17
29 CFR § 1607.15 ............................................................................................................ 6
29 CFR § 1607.4(D) ...................................................................................................... 17
OTHER AUTHORITIES
Black's Law Dictionary, http://thelawdictionary.org/validation ..................................... 6
http://www.siop.org/workplace/employment%20testing/information_to_consider_when_cre.aspx ................ 6
https://www.opm.gov/policy-data-oversight/assessment-and-selection/other-assessment-methods/biographical-data-biodata-tests/ ................ 6
Merriam-Webster.com, Merriam-Webster, n.d. Web. 25 Apr. 2016 .............................. 6
Restatement (Third) of the Law Governing Lawyers § 87 cmt. g (2000) .................... 14
Restatement (Third) of the Law Governing Lawyers § 87(1) (2000) .......................... 13
I. INTRODUCTION
The Freedom of Information Act (FOIA) was enacted "to pierce the veil of administrative secrecy and to open agency action to the light of public scrutiny." U.S. Department of State v. Ray, 502 U.S. 164, 173 (1991). Since 2013, over 3,000 individuals have been negatively impacted by Defendant Federal Aviation Administration's (FAA) changes to the Air Traffic Control Specialist (ATCS) hiring process. Plaintiff's Statement of Facts (PSOF) ¶¶ 6-7. The FAA significantly reduced the requirements for this safety-sensitive and skill-intensive position. Id. ¶ 8. Part of the new hiring process included purging an employment referral list of approximately 2,000-3,000 qualified candidates. Id. ¶ 7. These candidates graduated from FAA-sanctioned Air Traffic Collegiate Training Institutions (CTI) and passed the FAA's previously extensively validated air traffic control aptitude examination (AT-SAT). Id. ¶¶ 8, 9. Spokesman Tony Molinaro said the decision was made to add diversity to the workforce. Id. ¶ 10. The FAA stated in various notifications to impacted individuals that a "Biographical Questionnaire" would be used for the new hiring process. Id. ¶¶ 11, 13. This included statements from John Scott, Chief Operating Officer of APT Metrics. Id. ¶ 12. The FAA's new hiring process included a new exam, taken online from the applicant's home, called the Biographical Assessment (BA). Defendant Federal Aviation Administration's Statement of Facts (DSOF) ¶ 12.
The subject of this action is the disclosure of the 2015 validation study and any related summaries, required to be completed pursuant to statute and regulation. "Validation" is defined as "to recognize, establish, or illustrate the worthiness or legitimacy of." The Society for Industrial and Organizational Psychology, Inc. (SIOP) is cited on the United States Office of Personnel Management (OPM) website regarding bio-data testing such as the BA. According to the SIOP, "[e]xperienced and knowledgeable test publishers have (and are happy to provide) information on the validity of their testing products." Plaintiff is simply requesting what experienced and knowledgeable test publishers are usually happy to provide.
This case is about the FAA's continued lack of institutional veracity and repeated improper attempts at withholding documents that are clearly subject to release and review. The FAA has failed to be upfront about the rationale or methodology of the new screening and testing process. Therefore, Plaintiff is utilizing the FOIA process to serve the public interest by sharing records concerning the changes with those impacted by the action. Those impacted by the FAA changing the standards for hiring ATCS are not a small subset of society; anyone who flies is adversely impacted by the degradation of the national airspace system at the hands of those entrusted to ensure safety. FAA Spokesman Mr. Molinaro stated that the purge of the list of eligible candidates was done to add diversity to the workforce. PSOF ¶ 10. Piercing the veil of administrative secrecy and opening up the FAA's actions to the light of public scrutiny is particularly necessary in this case to ensure public safety. Revealing the . . .
II. PROCEDURAL HISTORY
Plaintiff requested records concerning the validation study for the 2015 Biographical Assessment (BA). PSOF ¶ 20. The request was assigned to multiple organizations within the FAA. Id. The subject of the instant action is the response from the FAA's Office of the Chief Counsel (AGC). On June 18, 2015, the AGC responded with a FOIA Exemption 5 claim: deliberative process and attorney-client privilege. DSOF ¶ 16.
On June 25, 2015, Plaintiff submitted a FOIA appeal concerning AGC's response. Id. ¶ 17. Plaintiff alleged that the documents were not protected by the attorney-client or deliberative process privilege. See DSOF Exhibit B.
Plaintiff received no reply from the FAA within the statutory twenty-day period. Therefore, on July 31, 2015, Plaintiff filed the underlying action. (Dkt. # 1).
During conversations between Plaintiff and FAA Counsel, it was made clear that the subject of this action is the validation study purporting to prove that the administration and use of the 2015 Biographical Assessment (BA) was valid. PSOF ¶ 21. In other words, Plaintiff seeks proof that the BA measures characteristics related to the field for which the test was allegedly designed.
The FAA remanded the FOIA request for processing on October 7, 2015. DSOF ¶ 18. The FAA, through counsel, indicated by telephone that, in response to Plaintiff's FOIA request, the FAA had reviewed the wrong year of records. FAA Counsel later emailed Plaintiff that such was the case. PSOF ¶ 22. Plaintiff alleges that this is an attempt by the FAA to further stall and block access to Agency records, as Plaintiff's initial FOIA request was very clear as to what records were sought.
On December 10, 2015, the FAA finally provided a revised response to the FOIA request. DSOF ¶ 20. This time, the FAA dropped its pre-decisional claim and instead invoked attorney-client and attorney work-product privilege. Id. The FAA's removal of the pre-decisional claim is further evidence of the FAA's consistent willful violations of FOIA and attempts to shield Agency documents from disclosure.
The FAA states that, in anticipation of litigation concerning the ATCS hiring process, a private contractor, APT Metrics, was contracted by the Agency to perform the validation study. Id. ¶¶ 3-4, 9-11. Plaintiff maintains that the FAA was required by statute to perform such a validation study and that, even with the potential threat of litigation, the validation study would have been conducted in the course of regular agency business.
The FAA's assertion that it performed the validation study following the filing of EEO complaints arising from the 2014 hiring session is false, as shown by the FAA's failure to address anticipation of litigation during the 2014 announcement even while admitting that the 2014 assessment was validated. Id. ¶¶ 3-4. Because the FAA was required to perform a validation study, the validation performed by APT Metrics is a matter of normal agency business, and therefore the validation study is not subject to Exemption 5. Furthermore, the Vaughn Index provided by Defendant demonstrates that an adequate search for responsive records has yet to be performed.
III. LEGAL ARGUMENT
A.
The FAA is not entitled to summary judgment because there is a genuine dispute of material fact regarding whether the FAA conducted an adequate search. The court shall grant summary judgment only "if the movant shows that there is no genuine dispute as to any material fact." Fed. R. Civ. P. 56(a).
The FAA relies on work-product privilege in withholding the validation study. (Def.'s Mot. for Summ. J. at 11-12, April 4, 2016). In support of this, the FAA claims that the validation study came about as a result of anticipated litigation, in that the FAA requested the study following the filing of an EEO complaint against it. Id. However, the facts tell a different story. Under the former air traffic aptitude test (AT-SAT), the FAA also conducted validation studies. PSOF ¶ 9; Exhibit 10 to PSOF. The FAA Administrator admitted that the FAA hired APT in 2013 and that APT's work was to last 2 years, concluding at the end of 2014. Letter from Michael Huerta, Administrator, Fed. Aviation Admin., to Kelly Ayotte, Chair, Subcomm. on Aviation Operations, Safety, and Sec., U.S. Senate, at 1 (Dec. 8, 2015); Exhibit 1 to PSOF. This shows that APT Metrics was already conducting these validations before the EEO filings. Even now, the FAA is continuing its usual practice of conducting validation studies on its tests for the 2016 year. PSOF ¶ 23; Mem. from Teri Bristol, Chief Operating Officer, Air Traffic Org., to Distribution, Fed. Aviation Admin., at 1 (Feb. 11, 2016). As Chief Operating Officer Bristol writes in the 2016 memorandum, "[t]he FAA is evaluating potential replacements for the AT-SAT . . . . We are asking randomly selected CPCs . . . to help us evaluate their effectiveness as a future selection tool." Exhibit 15 to PSOF. Nowhere does that memorandum mention words like "litigation" or "adversarial proceedings."
In a 2015 letter to Congress, the FAA Administrator claimed that the FAA maintains "the safest and most efficient aerospace system in the world" partly because "we continuously evaluate and strengthen our ATCS hiring and training processes." Exhibit 1 to PSOF at 2. The Administrator then states that the changes made in 2014 and 2015 were to further that commitment. Id. Given the FAA's public proclamation of conducting a validation of the 2014 and 2015 tests, the FAA's history of validation studies, and the fact that these studies were underway before the EEO complaint even arose, there is a genuine dispute of material fact as to whether the FAA really did request the study as a result of the EEO complaint being filed.
Furthermore, the FAA did not conduct an adequate search and should not be granted summary judgment. FOIA requires an agency responding to a request to demonstrate that it has conducted a search "reasonably calculated to uncover all relevant documents." Hamdan v. U.S. Dept. of Justice, 797 F.3d 759, 770 (9th Cir. 2015) (quoting Zemansky v. EPA, 767 F.2d 569, 571 (9th Cir. 1985)). FAA Counsel, Alarice Medrano, advised Plaintiff that the wrong years of records were reviewed as responsive to Plaintiff's request. PSOF ¶ 22. In addition, it is questionable whether the FAA uncovered all the documents regarding the validation study. Former validation studies done by the FAA have been well over 100 pages long and have consisted of multiple volumes. Id. ¶ 9. Defendant's Vaughn Index shows the withheld validation documents as being 9 pages in length, drastically shorter than those previously released. This suggests that the FAA may not be fully forthcoming about this matter. That concern is reinforced by the FAA Administrator's admission to Congress that the Agency did not even perform the 2014 validation study until after the hiring took place, contrary to what it had said previously. Id. ¶ 17. Given that the FAA is not being entirely upfront on this matter, that it searched the wrong time frame, and that there are inconsistencies with the validation studies, Plaintiff has valid and reasonable concerns regarding whether the FAA has conducted a search reasonably calculated to find all the requested materials. As such, Defendant's Motion for Summary Judgment should be denied.
///
B. ASSUMING NO DISPUTE OF MATERIAL FACTS, THE FAA IS NOT ENTITLED TO SUMMARY JUDGMENT AS A MATTER OF LAW
1. The validation study and summary show no merit of being privileged
(a) The validation study and the summary do not meet the elements of privilege
Non-attorneys may prepare documents constituting work product, so long as they act under the general direction of attorneys.
See, e.g., Exxon Corp. v. FTC, 466 F. Supp. 1088, 1099 (D.D.C. 1978), aff'd, 663 F.2d 120 (D.C. Cir. 1980). APT Metrics is not an expert retained because of litigation. APT Metrics designed the BA/BQ tests, and allegedly validated the same, for testing purposes, not in anticipation of litigation. APT Metrics could not properly act as an independent expert or consultant if the quality of its products were at issue. APT is at best a non-party witness to this FOIA matter. It is improper to invoke work-product privilege for a non-party witness to preclude production of materials prepared by or for that witness, even if the materials were created in contemplation of the witness's own pending or anticipated litigation. Ramsey v. NYP Holdings, Inc., 2002 U.S. Dist. LEXIS 11728, at *18-*19 (S.D.N.Y. 2002). The second element is not at issue here. Because both documents reveal only facts, because there was no litigation to be anticipated at the time of creation, and because the documents were not prepared in anticipation of litigation, the first element is not met. Therefore, neither type of document is protected under work-product privilege.
(i) The Validation Study and the Summary merely reveal facts, which are not protected under privilege
Both the validation study and the summary only provide facts and, as a result, are not protected by work-product privilege. The work-product doctrine does not protect the underlying facts. Restatement (Third) of the Law Governing Lawyers § 87(1) (2000). "[B]ecause the work product doctrine is intended only to guard against the divulging of attorney's strategies and legal impressions, it does not protect facts concerning the creation of work product or facts contained within the work product." California Sportfishing Protection Alliance v. Chico Scrap Metal, Inc., 299 F.R.D. 638, 643 (E.D. Cal. 2014) (quoting Garcia v. City of El Centro, 214 F.R.D. 587, 591 (S.D. Cal. 2003)). Immunity does not attach merely because the underlying fact was . . .
(ii) There is a lack of litigation needed for the FAA to anticipate in relation to the study and the summary
There was no litigation that could have been anticipated in relation to the validation study or its summary. Litigation includes civil and criminal trial . . . is litigation for purposes of the immunity. Id. The litigation in question, though, cannot be some vague suspicion that litigation might come from a situation. Because litigation can, in a sense, be foreseen from the time of occurrence of almost any incident, courts have interpreted the Rule to require a higher level of anticipation in order to give a reasonable scope to the immunity. Harper v. Auto-Owners Ins. Co., 138 F.R.D. 655, 659 (S.D. Ind. 1991). Courts have ranged from emphasizing litigation being "real and imminent" to litigation being "identifiable" or "reasonable." In re Grand Jury Investigation, 599 F.2d 1224, 1229 (3d Cir. 1979).
In this case, we are dealing with a validation study meant to ensure the quality of the test used to fill ATCS positions. As already shown, this is not the first time the FAA has conducted a validation study, and it continues to conduct them today. PSOF ¶¶ 9, 23. Furthermore, APT Metrics' website highlights the importance of disclosing validation studies and ensuring a transparent hiring system. Id. ¶ 25. Again, with the burden falling on the FAA, it is up to the FAA to show how this particular validation study was somehow prepared not only for a real possibility of litigation, but for the litigation contemplated by the EEO complaint it references. The mere fact that a complaint is filed does not convert documents of a kind regularly created in the past into work product. Because no connection is made between the litigation contemplated by the EEO complaint and the validation studies, the work-product privilege does not apply.
(iii) The Study and the Summary were not prepared in anticipation of litigation
The FAA did not prepare the validation study or the summary in anticipation of litigation for work-product purposes. Both documents were required by law and are a part of regular Agency business. Even assuming they were tied to some possibility of litigation, "[i]n circumstances where a document serves a dual purpose, that is, where it was not prepared exclusively for litigation, then the 'because of' test is used." U.S. v. Richey, 632 F.3d 559, 567-68 (9th Cir. 2011). This "because of" test "consider[s] the totality of the circumstances and affords protection when it can fairly be said that the document was created because of anticipated litigation, and would not have been created in substantially similar form but for the prospect of that litigation[.]" In re Grand Jury Subpoena (Mark Torf/Torf Envtl. Mgmt.), 357 F.3d 900, 908 (9th Cir. 2003) (quoting United States v. Adlman, 134 F.3d 1194, 1195 (2d Cir. 1998)). Therefore, even if the documents were prepared in anticipation of litigation, the materials are not work product if they would have been prepared irrespective of the prospect of litigation. Bairnco Corp. Sec. Litig. v. Keene Corp., 148 F.R.D. 91, 103 (S.D.N.Y. 1993).
The Ninth Circuit's decision in U.S. v. Richey is squarely on point here. In that case, the appellees retained a law firm for legal advice concerning a conservation easement. U.S. v. Richey, 632 F.3d 559, 562 (9th Cir. 2011). That law firm retained an appraiser to provide valuation services and advice with respect to the conservation easement. Id. As a result, the appraiser prepared an appraisal report "to be filed with the Taxpayers' 2002 federal income tax return . . . ." Id. The Ninth Circuit found that the appraisal work file could not be said to have been prepared in anticipation of litigation. Richey, 632 F.3d at 568. Despite the file's relation to the law firm's representation, the Ninth Circuit emphasized that the appraisal report was required by law: "Had the IRS never sought to examine the Taxpayers' 2003 and 2004 federal income tax returns, the Taxpayers would still have been required to attach the appraisal to their 2002 federal income tax return. Nor is there evidence in the record that [the appraiser] would have prepared the appraisal work file differently in the absence of prospective litigation." Id.
As in Richey, this case involves a party's law firm contracting with another entity to create a document that assesses certain facts within its area of expertise. As in Richey, the FAA was required by law to conduct the validation study.
Title VII of the Civil Rights Act of 1964 ("Title VII") prohibits the use of discriminatory tests and selection procedures. Title VII permits the use of employment tests so long as they are not "designed, intended or used to discriminate because of race, color, religion, sex or national origin." 42 U.S.C. § 2000e-2(h). The Federal government has issued regulations to meet the needs set by Title VII. Specifically, 29 CFR § 1607.1 states in part:
They are designed to provide a framework for determining the proper use of tests and other selection procedures. These guidelines do not require a user to conduct validity studies of selection procedures where no adverse impact results. However, all users are encouraged to use selection procedures which are valid, especially users operating under merit principles. [emphasis added]
The language and spirit of Part 1607 are clear that a selection procedure's validity must be well documented and properly performed. Adverse impact existed in the 2014 hiring session, which occurred prior to the 2015 hiring session. As 29 CFR § 1607.4(D) describes, a selection rate for any group lower than 4/5 of the rate of the group with the highest selection rate will generally be regarded as evidence of adverse impact. The 2014 hiring session had adverse impact ratios of .73 for Blacks. PSOF ¶ 27. These rates are for the phase of the application immediately following the administration of the BA used in 2014. Therefore, the adverse ratios identified above are a result of the BA used in 2014. As adverse impact exists, the Agency was required to perform a validation study. Since the Agency was required to perform a validation study, it was performed in the course of regular agency business and is therefore not subject to Exemption 5.
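To make the four-fifths benchmark concrete, the rule reduces to a simple ratio test. In the sketch below, the 11% and 8% selection rates are hypothetical figures chosen only for illustration; the .73 ratio is the figure that appears in the record (PSOF ¶ 27):

\[
\text{impact ratio} \;=\; \frac{\text{selection rate of the group in question}}{\text{selection rate of the highest-selected group}}, \qquad \frac{0.08}{0.11} \;\approx\; 0.73 \;<\; \tfrac{4}{5} = 0.80
\]

Under 29 CFR § 1607.4(D), a ratio below 0.80, such as the .73 ratio here, is generally regarded as evidence of adverse impact.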
There is nothing in the record to suggest that the study would not have been created in substantially similar form but for the prospect of litigation. Even without this present matter, the FAA would still have been required to conduct the validation study.
Again, besides being required by law, the study was a part of regular agency business. "There is no work product immunity for documents prepared in the ordinary course of business prior to the commencement of litigation." Yurick v. Liberty Mut. Ins. Co., 201 F.R.D. 465, 472 (D. Ariz. 2001) (quoting Tayler v. Travelers Ins. Co., 183 F.R.D. 67, 69 (N.D.N.Y. 1998)); see also U.S. v. Fort, 472 F.3d 1106, 1118 n.13 (9th Cir. 2007) (quoting In re Grand Jury Subpoenas, 318 F.3d 379, 384-85 (2d Cir. 2003)) (agreeing, in a criminal case, with the Second Circuit that the privilege would not apply to materials in an attorney's possession that were "prepared . . . [by] a third party in the ordinary course of business" and that "would have been created in essentially similar form irrespective of any litigation anticipated by counsel").
U.S. v. Textron Inc. and Subsidiaries, 577 F.3d 21, 30 (1st Cir. 2009), involved "[a] set of tax reserve figures." Despite the dispute arising with the IRS, the First Circuit found the ordinary business rule applied straightforwardly and found the figures to have been prepared in the ordinary course of business. Id. The First Circuit reasoned that "[e]very lawyer who tries cases knows the touch and feel of materials prepared for a current or possible . . . law suit . . . . No one with experience with law suits would talk about tax accrual work papers in those terms." Id. The figures were for the purpose of supporting a financial statement and the independent audit of it. Id.
Similarly here, as evidenced by the FAA's continued practice of conducting validation studies in 2016, the FAA does not talk about these studies in terms of litigation. PSOF ¶ 23. Just as corporations have a regular imperative to acquire accurate financial statements, so too does the FAA have a regular imperative to ensure that it is using a test that selects highly qualified candidates. The studies were meant to be an independent verification that the tests were of the proper caliber. As a result, the validation study and its summary fail the "because of" test and were prepared in the course of regular agency business; thus they were not created in anticipation of litigation, and neither is protected by work-product privilege.
(b) Substantial need/undue hardship and balancing of interests overcome privilege
Even if the validation study and the summary constitute work product, substantial need, undue hardship, and a balancing of interests trump that privilege. The privilege derived from the work-product doctrine is not absolute. U.S. v. Nobles, 422 U.S. 225, 239 (1975). The scope of the doctrine requires balancing the competing interests of the privacy of a man's work against the fact that public policy supports reasonable and necessary inquiries. Hickman v. Taylor, 329 U.S. 495, 497 (1947). Fed. R. Civ. P. 26(b)(3) permits disclosure of documents and tangible things constituting attorney work product upon a showing of substantial need and inability to obtain the equivalent without undue hardship. Upjohn Co. v. U.S., 449 U.S. 383, 400 (1981). "[W]hen documents have been generated by the government[,] scrutiny of a claim of privilege by an attorney of the government is even more essential . . . where many attorneys function primarily as policy-makers rather than as lawyers." See Coastal Corp. v. Duncan, 86 F.R.D. 514, 521 (D. Del. 1980); see also Texas Puerto Rico, Inc. v. Department of Consumer Affairs, 60 F.3d 867, 884 (1st Cir. 1995).
The Ninth Circuit case U.S. v. Christensen, 801 F.3d 970, 983 (9th Cir. 2015), is applicable on this matter. In Christensen, the defendant hired a third party to wiretap an individual who was in a dispute with one of the defendant's clients. The Ninth Circuit found that the work-product doctrine did not apply. Id. at 1009. The Ninth Circuit reasoned that the purpose of the work-product privilege is "to protect the integrity of the adversary process." Id. at 1010 (quoting Parrott v. Wilson, 707 F.2d 1262, 1271 (11th Cir. 1983)). It does not apply "to foster a distortion of the adversary process" by protecting illegal actions. Christensen, 801 F.3d at 1010. "It would indeed be perverse . . . to allow a lawyer to claim an evidentiary privilege to prevent disclosure of work product generated by those very activities the privilege was meant to prevent." Id. (quoting Moody v. I.R.S., 654 F.2d 795, 800 (D.C. Cir. 1981)).
In this case, FOIA was passed by Congress to ensure the integrity and openness of its government. Consistent with the Ninth Circuit's reasoning, work-product privilege does not protect illegal actions. This is especially true in the FOIA context. If the FAA did indeed discriminate in the testing process, the validation study was bound to be work product generated by that illegal conduct, since it was required by . . .
(c) Even assuming the study and summary are covered by privilege, the FAA waived that privilege
. . . actual agency policy). The public should be allowed to fact-check what the FAA has already stated openly to the world.
Because the FAA waived its work-product privilege in relation to the validation study and summary, such documents are not protected under the privilege.
2. The Validation Study and Summary Are Not Privileged
(a) APT Metrics is not an attorney capable of providing legal advice
Both the validation study and the summary fail to fall under attorney-client privilege. As the title suggests, even before getting into a test to determine attorney-client privilege, one needs an attorney. Since only the study and the summary are being sought, FAA Counsel is claiming that these two documents contain legal advice. (Def.'s Mot. for Summ. J. at 13, April 4, 2016). In essence, this assertion claims that APT Metrics was an attorney to the FAA. Id. at 19. APT Metrics created both the validation study and the summary. DSOF ¶¶ 3-4, 9-11. Nowhere does the record show that APT Metrics or John Scott is authorized to provide legal advice. Such a fact . . . Since neither APT Metrics nor John Scott is authorized to provide legal advice, the FAA's argument that the validation study and the summary contain legal advice is meritless. Therefore, neither the study nor its summary is protected by attorney-client privilege.
IV. CONCLUSION
The spirit and intent of the Freedom of Information Act is to pierce the veil of administrative secrecy and open Agency action to the light of public scrutiny. Agency actions that compromise public safety, and subsequent attempts to cover up illegal activities, must be revealed for the sanctity of our democratic system. As there are genuine disputes of material fact, the FAA did not conduct an adequate search, and the . . . . Plaintiff asks that this Court order the FAA to produce the documents requested and to conduct an adequate search in a timely manner.
RESPECTFULLY SUBMITTED this 25th day of April, 2016.
CERTIFICATE OF SERVICE
I hereby certify that on this 25th day of April, 2016, I electronically transmitted the foregoing document to the Clerk's Office using the CM/ECF System for filing and transmittal of a Notice of Electronic Filing to the following CM/ECF registrant(s):
Eileen M. Decker
United States Attorney
Dorothy A. Schouten
Assistant United States Attorney
Alarice M. Medrano
Assistant United States Attorney
300 North Los Angeles Street
Room 7516, Federal Building
Los Angeles, California 90012-9834
Attorneys for Defendants
Attorneys at Law
814 W. Roosevelt
Phoenix, Arizona 85007
(602) 258-1000 Fax (602) 523-9000
WESTERN DIVISION
Defendant.
PLAINTIFF'S SEPARATE STATEMENT OF FACTS
. . . (PSOF). The facts of record show the impropriety and inaccuracies of Defendant . . . support a finding that Defendant is not entitled to summary judgment, and that . . . inaccuracies also support a finding that FAA is not eligible for summary judgment for . . .
1. . . . record available concerning the Agency's review of the process for hiring of the Air Traffic Control Specialist (ATCS) position, it is clear that the Agency did not . . . the Biographical Questionnaire (BQ) test. Furthermore, John C. Scott, along with numerous other FAA officials, called the exam a questionnaire in 2014. Exhibit 2: . . .; Exhibit 3: January 8, 2014 Telcon Transcript; Exhibit 4: Joseph Teixeira Email; Exhibit 5: Matthew Borten Statement.
3. . . . this civil action is to identify the ability of the 2015 BA to identify the characteristics needed for the ATCS position. Exhibit 18: Jorge Alejandro Rojas ("Rojas Affidavit") Affidavit ¶ 4.
4. Plaintiff avers DSOF ¶ 12. The 2014 and 2015 exams were starkly different. Id.
6. The Federal Aviation Administration (FAA) changed the hiring practices for . . .
7. . . . who were on a Qualified Applicant Register or other list of candidates. Individuals were negatively impacted as they were forced to reapply under the new hiring system and not all were selected. Exhibit 8: National Black Coalition of Federal Aviation Employees ATC Hiring Update from the National President, at page 1.
8. The Agency changed to only requiring a four-year degree in any field, or three . . . Training Initiative Schools Graduates, at page 1. Previously the Agency used to hire from a group of schools approved by the Agency, offering aviation-specific education. Individuals were required to take the Air Traffic Selection and Training (AT-SAT) exam. Exhibit 9: FAA, Air Traffic Collegiate Training Initiative (AT-CTI), 8/10/2011 to 2/25/2014 Website, at page 1.
9. The Air Traffic Selection and Training (AT-SAT) was previously extensively . . .
10. FAA Spokesman Tony Molinaro said the FAA's decision to modify the hiring process was to add diversity to the workforce. Exhibit 11: INFORUM, Want to be an air traffic controller? UND says FAA has dumbed down the process, at page 1.
11. . . . Biographical Questionnaire would be used in 2014. The Agency further stated that the exam was designed, developed and validated through the FAA's Civil Aerospace Medical Institute (CAMI). Exhibit 3: January 8, 2014 Telcon Transcript.
12. . . .
13. The Agency also provided several letters and statements concerning the use . . .
14. The Agency stated that the 2015 Biographical Assessment was newly . . .
15. The Agency's 2014 Biographical Assessment was very different than the . . .
16. The Agency notified applicants that were rejected and that they failed because . . .
17. FAA Administrator Huerta admitted that the job-task analysis and the validation were not completed until the end of 2014, significantly after the 2014 exam. Exhibit 1: December 8, 2015 Letter from FAA Administrator Michael Huerta.
18. . . . 000431 reveal that 2,407 passed the biographical exam in 2014, while 28,511 applied. Exhibit 18: Rojas Affidavit ¶ 6.
19. . . . 009349 reveal that 5,083 passed the biographical exam in 2015, while 18,302 applied. Exhibit 18: Rojas Affidavit ¶ 7.
20. During conversations with Defendant's counsel, it was made clear that the documents sought were the validation study and related communications regarding the examination. Exhibit 18: Rojas Affidavit ¶ 8.
22. . . . normal course of Agency business. Exhibit 15: February 11, 2016 FAA Memorandum.
24. . . . validation studies should be disclosed and that the advice provided was not provided in the capacity of an attorney. Exhibit 16: APT Metrics website & "Testing the Test" PowerPoint.
26. . . . Agency sent a letter to the Vice President of the United States concerning the validation of the examination. Exhibit 17: Letter to Vice President Joe Biden.
27. The adverse impact ratios for the 2014 hiring announcement were compiled . . .
///
CERTIFICATE OF SERVICE
I hereby certify that on this 25th day of April, 2016, I electronically transmitted the foregoing document to the Clerk's Office using the CM/ECF System for filing and transmittal of a Notice of Electronic Filing to the following CM/ECF registrant(s):
Eileen M. Decker
United States Attorney
Dorothy A. Schouten
Assistant United States Attorney
Alarice M. Medrano
Assistant United States Attorney
300 North Los Angeles Street
Room 7516, Federal Building
Los Angeles, California 90012-9834
Attorneys for Defendants
/s/ Christine L. Penick
EXHIBIT 1
U.S. Department
of Transportation
Federal Aviation
Administration
December 8, 2015
The Honorable Kelly A. Ayotte
Chair, Subcommittee on Aviation
Operations, Safety, and Security
United States Senate
Washington, DC 20510
Dear Madam Chair:
Thank you for your July 13 letter, cosigned by your congressional colleagues, about the Federal Aviation Administration's (FAA) revised hiring process for entry-level Air Traffic Control Specialists (ATCS) and requesting information about the results of the three rounds of hiring pursuant to the recently revised process.
As you know, the FAA maintains the safest and most efficient aerospace system in the world partly because we continuously evaluate and strengthen our ATCS hiring and training processes. The 2014 and 2015 changes to the ATCS hiring process further that commitment. This ensures that we use an efficient and fair process aimed at selecting those applicants with the highest probability of successfully completing our rigorous ATCS training program from among a large and diverse applicant pool.
The ATCS position has been and likely will continue to be a highly sought-after and well-paid
Federal occupation for which qualified applicants significantly outnumber available positions. In
2012, the FAA undertook a comprehensive review of the current ATCS selection and hiring
process as called for by the Equal Employment Opportunity Commission. This review and
subsequent analysis indicated a number of concerns in the FAA ATCS hiring process, including
the use of hiring sources, the Air Traffic Selection and Training Test (AT-SAT), and the
Centralized Selection Panel.
Accordingly, given these concerns, in 2013 the FAA undertook a comprehensive analysis of how to improve the current ATCS selection and hiring process. The FAA retained industrial organizational psychology consultancy Outtz and Associates, along with nationally recognized human resources consultants APTMetrics, to conduct a thorough review and analysis of the ATCS hiring process, recommend improvements, and assist in implementing those recommendations. APTMetrics' work was scheduled to last 2 years, concluding at the end of 2014.
While this work continued, the 2014 Controller Workforce Plan identified the need to hire and
train 1,286 air traffic control specialists at the FAA Academy. This required developing a
selection process to effectively evaluate the expected surge of applications in a timely and
cost-efficient manner. As a result, in February 2014 the FAA implemented the 2014 Interim
Hiring Process for one-time use, incorporating as many of APTMetrics' initial recommendations
as practicable, including:
- Ending the use of large inventories segregated by applicant source and unrelated to then-current hiring needs;
- Opening a vacancy announcement available on the same terms to all sources (all U.S. citizens) to ensure equitable treatment and the broadest pool of qualified candidates;
- Eliminating the ineffective, time-consuming, costly and un-validated subjective selection procedures associated with Centralized Selection Panels and candidate interviews; and
- Developing and substituting the Biographical Assessment (BA) as a stand-alone, initial, objective selection test, in place of the AT-SAT's Experience Questionnaire subtest, which had lost its validity. The BA is a computerized test that measures important and demonstrably job-related personal characteristics of applicants.1
For the Interim Process, the FAA chose the BA as the first step of a multi-step process to identify the most qualified job applicants. That decision reflected a detailed review of each AT-SAT subtest's predictive validity (i.e., how well it differentiated successful from unsuccessful candidates), which revealed that the Experience Questionnaire (EQ) did not accurately predict success in proceeding through the FAA Academy or attainment of Certified Professional Controller (CPC) status at the first facility.
APTMetrics developed and validated the BA using years of research and data gathering by the FAA's Civil Aerospace Medical Institute for three different biographical instruments, including the EQ when it was part of the AT-SAT. The BA measures required personal job-related attributes and was validated to 1) predict pass rates at the FAA Academy, and 2) predict certification of an ATCS at his or her first assigned facility. Notably, the validation work indicated the BA had a high level of validity with little adverse effect on any discrete group or subgroup of test-takers.
The Agency also removed the interview stage of the hiring process for several reasons. The questions used in the interviews were commonly shared online, and the interview process yielded an historical passing rate approaching 100 percent. Thus, and most importantly, the interview added little value in the selection of ATCS. Further, the interview process was not standardized or validated, and the managers conducting the interviews had little or no training on proper interviewing procedures. Moreover, the Agency's decision to assign facilities after training, rather than during the selection process, made it impossible for managers to interview candidates that would report to their facilities. Finally, while some have raised the concern that the interview screened for language barriers, the ATCS application asks the candidates to confirm their ability to speak English clearly in the same way it asks applicants to confirm they satisfy the maximum entry age of 31 years. The FAA will periodically evaluate and update interview guides and the interview process for future announcements.
1 These include flexibility; risk tolerance; self-confidence; dependability; resilience; stress tolerance; cooperation; teamwork; and rules application.
As a result of these changes, the 2014 Interim Hiring Process became more efficient, economical, and transparent. We significantly reduced applicant processing time and in 2014 saved more than $8 million in AT-SAT testing costs by using the BA as an initial screening tool. Additionally, under the legacy process, applicants could be placed on inventories for years and have no understanding of whether they would ever be hired by the FAA and sent to the FAA Academy. Under both the 2014 and 2015 announcements, applicants who passed the selection hurdles received a Tentative Offer Letter. Those who successfully completed the remaining medical and security clearances were assured a position and received an estimated date to start their Academy training.
Moreover, by opening the announcements to all sources in the general public, the revised hiring process, as reflected in the chart below, significantly increased the representation of women who successfully completed the assessment process and, to various extents, increased the representation of racial and ethnic minorities, as compared to the Agency's legacy selection processes.
Gender                                      CPC Population (N=11567)    Interim 2014 (N=1593)
Female                                      1855 (16%)                  260 (28.5%)
Male                                        9712 (84%)                  651 (71.5%)
Declined to Respond                         0                           682

Ethnicity                                   CPC Population (N=11567)    Interim 2014 (N=1593)
Multi-ethnic                                123 (1.1%)                  48 (5.3%)
Hispanic or Latino                          782 (6.8%)                  153 (16.9%)
Asian                                       270 (2.3%)                  57 (6.3%)
Black or African American                   623 (5.4%)                  93 (10.3%)
American Indian or Alaskan Native           86 (0.7%)                   (.4%)
Native Hawaiian or Other Pacific Islander   29 (0.3%)                   (.7%)
White                                       9654 (83.5%)                544 (60.1%)
Declined to Respond                                                     688

[Two Interim 2014 counts and some cell placements are illegible in the scan.]
The 2015 ATCS hiring process is substantially similar to the 2014 interim process, with a number of modifications. First, in 2015, the Agency used a newly refined BA. The 2015 BA was developed using the newly completed 2014 job analysis of the ATCS position, which identified the critical and important requirements of the ATCS job. The new BA measures the knowledge, skills and other characteristics that could most readily be assessed with a biodata instrument, including those attributes that are not substantially assessed by the AT-SAT. A total of 1,765 current air traffic control specialists participated in the job analysis study and over 1,000 CPCs and their managers contributed to validating the BA. This approach ensured comprehensive coverage of important attributes for the ATCS position.
Second, the FAA used an alternate but equated version of the AT-SAT (excluding the Experience Questionnaire subtest). When the FAA commissioned the development of the AT-SAT more than 15 years ago, it had asked for and received 2 comparable versions. Until 2015, the Agency only used one version. However, in 2015 the FAA switched to using the second version for security and efficacy purposes. Likewise, as a test security measure, APTMetrics developed an alternate version of the BA test, each version to be used randomly and concurrently, with the questions also appearing in random order for each BA.
Finally, the Agency issued a separate vacancy announcement for experienced air traffic controllers. Due to their prior experience, these candidates were placed directly into the ATC facility (bypassing the Academy) and are expected to re-certify as Certified Professional Controllers more quickly than applicants with no experience.
Your letter specifically requested pass rates at the FAA Academy for students hired under the interim and 2015 processes. Understanding that these remain preliminary results, the results of the February 2014 general public vacancy announcement are as follows:
The observed and intended effect of these Academy grading changes has been to substantially increase the failure rate of new trainees while they are at the Academy, rather than have them fail after being assigned to their first Tower or En Route Center. Failures at a facility incur greater costs to the Agency than failures occurring earlier in the process. In addition, as described below, changes in Academy failure/passing rates must distinguish between those in the Academy's En Route training program and those in the Academy's Tower training programs, because the Academy modified the grading requirements of each track at different times.
At this time, it is too early to draw any valid conclusions about the success of the Interim hiring process. Furthermore, drawing any statistically valid conclusions about this cohort's training performance is complicated by the FAA's decision to make FAA Academy grading more rigorous. Any meaningful analysis must account for changes in assessment at the FAA Academy. The Agency has not yet conducted that analysis.
1. En Route Centers. Changes to the En Route Qualification grading were implemented at the Academy in 2011; since then, the team of personnel that administers evaluations continues to adjust the process to improve the consistency of such evaluations. As expected, the graph below depicts a declining class pass rate trend coincident with the introduction of strengthened grading requirements beginning in 2011 through and including 2015. Immediately after the En Route training program adopted its assessment changes in 2011, Academy pass rates declined by 12 percent, from 98 percent in 2010/2011 classes to 86 percent in 2011. Further, as the FAA continues to calibrate the En Route grading process, the pass rate has continued to decline. Prior to the use of the Interim process in FY 2014 Quarter 4, the most recent Academy pass rate was 68 percent (FY 2014 classes in Quarters 1-3) for the En Route training program. Trainees from the Interim process were phased in during FY 2014 Quarter 4 (July 1 to September 30), and of the handful of those trainees that have attended the Academy, 64 percent have passed.
[Graph: FAA Academy class pass rates (0%-100%) by fiscal year quarter. Legend: Pre-Curriculum Change; Post-Curriculum Change; Post-Hire Change; Linear (Post-Curriculum Change).]
*Calculation Notes: Fiscal year based on a start date of 10/1, with 10/1 to 12/31 being Quarter 1; pass rate based on # of passes / (# of passes + # of fails); for display purposes, the FY 2011 Quarter 1 calculation includes one class from FY 2011 Quarter 2, as it used the assessment approach; initial FY 2014 Quarter 4 classes contained a mix of pre- and post-hire-change students. As such, FY 2014 Quarter 4 classes are not presented in the graph.
2. Towers. The FAA implemented changes to the Tower Qualification grading regime at the Academy concurrently with the initial hiring from the 2014 Interim process. Unlike with the En Route Qualification regime, this means there is no post-grading-change data that does not also include the initial hires from the 2014 Interim process, precluding an "apples-to-apples" comparison.
Finally, your letter asked about students who have been determined to be unqualified at the Academy for non-academic reasons. No student has been removed based upon a failure to be qualified. Rather, four selectees have been removed from the FAA Academy for cause since we implemented the interim hiring process in 2014. Removal for cause could include behavioral infractions independent of suitability, such as driving while under the influence of alcohol or cheating at the Academy. This rate is consistent with the previous rate of terminations.
Notably, the screening employed under both the Interim Process in 2014 and the 2015 process has led to the dismissal of some candidates before they begin training. For example, as previously indicated, under Basic Qualifications on the application, the first question asks, "Are you able to speak English clearly enough to be understood over radios, intercoms and similar communications equipment?" In the past 2 announcements, 116 applicants responded no and were eliminated from the process.
We operate the safest and most efficient airspace in the world, in large part due to our dedicated workforce. As we close out the 2015 hiring process, we are working diligently to bring on board those hired during that process. We will continue to build on the improvements made over the past 2 years to find additional efficiencies while hiring those most likely to become certified professional controllers.
We have sent an identical response to each of the cosigners of your letter.
If I can be of further assistance, please contact me or Molly Harris, Acting Assistant Administrator for Government and Industry Affairs, at (202) 267-3277.
Sincerely,
Michael Huerta
Administrator
EXHIBIT 2
Biographical Assessment - Page 1
[Scanned pages of sample Biographical Assessment questions; the OCR text is largely illegible. Recoverable question stems include: "In the past, what did you do when you were working on something and nothing seemed right?"; "I learned about the opportunity to apply for an Air Traffic Control Specialist (ATCS) job through:"; "During my last year in college, my average number of hours of paid employment per week was:"; "Which of the following would your peers say describes your behavior in a group situation?"; "Of all the Air Traffic Control Specialists (ATCSs) in the country, at what percentile do you think you will be able to perform?"; "How have you planned your work activities at work, school, or in other similar situations?"; and "In your current or previous job(s), education, or other similar experiences, how did you usually feel about assignments changing at the last minute?" The multiple-choice answer options are not legible.]
AVIATOR :: Online Application - Step 8 of 15
OMB Control Number 2120-0597
Announcement: FAA-ATO-15-ALLSRCE-0166
Job Title: Air Traffic Control Specialist - Trainee
Series: 2152
Status: In Progress
Grade(s): FG-1
Closing Date: [...]
Date Submitted: 3/27/2015

Important - Step 8 of 15
Please correct the following:
An answer to question #16 is required.

2. If you have earned at least a two-year degree, in what major was that degree? (Check all that apply)
[...]
How many credits of formal air traffic control coursework have you completed?
[...]
AVIATOR :: Online Application - Step 13 of 15
101. In the past year, the number of times I have been late for work or an appointment is:
0 [...]
I am never late.
I am not sure.

102. My co-workers or classmates would probably describe me as a person who: [...]

103. In your past or current job(s) and school experiences, how did you respond when [...]
I became frustrated.
I ignored the criticism.
I tried to understand the reason for the criticism.
I became somewhat discouraged.

104. In your current or previous job(s) (or school, if not previously employed), which of these were you MOST comfortable doing?
Taking on additional work. [...]

105. When you are late for an appointment, meeting, or work, what is usually the reason?
I am so busy that I sometimes get behind schedule. [...]

107. In your current or previous job(s), education, or other similar experiences, how did you usually feel about assignments changing at the last minute?
I did not mind it, but preferred that the assignments did not change.
I did not like it, and it affected how much I liked my work.

108. Which of the following BEST describes your preferred work environment?
I am not sure. [...]
110. People who know me well would describe me as: [...]
a risk taker.
a very cautious individual.

111. Your manager (or teacher) would say that when you make an error at work (or school), you:
are visibly upset.
feel guilty.
move on quickly.
fix it immediately.

113. People who know me would say that when I encounter an obstacle at work or school, I:
initially struggle a bit to get back on track but eventually recover.
get frustrated.
have an extreme amount of tolerance.
EXHIBIT 3
TELEPHONIC CONFERENCE
IN RE:
JANUARY 8, 2014
LOCATION: TELEPHONE
UNID MALE: [...]
UNID MALE 2: [...] Ressitar. [...]
THE OPERATOR: At this [...] then 1. [...] If [...] this time. [...] today's telcon. In today's call, Mr. Joseph Teixeira, Vice [...]
MR. TEIXEIRA: [...] As many of [...] process. [...] years. [...] applicants. [...] The FAA [...] are. [...] This [...] classes. [...] term. [...] directly. [...] topics. [...] Rickie.
MR. CANNON: Thank you. Thanks, Joseph. [...] CTI students. [...] issues. [...] will be closed. [...] apply. [...] Dr. Scott.
MR. SCOTT: Thanks, Rickie. [...] analysis. [...] These will be [...] capabilities. [...] overview. Okay.
MR. TEIXEIRA: Okay. [...] please?
MR. JONES: [...]
THE OPERATOR: Thank you. [...] when prompted. [...] opportunity here. [...] right now?
MR. TEIXEIRA: [...] initial presentation.
MR. MILLER: That's a shame. We believe we have [...] You [...] Okay. [...] I'm saying [...] presentation.
MR. MILLER: Okay. [...] questions now. I am actually standing in an [...] minutes here.
MR. TEIXEIRA: Thank you.
THE OPERATOR: [...]
MR. PEARSON: [...] designed that?
UNID MALE: [...]
MR. PEARSON: [...] published?
UNID MALE: [...]
MR. PEARSON: [...] Yeah.
UNID MALE: Yes, it is.
MR. PEARSON: [...]
UNID MALE: [...]
MR. PEARSON: [...] questionnaire?
UNID MALE: [...]
MR. PEARSON: [...]
UNID MALE: [...]
MR. PEARSON: [...]
MR. PEARSON: I understand that. It [...]
MR. TEIXEIRA: [...] This [...] involved.
MR. PEARSON: [...] So [...] April. [...] controllers. [...] today.
MR. PEARSON: [...] Okay. [...] interest groups. [...]
MR. TEIXEIRA: [...]
MR. PEARSON: [...] is that correct?
MR. TEIXEIRA: If I can try to [...] processes. But I [...]
MR. PEARSON: Okay. [...] barrier?
MR. MCCORMICK: Mike, I think that would be difficult for us to [...] there. [...] agency process.
MR. PEARSON: And we [...]
RECORDING: [...]
MR. RESSITAR: Wayne Ressitar.
RECORDING: Thank you.
MR. PEARSON: [...] front of you.
MR. TEIXEIRA: Okay. [...] speaking. [...] of barrier analysis. [...] barrier analysis. [...] barrier analysis.
MR. PEARSON: [...]
MR. TEIXEIRA: [...] individuals. [...] individuals.
MR. PEARSON: [...] someone else. [...] questions. [...]
MR. CANNON: Those [...] control specialist. [...] We'll take [...] So [...]
MR. PEARSON: [...] But it [...]
MR. CANNON: [...]
MR. PEARSON: [...] What we [...] We have vacancy [...] by any means. [...]
MR. PEARSON: [...] line. [...] question? That's my [...]
MR. MCCORMICK: Mike, the AT-SAT has been in place for many [...]
MR. PEARSON: [...] -- FAA. [...] SAT. [...] too. [...]
UNID MALE: [...] Hello, Victor. [...] correct?
UNID MALE: That is correct. [...]
MR. CANNON: Nothing is [...] If [...] I understand. [...] different things. [...] February 10th?
MR. CANNON: The [...] opening. [...] Jobs regularly. [...]
MR. CANNON: [...] Pardon? [...] No. [...] number of hires. [...] inventory. [...] All right. Thanks.
MR. CANNON: Thanks, Victor.
THE OPERATOR: [...]
MR. KUHLMANN: [...] Appreciate you [...] I know [...] CTI schools. [...] process?
MR. TEIXEIRA: [...]
MR. KUHLMANN: [...] When they [...] So I don't understand [...] program. [...]
MR. TEIXEIRA: [...] that image. [...] process. [...] now.
MR. KUHLMANN: [...] But [...] Is that [...] Those [...] now, but --
MR. KUHLMANN: Okay. [...] everyone else?
MR. TEIXEIRA: [...] cons.
MR. CANNON: No, in -- [...] announcement. [...] send it. [...]
MR. KUHLMANN: I understand that. [...] for an applicant. [...] several years.
MR. KUHLMANN: Is [...]
UNID MALE: [...]
MR. KUHLMANN: Okay. Thank you.
THE OPERATOR: [...]
MR. MUMMERT: [...] Aeronautical University. [...] answered. [...] Oklahoma City?
MR. CANNON: [...] process.
MR. MUMMERT: [...] Okay. [...]
MR. MUMMERT: [...] the recommendation.
MR. CANNON: [...] No, sir. Thank you.
MR. MUMMERT: Okay.
THE OPERATOR: [...]
MR. ESQUIBEL: [...] support? [...] Can any of [...]
[...] I can definitely [...]
MR. ESQUIBEL: Okay. [...] you?
MR. TEIXEIRA: [...] I think they're [...]
MR. ESQUIBEL: [...]
MR. CANNON: We have made [...] agency.
MR. ESQUIBEL: Okay. [...]
MR. CANNON: [...] As part of this [...] come out.
MR. ESQUIBEL: Okay. [...] academy up again? [...]
MR. ESQUIBEL: [...] All right. [...] luck.
THE OPERATOR: [...]
MR. FISCHER: [...] with. [...]
MR. SCOTT: [...]
MR. FISCHER: [...] public?
MR. SCOTT: [...] secure.
MR. FISCHER: All right. [...] needed.
MR. CANNON: [...] circumstances at hand.
MS. BOSTICK: [...]
[...] informed as appropriate.
MR. FISCHER: Yeah. Okay. [...] etcetera. [...] experience.
MR. TEIXEIRA: [...]
MR. FISCHER: [...] no benefit. [...] nursing school?
MR. TEIXEIRA: [...] that opportunity. [...] If you [...]
MR. FISCHER: All right.
MR. TEIXEIRA: [...]
MR. FISCHER: Certainly, certainly. [...] aviation program.
MR. TEIXEIRA: [...] there. [...] Well, certainly. [...] announcements. There was a [...] So while [...]
MR. TEIXEIRA: [...] years. [...] I just want to [...] went from. [...]
MR. FISCHER: All right. [...] Initiative Program. [...]
MR. CANNON: [...]
MR. FISCHER: No. [...] That'll be [...] If there is [...]
MR. TEIXEIRA: [...] granted there. [...] anybody since. [...] You [...] It is a [...]
MR. FISCHER: All right.
MR. TEIXEIRA: Thank you.
THE OPERATOR: Sharon LaRue.
MS. LARUE: [...] degree. That [...] population?
MR. CANNON: No. There is no intent [...] Basically, [...] We are simply [...]
MS. LARUE: Okay. [...] on that. [...] access.
MR. CANNON: [...]
MS. LARUE: Okay. [...]
MR. CANNON: [...] letter?
MS. LARUE: [...]
[...] quite a while. [...] applied?
MR. CANNON: [...] recovered.
MS. LARUE: Correct. [...] made to reapply?
MR. CANNON: [...] process was.
MS. LARUE: [...] Okay. [...] from us.
MS. LARUE: [...]
MR. CANNON: Okay. [...] out.
MS. LARUE: Okay. [...] Yes, ma'am. Would applicants that failed to be [...] new system?
MR. CANNON: [...] to actually be employed.
MS. LARUE: Okay. [...] making. [...] that was?
MR. TEIXEIRA: Okay. [...] We'll certainly [...] We're not [...] This is a [...] vacancy announcement. [...] on.
MS. LARUE: [...] going. [...] notice.
MR. TEIXEIRA: But I [...] working on it. [...] it for months. [...] questions. [...] teleconference. [...]
[...] explain it to you.
MS. LARUE: Okay. So you are [...] be placed at facilities? [...] do accept a job. [...] This is something [...] At what point [...] be? [...] What if [...] from students.
MR. TEIXEIRA: [...] this conversation. [...] something Carrolyn?
MS. BOSTICK: [...] keep talking. [...] tackle. But please [...]
MR. TEIXEIRA: [...] now. [...] they are --
MS. LARUE: But if [...] to offer that --
MR. TEIXEIRA: [...]
MS. LARUE: [...]
MR. TEIXEIRA: Yeah. If they do [...]
MS. LARUE: [...] then? [...] can't?
MR. TEIXEIRA: Okay. [...] company. [...] highest need.
MR. CANNON: [...] preferred location.
MS. LARUE: Okay. [...] well?
MR. TEIXEIRA: [...] So are [...]
MS. LARUE: Okay. Okay. Anything else? [...] We've [...]
UNID MALE: [...] There's going to be a [...]
MS. LARUE: [...]
THE OPERATOR: [...] Okay. Thank you.
MS. DENARDO: [...]
MS. DENARDO: Hi. Okay. [...] degree.
MR. CANNON: (Indiscernible) to [...] three. [...]
UNID MALE: [...]
MR. CANNON: [...]
UNID MALE: [...]
MR. CANNON: [...]
UNID MALE: [...] of professional work --
MR. CANNON: Exactly. [...]
UNID MALE: [...]
MR. CANNON: [...] two.
MS. DENARDO: [...] So I could be in [...]
MR. CANNON: Exactly. [...]
MS. DENARDO: [...]
MR. CANNON: [...] anything.
MS. DENARDO: [...]
MR. CANNON: No, ma'am.
MS. DENARDO: [...]
MR. TEIXEIRA: [...] announcement out. [...]
[...] announcement. There's been no [...]
MS. DENARDO: [...]
MR. ROMEO: [...] are aware. Our students [...] that out. [...] me just a second. [...]
MR. ROMEO: [...] Yes, sir. Thanks. [...] dig out --
MR. ROMEO: [...] Right. [...] would have.
MR. ROMEO: [...]
MR. CANNON: Right. Right. And I [...] opening. [...] So
MR. ROMEO: So -- okay. [...]
MR. SCOTT: [...]
MR. ROMEO: Right.
MR. SCOTT: -- and experience. [...] education.
MR. CANNON: That's right.
MR. SCOTT: Okay.
MR. ROMEO: Okay. [...]
MR. SCOTT: [...] Right. So -- Now, to follow on [...] that, but --
MR. ROMEO: [...] Okay. Okay.
MR. TEIXEIRA: [...]
MR. CANNON: [...] announcement. [...] a signal there. [...] be --
MR. ROMEO: [...]
MR. CANNON: Yes.
MR. ROMEO: [...] curious.
MR. TEIXEIRA: [...]
MR. ROMEO: Yeah, I guess.
MR. TEIXEIRA: We don't know. We don't know. [...] So we have -- [...] Our own [...] get addressed. [...] their hometown. [...] a small number.
MR. ROMEO: [...]
MR. SCOTT: [...]
MR. ROMEO: [...] Okay. Thanks, Corkey.
MR. ROMEO: [...]
THE OPERATOR: All right. Thanks. [...]
UNID MALE: Maybe --
MR. LATHAM: [...] University. [...]
[...] CTI schools. [...] program. [...] the FAA has had the CTI program all the way back [...]
UNID MALE: The [...]
MR. LATHAM: Excuse me for [...] at.
UNID MALE: That [...] CTI programs. [...] facility. [...] barriers.
UNID MALE: [...] So the [...] decision points. [...]
MR. LATHAM: So I think [...] Originally, I [...] So the question I [...]
MR. TEIXEIRA: And [...] Okay. Okay. But because --
MR. LATHAM: Not really.
MR. TEIXEIRA: [...]
MR. LATHAM: Go ahead.
MR. TEIXEIRA: [...] old process. [...] them.
MR. LATHAM: So, [...] wasn't. [...]
MR. TEIXEIRA: No.
MR. LATHAM: [...]
MR. TEIXEIRA: Okay. [...] I thought [...] So what I'm [...]
MR. LATHAM: No. [...] large reports. [...] of them, right. [...] different people.
MR. LATHAM: [...] program. [...] diverse enough. [...]
MR. TEIXEIRA: No.
MR. LATHAM: -- the question --
MR. TEIXEIRA: [...]
MR. LATHAM: [...]
MR. TEIXEIRA: [...] Okay. [...] them. [...] right? [...] in the report. [...]
MR. LATHAM: [...] questions.
MR. TEIXEIRA: [...] Thank you.
THE OPERATOR: [...] I have no further [...]
MR. RESSITAR: [...]
MR. TEIXEIRA: Hi, Wayne.
MR. RESSITAR: [...]
THE OPERATOR: [...]
MR. RESSITAR: Okay. [...]
UNID MALE: [...]
MR. RESSITAR: [...]
UNID MALE: [...]
UNID MALE 2: [...]
THE OPERATOR: [...]
UNID MALE: Hi, Julie. [...]
MR. RESSITAR: [...]
UNID MALE: [...]
MR. RESSITAR: [...] Can [...]
UNID MALE: Absolutely.
MR. RESSITAR: Okay. [...] Hello. [...] of our enrollment. [...]
MR. TEIXEIRA: [...]
MR. RESSITAR: Okay.
MR. TEIXEIRA: [...] at the pool. [...] five years.
MR. RESSITAR: Okay. So no lists. [...]
[...] to be gone.
MR. TEIXEIRA: Correct.
MR. CANNON: Yeah.
MR. RESSITAR: So --
MR. CANNON: Yeah. [...]
MR. RESSITAR: Right.
MR. CANNON: [...] So what I'm [...] to exist. [...]
MR. TEIXEIRA: Okay. [...] advantages.
MR. RESSITAR: It did.
MR. SCOTT: Okay. [...] Beaver County. [...] How are [...]
MR. TEIXEIRA: Okay. [...] And we have [...]
MR. SCOTT: [...]
MS. BOSTICK: [...] Bostick. [...] achieving diversity. [...] it. [...] diversity. [...] understands that. [...]
MR. JONES: But [...] telcon. [...]
[...] We [...] sign out.
UNID MALE: [...]
THE OPERATOR: Yes. Thank you. [...] participation. [...]
UNID MALE: Bullshit.
(Recording Ends)
CERTIFICATE
[...]
______________________________
DEBRA E. SHEA
AVTranz
Phoenix, AZ 85003
EXHIBIT 4
[...] sent to an expanded list of CTI stakeholders to initiate a dialogue on upcoming changes to the hiring process for air traffic controllers by the FAA.
The existing testing process has been updated. The revised testing
process is comprised of a biographical questionnaire (completed as
part of the application process) and the cognitive portion of the
AT-SAT. The cognitive portion of the AT-SAT will be administered
only to those who meet the qualification standards and pass the
biographical questionnaire. Applicants for the February 2014
announcement will be required to take and pass the new assessments in
order to be referred on for a selection decision.
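The revised process described above is a sequential gate: the biographical questionnaire is scored with the application, and the cognitive AT-SAT is administered only to applicants who clear the earlier hurdles. A minimal sketch under stated assumptions (invented class, field, and threshold names; not the FAA's actual system) follows.

```python
from dataclasses import dataclass

PASSING_SCORE = 70  # illustrative cut score; the actual threshold is not stated here

@dataclass
class Applicant:
    meets_qualification_standards: bool
    passed_biographical_questionnaire: bool
    cognitive_at_sat_score: int  # administered only if both earlier gates are cleared

def referred_for_selection(a: Applicant) -> bool:
    """Both new assessments must be passed before referral for a selection decision."""
    if not (a.meets_qualification_standards and a.passed_biographical_questionnaire):
        return False  # the cognitive AT-SAT is never administered in this case
    return a.cognitive_at_sat_score >= PASSING_SCORE

print(referred_for_selection(Applicant(True, True, 85)))   # True: referred on
print(referred_for_selection(Applicant(True, False, 85)))  # False: questionnaire gate failed
```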
Joseph Teixeira
Vice President for Safety &
Technical Training
Air Traffic Organization
Tel: 202-267-3341
Email: [email protected]
EXHIBIT 5
[...] representative. The Biographical Questionnaire is a [...]-item inventory that was developed based on items from Owens' Biographical Questionnaire (Owens & Schoenfeldt [...]). The items tap eight areas:
educational background
prior military or civilian experience in [...]
importance placed on various factors (e.g., salary, benefits, job security)
time expected to become an effective [...]
commitment to an [...] career
work-related attitudes
expected satisfaction with aspects of careers and [...]
general personal information (e.g., socioeconomic status growing up, alcohol and tobacco usage) (Collins et al. [...])
EXHIBIT 6
Federal Aviation Administration
ATC Hiring Stakeholder Briefing on Hiring Process
Background
Interim Changes to the ATCS hiring process went into effect with
the February 10, 2014 announcement. Key changes included:
1 vacancy announcement
All qualified candidates who then meet all pre-employment requirements will receive a
firm offer letter
New hires attend Academy for initial training, facility offered upon
graduation
Service area preference considered but final facility assignment and
option based on Agency need
Includes consideration of those candidates impacted by age-related
provision in the FY15 Consolidated Appropriations Act
Process Overview
Certified ATC Experience? Yes = Specialized (52 weeks of certified ATC experience); No = General Experience/Education
January Announcement (specialized; includes requirement for 52 weeks of post-certification ATC experience): Min Quals; TOL; Medical/Security/Suitability; FOL
March Announcement (general): Min Quals; Biographical Assessment; AT-SAT; TOL; Medical/Security/Suitability; FOL
Facility Placement (2nd Qtr 2015)
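Read as a flowchart, the slide routes applicants into one of two announcement tracks based on certified ATC experience. The sketch below encodes that routing with the step names taken from the slide; the function and the 52-week comparison are our illustration, not FAA software.

```python
def announcement_track(certified_atc_experience_weeks: int) -> list:
    """Return the ordered steps for an applicant, per the process overview slide."""
    if certified_atc_experience_weeks >= 52:   # Specialized: certified ATC experience
        return ["January Announcement", "Min Quals", "TOL",
                "Medical/Security/Suitability", "FOL", "Facility Placement"]
    # General experience/education track
    return ["March Announcement", "Min Quals", "Biographical Assessment", "AT-SAT",
            "TOL", "Medical/Security/Suitability", "FOL", "Facility Placement"]

print(announcement_track(60)[0])   # January Announcement
print(announcement_track(0)[2:4])  # ['Biographical Assessment', 'AT-SAT']
```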
EXHIBIT 7
From: <[email protected]>
Date: Jan 27, 2014 16:01
Subject: All Sources Announcement February 2014
To: [...]
Cc: [...]
[The body of this email is garbled in the electronic copy (non-standard font encoding) and is not recoverable; per the subject line, it concerned the February 2014 all-sources vacancy announcement and application process.]
EXHIBIT 8
1/3/2016
Google Groups
NBCFAE Family,
Please read the information below carefully. This process is constantly evolving.
WHAT'S DIFFERENT
The hiring process for applicants will be a modified Pepsi which means there will be five locations where a person can travel at their own expense to go through the hiring process before attending the academy. The locations are Seattle, Dallas, Atlanta, Chicago and the fifth site is to be determined.
Applicants will be able to go through the security and human resources part of the process but will have to handle their medical clearance separately. If a person chooses not to use the Pepsi process then they will still have the option to use the standard hiring process, which takes more time. More to come.
The Federal Aviation Administration (FAA) will be issuing a large number of vacancy announcements on February 10, 2014, for air-traffic control specialists on a nation-wide basis. It will only be open for 10 days.
http://www.faa.gov/jobs/career_fields/aviation_careers/
Visit the FAA Virtual Career Fair and learn all about select aviation
careers FAA is offering. FAA recruitment experts will be available for
live chats on Jan. 29, 1-4 p.m. EST, and Feb. 12, 1-4 p.m. EST.
To register for the Career Fair and to learn about these aviation careers,
please visit: http://vshow.on24.com/vshow/network/registration/5492
Applicants are highly encouraged to use the resume builder available on the
USAJOBS website usajobs.gov.
Visit the USAJOBS Resource Center at help.usajobs.gov/ to learn how to
build your resume, and access tips and tutorials on applying and
interviewing for federal jobs.
In Unity,
EXHIBIT 9
EXHIBIT 10
DOT/FAA/AM-01/5
Documentation of Validity
for the AT-SAT
Computerized Test Battery
Volume I.
R.A. Ramos
Human Resources Research Organization
Alexandria, VA 22314-1591
Michael C. Heil
Carol A. Manning
Civil Aeromedical Institute
Federal Aviation Administration
Oklahoma City, OK 73125
March 2001
Final Report
U.S. Department
of Transportation
Federal Aviation
Administration
NOTICE
This document is disseminated under the sponsorship of
the U.S. Department of Transportation in the interest of
information exchange. The United States Government
assumes no liability for the contents thereof.
DOT/FAA/AM-01/5
Report Date: March 2001

This document is a comprehensive report on a large-scale research project to develop and validate a computerized selection battery to hire Air Traffic Control Specialists (ATCSs) for the Federal Aviation Administration (FAA). The purpose of this report is to document the validity of the Air Traffic Selection and Training (AT-SAT) battery according to legal and professional guidelines. An overview of the project is provided, followed by a history of the various job analysis efforts. Development of predictors and criterion measures is given in detail. The document concludes with the presentation of the validation of predictors and analyses of archival data.

Unclassified. 165 pages.
ACKNOWLEDGMENTS
The editors thank Ned Reese and Jay Aul for their continued support and wisdom throughout
this study. Also, thanks go to Cristy Detwiler, who guided this report through the review process
and provided invaluable technical support in the final phases of editing.
TABLE OF CONTENTS
VOLUME I.
Page
CHAPTER 1 AIR TRAFFIC SELECTION AND TRAINING (AT-SAT) PROJECT .......................................................... 1
CHAPTER 2 AIR TRAFFIC CONTROLLER JOB ANALYSIS ...................................................................................... 7
Prior Job Analyses ................................................................................................................................. 7
Linkage of Predictors to Work Requirements ...................................................................................... 14
CHAPTER 3.1 PREDICTOR DEVELOPMENT BACKGROUND ................................................................................ 19
Selection Procedures Prior to AT-SAT ................................................................................................. 19
Air Traffic Selection and Training (AT-SAT) Project ............................................................................ 21
AT-SAT Alpha Battery ......................................................................................................................... 23
CHAPTER 3.2 AIR TRAFFIC SELECTION AND TRAINING ALPHA PILOT TRIAL AFTER ACTION REPORT ............... 27
The AT-SAT Pilot Test Description and Administration Procedures ................................................... 27
General Observations .......................................................................................................................... 28
Summary of the Feedback on the AT-SAT Pilot Test Battery ............................................................... 35
CHAPTER 3.3 ANALYSIS AND REVISIONS OF THE AT-SAT PILOT TEST ............................................................. 37
Applied Math Test ............................................................................................................................... 37
Dials Test ............................................................................................................................................ 38
Angles Test .......................................................................................................................................... 38
Sound Test .......................................................................................................................................... 38
Memory Test ....................................................................................................................................... 39
Analogy Test ........................................................................................................................................ 39
Testing Time ....................................................................................................................................... 40
Classification Test ............................................................................................................................... 41
Letter Factory Test ............................................................................................................................... 42
Analysis of LFT Retest ........................................................... 43
Scan Test ............................................................................................................................................. 47
Planes Test ........................................................................................................................................... 48
Experiences Questionnaire .................................................................................................................. 49
Air Traffic Scenarios ............................................................................................................................ 52
Time Wall/Pattern Recognition Test .................................................................................................... 54
Conclusions ........................................................................................................................................ 55
REFERENCES .............................................................................................................................................. 55
Figure 2.2.
Example of Linkage Rating Scale. .............................................................................................. 61
Figure 3.3.1. Plot of PRACCY*PRSPEED. Symbol is value of TRIAL .......................................................... 62
Tables
Table 2.1. [...]
Table 2.2. [...]
Table 2.3. [...]
Table 2.4. [...]
Table 2.5. Worker Requirement Ratings for Doing the Job for the Three Options and All ATCSs ............ 71
Table 2.6. Worker Requirement Ratings for Learning the Job for the Three Options and All ATCSs ......... 73
Table 2.7. Survey Subactivities for All ATCSs Ranked by the Mean Criticality Index ................................ 75
Table 2.8. Worker Requirement Definitions Used in the Predictor-WR Linkage Survey ............................ 78
Table 2.9. [...]
Table 2.10. [...]
Table 2.11. [...]
Table 3.1.1. [...]
Table 3.1.2. [...]
Table 3.1.3. [...]
Table 3.1.4. [...]
Table 3.1.5. [...]
Table 3.1.6. [...]
Table 3.2.1. [...]
Table 3.2.2. [...]
Table 3.2.3. [...]
Table 3.3.1. Item Analyses and Scale Reliabilities: Non-Semantic Word Scale on the Analogy Test (N=439) .................................................................................................................... 97
Table 3.3.2. Item Analyses and Scale Reliabilities: Semantic Word Scale on the Analogy Test ....................... 98
Table 3.3.3. Item Analyses and Scale Reliabilities: Semantic Visual Scale on the Analogy Test ...................... 99
Table 3.3.4. Item Analyses and Scale Reliabilities: Non-Semantic Visual Scale on the Analogy Test ............ 100
Table 3.3.5. Distribution of Test Completion Times for the Analogy Test ................................................... 100
Table 3.3.6. Estimates of Test Length to Increase Reliability of the Analogy Test ........................................ 101
Table 3.3.7. Item Analyses and Scale Reliabilities: Non-Semantic Word Scale on the Classification Test ..... 101
Table 3.3.8. Item Analyses and Scale Reliabilities: Semantic Word Scale on the Classification Test ............. 102
Table 3.3.9. Item Analyses and Scale Reliabilities: Non-Semantic Visual Scale on the Classification Test ... 102
Table 3.3.10. Item Analyses and Scale Reliabilities: Semantic Visual Scale on the Classification Test ............ 103
Table 3.3.11. Distribution of Test Completion Times for the Classification Test (N=427) ............................. 103
Table 3.3.12. Estimates of Test Length to Increase Reliability of the Classification Test ................................ 104
Table 3.3.13. Planning/Thinking Ahead: Distribution of Total Number Correct on the Letter Factory Test . 104
Table 3.3.14. Distribution of Number of Inappropriate Attempts to Place a Box in the Loading Area on the
Letter Factory Test (Form A) (N = 441) ................................................................................... 104
Table 3.3.15. Recall from Interruption (RI) Score Analyses on the Letter Factory Test (Form A) .................. 105
Table 3.3.16. Planning/Thinking Ahead: Reliability Analysis on the Letter Factory Test (Form A) ............... 105
Table 3.3.17. Situational Awareness (SA) Reliability Analysis: Three Scales on the Letter Factory Test ....... 106
Table 3.3.18. Situational Awareness (SA) Reliability Analysis: One Scale on the Letter Factory Test
(Form A) .................................................................................................................................. 108
Table 3.3.19. Planning/Thinking Ahead: Distribution of Total Number Correct on the Letter Factory Test
(Form B) .................................................................................................................................. 108
Table 3.3.20. Distribution of Number of Inappropriate Attempts to Place a Box in the Loading Area on the
Letter Factory Test (Form B) (N = 217) ................................................................................... 109
Table 3.3.21. Tests of Performance Differences Between LFT and Retest LFT (N = 184) ............................. 109
Table 3.3.22. Distribution of Test Completion Times for the Letter Factory Test (N = 405) ......................... 109
Table 3.3.23. Proposed Sequence Length and Number of Situational Awareness Items for the Letter
Factory Test.............................................................................................................................. 110
Table 3.3.24. Distribution of Number Correct Scores on the Scan Test (N = 429) ....................................... 110
Table 3.3.25. Scanning: Reliability Analyses on the Scan Test ....................................................................... 111
Table 3.3.26. Distribution of Test Completion Times for the Scan Test (N = 429) ....................................... 112
Table 3.3.27. Reliability Analyses on the Three Parts of the Planes Test ........................................................ 112
Table 3.3.28. Distribution of Test Completion Times for the Planes Test ...................................................... 112
Table 3.3.29. Generalizability Analyses and Reliability Estimates .................................................................. 113
Table 3.3.30. Correlations of Alternative ATST Composites with End-of-Day Retest Measure .................... 115
Table 3.3.31. Time Distributions for Current Tests ....................................................................................... 116
Appendices
Appendix A AT-SAT Prepilot Item Analyses: AM (Applied Math) Test Items That Have Been Deleted ....... A1
Appendix B Descriptive Statistics, Internal Consistency Reliabilities, Intercorrelations, and Factor Analysis
Results for Experience Questionnaire Scales ............................................................................. B1
CHAPTER 1
AIR TRAFFIC SELECTION AND TRAINING (AT-SAT) PROJECT
INTRODUCTION
This document is a comprehensive report on a large-scale research project to develop and validate a computerized selection battery to hire Air Traffic Control Specialists (ATCSs) for the Federal Aviation Administration (FAA). The purpose of this report is to document the validity of the Air Traffic Selection and Training (AT-SAT) battery according to legal and professional guidelines. The Dictionary of Occupational Titles lists the Air Traffic Control Specialist Tower as number 193.162-018.
Background
The ATCS position is unique in several respects. On
the one hand, it is a critically important position at the
center of efforts to maintain air safety and efficiency of
aircraft movement. The main purpose of the ATCS job
is to maintain a proper level of separation between
airplanes. Separation errors may lead to situations that
could result in a terrible loss of life and property. Given
the consequences associated with poor job performance
of ATCSs, there is great concern on the part of the FAA
to hire and train individuals so that air traffic can be
managed safely and efficiently. On the other hand, the
combination of skills and abilities required for proficiency in the position is not generally prevalent in the
labor force. Because of these characteristics, ATCSs
have been the focus of a great deal of selection and
training research over the years.
Historical events have played a major role in explaining the present condition of staffing, selection and
training systems for ATCSs. In 1981, President Ronald
Reagan fired striking ATCSs. Approximately 11,000 of
17,000 ATCSs were lost during the strike. Individuals
hired from August 1981 to about the end of 1984
replaced most of the strikers. A moderate level of new
hires was added through the late 1980s. However,
relatively few ATCSs have been hired in recent years due
to the sufficiency of the controller workforce. Rehired
controllers and graduates of college and university aviation training programs have filled most open positions.
Organization of Report
A collaborative team, made up of several contractors
and FAA employees, completed the AT-SAT project
and this report. Team members included individuals
from the Air Traffic Division of the FAA Academy and
Civil Aeromedical Institute (CAMI) of the FAA, Caliber, Personnel Decisions Research Institutes (PDRI),
RGI, and the Human Resources Research Organization
(HumRRO). The Air Traffic Division represented the
FAA management team, in addition to contributing to
predictor and criterion development. CAMI contributed to the design and development of the job performance measures. Caliber was the prime contractor and
was responsible for operational data collection activities
and job analysis research. PDRI was responsible for
research and development efforts associated with the job
performance measures and development of the Experience Questionnaire (EQ). RGI was responsible for
developmental activities associated with the Letter Factories Test and several other predictors. HumRRO had
responsibility for project management, predictor development, data base development, validity data analysis,
and the final report.
The final report consists of six chapters, with each
chapter written in whole or part by the individuals
responsible for performing the work. The contents of each
chapter are summarized below:
CHAPTER 2
AIR TRAFFIC CONTROLLER JOB ANALYSIS
Ray A. Morath, Caliber Associates
Douglas Quartetti, HumRRO
Anthony Bayless, Claudet Archambault
Caliber Associates
Computer Technologies Associates (CTA)
CTA conducted a task analysis of the ARTCC,
TRACON, and Tower Cab assignments with the goal
not only of understanding how the jobs were currently
performed but also of anticipating how these jobs would
be performed in the future within the evolving Advanced Automation System (AAS).1 They sought to
identify the information processing tasks of ARTCC,
TRACON, and Tower Cab controllers in order to help
those designing the AAS to gain insight into controller
behavioral processes (Ammerman et al., 1983).
An extensive assortment of documents was examined
for terms suitable to the knowledge data base, including
FAA, military, and civilian courses. Listed below are the
sources of the documents examined for ATCS terms
descriptive of knowledge topics and technical concepts:
Civilian publications
Community college aviation program materials
Contractor equipment manuals
FAA Advisory Circulars
FAA air traffic control operations concepts
FAA documents
FAA orders
Local facility handbooks
Local facility orders
Local facility training guides and programs
NAS configuration management documents
National Air Traffic Training Program (manuals,
examinations, lesson plans, guides, reference materials,
workbooks, etc.)
Naval Air Technical Training Center air traffic controller training documents
U.S. Air Force regulations and manuals
Alexander, Alley, Ammerman, Fairhurst, Hostetler, Jones, & Rainey, 1989; Alexander, Alley, Ammerman, Hostetler, & Jones,
1988; Alexander, Ammerman, Fairhurst, Hostetler, & Jones, 1989; Alley, Ammerman, Fairhurst, Hostetler, & Jones, 1988;
Ammerman, Bergen, Davies, Hostetler, Inman, & Jones, 1987; Ammerman, Fairhurst, Hostetler, & Jones, 1989.
Coding
Decoding
Deductive reasoning
Filtering
Image/pattern recognition
Inductive reasoning
Long-term memory
Mathematical/probabilistic reasoning
Movement detection
Prioritizing
Short-term memory
Spatial scanning
Verbal filtering
Visualization
Perceptual Tasks
Discrete Motor Tasks
Continuous Psychomotor Tasks
Cognitive Tasks
Communication Tasks
Spatial visualization
Mathematical reasoning
Prioritization
Selective attention
Mental rotation
Multi-task performance (time sharing)
Abstract reasoning
Elapsed time estimation and awareness
Working memory - attention capacity
Working memory - activation capacity
Spatial orientation
Decision making versus inflexibility
Time sharing - logical sequencing
Vigilance
Visual spatial scanning
Time-distance extrapolation
Transformation
Perceptual speed
Embry-Riddle
Using a hierarchical arrangement of activities and
tasks borrowed from CTA, Embry-Riddle researchers
(Gibb et al., 1991) found that five activities and 119
tasks subsumed under those more global activities were
identified as critical to controller performance in the [...]
Deductive reasoning
Inductive reasoning
Long-term memory
Visualization
Speed of closure
Time sharing
Flexibility of closure (selective attention)
Category flexibility
Number facility
Information ordering
Those abilities rated by Terminal controllers as required to perform the Terminal option tasks were:
Selective attention
Time sharing
Problem sensitivity
All of Fleishman's physical abilities related to visual,
auditory, and speech qualities
- Oral expression
- Deductive reasoning
- Inductive reasoning
- Visualization
- Spatial orientation
- All perceptual speed abilities.
The Embry-Riddle researchers presented no discussion on why differences in abilities between ARTCC
and Terminal controllers were found.
Landon
Landon (1991) did not interview SMEs, observe
controllers, or canvass selected groups to collect job
analysis information. Rather, Landon reviewed existing
documents and job analysis reports and summarized
this information. Landon's focus was to identify and
classify the types of tasks performed by controllers.
Using CTA's hierarchical categorization of tasks, the
ATCS tasks were organized into three categories based
upon the type of action verb within each task:
Conclusions
Considering the results of the SACHA job analysis
survey and taking into account the goals of this selection-oriented job analysis, the project staff arrived at
several general conclusions.
There appeared to be no substantial differences in the
rankings of the important WRs between ARTCC,
TRACON, and Tower Cab controllers. However, the
differences in the rankings found between Flight Service
option controllers and the other options did appear to
be substantive enough that any future efforts to develop
selection instrumentation should take these differences
into account.
Considerable agreement was found between the
subactivity rankings for the ARTCC, TRACON, and
Tower Cab controllers, while the rank ordering of the
subactivities for the Flight Service option appears to be
different from all other options and job assignments.
Regardless of job option or assignment, multitasking
is an important component of the ATCS job.
scales (but did not receive the construct labels for these
scales). Respondents were to use the items comprising
each scale to determine the construct being measured by
that particular scale and then make their ratings as to the
degree to which the scale successfully measured each WR.
Definitions of WRs
The survey contained an attachment listing the WRs
and their accompanying definitions from SACHA's
revised consolidated WR list (except for the SME-generated WR of Rule Application). It was felt that, in
order for respondents to make the most informed linkage rating between a test and a WR, they should not only
have a clear understanding of the properties of the test,
but also possess a firm grasp of the WR. Survey respondents were instructed to read through the attachment of
WRs and their respective definitions before making any
linkage ratings and to refer back to these definitions
throughout the rating process (Table 2.8).
Dials
Sound
Letter Factory
Applied Math
Scanning
Angles
Analogies
Memory
Air Traffic Scenarios
Experience Questionnaire
Time Wall/Pattern Recognition
Planes
Survey Respondents
To qualify as raters, individuals had to be familiar
with the measures comprising the AT-SAT battery, and
they had to have an understanding of each of the WRs
being linked to the various measures. Potential respondents were contacted by phone or E-mail and informed
of the nature of the rating task. A pool of 25 potential
respondents was identified. The individuals in this pool
came primarily from the organizations contracted to
perform the AT-SAT validation effort but also included
FAA personnel directly involved with AT-SAT.
Survey Methodology
Those who had agreed to participate in the linkage
process received the packet of rating materials via regular mail. Each packet contained the following items:
(1) An introduction, which outlined the importance of
linking the AT-SAT predictor tests to the WRs identified in the SACHA job analysis. It included the names
and phone numbers of project staff who could be
contacted if respondents had questions concerning the
rating process.
(2) The 7-item background questionnaire.
(3) The attachment containing the list of WRs and
their definitions.
Scale Reliability
Reliability indices were computed for each rating
scale. Scale reliabilities ranged from .86 to .96. Hence,
the intraclass correlations (Shrout & Fleiss, 1979) for
each of the rating scales revealed a high level of agreement between the respondents as to which WRs were
being successfully measured by the respective tests.
These reliability coefficients are listed in Table 2.9. In
view of the high level of agreement, it appeared that such
factors as the raters' highest educational degree, educational
background, and familiarity with the ATCS job did not
influence the level of agreement among the raters.
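For context, an intraclass correlation of the Shrout and Fleiss (1979) type can be computed from a targets-by-raters matrix using ordinary ANOVA mean squares. The sketch below implements one common form, ICC(2,k) (two-way random effects, average measure), on invented toy ratings; it is an illustration of the statistic, not the survey's actual computation.

```python
import numpy as np

def icc_2k(x: np.ndarray) -> float:
    """ICC(2,k) of Shrout & Fleiss (1979). Rows are rated targets
    (here, worker requirements); columns are raters (survey respondents)."""
    n, k = x.shape
    grand = x.mean()
    msr = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # between targets
    msc = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # between raters
    sse = ((x - grand) ** 2).sum() - msr * (n - 1) - msc * (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (msc - mse) / n)

ratings = np.array([[4, 4, 5, 4],   # toy data: 5 WRs rated by 4 respondents
                    [2, 1, 2, 2],
                    [3, 3, 4, 3],
                    [5, 5, 5, 4],
                    [1, 1, 2, 1]], dtype=float)
print(round(icc_2k(ratings), 2))  # about 0.97: high agreement, in the spirit of the .86-.96 range
```

High values arise when raters order the targets the same way, which is exactly the sense of agreement the text describes.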
Angles
The Angles test measures the participant's ability to
recognize angles. This test contains 30 multiple-choice
questions and allows participants up to 8 minutes to
complete them. The score is based on the number of
correct answers (with no penalty for wrong or unanswered questions). There are two types of questions on
the test. The first presents a picture of an angle, and the
participant chooses the correct measure of the angle (in
degrees) from among four response options. The second
presents a measure in degrees, and the participant chooses
the angle (among four response options) that represents
that measure. For each worker requirement listed below, enter the rating best describing the extent to which
this test and/or its subtests measure that particular
worker requirement.
The linkage survey results indicated that not all important WRs were successfully measured by the AT-SAT battery. Four WRs (Oral Communication, Problem
Solving, Long-Term Memory, and Visualization) from
the top third of SACHAs rank-ordered list did not have
linkage means high enough to suggest that they were
being measured to at least a moderate extent. None of
the AT-SAT tests were specifically designed to measure
oral communication and, as a result, linkage means
between this WR and the tests were found to be at or
near zero. Problem Solving had mean linkage ratings
that approached our criterion for inclusion for the
Applied Math and the Letter Factory tests. Similarly, the
mean linkage ratings between the Memory test and
Long-Term Memory, and between the Letter Factory
test and Visualization also approached but failed to
meet the mean criterion score of 3.
Quality of Individual Tests in the AT-SAT Battery
Results of the linkage survey were also summarized to
enable project staff to gain insight into how well individual tests were measuring the most important WRs.
Based upon the criterion of mean linkage score > 3 for
demonstrating that a test successfully measures a particular WR, project staff determined the number of
WRs successfully measured by each test. This score
provided some indication of the utility of each test.
Project staff also computed two additional scores to
indicate the utility of each measure. Some WRs were
rated as being successfully measured by many tests, and
other WRs were measured by only one or two tests. Two
other indicators of the utility of a measure were developed: (a) the number of WRs a test measured that are
only measured by one (or fewer) other test(s), and (b) the
number of WRs that are not measured by any other test.
Scores based upon these criteria were computed for each
measure and are listed in Table 2.11.
In addition to the indicators of each test's utility, it
was felt that indicators of each test's utility and quality
in measuring WRs could also be computed. To provide
some indication of each test's quality, project staff again
utilized SACHA findings: the ARTCC controller
ratings of the importance of each WR for doing the job.
Each WR's mean importance rating (from SACHA)
was multiplied by those WR/test linkage ratings meeting criteria. The product of these two scores (mean WR
importance for doing the job x mean linkage rating of
WR for a test) factored in not only how well the test was
capturing the WR but the importance of that WR as
well. The mean and sum of these products were computed for each test.
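Under stated assumptions (a small invented linkage matrix and importance vector standing in for the values in Tables 2.9 through 2.11), the utility and quality indicators described in this section reduce to a few array operations:

```python
import numpy as np

linkage = np.array([[3.4, 1.0, 3.1],    # illustrative mean linkage ratings;
                    [3.6, 3.2, 0.5],    # rows = tests, columns = WRs
                    [0.8, 3.3, 0.2]])
importance = np.array([4.2, 3.8, 3.5])  # mean WR importance for doing the job
CRITERION = 3.0                         # mean linkage score of 3 or more = "measured"

measured = linkage >= CRITERION
wrs_per_test = measured.sum(axis=1)             # utility: WRs each test measures
others = measured.sum(axis=0) - measured        # other tests measuring the same WR
rare = (measured & (others <= 1)).sum(axis=1)   # WRs measured by one or fewer other tests
unique = (measured & (others == 0)).sum(axis=1) # WRs measured by no other test
quality = np.where(measured, linkage * importance, 0.0)  # importance x linkage products
sums = quality.sum(axis=1)
means = sums / np.maximum(measured.sum(axis=1), 1)       # mean over qualifying products

print(wrs_per_test, rare, unique, means.round(2), sums.round(2))
```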
CONCLUSION
Based upon the results of the linkage survey, every
test within the AT-SAT battery appeared to be successfully measuring at least one WR, and many of the tests
were rated as measuring multiple WRs. While not every
WR was thought to be successfully measured by the AT-SAT battery, the vast majority of the WRs considered
most important for doing the job were successfully
measured by one or more predictors from the battery.
CHAPTER 3.1
PREDICTOR DEVELOPMENT BACKGROUND
Douglas Quartetti, HumRRO
William Kieckhaefer, RGI, Inc.
Janis Houston, PDRI, Inc
The final test in the OPM Battery, the OKT, contained questions on air traffic phraseology and procedures. It was designed to provide credit for prior ATCS
experience. It has been reported that OKT scores correlated with many of the indices of training success (Boone,
1979; Buckley, O'Connor, & Beebe, 1970; Manning et
al., 1989; Mies, Coleman, & Domenech, 1977).
The scores on the MCAT and the ABSR were combined, with weights of .80 and .20 applied, respectively. These scores were then transmuted to have a mean of 70 and a maximum of 100. The passing score varied with education and prior experience. Applicants who received passing scores on the first two predictors could receive up to 15 additional points from the OKT.
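The report does not spell out the exact "transmutation"; one plausible reading, a linear rescale sending the sample mean to 70 and the sample maximum to 100, is sketched below with hypothetical scores (an assumption, not the documented OPM procedure).

    def opm_composite(mcat, absr):
        # Weighted composite: MCAT .80, ABSR .20, as described above.
        return 0.80 * mcat + 0.20 * absr

    raw = [opm_composite(m, a) for m, a in [(62, 70), (75, 80), (90, 85)]]  # hypothetical applicants
    mean, top = sum(raw) / len(raw), max(raw)

    def transmute(score):
        # Assumption: linear rescale mapping the sample mean to 70 and the
        # sample maximum to 100 (the report does not give the exact formula).
        return 70 + (score - mean) * (100 - 70) / (top - mean)

    okt_bonus = [0, 10, 15]  # the OKT could add up to 15 points (values hypothetical)
    final = [transmute(s) + b for s, b in zip(raw, okt_bonus)]
    print([round(s, 1) for s in final])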
The second stage in the hiring process was the Academy Screen. Applicants who passed the OPM Battery
were sent to the FAA Academy for a 9-week screen,
which involved both selection and training (Manning,
1991a). Students spent the first 5 weeks learning aviation
and air traffic control concepts and the final 4 weeks
being tested on their ability to apply ATC principles in
non-radar simulation problems. Applicants could still be
denied positions after the 9 weeks on the basis of their
scores during this phase. The reported failure rate was 40
percent (Cooper et al., 1994).
This hiring process received much criticism, despite
its reported effectiveness and links to job performance.
The criticisms revolved around the time (9 weeks for the
Academy screen) and cost of such a screening device
($10,000 per applicant). In addition to the FAA investment, applicants made a substantial investment, and the
possibility remained that after the 9 weeks an applicant
could be denied a position. Finally, there was concern
that the combination of screening and training reduced
training effectiveness and made it impossible to tailor
training needs to individual students.
As a result of these criticisms, the FAA separated
selection and training, with the idea that the training
atmosphere of the Academy Screen would be more
supportive and oriented toward development of ATCSs
[Partial list of tests reviewed by the panel:]
Stix
Time
Syllogism
Analogy
Classification
Personal Experiences and Attitude Questionnaire (PEAQ)
Letter Factory
Air Traffic Scenario (from PTS)
Time Wall/Pattern Recognition (from PTS)
Static Vector/Continuous Memory (from PTS)
The project staff met with the panel members on 5-7 November 1996 to discuss the predictor battery. For each test, independent ratings on each evaluation criterion were collected, and the relative merits and problems of including that test in the predictor battery were discussed. The comments were summarized and recorded.
After the group discussion, panel members were asked to provide independent evaluations on whether or not each test should be included in the predictor battery. For each test, panel members indicated "Yes" for inclusion, "No" for exclusion, and "Maybe" for possible inclusion. The Yes-No-Maybe ratings were tallied and summarized.
Excluded Tests
Three of the 20 tests reviewed were deleted from
further consideration: Stix, Map (except as it might be
revised to cover memory), and Syllogism. These tests
were deleted because of problems with test construction, and/or questionable relevance for important job
requirements, and/or redundancy with the included
measures.
Additional Recommendations
Several additional recommendations were made concerning the predictor battery and its documentation.
The first was that all tests, once revised, be carefully
reviewed to ensure that the battery adheres to good test
construction principles such as consistency of directions and keyboard use, reading/vocabulary level, and
balancing keyed response options.
A second recommendation was that linkages be provided for worker requirements that do not currently
have documented linkages with ATCS job duties. The
current documentation (from the Job Analysis report)
was incomplete in this regard.
A third recommendation was to pilot test the predictor set in February 1997. It was thought that this would yield the kind of data needed to perform a final revision of all predictors, select the best test items, shorten tests, reduce redundancy across tests, ensure clarity of instructions, and so on.
Analogy Test
The Analogy test measures the participant's ability to
apply the correct rules to solve a given problem. An
analogy item provides a pair of either words or figures
that are related to one another in a particular way. In the
analogy test, a participant has to choose the item that
completes a second pair in such a way that the relationship of the items (words or figures) in the second pair is
the same as that of the first.
The test has 57 items: 30 word analogies and 27
visual analogies. Each item has five answer options. The
scoring is based primarily on the number of correct
answers and secondarily on the speed with which the
participant arrived at each answer. Visual analogies can
contain either pictures or figures. The instructions
inform the participant that the relationships for these
two types of visual analogies are different. Picture analogies are based on the relationships formed by the meaning of the object pair (e.g., relationships of behavior,
function, or features). Figure analogies are based on the
relationships formed by the structure of the object pair
(e.g., similar parts or rotation).
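The report states that correctness is primary and speed secondary in scoring but does not give the combination rule; the sketch below shows one lexicographic scheme consistent with that description (an assumption, not the battery's documented algorithm).

    def analogy_score(responses):
        # responses: list of (is_correct, latency_seconds) tuples (hypothetical).
        n_correct = sum(1 for ok, _ in responses if ok)
        mean_latency = sum(t for _, t in responses) / len(responses)
        # Sort key: more correct answers rank first; faster answers break ties.
        return (n_correct, -mean_latency)

    print(analogy_score([(True, 4.2), (True, 6.0), (False, 9.5)]))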
Dials Test
The Dials test is designed to test the participant's
ability to quickly identify and accurately read certain
dials on an instrument panel. The test consists of 20
items completed over a total time of 9 minutes. Individual items are self-paced against the display of time
left in the test as a whole. Participants are advised to skip
difficult items and come back to them at the end of the
test. The score is based on the number of items answered
correctly. The test screen consists of seven dials in two
rows, a layout which remains constant throughout the
test. Each of the seven dials contains unique flight
information. The top row contains the following dials:
Voltmeter, RPM, Fuel-air Ratio, and Altitude. The
bottom row contains the Amperes, Temperature, and
Airspeed dials.
Each test item asks a question about one dial. To
complete each item, the participant is instructed to (1)
find the specified scale on the instrument panel; (2)
determine the point on the scale represented by the
needle; (3) find the corresponding value among the five
answer options; (4) use the numeric keypad to press the
number corresponding to the option.
Angles Test
The Angles test measures the participant's ability to
recognize angles. This test contains 30 multiple-choice
questions and allows participants up to 8 minutes to
complete them. The score is based on the number of
correct answers (with no penalty for wrong or unanswered questions). There are two types of questions. The first presents a picture of an angle, and the participant chooses the correct measure of the angle (in degrees) from among four response options. The second presents a measure in degrees, and the participant chooses the angle (among four response options) that represents that measure.
Experiences Questionnaire
The Experiences Questionnaire assesses whether participants possess certain work-related attributes by asking questions about past experiences. There are 201 items to be completed in a 40-minute time frame. Items cover attitudes toward work relationships, rules, decision-making, initiative, ability to focus, flexibility, self-awareness, work cycles, work habits, reaction to pressure, attention to detail, and other related topics. Each question is written as a statement about the participant's past experience, and the participant is asked to indicate his or her level of agreement with each statement on the following 5-point scale: 1 = Definitely true, 2 = Somewhat true, 3 = Neither true nor false, 4 = Somewhat false, 5 = Definitely false.
Planes Test
The Planes test contains three parts, each with 48
items to be completed in 6 minutes. Each individual
item must be answered within 12 seconds. Part 1:
Participants perform a single task. Two planes move
across a screen; one plane is red, the other is white. Each
plane moves toward a destination (a vertical line) at a
different speed. The planes disappear before they reach
their destinations, and the participant must determine
which plane would have reached its destination first. To answer each item, the participant presses the "red" key if the red plane would have reached the destination first, and the "white" key if the white plane would have arrived first. Participants can answer while the planes are still moving, or shortly after they disappear. Part 2: Part 2 is similar to Part 1, but participants must now perform two tasks at the same time. In this part of the test, participants determine which of two planes will arrive at the destination first. Below the planes, a sentence will appear stating which plane will arrive first. The participant must compare the sentence to his or her perception of the planes' arrival, and press the "true" key to indicate agreement with the statement, or the "false" key to indicate disagreement. Part 3: Participants perform the
CONCLUSION
The initial AT-SAT test battery (Alpha) was professionally developed after a careful consideration of multiple factors. These included an examination of the
SACHA job analysis and prior job analyses that produced lists of worker requirements, prior validation
research on the ATCS job, and the professional judgment of a knowledgeable and experienced team of
testing experts.
CHAPTER 3.2
AIR TRAFFIC - SELECTION AND TRAINING (AT-SAT) PILOT TEST
The purpose of this report is to document the observations of the Air Traffic - Selection and Training (AT-SAT) predictor battery (alpha version) pilot trial. The AT-SAT predictor battery is a series
of tests in five blocks (A through E) of 90 minutes each
and four different ending blocks of 20 minutes each.
The pilot test was administered February 19 through
March 2, 1997, in the Air Traffic Control School at the
Pensacola Naval Air Station in Pensacola, Florida. Participants consisted of 566 students stationed at the
Naval Air Technical Training Center (NATTC). Of the
566 students, 215 of the participants were currently
enrolled in the Air Traffic Control School and 346 were
students waiting for their classes at NATTC to begin.
(The status of five participants was unknown.)
The four different ending blocks contained: the LFT; the ATS; the SVCM and Word Memory tests; and the Word Memory and TWPR tests.
The following section describes the test administration procedures, including the sequence of the testing blocks for groups of participants.
GENERAL OBSERVATIONS
This section presents some general observations about
the entire AT-SAT Battery Pilot Test. The remarks in
this section address the instructions, the test ending, the
purpose of the tests, and the introductory block.
Instructions
Instructions for several of the tests in the battery need improved clarity. Participants often did not understand the test instructions as written but proceeded with the tests, anticipating that the objective would become clearer as the tests proceeded. Too often, however, participants still did not understand the objective even after attempting a few examples. (After participants completed the examples, they would often raise their hands and ask for further instructions.) As a result, even the practice sessions did not clarify the confusing instructions. The test instructions that need revision, and feedback for specific test blocks, are discussed in the following sections.
Introductory Block
The addition of an Introductory Block (IB) is recommended. The IB could include an explanation of the general purpose of the testing, a modified version of the Keyboard Familiarization section, and the current Background Information questions.
The explanation of the general purpose of the test
might also include a brief description of the evolution of
the test (how the FAA came to design this specific testing
procedure). This section could describe the types of tests
and the general purpose of the tests (i.e., ability to multitask, ability to follow instructions, skill with plane
routing procedures, etc.). Finally, general grading/scoring procedures could be explained with more specific
explanations within each of the tests.
The Keyboard Familiarization (KF) currently includes instruction and practice for the number keys and the "A," "B," and "C" keys (after the Test Administrator exchanges the slash, star, and minus keys with the "A," "B," and "C" keys) on the numeric pad on the right side of the keyboard. Instructions should be modified to include the names of the tests requiring the use of these keys.
New Planes
New planes that appear in the participant's airspace are white (while all other planes are green). The white planes remain circling at the point where they entered the airspace until they receive acknowledgment from the controller (by clicking on the graphic with the mouse pointer). Often during testing, participants did not understand the purpose of the white planes in their airspace. They would leave the white planes circling and never manipulate their heading, speed, or level. White planes need to be more clearly defined as new planes in the controller's airspace that require acknowledgment by the controller.
Sound Test
For this test, the participant uses headphones to listen
to a sequence of numbers. Then the participant must
repeat the sequence of the numbers heard using the
right-hand numeric keypad to record the sequence of
numbers.
Failures
On the first day of testing, it was found that computers would lock or fail if the Sound Test was run after any other block. In other words, unless Block B was first in
the sequence of testing, computers would fail (at the
moment participants are prompted for sound level
adjustment) and need to be rebooted. This proved
disruptive to other participants and delayed the start of
the test (since Test Administrators can only aid one or
two participants at a time). To prevent failures during
the testing, Test Administrators would reboot every
computer before the start of Block B. Still, the software
would sometimes fail at the moment the participant is
requested to adjust the volume to their headphones via
the keyboard (versus the sound level adjustment located
directly on the headphones). On occasion, the Sound
test would still fail, but after again rebooting the computer, the program recovered.
In some instances, even after several attempts to restore the program, the computer still did not allow the participant to continue with the test. In these cases where a computer failed repeatedly, participants
Test Instructions
The test instructions are clear and well-written. Few
participants had questions in reference to the tasks they
were to perform once the test began.
Demonstration
Participants were often confused during the demonstration because the pointer would move when they
moved the mouse, but they could not click and
manipulate the screen. Participants would ask Test
Administrators if they had already begun the test since
they could move the pointer. Perhaps the mouse can be
completely disabled during the demonstration to eliminate confusion. Disabling the mouse would allow participants to concentrate on the instructions since they
would not be distracted by movement of the mouse.
Mouse Practice Instructions
Instructions for the mouse practice session are not
clear. The objective of the mouse practice is for the
participant to click on the red box in the middle of the
screen and then click on the conveyer belt that illuminates. Participants are often unsure of the objective.
Perhaps text box instructions can be displayed on the
screen that direct the participant to click on the red box.
As the participant clicks on the red box, another instruction screen would appear, telling the participant to click
on the illuminated conveyer belt. After a few sequences
with text box instruction, the instructions could be
dropped.
Some participants had difficulty completing the mouse practice session. They continuously received messages instructing them to "...move the mouse faster and in a straight line." Perhaps there should be a limit to the number of mouse practice exercises. It is possible that some participants are not capable of moving the mouse quickly enough to get through this section.
Experiences Questionnaire
The Experiences Questionnaire determines whether
the participant possesses work-related attributes needed
to be an air traffic controller. Participants generally did
not ask any questions about the Experiences Questionnaire. The occasional inquiry was in reference to the
purpose of certain questions. Test Administrators did
not receive questions about the wording of the items.
FEEDBACK ON BLOCK D
This section details the observations and suggestions
for the three tests in Block D: the Time Wall/Pattern
Recognition Test; the Analogy Test; and the Classification Test. Specific comments about each test are provided below.
Instructions
Participants do not understand the instructions for
the Memory part of the test. Numerous participants
asked for clarity on what numbers they were to compare.
Analogy Test
The Analogy Test measures the participant's reasoning ability in applying the correct rules to solve a given
problem. The participant is asked to determine the
relationship of the words or pictures in set A and use this
relationship to complete an analogy in set B. The
following paragraph provides observations and suggestions for this test.
Level of Difficulty
The vocabulary level and the types of relationships
depicted in the Analogy Test may have been too difficult
for the pilot test participants. Perhaps the questions can
be revised to require a lower level of vocabulary and
reasoning skills for the participants.
Classification Test
This test also measures the participant's reasoning ability
in applying the correct rules to solve a given problem.
The Classification Test is similar to the Analogy Test,
except that the participant is required to determine the
relationship of three words or pictures and use this
relationship to complete the series with a fourth word or
picture. The following paragraph provides observations
and suggestions for the improvement of this test.
Level of Difficulty
Similar to the issues discussed with the Analogy Test,
many of the items in the Classification Test appeared to
be difficult for the pilot test population. The Classification Test could be revised to allow a lower level of
vocabulary and reasoning skills.
Keyboard
As with the Static Vector/Continuous Memory Test,
many participants attempted to use the numerical keys
on the right-hand side of the keyboard to answer the
items rather than using the keys on the top of the
keyboard as instructed. When participants use the right-hand keypad, their answers are not recorded. The keys
to be used for this test need to be stated more explicitly.
Participants may be using the keypad because of the
instruction they receive in the Keyboard Familiarization
(KF) section at the beginning of the first block of testing.
The current version of the KF only provides instruction
for using the keypad. The KF does not instruct participants on the use of the numerical keys on the top of the
keyboard.
Level of Difficulty
The majority of participants appeared to understand
how to respond to this test. The practice session for this
test seemed to work well in preparing participants for
the actual test questions.
Computer Keyboards
Since the directions instructed the participants to
respond as quickly as possible, in their haste, many
participants were pressing the numeric keys very hard.
The banging on the keyboard was much louder with this test than with any of the other tests; this may affect the longevity of the numeric keys when the test is repeated numerous times.
Planes Test
The Planes Test measures the participant's ability to perform different tasks at the same time. The Planes Test consists of three parts. In Part one, the participant uses the "1" and "3" keys to indicate whether the red plane (1) or the white plane (3), which are at varying distances from their destinations, will reach its destination first. In Part two, the participant uses the "1" and "3" keys to indicate whether a statement about the red and white planes as they are in motion is true (3) or false (1). In Part three, the participant uses the "1" and "3" keys to indicate whether a statement about the arrival of the red and white planes at their destinations is true (3) or false (1), but unlike in Part two, the planes are at varying distances from their destinations. The following paragraphs provide observations and suggestions for the improvement of this test.
Scan Test
The Scan Test measures a participant's ability to promptly notice relevant information that is continuously moving on the computer screen. Participants are
provided with a number range and asked to type the
identifier for numbers that appear on the screen outside
of that range. A revised version of this test was installed
midway through the pilot test, which changed the
process for recording data but did not change the
appearance or the performance of the test for the participants. The following paragraphs provide observations
and suggestions for the improvement of this test.
Instructions
While the instructions for the test seemed clear, participants had some common misunderstandings. First, participants typed the actual numbers that were outside of the number range instead of the identifier numbers. This confusion might be alleviated by revising the text that appears on the bottom of the screen during the test. It currently states, "Type the identifier numbers contained in the data blocks with the lower line numbers falling beyond the range." It could be revised to state, "Type the identifier numbers contained in the data blocks (following the letter) with the lower line numbers falling beyond the range." Second, participants did not know to push "Enter" after typing the identification numbers. This confusion might be alleviated by highlighting the text
Practice Sessions
The practice sessions preceding the first two parts of the Planes Test are somewhat lengthy. There are 24 practice items that the participant must complete before
the actual test of 96 items. If the number of practice
items were reduced by one half, the participants would
still have enough practice without becoming bored
before the actual test begins.
Level of Difficulty
Participants appeared to be challenged by the Planes
Test. One factor that added to the level of difficulty for
the participants was that the response keys for Parts two
and three of this test are 1 = false and 3 = true. It was more intuitive for many participants that 1 = true and 3 = false; thus, they had a difficult time remembering
which keys to use for true and false. This might have
caused participants more difficulty than actually determining the correct answer to the statements. If the
labeling of the true and false response keys cannot be
modified in future software versions, a message box can
be created to remain on the screen at all times that
indicates 1 = false and 3 = true.
Test Results
Once the participant provides a response to an item
on the Planes Test, a results screen appears indicating
whether the response was right or wrong. This is inconsistent with many of the other tests in the AT-SAT Battery, which do not indicate how a participant performed on individual test items, and it further lengthens an already lengthy test.
Angles Test
This test measures a participant's ability to recognize angles and perform calculations on those angles. The
following paragraph provides observations and suggestions for this test.
Level of Difficulty
Participants appeared to be challenged by this test, although it seemed that they could either determine the measure of an angle very quickly or needed considerable time to settle on a response.
Applied Mathematics Test
This test measures the participant's ability to apply mathematics to solve problems involving the traveling speed,
time, and distance of aircraft. The following paragraphs
provide observations and suggestions for the improvement of this test.
Instructions
A sentence should be included in the instructions stating that no pencils, paper, or calculators may be used during this test. Many pilot test participants assumed that these instruments were allowed for this portion of the test.
Level of Difficulty
Many participants appeared to have difficulty determining the best answer to these mathematical questions. Several participants spent so much time trying to
CHAPTER 3.3
ANALYSIS AND REVISIONS OF THE AT-SAT PILOT TEST
Douglas Quartetti and Gordon Waugh, HumRRO
Jamen G. Graves, Norman M. Abrahams, and William Kieckhaefer, RGI, Inc.
Janis Houston, PDRI, Inc.
Lauress Wise, HumRRO
This chapter outlines the rationale used in revising
the tests and is based on the pilot test data gathered prior
to the validation study. A full description of the samples
used in the pilot study can be found in Chapter 3.2. It
is important to note that some of the tests were developed specifically for use in the AT-SAT validation
study, and therefore it was imperative that they be pilot-tested for length, difficulty, and clarity. There were two
levels of analysis performed on the pilot test data. First,
logic and rationale were developed for the elimination
of data from further consideration in the analyses. After the elimination process, an item analysis of each test was used to determine the revisions to tests and items that were needed.
Exclusionary decision rules were based on available
information, which varied from test to test. For example, in some instances, item latency (time) information was available as the appropriate method for
exclusion; in other cases, the timing of the tests was computer-driven and other criteria for exclusion were developed. An item was considered a candidate for deletion if it exhibited any of the following characteristics:
Item Analysis
The item analysis did not reveal any problem items
and there appeared to be a good distribution of item
difficulties. No text changes were indicated. After reviewing the item analysis and the items in the test, none
of the items were deleted.
Summary and Recommendations
This test appears to function as it was intended.
There were no item deletions and no textual changes.
Item Analysis
After review of the item analysis and of specific
items, 13 items were deleted from the original test.
All had low discrimination and/or another response
option that was chosen more frequently than the
correct response. In many instances, the graphics
made it difficult to discriminate between correct and
incorrect dial readings. The revised test consists of 44
items. The item analysis printout for the deleted
items can be found in Appendix A.
Sound Test
Case Elimination
On the Sound test, 437 participants completed 17 or
18 items. Of the remaining five participants, one completed only two items (got none correct) and was deleted
from the sample. The other four participants made it to
the fourth set of numbers (8 digits). All the scores of this
group of four were within one standard deviation (15%)
Memory Test
Case Elimination
A scatter plot of Memory test items answered by
percent correct revealed a sharp decline in the percent
correct when participants answered fewer than 14 items.
It was decided that participants who answered fewer
than 14 items were not making an honest effort on this
test. Additionally, it was felt that participants who
scored less than 5% correct (about 1 of 24 correct)
probably did not put forth their best effort, and therefore, they were removed from the item analyses. These
two criteria eliminated 14 participants, leaving a sample
of 435 for the item analyses.
Item Analysis
After review of the item analysis, none of the items were removed. Item 1 had low discrimination, low percent correct, and a high number of omits. However, there were no such problems with the remaining items, and given that these are nonsense syllables, one explanation attributes the poor results to first-item nervousness and acclimation. All items were retained for the beta version, and no editorial changes were made.
Summary and Recommendations
This test performed as expected and had a normal
distribution of scores. One item had problem characteristics, but a likely explanation may be that it was the first
item on the test. The recommendation was to leave all
24 items as they were but to re-examine the suspect item
after beta testing. If the beta test revealed a similar
pattern, then the item should be examined more closely.
Item Analysis
After review of the item analysis, none of the items
were removed. However, the biserial correlations of the
items from digit length 5 and digit length 10 were
appreciably lower than the rest of the items. The reliability of this test with the original scoring procedure was
.70, while the alternative scoring procedure improved
reliability to .77. Using the alternative scoring procedure, in a comparison of the original version and a
revised version with digit length 5 and digit length 10
removed, the revised version had a slightly higher reliability (.78).
Analogy Test
Case Elimination
For the Analogy test, cases were eliminated based on
three criteria: missing data, pattern responding, and
apparent lack of participant motivation.
Missing Data. The test software did not permit
participants to skip items in this test, but several (12.8%)
did not complete the test in the allotted time, resulting
in missing data for these cases. Those missing 20% or
more of the data (i.e., cases missing data for 11 items or
more) were omitted. Five cases were eliminated from the
sample.
Pattern Responders. The chance level of responding for this test was 20%. An examination of those participants near chance performance revealed one case where the responses appeared to be patterned or inappropriate.
Testing Time
Based on the sample of 439 participants, 95% of the
participants completed the test and instructions in 33
minutes (Table 3.3.5). Table 3.3.6 shows time estimates
for two different levels of reliability.
Test Revisions
A content analysis of the test revealed four possible
combinations of semantic/non-semantic and word/visual item types. The types of relationships between the
word items could be (a) semantic (word-semantic), (b)
based on a combination of specific letters (word - nonsemantic), (c) phonetic (word - non-semantic), and (d)
based on the number of syllables (word - non-semantic).
The types of relationships between visual items could be based on (a) object behavior (visual-semantic), (b) object function (visual-semantic), (c) object feature (visual-semantic), (d) adding/deleting parts of the figures (visual - non-semantic), (e) moving parts of the figures (visual - non-semantic), and (f) rotating the figures (visual - non-semantic).
After categorizing the items based on item type, an
examination of the item difficulty level, item-total
correlations, the zero-order intercorrelations between
all items, and the actual item content revealed only one
perceptible pattern. Six non-semantic word items were
removed due to low item-total correlations, five being
syllable items (i.e., the correct solution to the analogy
was based on number of syllables).
Seven more items were removed from the alpha
Analogy test version due to either very high or low
difficulty level, or to having poor distractor items.
Word Items. The time allocated to the Analogy test
items remained approximately the same (35 minutes
and 10 minutes for reading instructions) from the alpha
version to the beta version. The number of word items
did not increase; however, nine items were replaced with items that had characteristics similar to those of other well-performing word items. There were equal numbers of semantic and non-semantic items (15 items each).
Since the analogy items based on the number of
syllables performed poorly, this type of item was not
used when replacing the non-semantic word items.
Instead, the five non-semantic word items were replaced
with combinations of specific letters and phonetic items.
Additionally, three semantic items were replaced with
three new semantic items of more reasonable (expected)
difficulty levels.
Pattern Responding. From examination of the pattern of responses of participants who scored at or near
chance levels (20%), eight participants were identified
as responding randomly and were eliminated.
Unmotivated Participants. It was decided that participants spending less than 3 seconds per item were not making a serious effort. Four participants fell into this category.
An examination of their total scores revealed that they
scored at or near chance levels, and thus they were
eliminated from further analyses.
In summary, 22 participants were eliminated from
further analyses, reducing the sample size for this test
from 449 to 427.
Scale Reliabilities and Item Analyses
Reliability analyses were conducted to identify the items
within each of the four test scales that did not contribute to
the internal consistency of that scale. The corrected itemtotal correlation was computed for each item within a
scale, as well as the overall alpha for that scale.
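These are standard psychometric statistics; the sketch below (not the project's code) computes corrected item-total correlations and coefficient alpha for a small, hypothetical response matrix.

    import statistics

    def pearson(x, y):
        # Pearson correlation between two equal-length sequences.
        mx, my = statistics.mean(x), statistics.mean(y)
        num = sum((a - mx) * (b - my) for a, b in zip(x, y))
        den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
        return num / den

    def item_stats(items):
        # items: list of participant rows, one score per item (hypothetical data).
        n_items = len(items[0])
        totals = [sum(row) for row in items]
        # Corrected item-total: correlate each item with the total minus that item.
        corrected = [pearson([row[j] for row in items],
                             [t - row[j] for row, t in zip(items, totals)])
                     for j in range(n_items)]
        item_vars = [statistics.pvariance([row[j] for row in items])
                     for j in range(n_items)]
        alpha = (n_items / (n_items - 1)) * (1 - sum(item_vars) / statistics.pvariance(totals))
        return corrected, alpha

    data = [[1, 1, 1, 1], [1, 1, 1, 0], [1, 0, 1, 1],
            [0, 0, 0, 0], [0, 1, 0, 0], [1, 1, 1, 1]]
    print(item_stats(data))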
An examination of the item-total correlations revealed that the Non-Semantic Word scale items had an
average correlation of .179, and therefore the entire scale
was omitted from further analyses. This reduced the
number of items within the three remaining test scales
as follows (the original number of items appears in
parentheses): Semantic Word 9 (11), Non-Semantic
Visual 10 (13), and Semantic Visual 3 (10). Note that
the greatest number of items were removed from the
semantic visual scale. Tables 3.3.7 to 3.3.10 present the
corrected item-total correlations for the items within
each scale. After omitting items based on the above
criteria, the number of items in this test was reduced
from 46 to 22.
Construct Validity
In assessing the construct validity of the information
processing measures independent of the number correct
scores, a multitrait-multimethod matrix was constructed.
Two traits (i.e., information processing and reasoning)
and four methods (i.e., Word Semantic, Word NonSemantic, Visual Semantic, and Visual Non-Semantic)
were examined. The results of this analysis provided the
following median correlations:
Item Elimination
Again, since Form B was designed to serve as a retest,
the findings from analyses performed on LFT Form A
were used to determine which test items to eliminate
from Form B. We removed 8 SA items so 12 SA items
remain in Form B. Similarly, a P/T score was computed
for Form B by subtracting the number of unnecessary
mouse clicks from the number-correct score across all P/
T items.
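The P/T scoring rule reduces to a one-line computation; the sketch below uses hypothetical field names and values.

    def pt_score(n_correct, unnecessary_clicks):
        # Planning/Thinking-Ahead score, as described above; names hypothetical.
        return n_correct - unnecessary_clicks

    print(pt_score(n_correct=31, unnecessary_clicks=4))  # 27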
Performance Differences
Form B was used to assess whether participants had
reached an asymptote of performance during Form A.
Different sequences could not be used in Form A for this
test because the item types are very heterogeneous, and
little information is available on item or sequence
difficulties. By matching test sequences, we can control
for manageable aspects of the test that impact test
performance. Table 3.3.21 presents the results of dependent t-tests comparing two performance measures. Those
results show no support for a change in participants' performance on Situational Awareness. However, the
roughly 8% performance increment on Planning and
Thinking Ahead was a significant increase in performance. This suggests that participants would benefit
from more practice before beginning the test.
Test Revisions
Test Sequences. To reduce the number of inappropriate responses by participants who double-click or
continuously click on the box stack, the associated error
signal (one red arrow) was changed to an error message.
The error message appears in red above the box stack
when participants try to move a box to the loading area
when one is not needed. The new error message reads,
You did not need to move a box.
(Example item: "What color was the LAST box you should have placed in the loading area in order to correctly place all the letters into boxes?")
Scan Test
Data Collection/Software Problems
As the data collection proceeded on the Scan test, it became clear that the software was not writing data for change items, nor was it recording item latencies. A correction to the software was implemented on February 26. Of the 429 cases on which data were collected, 151 cases had complete data.
Case Elimination
Because all participants proceed at the same rate during practice and test sequences, test completion time could not be used to assess participants' test-taking motivation. Likewise, because the test software automatically writes out data for each item indicating whether the participant correctly selected the item, no cases should have missing data.
Unmotivated Participants. It was believed that unmotivated participants would respond to very few or none of the items, or respond with irrelevant answers. The number-correct scores were used to identify unmotivated participants. The distribution of the 429 participants is provided in Table 3.3.24. An examination of the data showed that no participant simply sat at the computer and allowed the software to progress on its own. Each participant entered some appropriate responses, and each got at least a few items correct. The lowest score shown was 22 out of the 162 questions correct (13.6%). While there may have been participants who were not trying their best, this screening algorithm was unable to identify participants who blatantly did nothing at all. Therefore, all cases were kept for the analyses.
Item Analyses
Table 3.3.25 presents findings from the reliability analysis on the four test sequences (i.e., T1 to T4). The three parts of the table show how the sequence reliabilities, measured by alpha, differed as different groups of items were deleted. The first part ("With Change Items") presents results that include all the items in each sequence. Each change item may be considered as two items; the first is the item as presented originally, and the second is the item with the change in the bottom, three-digit number. The middle columns include the pre-change items and exclude the post-change items, and the third part of the table removes both versions of the change items (i.e., the original and the change part). Notice, too, that the second and third parts of the table show "Actual" and "Expected" alphas. The actual alphas are the results provided by the data. The expected alphas are the ones estimated by the Spearman-Brown formula if like items were deleted. In every case, the alphas from the data are higher than the expected alphas. This finding supports the notion that the change items
Planes Test
Case Elimination
The Planes test consisted of three parts, and cases were eliminated from each part independently. The screening algorithms for each part were based on similar premises.
Part 1 consisted of 48 items. Participants were eliminated from further analyses if any of three screening
criteria were satisfied. The first screen for this part was
a total latency less than 48 seconds. The second screen
was percent correct less than or equal to 40%. The final
screen was the skipping of six or more items. These
screening algorithms reduced the sample from 450 to
429 for Part 1.
The screening for Part 2 was similar. Participants were eliminated from further analyses on these criteria: (1) Part 2 total latency less than 1.2 minutes, (2) 40% correct or less, or (3) missing data for six or more items. These screening algorithms reduced the available sample from 450 to 398 for Part 2.
For Part 3, participants were eliminated on these criteria: (1) Part 3 total latency less than 2.4 minutes, (2) 40% correct or less, or (3) missing data for 12 or more items. These screening algorithms reduced the available sample from 450 to 366 for Part 3.
Participant elimination across all three test parts left
a final sample of 343 having data on all three parts.
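A minimal sketch of the three-part screening rules just described, with an assumed record layout; the example values are hypothetical.

    # (min latency seconds, min percent correct, max missing/skipped items)
    SCREENS = {
        1: (48.0, 40.0, 6),
        2: (72.0, 40.0, 6),    # 1.2 minutes
        3: (144.0, 40.0, 12),  # 2.4 minutes
    }

    def keep(part, latency_s, pct_correct, n_missing):
        # A case is retained only if it fails none of the three screens.
        min_lat, min_pct, max_missing = SCREENS[part]
        return (latency_s >= min_lat and pct_correct > min_pct
                and n_missing < max_missing)

    print(keep(1, latency_s=300.0, pct_correct=72.9, n_missing=0))  # True
    print(keep(2, latency_s=60.0, pct_correct=55.0, n_missing=1))   # False: too fast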
Item Analyses
Scale Reliabilities and Item Analyses. Reliability
analyses were conducted to identify items within each
part of the Planes test that contribute to internal consistency. The corrected item-total correlation was computed for each item within each part as was the overall
alpha for that part. Table 3.3.27 presents an overview of
the results of these reliability analyses.
The Planes test is not a new test, having been developed previously as the Ships test (Schemmer et al., 1996). In its alpha test form, the number of items was cut in half to meet the time allowed for it in the pretest.
In reducing the number of items, the same proportion
was kept for all item types. However, there are many
parallels between the items in each of the three parts of
the test; a particular item that may not work well in Part
1 might work very well in Parts 2 or 3. For these reasons and
because data from all three parts were to be used to develop
a residual score for the coordinating ability component of
multitasking, eliminating items based on poor item-total
correlations alone was not considered desirable.
Experiences Questionnaire
The following Experiences Questionnaire analyses were performed on data from the first 9 of the 12 days of pilot testing at Pensacola in February 1997. The total
EQ Format
The pilot test version of the EQ contained 201 items
representing 17 scales, including a Random Response
Scale. All items used the same set of five response
options: Definitely True, Somewhat True, Neither
True Nor False, Somewhat False, and Definitely False.
Time to Complete EQ
The mean amount of time required to complete the
EQ for the screened data set was 29.75 minutes (SD =
9.53, Range = 10-109). A few individuals finished in
approximately 10 minutes, which translates into roughly
3 seconds per response. The records of the fastest
finishers were checked for unusual response patterns
such as repeating response patterns or patterns of all the
same response (which would yield a high random response score anyway), and none were found. Thus, no
one was deleted from the data set due solely to time
taken to complete the test. It is not surprising to note
that the fastest finishers in the entire, unscreened sample
of 330 were deleted based on their scores on the random
response scale.
Data Screening
Three primary data quality screens are typically performed on questionnaires like the EQ: (a) a missing data
screen, (b) an unlikely virtues screen, and (c) a random
response screen. The missing data rule used was that if
more than 10% of the items on a particular scale were
missing (blank), that scale score was not computed. No
missing data rule was invoked for across-scale missing
data, so there could be a data file with, for example, all
scale scores missing. No one was excluded based on
responses to the unlikely virtues items, that is, those items with only one likely response (Example: "You have never hurt someone else's feelings," where the only likely response is "Definitely False").
A new type of random response item was tried out in the pilot test, replacing the more traditional right/wrong-answer type, such as "Running requires more energy than sitting still." There were four random response items, using the following format: "This item is a computer check to verify keyboard entries. Please select the Somewhat True response and go on to the next item." The response that individuals were instructed to select varied across the four items. A frequency distribution of the number of random responses (responses other than the correct one) follows:
Number of Random Responses     N     Percent
0                            222        67.3
1                             52        15.8
2                             34        10.3
3                             18         5.5
4                              4         1.2
Total                        330       100.0
Scale Scoring
EQ items were keyed 1 - 5, the appropriate items were
reversed (5 - 1), and the scale scores were computed as
(the mean item response) x 20, yielding scores ranging
from 20 to 100. The higher the score, the higher the
standing on the characteristic.
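The scale-scoring rule above can be expressed compactly; the sketch below assumes hypothetical item keys and responses.

    def scale_score(responses, reversed_items):
        # responses: item number -> raw response (1-5); names hypothetical.
        keyed = [6 - r if i in reversed_items else r  # reverse-key: 5 -> 1, etc.
                 for i, r in responses.items()]
        return (sum(keyed) / len(keyed)) * 20  # mean item response x 20

    print(scale_score({1: 5, 2: 4, 3: 2}, reversed_items={3}))  # ~86.7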
Descriptive Statistics and Reliability Estimates
Appendix B contains the descriptive statistics and
internal consistency reliabilities for 16 scales (Random
Response Scale excluded). The scale means were comfortingly low and the standard deviations were comfortingly high, relieving concerns about too little variance
and/or a ceiling effect. The Unlikely Virtues scale had
the lowest mean of all (51.85), as it should.
The scale reliabilities were within an acceptable range
for scales of this length and type. Most were in the .70s
and .80s. The two exceptions were Self Awareness (.55)
and Self-Monitoring/Evaluating (.54).
Four items had very low item-scale correlations, so
they were removed from their respective scales: Items 21
and 53 from the Decisiveness scale (item-scale correlations of -.02 and -.05 respectively), item 144 from the
Self-Monitoring/Evaluating scale (correlation of .04),
and item 163 from the Interpersonal Tolerance scale
the pilot test, this sample did not provide any information about how much "faking good" would actually occur in an applicant population.
where a and b were chosen so that optimal performance would be around 100 and performance at the
average of the old scale would map onto 50. For the AT
Test, optimal performance was indicated by 0 on each
of the original measures so that the transformation
could be rewritten as:
New Scale = 100 / (1 + Old Scale/Old Mean).
Scoring
Initial inspection of the results suggested that crashes
and separation errors (safety) were relatively distinct
from (uncorrelated with) procedural errors. Consequently, four separate scores were generated to account
for the data. Initial scores were:
CRASHSEP = crashes + separation errors
PROCERR = total number of procedural errors of all kinds
PCTDEST = percent reaching target destination
TOTDELAY = total delay (handoff and enroute)
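Taken together, the rescaling transformation quoted above and the four initial scores amount to a short computation; the sketch below uses hypothetical trial counts and an assumed sample mean.

    def at_scores(crashes, sep_errors, proc_errors, n_dest, n_planes, delay_min):
        # The four initial scores listed above.
        return {
            "CRASHSEP": crashes + sep_errors,
            "PROCERR": proc_errors,
            "PCTDEST": 100.0 * n_dest / n_planes,
            "TOTDELAY": delay_min,
        }

    def rescale(old, old_mean):
        # New Scale = 100 / (1 + Old Scale/Old Mean): 0 errors -> 100,
        # performance at the old-scale mean -> 50.
        return 100.0 / (1.0 + old / old_mean)

    scores = at_scores(crashes=1, sep_errors=2, proc_errors=5,
                       n_dest=14, n_planes=20, delay_min=12.5)  # hypothetical trial
    print(rescale(scores["CRASHSEP"], old_mean=3.0))  # 50.0 at the sample mean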
Case Elimination
During the initial analyses, prior to rescaling, there
were several cases with very high error rates or long delay
times that appeared to be outliers. The concern was that
these individuals did not understand the instructions
and so were not responding appropriately. (In one case,
it was suspected that the examinee was crashing planes
on purpose.) The rescaling, however, shrank the high end (high errors or long times) of the original scales relative to the lower end, and after rescaling these cases were not clearly identifiable as outliers. Inspection of the data revealed that all of the cases of exceptionally poor performance occurred on the last test trial. The fact that the last trial was exceptionally difficult, and that similar problems were not noted on the earlier trials, suggested that most of these apparent outliers were simply instances of low ability and not random or inappropriate responding.
Reliability
After revised scale scores were computed for each
trial, reliability analyses were performed. In this case, an
ANOVA (generalizability) model was used to examine
the variance in scores across trials, examinee groups (test
orders), and examinees (nested within groups). The
analyses were conducted for varying numbers of trials,
from all six (two practice and four test) down to the last
two (test) trials. Table 3.3.29 shows variance component estimates for each of the sources of variation.
Notwithstanding modest efforts to standardize across
trials, there was still significant variation due to Trial
main effects in many cases. These were ignored in
computing reliabilities (using relative rather than absolute measures of reliability) since the trials would be
constant for all examinees and would not contribute to
individual variation in total scores. Similarly, Group
and Group by Trial effects were minimal and were not
included in the error term used for computing
reliabilities. Group effects are associated with different
positions in the overall battery. There will be no variation of test position in the final version of the battery.
Single trial reliabilities were computed as the ratio of the valid variance due to subjects nested within groups, SSN(Group), to the total variance, expressed as the sum of SSN(Group) and SSN*Trial. For each variable, the
single trial reliability based on the last two trials was
identical to the correlation between the scores for those
two trials. Reliabilities for means across higher numbers of trials were computed by dividing the SSN*T
error component by the number of trials. This is
exactly the Spearman-Brown adjustment expressed
in generalizability terms.
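A minimal sketch of the reliability computation just described, using standard generalizability formulas and assumed variance components.

    def single_trial_reliability(var_subjects, var_subj_by_trial):
        # Ratio of valid subject variance, SSN(Group), to SSN(Group) + SSN*Trial;
        # Group and Group x Trial components are excluded from error, as above.
        return var_subjects / (var_subjects + var_subj_by_trial)

    def mean_across_trials_reliability(var_subjects, var_subj_by_trial, n_trials):
        # Dividing the SSN*Trial error component by the number of trials is the
        # Spearman-Brown adjustment expressed in generalizability terms.
        return var_subjects / (var_subjects + var_subj_by_trial / n_trials)

    print(single_trial_reliability(0.60, 0.40))           # 0.6
    print(mean_across_trials_reliability(0.60, 0.40, 4))  # ~0.857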
Software Changes
After the Alpha Version pilot test, the AT test was
changed to have more extensive and more highly edited
instructions and was converted to a 32-bit version to run
under Windows 95. The practice scenarios were modified to teach specific aspects of the exercises (changing
speed and level in practice 1, changing directions in
practice 2, noticing airport landing directions, and
coping with pilot readback errors in practice 3). Specific
feedback was provided after each practice session, keyed to aspects of the examinee's performance on the practice trial.
The new version of the scenario player provided slightly different score information. In particular, the "en route delay" variable was computed as the total en route time for planes that landed correctly. We modified the shell program to read the replay file and copy information from the exit records (type XT) into the examinee's data file. This allowed us to record which planes either crashed or were still flying at the end of the scenario. We computed a "total en route time" to replace the delay time provided by the Alpha version.
Reliability
Tables 3.3.29 and 3.3.30 show internal consistency
and test-retest reliability estimates for TW as well as for
AT. Analyses of these data suggested that averaging
across all three trials led to the most reliable composite
for use in analyses of the pilot data.
Summary and Recommendations
Time Wall Accuracy reliability estimates were modest,
although the test-retest correlations held up fairly well.
Preliminary results suggested that five or six trials may be
needed to get highly reliable results on all three measures.
Software Changes
The trial administration program was changed to allow
us to specify the number of Time Wall items administered
and to shut off the warm up trials for each administration.
The main program then called the trial administration program six times. The first three trials had 5 Time Wall items each and were considered practice trials. The next three trials had 25 Time Wall items each and were considered test trials. After the practice trials, the examinees' performance was analyzed and specific feedback was given on how to improve their scores.
Testing Times
Table 3.3.31 shows distributional statistics for instruction time and total time for the AT and TW tests
in their current form. While there was some variation in
instruction time, the total times were quite close to the
original targets (90 and 25 minutes, respectively).
Conclusions
The purpose of the pilot study was to determine if the
predictor battery required revisions prior to its use in the
proposed concurrent validation study. A thorough analysis of the various tests was performed. A number of
recommendations, related to software presentation, item changes, and predictor construct revisions, were outcomes of the pilot study. The project team believed
that the changes made to the test battery represented a
substantial improvement over initial test development.
The beta battery, used in the concurrent validation
study, was a professionally developed set of tests that
benefited greatly from the pilot study.
REFERENCES
Buckley, E. P., House, K., & Rood, R. (1978). Development of a performance criterion for air traffic control personnel research through air traffic control simulation (DOT/FAA/RD-78/71). Washington, DC: U.S. Department of Transportation, Federal Aviation Administration, Systems Research and Development Service.
Cobb, B. B. (1967). The relationships between chronological age, length of experience, and job performance ratings of air route traffic control specialists (DOT/FAA/AM-67/1). Oklahoma City, OK: U.S. Department of Transportation, Federal Aviation Administration, Office of Aviation Medicine.
Pulakos, E. D., & Borman, W. C. (1986). Rater orientation and training. In E. D. Pulakos & W. C. Borman (Eds.), Development and field test report for the Army-wide rating scales and the rater orientation and training program (Technical Report #716). Alexandria, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.
Trites, D. K., & Cobb, B. B. (1963). Problems in air traffic management: IV. Comparison of pre-employment, job-related experience with aptitude test predictors of training and job performance of air traffic control specialists (DOT/FAA/AM-63/31). Washington, DC: U.S. Department of Transportation, Federal Aviation Administration, Office of Aviation Medicine.
Shrout, P. E., & Fleiss, J. L. (1979). Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin, 86, 420-428.
DOT/FAA/AM-01/6
Documentation of Validity
for the AT-SAT
Computerized Test Battery
Volume II
R.A. Ramos
Human Resources Research Organization
Alexandria, VA 22314-1591
Michael C. Heil
Carol A. Manning
Civil Aeromedical Institute
Federal Aviation Administration
Oklahoma City, OK 73125
March 2001
Final Report
U.S. Department of Transportation
Federal Aviation Administration
N O T I C E
This document is disseminated under the sponsorship of
the U.S. Department of Transportation in the interest of
information exchange. The United States Government
assumes no liability for the contents thereof.
This document is a comprehensive report on a large-scale research project to develop and validate a
computerized selection battery to hire Air Traffic Control Specialists (ATCSs) for the Federal Aviation
Administration (FAA). The purpose of this report is to document the validity of the Air Traffic Selection
and Training (AT-SAT) battery according to legal and professional guidelines. An overview of the project
is provided, followed by a history of the various job analyses efforts. Development of predictors and
criterion measures are given in detail. The document concludes with the presentation of the validation of
predictors and analyses of archival data.
TABLE OF CONTENTS
VOLUME II
Page
CHAPTER 4 - DEVELOPMENT OF CRITERION MEASURES OF AIR TRAFFIC CONTROLLER PERFORMANCE ................... 1
CBPM ..................................................................................................................................................... 1
CHAPTER 5.1 - FIELD PROCEDURES FOR CONCURRENT VALIDATION STUDY ....................................................... 13
CHAPTER 5.2 - DEVELOPMENT OF PSEUDO-APPLICANT SAMPLE ......................................................................... 17
CHAPTER 5.3 - DEVELOPMENT OF DATA BASE ................................................................................................. 21
CHAPTER 5.4 - BIOGRAPHICAL AND COMPUTER EXPERIENCE INFORMATION: DEMOGRAPHICS FOR THE VALIDATION
STUDY .................................................................................................................................................. 31
Total Sample ....................................................................................................................................... 31
Controller Sample ............................................................................................................................... 31
Pseudo-Applicant Sample .................................................................................................................... 32
Computer Use and Experience Questionnaire ..................................................................................... 32
Performance Differences ...................................................................................................................... 33
Relationship Between Cue-Plus and Predictor Scores .......................................................................... 33
Summary ............................................................................................................................................. 35
CHAPTER 5.5 - PREDICTOR-CRITERION ANALYSES ............................................................................................. 37
CHAPTER 5.6 - ANALYSES OF GROUP DIFFERENCES AND FAIRNESS ..................................................................... 43
CHAPTER 6 - THE RELATIONSHIP OF FAA ARCHIVAL DATA TO AT-SAT PREDICTOR AND CRITERION MEASURES .. 49
Previous ATC Selection Tests .............................................................................................................. 49
Other Archival Data Obtained for ATC Candidates ........................................................................... 51
Archival Criterion Measures ................................................................................................................ 52
Historical Studies of Validity of Archival Measures ............................................................................. 52
Relationships Between Archival Data and AT-SAT Measures .............................................................. 54
REFERENCES ................................................................................................................................................ 61
List of Figures and Tables
Figures
Figure 4.1.
Figure 4.2.
Figure 4.3.
Figure 4.4.
Figure 5.2.1.
Figure 5.2.2.
Figure 5.3.1.
Figure 5.3.2.
Figure 5.5.1.
Figure 5.5.2.
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 179 of 342 Page ID
#:374
Figure 5.6.2.
Figure 5.6.3.
Figure 5.6.4.
Figure 5.6.5.
Figure 5.6.6.
Fairness Regression for Hispanics Using AT-SAT Battery Score and Composite Criterion ......... 76
Fairness Regression for Females Using AT-SAT Battery Score and Composite Criterion ............ 77
Confidence Intervals for the Slopes in the Fairness Regressions ................................................. 78
Expected Score Frequency by Applicant Group ......................................................................... 79
Percent Passing by Recruitment Strategy .................................................................................... 80
Tables
Table 4.1.
Table 4.2.
Table 4.3.
Table 4.4.
Table 4.5.
Table 4.6.
Table 4.7.
Table 4.8.
Table 4.9.
Table 4.10.
Table 4.11.
Table 4.12.
Table 4.13.
Table 4.14.
Table 4.15.
Table 4.16.
Table 4.17.
Table 4.18.
Table 4.19.
Table 4.20.
Table 5.2.1.
Table 5.2.2.
Table 5.2.3.
Table 5.4.1.
Table 5.4.2.
Table 5.4.3.
Table 5.4.4.
Table 5.4.5.
Table 5.4.6.
Table 5.4.7.
Table 5.4.8
Table 5.4.9.
Table 5.4.10.
Table 5.4.11.
Table 5.4.12.
Table 5.4.13.
Table 5.4.14.
Table 5.4.15.
Table 5.4.16.
Table 5.4.17.
Table 5.4.18.
Table 5.4.19.
Table 5.4.20.
Table 5.4.21.
iv
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 180 of 342 Page ID
#:375
Table 5.4.22.
Table 5.4.23.
Table 5.4.24.
Table 5.4.25.
Table 5.4.26.
Table 5.4.27.
Table 5.4.28.
Table 5.4.29.
Table 5.4.30.
Table 5.4.31.
Table 5.4.32.
Table 5.5.1.
Table 5.5.2.
Table 5.5.3.
Table 5.5.4.
Table 5.5.5.
Table 5.5.6.
Table 5.6.1.
Table 5.6.2.
Table 5.6.3.
Table 5.6.4.
Table 5.6.5.
Table 5.6.6.
Table 5.6.7.
Table 5.6.8.
Table 5.6.9.
Table 6.1.
Table 6.2.
Table 6.3.
Table 6.4.
Table 6.5.
Table 6.6.
Table 6.7.
Table 6.8.
Table 6.9.
Table 6.10.
Table 6.11.
Appendices:
Appendix C - Criterion Assessment Scales ....................................................................................................... C1
Appendix D - Rater Training Script .................................................................................................................D1
Appendix E - AT-SAT High Fidelity Simulation Over the Shoulder (OTS) Rating Form ................................ E1
Appendix F - Behavioral and Event Checklist ...................................................................................................F1
Appendix G - AT-SAT High Fidelity Standardization Guide ........................................................................... G1
Appendix H - Pilot Test Rater Comparisons ................................................................................................... H1
Appendix I - Sample Cover Letter and Table to Assess the Completeness of Data Transmissions ...................... I1
CHAPTER 4
DEVELOPMENT OF CRITERION MEASURES OF AIR TRAFFIC CONTROLLER PERFORMANCE
INTRODUCTION
An important element of the AT-SAT predictor development and validation project is criterion performance measurement. To obtain an accurate picture of
the experimental predictor tests' validity for predicting
controller performance, it is important to have reliable
and valid measures of controller job performance. That
is, a concurrent validation study involves correlating
predictor scores for controllers in the validation sample
with criterion performance scores. If these performance
scores are not reliable and valid, our inferences about
predictor test validities are likely to be incorrect.
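A standard classical-test-theory relation (a textbook result, not stated in the report itself) makes this point concrete: unreliability in the criterion caps the validity coefficient that can be observed,

\[ r_{xy}^{\mathrm{obs}} \;=\; r_{xy}^{\mathrm{true}} \, \sqrt{r_{xx} \, r_{yy}} \]

where r_xx and r_yy are the reliabilities of the predictor and criterion scores. If criterion reliability is well below 1, even a highly valid predictor will show a deflated observed correlation.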
The job of air traffic controller is very complex and
potentially difficult to capture in a criterion development effort. Yet, the goal here was to develop criterion
measures that would provide a comprehensive picture of
controller job performance.
Initial job analysis work suggested a model of performance that included both maximum and typical performance (Bobko, Nickels, Blair & Tartak, 1994; Nickels,
Bobko, Blair, Sands, & Tartak, 1995). More so than
with many jobs, maximum can-do performance is very
important in controlling air traffic. There are times on
this job when the most important consideration is maximum performance: does the controller have the technical skill to keep aircraft separated under very difficult
conditions? Nonetheless, typical performance over time
is also important for this job.
Based on a task-based job analysis (Nickels et al.,
1995), a critical incidents study (Hedge, Borman,
Hanson, Carter & Nelson, 1993), and past research on
items had, for example, one best answer and one or two
others that represented fairly effective responses. These
test development sessions resulted in 30 scenarios and
99 items, with between 2 and 6 items per scenario.
An initial version of the test was then programmed to
run on a standard personal computer with a 17-inch
high-resolution monitor. This large monitor was needed
to realistically depict the display as it would appear on
an en route radar screen. The scenarios were initially
programmed using a radar engine, which had previously been developed for the FAA for training purposes.
This program was designed to realistically display airspace features and the movement of aircraft. After the
scenarios were programmed into the radar engine, the
SMEs watched the scenarios evolve and made modifications as necessary to meet the measurement goals. Once
realistic positioning and movement of the aircraft had
been achieved, the test itself was programmed using
Authorware. This program presented the radar screens,
voice communications, and multiple-choice questions,
and also collected the multiple-choice responses.
Thus, the CBPM is essentially self-administering
and runs off a CD-ROM. The flight strips and status
information areas are compiled into a booklet, with one
page per scenario, and the airspace summary and sector
map (see Figures 4.1 and 4.2) are displayed near the
computer when the test is administered. During test
administration, controllers are given 60 seconds to
review each scenario before it begins. During this time,
the frozen radar display appears on the screen, and
examinees are allowed to review the flight strips and any
other information they believe is relevant to that particular scenario (e.g., the map or airspace summary).
Once each test item has been presented, examinees are given
25 seconds to answer the question. This is analogous to
the controller job, where they are expected to get the
picture concerning what is going on in their sector of
airspace, and then are sometimes required to react
quickly to evolving situations. We also prepared a
training module to familiarize examinees with the airspace and instructions concerning how to take the test.
After preparing these materials, we gathered a panel
of four experienced controllers who were teaching at the
FAA Academy and another panel of five experienced
controllers from the field to review the scenarios and
items. Specifically, each of these groups was briefed
regarding the project, trained on the airspace, and then
shown each of the scenarios and items. Their task was to
rate the effectiveness level of each response option.
Ratings were made independently on a 1-7 scale. Table
Scenario Development
The air traffic scenarios were designed to incorporate
performance constructs central to the controller's job,
such as maintaining aircraft separation, coordinating,
communicating, and maintaining situation awareness.
Also, attention was paid to representing in the scenarios
the most important tasks from the task-based job analysis. Finally, it was decided that, to obtain variability in
controller performance, scenarios should be developed
with either moderate or quite busy traffic conditions.
Thus, to develop our HFPM scenarios, we started with
a number of pre-existing Aero Center training scenarios,
and revised and reprogrammed them to the extent necessary
to include relevant tasks and performance requirements
with moderate- to high-intensity traffic scenarios. In all,
16 scenarios were developed, each designed to run no
more than 60 minutes, inclusive of start-up, position relief
briefing, active air traffic control, debrief, and performance
evaluation. Consequently, active manipulation of air traffic was limited to approximately 30 minutes.
The development of a research design that would
allow sufficient time for both training and evaluation
was critical to the development of scenarios and accurate
evaluation of controller performance. Sufficient training time was necessary to ensure adequate familiarity
with the airspace, thereby eliminating differential knowledge of the airspace as a contributing factor to controller
performance. Adequate testing time was important to
ensure sufficient opportunity to capture controller performance and allow for stability of evaluation. A final
consideration, of course, was the need for controllers in
our sample to travel to Oklahoma City to be trained and
evaluated. With these criteria in mind, we arrived at a
design that called for one and one-half days of training,
followed by one full day of performance evaluation.
This schedule allowed us to train and evaluate two
groups of ratees per week.
Development of Measurement Instruments
High-fidelity performance data were captured by
means of behavior-based rating scales and checklists,
using trainers with considerable air traffic controller
experience or current controllers as raters. Development
and implementation of these instruments, and selection
and training of the HFPM raters are discussed below.
Rater Training
Fourteen highly experienced controllers from field
units or currently working as instructors at the FAA
Academy were detailed to the AT-SAT project to serve
as raters for the HFPM portion of the project. Raters
arrived approximately three weeks before the start of
data collection to allow time for adequate training and
pilot testing. Thus, our rater training occurred over an
extended period of time, affording an opportunity for
ensuring high levels of rater calibration.
During their first week at the Academy, raters were
exposed to (1) a general orientation to the AT-SAT
project, its purposes and objectives, and the importance
of the high-fidelity component; (2) airspace training;
(3) the HFPM instruments; (4) all supporting materials
(such as Letters of Agreement, etc.); (5) training and
evaluation scenarios; and (6) rating processes and procedures. The training program was an extremely hands-on, feedback-intensive process. During this first week
raters served as both raters and ratees, controlling traffic
in each scenario multiple times, as well as serving as
raters of their associates who took turns as ratees. This
process allowed raters to become extremely familiar
with both the scenarios and evaluation of performance
in these scenarios. With multiple raters evaluating performance in each scenario, project personnel were able
to provide immediate critique and feedback to raters,
aimed at improving accuracy and consistency of rater
observation and evaluation.
In addition, prior to rater training, we scripted
performances on several scenarios, such that deliberate
errors were made at various points by the individual
controlling traffic. Raters were exposed to these scripted
scenarios early in the training so as to more easily
facilitate discussion of specific types of controlling
errors. A standardization guide was developed with the
cooperation of the raters, such that rules for how observed behaviors were to be evaluated could be referred
to during data collection if any questions arose (see
Appendix G). All of these activities contributed to
enhanced rater calibration.
to the pilot test sites. In general, procedures for administering these two assessment measures proved to be
effective. Data were gathered on a total of 77 controllers
at the two locations. Test administrators asked pilot test
participants for their reactions to the CBPM, and many
of them reported that the situations were realistic and
like those that occurred on their jobs.
Results for the CBPM are presented in Table 4.4.
The distribution of total scores was promising in the
sense that there was variability in the scores. The coefficient alpha was moderate, as we might expect from a
test that is likely multidimensional. Results for the
ratings are shown in Tables 4.5 and 4.6. First, we were
not able to reach our target of two supervisors and two
peers for each ratee. A mean of 1.24 supervisors and 1.30
peers per ratee participated in the rating program. In
addition, both the supervisor and peer ratings had
reasonable degrees of variability. Also, the interrater
reliabilities (intraclass correlations) were, in general,
acceptable. The Coordinating dimension is an exception. When interrater reliabilities were computed across
the supervisor and peer sources, they ranged from .37 to
.62 with a median of .54. Thus, reliability improves
when both sources' data are used.
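The intraclass correlations referred to above are of the kind described by Shrout and Fleiss (1979), cited in the references. For illustration only (this is a sketch, not the project's analysis code; the function name and complete ratee-by-rater data layout are assumptions), a two-way random-effects ICC for single and averaged ratings can be computed as follows:

import numpy as np

def icc_two_way_random(scores):
    # scores: n_ratees x k_raters matrix of ratings with no missing cells.
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-ratee means
    col_means = scores.mean(axis=0)   # per-rater means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between-ratees MS
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between-raters MS
    resid = scores - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))          # residual MS
    # ICC(2,1): reliability of a single rating.
    icc_single = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    # ICC(2,k): reliability of the mean of the k ratings.
    icc_average = (msr - mse) / (msr + (msc - mse) / n)
    return icc_single, icc_average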
In reaction to the pilot test experience, we modified
the script for the rater orientation and training program.
We decided to retain the Coordinating dimension for
the main study, with the plan that if reliability continued to be low we might not use the data for that
dimension. With the CBPM, one item was dropped
because it had a negative item-total score correlation.
That is, controllers who answered this item correctly
tended to have low total CBPM scores.
The primary purpose of the HFPM pilot test was to
determine whether our rigorous schedule of one and
one-half days of training and one day of evaluation was
feasible administratively. Our admittedly ambitious
design required completion of up to eight practice
scenarios and eight graded scenarios. Start-up and shutdown of each computer-generated scenario at each radar
station, setup and breakdown of associated flight strips,
pre- and post-position relief briefings, and completion
of OTS ratings and checklists all had to be accomplished
within the allotted time, for all training and evaluation
scenarios. Thus, smooth coordination and timing of
activities was essential. Prior to the pilot test, preliminary dry runs had already convinced us to eliminate
one of the eight available evaluation scenarios, due to
time constraints.
the peer reliabilities, but the differences are for the most
part very small. Importantly, the combined supervisor/
peer ratings reliabilities are substantially higher than the
reliabilities for either source alone. Conceptually, it
seems appropriate to get both rating sources' perspectives on controller performance. Supervisors typically
have more experience evaluating performance and have
seen more incumbents perform in the job; peers often
work side-by-side with the controllers they are rating,
and thus have good first-hand knowledge of their performance. The result of higher reliabilities for the combined ratings makes an even more convincing argument
for using both rating sources.
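A standard psychometric relation (textbook material, not taken from the report) shows why pooling raters helps: by the Spearman-Brown formula, the reliability of the mean of k comparable ratings is

\[ r_{kk} = \frac{k \, \bar{r}}{1 + (k - 1) \, \bar{r}} \]

where \bar{r} is the reliability of a single rating, so combining supervisor and peer ratings raises the effective k and hence the expected reliability of the composite.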
Scores for each ratee were created by computing the
mean peer and mean supervisor rating for each dimension. Scores across peer and supervisor ratings were also
computed for each ratee on each dimension by taking
the mean of the peer and supervisor scores. Table 4.13
presents the means and standard deviations for these
rating scores on each dimension, supervisors and peers
separately, and the two sources together. The means are
higher for the peers (range = 5.03-5.46), but the standard deviations for that rating source are generally
almost as high as those for the supervisor raters.
Table 4.14 presents the intercorrelations between
supervisor and peer ratings on all of the dimensions.
First, within rating source, the between-dimension correlations are large. This is common with rating data.
And second, the supervisor-peer correlations for the
same dimensions (e.g., Communicating = .39) are at
least moderate in size, again showing reasonable agreement across sources regarding the relative levels of effectiveness for the different controllers rated.
The combined supervisor/peer ratings were factor
analyzed to explore the dimensionality of the ratings.
This analysis addresses the question: Is there a reasonable way of summarizing the 10 dimensions with a
smaller number of composite categories? The 3-factor
solution, shown in Table 4.15, proved to be the most
interpretable. The first factor was called Technical
Performance, with Dimensions 1, 3, 6, 7, and 8 primarily defining the factor. Technical Effort was the label
for Factor 2, with Dimensions 2, 4, 5, and 9 as the
defining dimensions. Finally, Factor 3 was defined by a
single dimension and was called Teamwork.
Although the 3-factor solution was interpretable,
keeping the three criterion variables separate for the
validation analyses seemed problematic. This is because
(1) the variance accounted for by the factors is very
uneven (82% of the common variance is accounted for
Results
CBPM
Table 4.9 shows the distribution of CBPM scores. As
with the pilot sample, there is a reasonable amount of
variability. Also, item-total score correlations range
from .01 to .27 (mean = .11). The coefficient alpha was
.63 for this 84-item test. The relatively low item-total
correlations and the modest coefficient alpha suggest that
the CBPM is measuring more than a single construct.
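For illustration (a sketch, not the report's scoring code), coefficient alpha and item-total correlations for a 0/1 scored item matrix (rows = examinees, columns = items) can be computed as below. The sketch uses corrected item-total correlations, excluding each item from its own total; the report does not say whether its figures were corrected, so that choice is an assumption.

import numpy as np

def coefficient_alpha(items):
    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def item_total_correlations(items):
    # Correlate each item with the total score excluding that item.
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total - items[:, j])[0, 1]
                     for j in range(items.shape[1])])

An item with a negative item-total correlation, like the one dropped after the pilot test, would show up as a negative entry in this vector.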
Supervisor and Peer Ratings
In Tables 4.10 and 4.11, the number and percent of
ratings at each scale point are depicted for supervisors
and peers separately. A low but significant percentage of
ratings are at the 1, 2, or 3 level for both supervisor and
peer ratings. Most of the ratings fall at the 4-7 level, but
overall, the variability is reasonable for both sets of ratings.
Table 4.12 contains the interrater reliabilities for the
supervisor and peer ratings separately and for the two
sets of ratings combined. In general, the reliabilities are
quite high. The supervisor reliabilities are higher than
by the first factor); (2) the correlation between unit-weighted composites representing the first two factors is
.78; correlations between each of these composites and
Teamwork are high as well (.60 and .63 respectively);
and (3) all but one of the 10 dimensions loads on a
technical performance factor, so it seemed somewhat
inappropriate to have the one-dimension Teamwork
variable representing 1/3 of the rating performance
domain.
Accordingly, we formed a single rating variable represented by a unit-weighted composite of ratings on the
10 dimensions. The interrater reliability of this composite is .71 for the combined supervisor and peer rating
data. This is higher than the reliabilities for individual
dimensions. This would be expected, but it is another
advantage of using this summary rating composite to
represent the rating data.
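A minimal sketch of forming such a unit-weighted composite, assuming an array of mean dimension ratings per ratee (the array layout and use of nanmean for a stray missing dimension are assumptions, not the report's stated rule):

import numpy as np

def unit_weighted_composite(dim_ratings):
    # Unit weighting: each of the 10 dimensions counts equally, so the
    # composite is simply the mean across the dimension scores.
    return np.nanmean(dim_ratings, axis=1)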
HFPM
Table 4.16 contains descriptive statistics for the
variables included in both of the rating instruments
used during the HFPM graded scenarios. For the OTS
dimensions and the BEC, the scores represent averages
across each of the seven graded scenarios.
The means of the individual performance dimensions from the 7-point OTS rating scale are in the first
section of Table 4.16 (Variables 1 through 7). They
range from a low of 3.66 for Maintaining Attention and
Situation Awareness to a high of 4.61 for Communicating
Clearly, Accurately and Efficiently. The scores from each
of the performance dimensions are slightly negatively
skewed, but are for the most part, normally distributed.
Variables 8 through 16 in Table 4.16 were collected
using the BEC. To reiterate, these scores represent
instances where the controllers had either made a mistake or engaged in some activity that caused a dangerous
situation, a delay, or in some other way impeded the
flow of air traffic through their sector. For example, a
Letter of Agreement (LOA)/Directive Violation was judged
to have occurred if a jet was not established at 250 knots
prior to crossing the appropriate arrival fix or if a
frequency change was issued prior to completion of a
handoff for the appropriate aircraft. On average, each
participant had 2.42 LOA/Directive Violations in each
scenario.
Table 4.17 contains interrater reliabilities for the
OTS Ratings for those 24 ratees for whom multiple rater
information was available. Overall, the interrater
reliabilities were quite high for the OTS ratings, with
Conclusions
The 38-item CBPM composite provides a very good
measure of the technical skills necessary to separate
aircraft effectively and efficiently on the real job. The
.61 correlation with the highly realistic HFPM (Factor
1) is especially supportive of its construct validity for
measuring performance in the very important technical
proficiency-related part of the job. Additional ties to the
actual controller job are provided by the links of CBPM
items to the most important controller tasks identified
in the job analysis.
The performance ratings provide a good picture of
the typical-performance-over-time elements of the job.
Obtaining both a supervisor and a peer perspective on
controller performance provides a relatively comprehensive view of day-to-day performance. High interrater
agreement across the two rating sources further strengthens the argument that the ratings are valid evaluations of
controller performance.
Thus, impressive construct validity evidence is demonstrated for both the CBPM and the rating composite.
Overall, we believe the 38-item CBPM and the rating
composite represent a comprehensive and valid set of
criterion measures.
CHAPTER 5.1
FIELD PROCEDURES FOR CONCURRENT VALIDATION STUDY
The additional testing in 1998 ran in Chicago, Cleveland, Washington, DC, and Oklahoma City. The en route centers of Chicago and Cleveland performed like
the original AT-SAT sites, testing their own controllers.
The en route center at Leesburg, Virginia, which serves
the Washington, DC area, tested their controllers as well
as some from New York. At the Mike Monroney Aeronautical Center in Oklahoma City, the Civil Aeromedical Institute (CAMI), with the help of Omni personnel,
tested controllers from Albuquerque, Atlanta, Houston,
Miami, and Oakland. All traveling controllers were
scheduled by Caliber with the help of Arnold Trevette in
Leesburg and Shirley Hoffpauir in Oklahoma City.
Field Period
Data collection activities began early in the Ft. Worth
and Denver Centers in May 1997. The remaining nine
centers came on line two weeks later. To ensure adequate
sample size and diversity of participants, one additional
field site, Atlanta, was included beginning in June
1997. The concurrent data collection activities continued in all locations until mid-July.
Of the four sites in 1998, Chicago started the earliest
and ran the longest, for a little over two months beginning in early March. Washington, DC began simultaneously, testing and rating for just under two months.
Cleveland and Oklahoma City began a couple of weeks
into March and ended after about four and five weeks,
respectively.
Atlanta, GA
Albuquerque, NM
Boston, MA
Denver, CO
Ft. Worth, TX
Houston, TX
Jacksonville, FL
Kansas City, MO
Los Angeles, CA
Memphis, TN
Miami, FL
Minneapolis, MN
Supervisory Assessments
Participating controllers nominated two supervisory
personnel and two peers to complete assessments of them
as part of the criterion measurement. While the selection
of the peer assessors was totally at the discretion of the
controller, supervisory and administrative staff had more
leeway in selecting the supervisory assessors (although
not the controller's supervisor of record) from the much smaller
pool of supervisors in order to complete the ratings.
Throughout the data collection period, supervisors and
peers assembled in small groups and were given standardized instructions by on-site data collectors in the
completion of the controller assessments. To the extent
feasible, supervisors and peers completed assessments in
a single session on all the controllers who designated
them as their assessor. When the assessment form was
completed, controller names were removed and replaced
tors transmitted completed test information (on diskettes) and hard copies of the Biographical Information
and performance assessment forms to the data processing
center in Alexandria, VA.
Site Shut Down
At the end of the data collection period, each site was
systematically shut down. The predictor and criterion
test programs were removed from the computers, as were
any data files. Record logs, signed consent forms, unused
test materials, training manuals and other validation
materials were returned to Caliber Associates. Chicago,
the last site of the second data collection effort, shut
down on Monday, May 11, 1998.
CHAPTER 5.2
DEVELOPMENT OF PSEUDO-APPLICANT SAMPLE
Anthony Bayless, Caliber Associates
RATIONALE FOR
PSEUDO-APPLICANT SAMPLE
This underestimate is the result of decreased variation in the predictor scores of job incumbents; they would all be
expected to score relatively the same on these predictors. When there is very little variation in a variable, the strength of its
association with another variable will be weaker than when there is considerable variation. In the case of these predictors,
the underestimated relationships are a statistical artifact resulting from the sample selection.
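A small simulation illustrates the artifact this passage describes; all numbers here are invented for demonstration and are not study values:

import numpy as np

# Applicant pool with a true predictor-criterion correlation of .50.
rng = np.random.default_rng(0)
n, rho = 100_000, 0.50
predictor = rng.standard_normal(n)
criterion = rho * predictor + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)

full_r = np.corrcoef(predictor, criterion)[0, 1]
# Keep only high scorers, as in an incumbent (already-selected) sample.
selected = predictor > 1.0
restricted_r = np.corrcoef(predictor[selected], criterion[selected])[0, 1]
print(f"full-range r = {full_r:.2f}, restricted-range r = {restricted_r:.2f}")

The restricted-range correlation comes out well below .50, which is exactly the underestimate attributed here to sample selection; including a pseudo-applicant sample restores variation in the predictor scores.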
CHAPTER 5.3
DEVELOPMENT OF DATABASE
Ani S. DiFazio
HumRRO
The soundness of the validity and fairness analyses
conducted on the beta test data, and of the recommendations based on those results, was predicated on reliable
and complete data. Therefore, database design, implementation, and management were of critical importance
in validating the predictor tests and selecting tests for
inclusion in Version 1 of the Test Battery. The Validation Analysis Plan required many diverse types of data
from a number of different sources. This section describes the procedures used in processing these data and
integrating them into a cohesive and reliable analysis
database.
Some sites wrote the transmittal diskette at the end of the test day, while others cut the data at the end of a shift. In these
cases, more than one diskette would be produced for each test day.
3
While a DTF was supposed to be produced for each diskette transmitted, some sites sent one DTF covering a number of
test days, and, conversely, more than one DTF describing a single diskette.
Ensure that the test sites were transmitting all the data
they were collecting and that no data were inadvertently
falling through the cracks in the field.
Closely monitor the writing and transmittal of data by
the sites, so that problems would be quickly addressed
before large amounts of data were affected.
Identify and resolve problematic or anomalous files.
The Master Login software did not copy certain files, such as those with zero bytes.
5
In automating the DTF, we wanted one DTF record for each diskette transmitted. Because sites sometimes included the
information from more than one diskette on a hard copy DTF, more than one automated record was created for those
DTFs. Conversely, if more than one hard copy DTF was transmitted for a single diskette, they were combined to form one
automated DTF record.
6
This computerized comparison was made between the automated DTF and an ASCII capture of the DOS directory of the
diskette from the test site. The units of analysis in these two datasets were originally different. Since a record in the
directory capture data was a file (i.e., an examinee/test combination), there was more than one record per examinee. An
observation in the original DTF file was an examinee, with variables indicating the presence (or absence) of specific tests. In
addition, the DTF inventoried predictor tests in four testing blocks rather than as individual tests. Examinee/test-level data
were generated from the DTF by producing dummy electronic DTF records for each predictor test that was included in a
test block that the examinee took. Dummy CBPM DTF records were also generated in this manner. By this procedure, the
unit of analysis in the automated DTF and DOS directory datasets was made identical and a one-to-one computerized
comparison could be made between the DTF and the data actually received.
7
Conversely, this procedure was also used to identify and resolve with the sites those files that appeared on the diskette, but
not on the DTF.
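As a sketch of the reconciliation logic described in these footnotes (the table layouts, column names, and block-to-test mapping below are hypothetical, not the project's actual structures):

import pandas as pd

# Hypothetical mapping from a testing block to the predictor tests in it.
BLOCK_TESTS = {"block1": ["AM", "SC"], "block2": ["LF", "AT"]}

def expand_dtf(dtf):
    # One dummy record per (examinee, test) for each block the examinee took,
    # mirroring the dummy electronic DTF records described above.
    rows = [{"examinee": r.examinee, "test": t}
            for r in dtf.itertuples()
            for t in BLOCK_TESTS[r.block]]
    return pd.DataFrame(rows)

def reconcile(dtf, directory):
    # directory: one row per (examinee, test) file actually received.
    # The outer merge flags files inventoried but not received and vice versa.
    merged = expand_dtf(dtf).merge(directory, on=["examinee", "test"],
                                   how="outer", indicator=True)
    return merged[merged["_merge"] != "both"]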
the site, such as the Individual Control Forms. Approximately three-quarters of the way into the AT-SAT 1 testing period,
the data processors developed a table for each site that
listed examinees by the types of data8 that had been
received for them. A sample of this table and the cover
letter to test site managers is provided in Appendix I. The
site managers were asked to compare the information on
this table to their Individual Control Forms and any
other records maintained at the site. The timing of this
exercise was important because, while we wanted to
include as many examinees as possible, the test sites still
had to be operational and able to resolve any discrepancies discovered. The result of this diagnostic exercise was
very encouraging. The only type of discrepancy uncovered was in cases where the site had just sent data that had
not yet been processed. Because no real errors of omission were detected and since AT-SAT 2 involved fewer
cases than AT-SAT 1, this diagnostic exercise was not
undertaken for AT-SAT 2.
Further quality assurance measures were taken to
identify and resolve any systematic problems in data
collection and transmission. Under the premise that
correctly functioning test software would produce files
that fall within a certain byte size range and that malfunctioning software would not, a diagnostic program was
developed to identify files that were too small or too big,
based on normal ranges for each test. The objective was
to avoid pervasive problems in the way that the test
software wrote the data by reviewing files with suspicious
byte sizes as they were received. To accomplish this, files
with anomalous byte sizes and the pertinent DALs were
passed on to a research analyst for review. A few problems
were identified in this way. Most notably, we discovered
that the software in the Scan predictor test stopped
writing data when the examinee did not respond to test
items. Also, under some conditions, the Air Traffic
Scenarios test software did not write data as expected;
investigation indicated that the condition was rare and
that the improperly written data could, in fact, be read
and used, so the software was not revised. No other
systematic problems in the way the test software wrote
data were identified.
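A minimal sketch of such a byte-size screen, with placeholder test codes and ranges (the study's actual normal ranges per test are not reported):

import os

# Hypothetical normal byte-size ranges per test.
NORMAL_BYTE_RANGES = {"AM": (2_000, 8_000), "SC": (1_500, 6_000)}

def flag_suspicious_files(paths, test_of):
    # paths: iterable of data file paths; test_of maps a path to its test code.
    flagged = []
    for path in paths:
        low, high = NORMAL_BYTE_RANGES[test_of(path)]
        size = os.path.getsize(path)
        if not low <= size <= high:
            flagged.append((path, size))  # passed on to an analyst for review
    return flagged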
This procedure was also one way to identify files with
problems of a more idiosyncratic nature. The identification of file problems by the data processors was typically
This table reported whether predictor and CBPM test data, participant biographical information forms, and SSN Request
Forms had been received.
9
ZIP disks are a virtually incorruptible data storage medium that hold up to 100 megabytes of data.
10 The 23,107 files comprised the CBPM test, the 13 predictor tests, and one start-up (ST) file for controller
examinees, and the 13 predictor tests and one start-up (ST) file for pseudo-applicants.
Historical Data
Confidentiality of test participants was a primary
concern in developing a strategy for obtaining historical
data from the FAA computer archives and linking that
data to other AT-SAT datasets. Specifically, the objective was to ensure that the link between test examinees
and controllers was not revealed to the FAA, so that test
results could never be associated with a particular employee. Also, although the FAA needed participant controller Social Security Numbers (SSN) to identify and
extract cases from their historical archives, these SSNs
11
The total number of assessor discrepancies e-mailed to sites was 41. For 12 participant assessors, the test administrator
indicated the presence of an assessor biographical form on the DTF when a participant biographical form had actually been
completed. Therefore, the number of true assessor discrepancies was 29.
in the data. Second, the test analysts performed diagnostics to identify observations that might be excluded from
further analysis, such as those examinees exhibiting
motivational problems. Obviously, historical data from
the FAA archives were not edited. Data collected on hard
copy instruments were subjected to numerous internal
and external diagnostic and consistency checks and
programmatic data editing. A primary goal in data
editing was to salvage as much of the data as possible
without jeopardizing accuracy.
Participant Biographical Data. Several different types
of problems were encountered with the participant biographical data:
More than one biographical information form completed by the same participant
Missing or out-of-range examinee identification number
Out-of-range date values
First, to correct the problem of duplicate12 biographical forms for the same examinee, all forms completed
after the first were deleted. Second, information from the
DTF sent with the biographical form often made it
possible to identify missing examinee numbers through
a process of elimination. Investigation of some out-of-range examinee numbers revealed that the digits had
been transposed at the test site. Third, out-of-range date
values were either edited to the known correct value or set
to missing when the correct value was unknown.
Other data edits were performed on the controller and
pseudo-applicant participant biographical data. A number of examinees addressed the question of racial/ethnic
background by responding "Other" and provided open-ended information in the space allowed. In many cases,
the group affiliation specified in the open-ended response could be re-coded to one of the five specific
alternatives provided by the item (i.e., Native American/Alaskan Native, Asian/Pacific Islander, African American, Hispanic, or Non-Minority). In these cases, the
open-ended responses were recoded to one of the close-ended item alternatives. In other cases, a sixth racial
category, "mixed race," was created and applicable open-ended responses were coded as such.
Two types of edits were applicable only to the controller sample. First, in biographical items that dealt with the
length of time (months and years) that the controller had
More than one biographical information form completed by the same assessor
Incorrect assessor identification numbers
Out-of-range date values
12
The word duplicate here does not necessarily mean identical, but simply that more than one form was completed by a
single participant. More often than not, the duplicate forms completed by the same participant were not identical.
First, the same rule formulated for participants, deleting all duplicate biographical records completed after the
first, was applied. Second, by consulting the site Master
Rosters and other materials, misassigned or miskeyed13
rater identification numbers could be corrected. Third,
out-of-range date values were either edited to the known
correct value (i.e., the year that all biographical forms
were completed was 1997) or set to missing when the
correct value was unknown.
In addition to data corrections, the race and time
fields in the assessor data were edited following the
procedures established in the participant biographical
data. Open-ended responses to the racial/ethnic background item were re-coded to a close-ended alternative
whenever possible. In addition, when only the month or
year component in the time fields was missing, the
missing item was coded as zero. When full years were
reported in the month field (e.g., 24 months), the year
field was incremented by the appropriate amount and
the month field re-coded to reflect any remaining time
less than a year.
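The month/year edits described above amount to a small normalization rule. A sketch, assuming integer components with missing values represented as None (the function name and representation are assumptions):

def normalize_time(years, months):
    # If only one component is missing, code the missing item as zero.
    if years is None and months is not None:
        years = 0
    if months is None and years is not None:
        months = 0
    if years is None and months is None:
        return None, None
    # Full years reported in the month field roll over into the year field,
    # leaving any remaining time of less than a year in the month field.
    years += months // 12
    months = months % 12
    return years, months

# e.g., normalize_time(1, 24) -> (3, 0); normalize_time(None, 5) -> (0, 5)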
Since the test sites were instructed to give participants
who were also assessors a participant, rather than assessor, biographical form, data processors also looked for
biographical information on raters among the participant data. Specifically, if an assessor who provided a
CARS for at least one participant did not have an assessor
biographical form, participant biographical data for that
assessor were used, when available.
Criterion Ratings Data. Of all the hard copy data
collected, the CARS data required the most extensive
data checking and editing. Numerous consistency checks
were performed within the CARS dataset itself (e.g.,
duplicate rater/ratee combinations), as well as assessing
its consistency with other datasets (e.g., assessor biographical data). All edits were performed programmatically, with hard copy documentation supporting each
edit maintained in a separate log. Several types of
problems were encountered, as described below.
First, the vast majority of missing or incorrect identification numbers and/or rater/ratee relationships were
corrected by referring back to the hard copy source and/
or other records. In some cases the test site manager was
contacted for assistance. Since the goal was to salvage as
much data as possible, examinee/rater numbers were
filled in or corrected whenever possible by using records
maintained at the sites, such as the Master Roster.
Problems with identification numbers often originated
in the field, although some key-punch errors occurred
despite the double-key procedure. Since examinee number on a CARS record was essential for analytic purposes,
six cases were deleted where examinee number was still
unknown after all avenues of information had been
exhausted.
Second, some raters provided ratings for the same
examinee more than once, producing records with duplicate rater/ratee combinations. In these cases, hard copy
sources were reviewed to determine which rating sheet
the rater had completed first; all ratings produced
subsequently for that particular rater/ratee combination were deleted (see the sketch after this list).
Third, some cases were deleted based on specific
direction from data analysts once the data had been
scrutinized. These included rater/ratee combinations
with more than 3 of the 11 rating dimensions missing,
outlier ratings, ratings dropped due to information in the
Problem Logs, or incorrect assignment of raters to ratees
(e.g., raters who had not observed ratees controlling
traffic). Fourth, CARS items that dealt with the length of
time (months and years) that the rater had worked with
the ratee were edited, so that when only the month or
year component was missing, the missing item was
coded as zero. Where full years were reported in the
month field, the year field was incremented and the
month field re-coded to reflect any remaining time.
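The duplicate-rating rule in the second item above is a simple keep-first de-duplication. A sketch, assuming the records are already in completion order and hypothetical column names:

import pandas as pd

def drop_duplicate_ratings(cars):
    # Retain only the first rating sheet completed for each rater/ratee pair;
    # all ratings produced subsequently for that combination are dropped.
    return cars.drop_duplicates(subset=["rater_id", "ratee_id"], keep="first")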
AT-SAT Database
As stated above, the database management plan called
for a main AT-SAT dataset that could address most
analytic needs, with satellite datasets that could provide
detailed information in specific areas. The AT-SAT
Database, containing data from the alpha and beta tests,
is presented in Figure 5.3.1. To avoid redundancy,
datasets that are completely contained within other
datasets are not presented separately in the AT-SAT
13
The miskeying was often the result of illegible handwriting on the hard copy forms.
The following FAA-applied alphanumeric variables were assigned an SPSS system missing value when the original value
consisted of a blank string: CFAC, FAC, FORM, IOPT, OPT, ROPT, STATSPEC, TTYPE, and @DATE. The following
FAA-supplied variables were dropped since they contained missing values for all cases: REG, DATECLRD, EOD,
FAIL16PF, P_P, and YR.
15
This file also contains scored High Fidelity test data.
Each Beta subdirectory contains data files. In addition, the Final Analytic Summary Data subdirectory
contains a codebook for XFINDAT5.POR. The
codebook consists of two volumes that are stored as
Microsoft Word files CBK1.DOC and CBK2.DOC.
The CBK1.DOC file contains variable information
generated from an SPSS SYSFILE INFO. It also contains a Table of Contents to the SYSFILE INFO for ease
of reference. The CBK2.DOC file contains frequency
distributions for discrete variables, means for continuous data elements, and a Table of Contents to these
descriptive statistics.16
16
Means were generated on numeric FAA-generated historical variables unless they were clearly discrete.
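In the same spirit, a minimal sketch of generating such codebook descriptives (the function name and column lists are hypothetical; the original codebook was produced with SPSS, not Python):

import pandas as pd

def codebook(df, discrete_cols, continuous_cols):
    # Frequency distributions for discrete variables and means for
    # continuous ones, mirroring the CBK2.DOC contents described above.
    sections = {}
    for col in discrete_cols:
        sections[col] = df[col].value_counts(dropna=False)
    for col in continuous_cols:
        sections[col] = df[col].mean()
    return sections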
CHAPTER 5.4
BIOGRAPHICAL AND COMPUTER EXPERIENCE INFORMATION:
DEMOGRAPHICS FOR THE VALIDATION STUDY
Patricia A. Keenan, HumRRO
This chapter presents the demographic characteristics of the participants in both the concurrent validation and the pseudo-applicant samples. The data on
the controller sample are presented first, followed by the
pseudo-applicant information; the latter data are divided
between civilian and military participants. It should be
noted that not all participants answered each question in
the biographical information form, so at times the numbers will vary or cumulative percentages may not total 100%.
CONTROLLER SAMPLE
Participant Demographics
A total of 1,232 FAA air traffic controllers took part
in the concurrent validation study. Of these, 912 were
male (83.7%) and 177 were female (16.3%); 143
participants did not specify their gender, so their data are not reflected in gender-based analyses. The majority of the
data was collected in 1997. A supplementary data collection was conducted in 1998 to increase the minority
representation in the sample. A total of 1,081 controllers
participated in the 1997 data collection; 151 additional
controllers participated in 1998. Table 5.4.4 shows the
cross-tabulation of race and gender distribution for the
1997 and 1998 samples, as well as the combined numbers across both years. 143 individuals did not report
their gender and 144 did not report their race. These
individuals are not reflected in Table 5.4.4. The average
age of the controllers was 37.47 (SD = 5.98), with ages
ranging from 25 to 60 years. The mean was based on
information provided by 1,079 of the participants; age
could not be calculated for 153 participants.
Also of interest was the educational background of the
controllers. Table 5.4.5 shows the highest level of education achieved by the respondents. No information on
education was provided by 145 controllers.
TOTAL SAMPLE
Participant Demographics
A total of 1,752 individuals took part in the study
(incumbents and pseudo-applicants); 1,265 of the participants were male (72.2%) and 342 were female
(19.5%). 145 participants did not indicate their gender;
149 did not identify their ethnicity. The cross-tabulation of ethnicity and gender, presented in Table 5.4.1,
represents only those individuals who provided complete information about both their race and gender.
The sample included incumbent FAA controllers,
supervisors and staff (Controller sample) as well as
pseudo-applicants from Keesler Air Force Base (Military
PA sample) and civilian volunteers from across the
country (Civilian PA sample). The pseudo-applicants
were selected based on demographic similarity to expected applicants to the controller position. The estimated average age of the total sample was 33.14 years
(SD = 8.43). Ages ranged from 18 to 60 years. This
number was calculated based on the information from
1,583 participants; 169 people did not provide information about their date of birth and were not included in
this average.
Participants were asked to identify the highest level of
education they had received. Table 5.4.2 presents a
breakdown of the educational experience for all participants. (151 people did not provide information about
their educational background.) The data were collected
at 18 locations around the U.S. Table 5.4.3 shows the
number of participants who tested at each facility.
Professional Experience
The controllers represented 17 en route facilities. The
locations of the facilities and the number of controller
participants at each one are shown in Table 5.4.6. A total
of 1,218 controllers identified the facility at which they
were assigned; 14 did not identify their facility.
One goal of the study was to have a sample composed
of a large majority of individuals with air traffic experience, as opposed to supervisors or staff personnel. For
this reason, participants were asked to identify both their
current and previous positions. This would allow us to
identify everyone who had current or previous experience in air traffic control. Table 5.4.7 indicates the
average number of years the incumbents in each job
category had been in their current position. 142 controllers did not indicate their current position. The air traffic
controller participant sample included journeyman controllers, developmental controllers, staff and supervisors,
as well as individuals holding several other positions. These other
positions included jobs described as "Traffic Management Coordinator."
Overall, the participants indicated they had spent an
average of 4.15 years in their previous position. These
positions included time as journeyman controller, developmental controller, staff, supervisor or other position.
Those responding Other included cooperative education students, Academy instructors, and former Air
Force air traffic controllers.
One goal of the biographical information form was to
get a clear picture of the range and length of experience
of the participants in the study. To this end they were
asked the number of years and months as FPL, staff, or
supervisor in their current facility and in any facility. The
results are summarized in Table 5.4.8. Few of the respondents had been in a staff or supervisory capacity for more
than a few months. Half of the respondents had never
acted in a staff position and almost two-thirds had never
held a supervisory position. The amount of staff experience ranged from 0 to 10 years, with 97.6% of the
participants having less than four years of experience.
The findings are similar for supervisory positions; 99%
of the respondents had seven or fewer years of experience.
This indicates that our controller sample was indeed
largely composed of individuals with current or previous
controller experience.
Also of interest was the amount of time the incumbents (both controllers and supervisors) spent actually
controlling air traffic. Respondents were asked how they
had spent their work time over the past six months and
then to indicate the percentage of their work time they
spent controlling traffic (i.e., plugged-in time) and the
percentage they spent in other job-related activities (e.g.,
crew briefings, CIC duties, staff work, supervisory duties). The respondents indicated that they spent an
average of 72.41% of their time controlling traffic and
23.33% of their time on other activities.
To determine if individual familiarity with computers could influence their scores on several of the tests in
the predictor battery, a measure of computer familiarity
and skill was included as part of the background items.
The Computer Use and Experience (CUE) Scale, developed by Potosky and Bobko (1997), consists of 12 five-point Likert-type items (1 = Strongly Disagree, 2 =
Disagree, 3 = Neither Agree nor Disagree, 4 = Agree, 5 =
Strongly Agree), which asked participants to rate their
knowledge of various uses for computers and the extent
to which they used computers for various reasons. In
addition, 5 more items were written to ask participants
about actual use of the computer for such purposes as
playing games, word processing and using e-mail. The
resulting 17-item instrument is referred to in this report
as the CUE-Plus.
Item Statistics
The means and standard deviations for each item are
presented in Table 5.4.10. The information reported in
the table includes both the Air Traffic Controller participants and the pseudo-applicants. Overall, the respondents show familiarity with computers and use them to
different degrees. Given the age range of our sample, this
is to be expected. As might be expected, they are fairly
familiar with the day-to-day uses of computers, such as
doing word processing or sending email. Table 5.4.11
shows the item means and standard deviations for each
sample, breaking out the civilian and military pseudo-applicant samples and the controller participants. The
means for the samples appear to be fairly similar. Table
5.4.12 shows the inter-item correlations of the CUE-Plus items. All the items were significantly correlated
with each other.
Reliability of Cue-Plus
Using data from 1,541 respondents, the original 12-item CUE Scale yielded a reliability coefficient (alpha) of
.92. The scale mean was 36.58 (SD = 11.34). The CUE-Plus, with 17 items and 1,533 respondents, had a reliability coefficient (alpha) of .94. The scale mean was
51.47 (SD = 16.11). Given the high intercorrelation
between the items, this is not surprising. The item-total
statistics are shown in Table 5.4.13. There is a high
degree of redundancy among the items. The reliability
coefficients for the samples are as follows: controllers, .93,
PSEUDO-APPLICANT SAMPLE
A total of 518 individuals served as pseudo-applicants
in the validation study; 258 individuals from Keesler Air
Force Base and 256 civilians took part in the study. The
racial and gender breakdown of these samples is shown
in Table 5.4.9.
parison group. The differences were very low to moderate, with the absolute value of the range from .04 to .31.
The highest d scores were in the Military PA sample.
Caucasians scored higher than the comparison groups in
all cases except for the Civilian PA, in which AfricanAmericans scored higher than Caucasians.
Factor Analysis
Principal components analysis indicated that CUEPlus had two factors, but examination of the second
factor showed that it made no logical sense. Varimax and
oblique rotations yielded the same overall results. The
item "I often use a mainframe computer system" did not
load strongly on either factor, probably because few
individuals use mainframe computers. The oblique rotation showed an inter-factor correlation of .75. Table
5.4.14 shows the eigenvalues and percentages of variance
accounted for by the factors. The eigenvalues and variance accounted for by the two-factor solution are shown
in Table 5.4.15. The first factor accounts for over half of
the variance in the responses, with the second factor
accounting for only 6%. The last column in Table 5.4.16
shows the component matrix when only one factor was
specified. Taken together, the data suggests that one factor
would be the simplest explanation for the data structure.
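The eigenvalue and percent-of-variance computation underlying these tables comes from a principal components analysis of the item correlation matrix. A minimal sketch (Python/NumPy) on simulated one-factor data rather than the CUE-Plus responses; the rotation step is omitted:

import numpy as np

def pca_eigen(scores):
    """Eigenvalues and percent of variance from a PCA of the item correlations."""
    corr = np.corrcoef(scores, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]   # largest first
    return eigvals, 100 * eigvals / eigvals.sum()

rng = np.random.default_rng(1)
g = rng.normal(size=(500, 1))                  # one common factor
items = g + 0.6 * rng.normal(size=(500, 17))   # 17 correlated items
eigvals, pct = pca_eigen(items)
print(pct[:2].round(1))   # the first component dominates, as in the report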
Summary
All in all, these results show the CUE-Plus to have very
small differences for both gender and race. To the extent
that the instrument predicts scores on the test battery,
test differences are not likely to be attributable to computer familiarity.
PERFORMANCE DIFFERENCES
Gender Differences
The overall mean for the CUE-Plus was 51.31 (SD
= 16.09). To see whether males performed significantly
different than females on the CUE-Plus, difference
scores were computed for the different samples. The
difference score (d) is the standardized mean difference
between males and females. A positive value indicates
superior performance by males. The results are reported
in Table 5.4.16. For all samples, males scored higher on
the CUE (i.e., were more familiar with or used computers for a wider range of activities), but at most, these
differences were only moderate (.04 to .42).
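A sketch of the d computation follows (Python; the group scores are simulated, and pooled-SD standardization is one common convention for d, which is assumed here):

import numpy as np

def d_score(focal, reference):
    """Standardized mean difference; positive => reference group scores higher."""
    n_r, n_f = len(reference), len(focal)
    pooled_sd = np.sqrt(((n_r - 1) * reference.var(ddof=1) +
                         (n_f - 1) * focal.var(ddof=1)) / (n_r + n_f - 2))
    return (reference.mean() - focal.mean()) / pooled_sd

rng = np.random.default_rng(2)
males = rng.normal(52.0, 16.0, 700)      # illustrative CUE-Plus totals
females = rng.normal(49.5, 16.0, 700)
print(round(d_score(focal=females, reference=males), 2))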
Ethnic Differences
Performance differences on the CUE-Plus between
ethnic groups were also investigated. The means, standard deviations and difference scores (d) for each group
are presented in Table 5.4.17. The table is split out by
sample type (e.g., Controller, Military PA, Civilian PA).
Comparisons were conducted between Caucasians and
three comparison groups: African-Americans, Hispanics, and all non-Caucasian participants. A positive value
indicates superior performance by Caucasians; a negative value indicates superior performance by the comparison group. The differences were very low to moderate, with the absolute value of the range from .04 to .31. The highest d scores were in the Military PA sample. Caucasians scored higher than the comparison groups in all cases except for the Civilian PA, in which African-Americans scored higher than Caucasians.
The Letter Factory test scores on Situational Awareness and Planning and Thinking Ahead are highly correlated with the individual CUE-Plus items for the
pseudo-applicants, while the controllers' Planning and
Thinking Ahead scores were more often correlated with
the CUE-Plus items than were their Awareness scores.
One explanation for these high correlations is that the
more comfortable one is with various aspects of using a
computer, the more cognitive resources can be allocated for
planning. When the use of the computer is automatic,
more concentration can be focused on the specific task.
The Time-Wall perception scores (Time Estimate
Accuracy and Perceptual Accuracy) are highly correlated
with the individual CUE items for the pseudo-applicants and correlated to a lesser extent for the controllers.
The reverse is true for the Perceptual Speed variable: the
controller scores are almost all highly correlated with
CUE-Plus items, while only two of the items are correlated for the pseudo-applicants. The Time-Wall test will
not be included in the final test battery, so this is not a
consideration as far as fairness is concerned.
Using a mainframe computer correlated with only
one of the test battery scores for the controller sample,
but correlated highly with several test scores for the
pseudo-applicants. The fact that controllers use mainframes in their work probably had an effect on their
correlations.
Regression Analyses
Regression analyses were conducted to investigate the
extent to which the CUE-Plus and four demographic
variables predict test performance. The dependent variables predicted were the measures that are used in the test
battery. Dummy variables for race were calculated, one
to compare Caucasians and African-Americans, one to
compare Hispanics to Caucasians, and the third to
compare all minorities to Caucasians. Those identified
as Caucasian were coded as 1, members of the comparison groups were coded as 0. 1,497 cases were analyzed.
Thus, five variables were used in each regression analysis: one of the three race variables, education, age, gender, and score on CUE-Plus.
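A minimal sketch of one such equation (here a single race dummy plus education, age, gender, and CUE-Plus, fit by ordinary least squares; the data and coefficients below are simulated for illustration, not the study's):

import numpy as np

rng = np.random.default_rng(3)
n = 1497
race = rng.integers(0, 2, n)        # 1 = Caucasian, 0 = comparison group
educ = rng.integers(12, 21, n)      # years of education
age = rng.integers(20, 56, n)
gender = rng.integers(0, 2, n)      # 1 = male
cue = rng.normal(51, 16, n)         # CUE-Plus total
score = 0.3 * cue + 2 * race + 0.5 * educ - 0.2 * age + 3 * gender + rng.normal(0, 10, n)

X = np.column_stack([np.ones(n), race, educ, age, gender, cue])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print(dict(zip(["intercept", "race", "educ", "age", "gender", "cue"], beta.round(3))))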
Analogy Test
Age was a fairly consistent predictor for the Information Processing (see Table 5.4.27) and Reasoning variables (see Table 5.4.28), although it did not predict
Reasoning performance in the Caucasian/Minority and
Caucasian/African-American equations. Education was
a negative predictor for Information Processing, but was
positively related to Reasoning. CUE-Plus was a predic-
Applied Math
The variables described above were entered as predictors for the total number of items correct. For all three
comparisons, all variables were included in the final
model. That model accounted for approximately 20% of
the variance for all three comparisons. Gender was the
Dials Test
The number of items correct on the Dials test was
predicted by gender, education, race and CUE-Plus.
Table 5.4.29 shows the statistics associated with the
analysis. Males are predicted to score higher than females; those with higher education are predicted to
perform better on the test than those with less education.
Race was positively related with Dials scores, indicating
that Caucasians tended to score higher than their comparison groups. CUE-Plus was a significant, but weak
predictor for the Caucasian/Minority and Caucasian/
African-American models. It did not predict performance in the Caucasian/Hispanic model. The four
variables accounted for between 8% and 10% of the
variance in Dials test performance.
SUMMARY
This chapter described the participants in the AT-SAT validation study. The participants represented both
genders and the U.S. ethnicities likely to form the pool
of applicants for the Air Traffic Controller position.
In addition to describing the demographic characteristics of the sample on which the test battery was validated, this chapter also described a measure of computer
familiarity, CUE. CUE was developed by Potosky and
Bobko (1997) and revised for this effort (CUE-Plus).
The CUE-Plus is a highly reliable scale (alpha = .94); factor analysis indicated that there was only one interpretable factor. Analysis of the effect of gender on CUE-Plus scores showed moderate differences for the controller
sample, none for the pseudo-applicant sample; males
scored higher on the CUE-Plus than did females. There
were also small to moderate differences in CUE-Plus for
ethnicity. The strongest differences were found in the
military pseudo-applicant sample.
CUE-Plus items showed a moderate to high correlation with the variables assessed in the validation study.
The CUE-Plus was also shown to be a fairly weak but
consistent predictor of performance on the variables that
were included in the Version 1.0 test battery. Although there were some performance differences attributable to gender, race, and computer experience, none of these were extremely strong. The effects of computer skill would be
washed out by recruiting individuals who have strong
computer skills.
CHAPTER 5.5
PREDICTOR-CRITERION ANALYSES
Gordon Waugh, HumRRO
Overview of the Predictor-Criterion Validity
Analyses
The main purpose of the validity analyses was to
determine the relationship of AT-SAT test scores to air
traffic controller job performance. Additional goals of
the project included selecting tests for the final AT-SAT
battery, identifying a reasonable cut score, and developing an approach to combine the various AT-SAT scores into a single final score. Several steps were
performed during the validity analyses:
Zero-Order Validities
It is important to know how closely each predictor
score was related to job performance. Only the predictor
scores related to the criteria are useful for predicting job
performance. In addition, it is often wise to exclude tests
from a test battery if their scores are only slightly related
to the criteria. A shorter test battery is cheaper to develop,
maintain, and administer and is more enjoyable for the
examinees.
Therefore, the zero-order correlation was computed
between each predictor score and each of the three
criteria (CBPM, Ratings, and Composite). Because some
tests produced more than one score, the multiple correlation of each criterion with the set of scores for each
multi-measure test was also computed. This allowed the
assessment of the relationship between each test, as a
whole, and the criteria. These correlations are shown in
Table 5.5.1 below.
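A sketch of both computations on simulated data (Python/NumPy; the multiple correlation is obtained here as the correlation between the criterion and its least-squares prediction from a test's set of scores, which is one standard way to compute it):

import numpy as np

def multiple_r(criterion, predictors):
    """Multiple correlation of a criterion with the set of scores from one test."""
    X = np.column_stack([np.ones(len(criterion)), predictors])
    beta, *_ = np.linalg.lstsq(X, criterion, rcond=None)
    return np.corrcoef(X @ beta, criterion)[0, 1]

rng = np.random.default_rng(4)
scores = rng.normal(size=(800, 2))                       # two scores from one test
cbpm = scores @ np.array([0.4, 0.3]) + rng.normal(size=800)
zero_order = [np.corrcoef(scores[:, j], cbpm)[0, 1] for j in range(2)]
print(np.round(zero_order, 2), round(multiple_r(cbpm, scores), 2))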
Ideally, we would like to know the correlation between the predictors and the criteria among job applicants. In this study, however, we did not have criterion information for the applicants (we did not actually use
real applicants but rather pseudo-applicants). That would
require a predictive study design. The current study uses
w_i = R · r_i    [Equation 5.5.1]

The effects of using various weighting schemes are shown in Table 5.5.3. The table shows the validities both before and after correcting for shrinkage and range restriction. Because the regression procedure fits an equation to a specific sample of participants, a drop in the validity is likely when the composite predictor is used in the population. The amount of the drop increases as sample size decreases or the number of predictors increases. The correction for shrinkage attempts to estimate the amount of this drop. The formula used to estimate the validity corrected for shrinkage is referred to by Carter (1979) as Wherry (B) (Wherry, 1940). The formula is:

adjusted R^2 = 1 - (1 - R^2)(n - 1)/(n - k - 1)    [Equation 5.5.5]

Predictor Composite

The predictor composite was computed using the combined predictor weights described above. Before applying the weights, the predictor scores had to be transformed to a common metric. Thus, each predictor was standardized according to the pseudo-applicant sample. That is, a predictor's transformed score was computed as a z-score according to the following formula:

z = (x - mean_p) / SD_p    [Equation 5.5.3]

where mean_p and SD_p are the predictor's mean and standard deviation in the pseudo-applicant sample. The composite predictor was then computed as the weighted sum

C = sum_{i=1}^{k} w_i x_i    [Equation 5.5.4]

where k = the number of predictors, w_i = the raw-score weight of the ith predictor, and x_i = the raw score of the ith predictor.
As noted above, the final AT-SAT score was computed using the Combined method of weighting the
predictors. Only the regression method had a higher
validity. In fact, the Combined method probably has a
higher validity if we consider that its correction for
shrinkage overcorrects to some extent. Finally, the regression-weighted validity is based on all 35 scales
whereas the Combined validity is based on just 26
tests. Thus, the Combined weighting method produces the best validity results.
The Combined method produced the second-best
results in terms of mean group differences and fairness.
Only the Optimal low d-score weighting method had
better results in these areas, and its validity was much
lower than the Combined method's validity. None of the
weighting methods produced a statistically significant
difference in standardized regression slopes among the
groups. Thus, the Combined weighting method was the
best overall. It had the highest validity and the second-best results in terms of group differences and fairness.
Therefore, the Combined weighting method was used to
compute the final AT-SAT battery score.
than using no screening. That is, if all of the pseudo-applicants were hired (or some were randomly selected to
be hired), their performance level would be much lower
than the current Controllers.
select applicants much above the mean of current controllers. In the past, of course, the OPM test was combined with a nine-week screening program resulting in
current controller performance levels. The AT-SAT is
expected to achieve about this same level of selectivity
through the pre-hire screening alone.
Table 5.5.6 shows the percent of high performers
expected for different cutpoints on the AT-SAT and
OPM batteries. This same information is shown graphically in Figure 5.5.2. Here, high performance is defined
as the upper third of the distribution of performance in
the current workforce as measured by our composite
criterion measure. If all applicants scoring 70 or above on
the AT-SAT are selected, slightly over one-third would
be expected to be high performers. With slightly greater
selectivity, taking only applicants scoring 75.1 or above,
the proportion of high performers could be increased to
nearly half. With a cutscore of 70, it should be necessary
to test about 5 applicants to find each hire. At a cutscore
of 75.1, the number of applicants tested per hire goes up
to about 10. By comparison, 1,376 applicants would
have to be tested for each hire to obtain exactly one-third
high performers using the OPM screen.
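A sketch of how an expectancy computation of this kind can be carried out (Python/NumPy on simulated predictor and criterion scores; "high performer" is defined as the upper third of the criterion distribution, as in the text, and the resulting percentages are illustrative only):

import numpy as np

def expectancy(pred, crit, cut):
    """Percent of selectees who are high performers, and applicants tested per hire."""
    high = crit >= np.percentile(crit, 100 * 2 / 3)    # upper third = high performer
    selected = pred >= cut
    return 100 * high[selected].mean(), len(pred) / selected.sum()

rng = np.random.default_rng(5)
pred = rng.normal(75, 10, 5000)                         # illustrative battery scores
crit = 0.5 * (pred - 75) / 10 + rng.normal(size=5000)   # criterion related to predictor
for cut in (70.0, 75.1):
    pct, per_hire = expectancy(pred, crit, cut)
    print(cut, round(pct, 1), round(per_hire, 1))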
CHAPTER 5.6
ANALYSES OF GROUP DIFFERENCES
AND
FAIRNESS
INTRODUCTION
A personnel selection test may result in differences
between white and minority groups. In order to continue
to use a test that has this result, it is necessary to demonstrate that the test is job-related or valid. Two types of
statistical analyses are commonly used to assess this issue.
The analysis of mean group differences determines the
degree to which test scores differ for a minority group as
a whole (e.g., females, blacks, Hispanics) when compared with its reference group (i.e., usually whites or
males). Fairness analysis determines the extent to which
the relationship between test scores and job performance differs for a minority group compared to its
reference group.
Our sample contained enough blacks and Hispanics to analyze these groups separately but too few
members of other minority groups to include in the
analyses. It was decided not to run additional analyses
with either all minorities combined or with blacks and
Hispanics combined because the results differed considerably for blacks vs. Hispanics. Thus, the following
pairs of comparison groups were used in the fairness
GROUP DIFFERENCES
Analyses
Only the pseudo-applicant sample was used for the
group difference analyses. This sample best represented
the population of applicants. Therefore, air traffic controllers were excluded from these analyses.
The Uniform Guidelines on Employee Selection Procedures (Federal Register, 1978, Section 4.D.) state that
evidence of adverse impact exists when the passing rate
for any group is less than four-fifths of the passing rate for
the highest group:
Therefore, the passing rates for each test were computed for all five groups (males, females, whites, blacks,
Hispanics). Then the passing rates among the groups
were compared to see if the ratio of the passing rates fell
below four-fifths. Separate comparisons were done within
the gender groups and within the racial groups. That is,
males and females were compared; and blacks and Hispanics were compared to whites.
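A sketch of the four-fifths computation (Python; the passing counts here are invented for illustration, not taken from the study):

def adverse_impact_ratio(pass_focal, n_focal, pass_ref, n_ref):
    """Ratio of the focal group's passing rate to the reference group's."""
    return (pass_focal / n_focal) / (pass_ref / n_ref)

ratio = adverse_impact_ratio(pass_focal=60, n_focal=100, pass_ref=90, n_ref=110)
print(round(ratio, 2), "below four-fifths" if ratio < 0.8 else "at or above four-fifths")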
The Uniform Guidelines (Section 4.D.) state that adverse impact might exist even if the passing rate for the minority group is greater than four-fifths of the reference group's passing rate:
The predictor.
The group (a nominal dichotomous variable which
indicates whether the person is in the focal or reference
group). If this independent variable is significant, it
indicates that, if a separate regression were done for each
of the two groups, the intercepts of the regression lines
would be significantly different. Because the predictors
in this study were rescaled for these analyses such that the
intercepts occurred at the cut scores, a difference in
intercepts means that the two regression lines are at
different elevations at the cut score. That is, they have
different criterion scores at the predictor's cut score.
The predictor by group interaction term. This is the
product of group (i.e., 0 or 1) and the predictor score. If this
independent variable is significant, it indicates that, if a
separate regression were done for each of the two groups,
the slopes of the regression lines would be significantly
different. The standardized slopes equal the validities.
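A sketch of this moderated regression on simulated data (Python/NumPy; significance tests are omitted, and the interaction column carries the slope-difference test described above):

import numpy as np

rng = np.random.default_rng(6)
n = 600
group = rng.integers(0, 2, n)               # 0 = focal group, 1 = reference group
pred = rng.normal(0, 1, n)                  # predictor rescaled so the cut score is 0
crit = 0.4 * pred + 0.1 * group + rng.normal(size=n)

# Columns: intercept, predictor, group, predictor-by-group interaction.
X = np.column_stack([np.ones(n), pred, group, pred * group])
beta, *_ = np.linalg.lstsq(X, crit, rcond=None)
print("intercept difference:", round(beta[2], 3))   # elevation gap at the cut score
print("slope difference:", round(beta[3], 3))       # validity gap between groups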
FAIRNESS
Analyses
The fairness analyses require analyses of job performance as well as test scores. As a consequence, all fairness
analyses were performed on the concurrent validation
[Equation 5.6.1]
17
Linear regression assumes that the standard deviation of the criterion scores is the same at every predictor score. This is
called homoscedasticity. In practice, this assumption is violated to varying degrees. Thus, in theory, the standard error of
estimate should equal the standard deviation of the criterion scores at the predictor's cut score, and at every other predictor
score as well. In practice, this is only an approximation.
TARGETED RECRUITMENT
DISCUSSION
Although many of the tests, including the final AT-SAT battery score, exhibited differences between
groups, there is no reliable evidence that the battery is
unfair. The fairness analyses show that the regression
slopes are very similar among the groups (white, male,
female, black, Hispanic). There are differences among
the intercepts (at the cut score), but these differences
favor the minority groups. Thus, there is strong evidence that the battery is fair for females, blacks, and
Hispanics. These results show that the test battery is
equally valid for all comparison groups. In addition,
differences in mean test scores are associated with
corresponding differences in job performance measures. For all groups, high test scores are associated
with high levels of job performance and low scores are
associated with lower levels of job performance.
CHAPTER 6
THE RELATIONSHIP
OF
TO
Biographical Questionnaire
Additional information about controller demographics and experience was obtained from data provided by
Academy entrants during the first week they attended
one of the Academy screening programs and obtained
from the Consolidated Personnel Management Information System (CPMIS). New entrants completed a
Biographical Questionnaire (BQ). Different BQ items
were used for those entering the Nonradar Screen Program at various times. The BQ questions concerned the
amount and type of classes taken, grades earned in high
school, amount and type of prior air traffic and/or aviation
experience, reason for applying for the job, expectations
about the job, and relaxation techniques used.
(r = .22). The final score in the Nonradar Screen program was significantly correlated with training times in
both phases of field training and with time to reach FPL
status, but not with either Indication of Performance
measure. The final score in the Nonradar Screen program was also significantly correlated with both AT-SAT criterion measures, although the correlation with
the CBPM (.34) was much higher than the correlation
with the rating composite (.12). The final score in the
Radar Training program was also significantly correlated
with training times, and was significantly correlated with
the Indication of Performance for initial radar training.
It was also significantly correlated with both the AT-SAT rating composite (.17) and the CBPM score (.21).
Table 6.3 shows correlations of the performance-based components of the archival selection procedures
(Nonradar Screen program and Radar Training program) with both the archival and AT-SAT criterion
measures. The correlations at the top of the table are
intercorrelations between archival selection procedure
components. Of the OPM component scores, only the
Abstract Reasoning Test and the MCAT were significantly correlated.
Correlations of components of the OPM battery with
component scores from the Nonradar Screen program
and the Radar Training program were fairly low, although some statistically significant correlations with
scores from the laboratory phases were observed. The
MCAT was significantly correlated with Instructor Assessment and Technical Assessment from both the
Nonradar Screen and Radar Training programs, and was
significantly correlated with the Nonradar CST. Abstract Reasoning was significantly correlated with only
the nonradar Average Technical Assessment and the
nonradar CST. The OKT had a small but statistically
significant correlation with the Nonradar Average Instructor Assessment.
The correlation between the Average Instructor Assessment and Average Technical Assessment from each
course was very high (.79 and .83, for the Nonradar
Screen program and Radar Training program, respectively). Across programs, the Average Instructor Assessment and Average Technical Assessment had significant
correlations that ranged between about .02 and .35. The
Controller Skills Tests for both courses had significant
correlations with the Nonradar Average Technical and
Average Instructor Assessment. While the Nonradar
CST was significantly correlated with the Radar Average
Instructor and Technical Assessments, the Radar CST
of the AT-SAT predictor tests, especially those involving dynamic activities. The Abstract Reasoning test
had a particularly high correlation with the Analogies
Reasoning score, but was also correlated with other
AT-SAT predictors.
Other tests, administered experimentally to air traffic
control candidates between the years of 1981 and 1995,
provided additional support for the construct validity of
AT-SAT predictor tests. For example, the Math Aptitude Test from the ETS Factor Reference Battery
(Ekstrom et al., 1976), the Dial Reading Test, and a
biographical item reporting high school math grades
(which was previously shown to be correlated with
success in the Nonradar Screen program) had high
correlations with the Applied Math Test. The Angles
and Dials tests were also correlated with Dial Reading,
Math Aptitude, and the biographical item reporting
high school math grades. These results are not surprising,
considering the consistent relationship, observed over
years of research, between aptitude for mathematics and
various measures of performance in air traffic control.
Finally, a multiple linear regression analysis was conducted which showed that several of the AT-SAT tests
contributed to the prediction of the variance in the AT-SAT composite criterion measure over and above the
OPM rating and the final score in the Nonradar Screen
program. The OPM battery and Nonradar Screen program provided an effective, though expensive, two-stage
process for selecting air traffic controllers that was used
successfully for many years. It appears that the AT-SAT
battery has equivalent, or better, predictive validity than
did the former selection procedure, and costs much less
to administer. Thus, it should be an improvement over
the previous selection process.
To maintain the advantage gained by using this new
selection procedure, it will be necessary to monitor its
effectiveness and validity over time. This will require
developing parallel forms of the AT-SAT tests, conducting predictive validity studies, developing and validating
new tests against criterion measures of ATC performance,
and replacing old tests with new ones if the former become
compromised or prove invalid for any reason.
Federal Aviation
Administration
DOT/FAA/AM-13/3
Office of Aerospace Medicine
Washington, DC 20591
The Validity of the Air Traffic
Selection and Training (AT-SAT)
Test Battery in Operational Use
Dana Broach
Cristina L. Byrne
Carol A. Manning
Linda Pierce
Darendia McCauley
M. Kathryn Bleckley
Federal Aviation Administration
Civil Aerospace
Medical Institute
Federal Aviation Administration
Oklahoma City, OK 73125
March 2013
Final Report
NOTICE
This document is disseminated under the sponsorship
of the U.S. Department of Transportation in the interest
of information exchange. The United States Government
assumes no liability for the contents thereof.
___________
This publication and all Office of Aerospace Medicine
technical reports are available in full-text from the Civil
Aerospace Medical Institute's publications Web site:
www.faa.gov/go/oamtechreports
Technical Report Documentation Page
1. Report No.: DOT/FAA/AM-13/3
4. Title and Subtitle: The Validity of the Air Traffic Selection and Training (AT-SAT) Test Battery in Operational Use
5. Report Date: March 2013
7. Author(s)
16. Abstract
Applicants for the air traffic control specialist (ATCS) occupation from the general public and graduates from
post-secondary institutions participating in the FAA's Air Traffic Collegiate Training Initiative (AT-CTI) must
take and pass the Air Traffic Selection and Training (AT-SAT) test battery as part of the selection process. Two
concurrent, criterion-related validation studies demonstrated that AT-SAT was a valid predictor of ATCS job
performance (American Institutes for Research, 2012; Ramos, Heil, & Manning, 2001a,b). However, the
validity of AT-SAT in operational use has been questioned since implementation in 2002 (Barr, Brady, Koleszar,
New, & Pounds, 2011; Department of Transportation Office of the Inspector General, 2010). The current
study investigated the validity of AT-SAT in operational use.
Method. AT-SAT and field training data for 1,950 air traffic controllers hired in fiscal years 2007 through 2009
were analyzed by correlation, cross-tabulation, and logistic regression with achievement of Certified Professional
Controller (CPC) status as the criterion.
Results. The correlation between AT-SAT and achievement of CPC status was .127 (n=1,950, p<.001). The
correlation was .188 when corrected for direct restriction in range. A larger proportion of controllers in the Well
Qualified score band (85-100) achieved CPC status than in the Qualified (70-84.99) band. The logistic
regression model did not fit the data well (χ2=30.659, p<.001, -2LL=1920.911). AT-SAT modeled only a small
proportion of the variance in achievement of CPC status (Cox and Snell R2=.016, Nagelkerke R2=.025). The
logistic regression coefficient for AT-SAT score of .049 was significant (Wald=30.958, p<.001).
Discussion. AT-SAT is a valid predictor of achievement of CPC status at the first assigned field facility.
However, the correlation is likely attenuated by time and intervening variables such as the training process itself.
Other factors might include the weighting of subtest scores and use of a narrow criterion measure. Further
research on the validity of AT-SAT in relation to multiple criteria is recommended.
17. Key Words
19. Security Classif. (of this report): Unclassified
20. Security Classif. (of this page): Unclassified
21. No. of Pages: 14
22. Price
ACKNOWLEDGMENTS
Research reported in this paper was conducted under the Air Traffic Program Directive/
Level of Effort Agreement between the Human Factors Research and Engineering Division
(ANG-C1), FAA Headquarters, and the Aerospace Human Factors Research Division (AAM-500) of the FAA Civil Aerospace Medical Institute.
The opinions expressed are those of the authors alone, and do not necessarily reflect
those of the Federal Aviation Administration, the Department of Transportation, or the Federal
government of the United States.
Correspondence concerning this report should be addressed to Dana Broach, Aerospace
Human Factors Research Division (AAM-500), P.O. Box 25082, Oklahoma City, OK 73125.
E-mail: [email protected]
CONTENTS
The Validity of the Air Traffic Selection and Training (AT-SAT) Test Battery in Operational Use-------- 1
Background ------------------------------------------------------------------------------------------------------------ 2
Method -------------------------------------------------------------------------------------------------------------------- 2
Sample ------------------------------------------------------------------------------------------------------------------ 2
Measures ---------------------------------------------------------------------------------------------------------------- 3
Analyses----------------------------------------------------------------------------------------------------------------- 4
Results---------------------------------------------------------------------------------------------------------------------- 5
Discussion ----------------------------------------------------------------------------------------------------------------- 6
References----------------------------------------------------------------------------------------------------------------- 8
for range restriction or criterion unreliability. With correction for incidental range restriction, the correlation was .68
(Waugh, 2001). The second concurrent criterion-related
validation study was conducted by the American Institutes
for Research (AIR; 2012). The current operational version
of AT-SAT was administered to 302 incumbent air traffic
control tower (ATCT) controllers. As in the original en
route validation study, two classes of job performance data
were collected: Behavioral Summary Scale (BSS) ratings of
job performance by peers and supervisors; and performance
on the Tower Computer-Based Performance Measure (see
Horgen, et al., 2012). The correlation between an optimally weighted composite of AT-SAT subtest scores and the composite of the two criterion measures was .42 without any
corrections (AIR, p. 47). These two studies independently
demonstrated that AT-SAT is a valid predictor of ATCS
job performance. The current study develops a third line
of evidence for the validity of AT-SAT by investigating the
degree to which achievement of CPC status at the first field
facility can be predicted from AT-SAT scores.
METHOD
Sample
The sample for this study consisted of air traffic
controllers hired in fiscal years 2007-2009. Sufficient time
has elapsed for most persons hired in these fiscal years to
complete the field training sequence, averaging two to three
years. To identify the sample, records were extracted from
the Air Traffic Organization's Air Traffic Controller National
Training Database (ATC NTD) and matched with AT-SAT
examination records at the individual level. The ATC NTD
contains data for persons who reported to a field facility for
on-the-job training (OJT); data for persons who failed or
withdrew from FAA Academy training and did not enter
OJT at a field facility are not in the NTD. The ATC NTD
contained records for 11,450 new hires at field facilities as
of July 2012, of which 6,941 were for general public or
CTI hires. This pool was reduced to 6,865 records after
screening for complete identifiers and duplicates. These
records were then filtered by fiscal year of entry-on-duty
and valid AT-SAT scores, resulting in a sample of 2,569
first facility training records for new controllers. Records
for new hires who left the field facility training for other
reasons (unrelated to performance, per NTD; n=160), who
requested transfer prior to completion of facility training
(n=156), or who were still in facility training (n=303) were
dropped, leaving a total of 1,950 records for analysis.
All of the controllers in the sample had been hired
under vacancy announcements open to the general public
and AT-CTI graduates. Most (69%) were hired under a general public announcement. The sample was predominantly
Table 1. Demographic characteristics and descriptive statistics
Characteristic                        Applicants (N=15,173)   Sample (N=1,950)
Race/National Origin (RNO) Group
  Asian                               464 (3.1%)              45 (2.3%)
  Black                               3,039 (20.0%)           175 (9.0%)
  Hawaiian-Pacific Island             77 (0.5%)               6 (0.3%)
  Hispanic-Latino                     814 (5.4%)              65 (3.3%)
  Native American-Alaskan Native      63 (0.4%)               10 (0.5%)
  White                               8,906 (58.7%)           1,173 (60.2%)
  Multi-racial(1)                     1,059 (7.0%)            102 (5.2%)
  No RNO group(s) marked              738 (4.9%)              96 (4.9%)
  Missing data                        13 (0.1%)               278 (14.3%)
Sex
  Female                              3,449 (22.7%)           307 (15.7%)
  Male                                11,127 (73.3%)          1,330 (68.2%)
  Missing data                        597 (3.9%)              313 (16.1%)
Age Mean (SD)                         25.2 (3.25)             25.2 (2.84)
AT-SAT Mean (SD)                      85.87 (9.39)            90.99 (6.27)
Notes:
Subtest                               Description
Dials (DI)                            Scan and interpret readings from a cluster of analog instruments
Applied Math (AM)                     Solve basic distance, rate, and time problems
Scan (SC)                             Scan dynamic display to detect targets that change over time
Angles (AN)                           Determine interior and exterior angles of intersecting lines
Letter Factory (LF)                   Manage virtual production line, box products, perform quality control
Air Traffic Scenarios Test (ATST)     Direct aircraft to destination in low-fidelity radar simulation
Analogies (AY)                        Solve verbal and non-verbal analogies
Experience Questionnaire (EQ)         Life experiences, preferences, and typical behavior in situations
Table 3. Training outcome at first field facility as coded in the NTD
135 (6.0%)
22 (1.0%)
10 (0.4%)
5 (0.2%)
4 (0.2%)
2 (0.1%)
212 (9.4%)
1,560 (69.2%)
Analyses
Three analyses were conducted. First, the simple Pearson product-moment correlation between AT-SAT score
and field training outcome (achievement of CPC status)
at the first assigned field facility was computed, without
corrections for direct range restriction on the predictor
or criterion unreliability. This raw correlation provides a
conservative, lower-bound estimate of AT-SAT's validity as
a predictor of field training outcome. The correlation was
then corrected for direct range restriction on the predictor
(AT-SAT) using the Ghiselli, Campbell, and Zedeck (1981,
p. 299) equation 10-12. The corrected correlation provides
a less biased estimate of AT-SAT's validity as a predictor of field training outcome. No correction for criterion
unreliability was made. Second, a 2-by-2 (AT-SAT score
band [Qualified, Well Qualified] by first facility training
outcome [Not CPC, CPC]) χ2 analysis was conducted.
The odds of certifying by score band were estimated. Third,
logistic regression was used to model the relationship of
AT-SAT score to achievement of CPC status at the first
field facility. The odds of certifying by AT-SAT score were
estimated from the logistic regression equation. All analyses
were conducted using SPSS version 20.
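The standard correction for direct range restriction on the predictor reproduces the corrected value reported in this study when applied to the standard deviations in Table 1; whether this is exactly the Ghiselli et al. equation 10-12 as printed is an assumption. A minimal sketch in Python:

import math

def correct_direct_range_restriction(r, sd_unrestricted, sd_restricted):
    """Correct a correlation for direct range restriction on the predictor."""
    u = sd_unrestricted / sd_restricted
    return (r * u) / math.sqrt(1 + r * r * (u * u - 1))

# AT-SAT SDs from Table 1: applicants 9.39, analyzed sample 6.27; observed r = .127.
print(round(correct_direct_range_restriction(0.127, 9.39, 6.27), 3))   # ~.188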
RESULTS
Table 4. Cross-tabulation of AT-SAT score band by field training outcome (expectancy table)
Score band totals: Qualified, 261; Well Qualified, 1,681; Overall, 1,942.

Observed Outcome       Predicted Unsuccessful    Predicted CPC    % Correct
Unsuccessful                  217                     173           55.6%
CPC                           656                     904           57.9%
Overall %                                                           57.5%
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 253 of 342 Page ID
#:448
Figure 1. Odds of achieving CPC at the first field facility by AT-SAT composite score (x-axis: AT-SAT weighted composite score; y-axis: odds of CPC at first facility)
DISCUSSION
Nevertheless, the logistic regression coefficient for AT-SAT score of .049 was significant (Wald=30.958, p <.001).
The odds of certifying at the first assigned field facility were
computed from the logistic regression equation as a function
of AT-SAT score (Figure 1; see Norusis, 1990, pp. 49-50).
A new hire with an AT-SAT score of 70 had slightly better than even odds (1.5 to 1) of achieving CPC status. In
comparison, a new hire with an AT-SAT score of 85 had
slightly better than 3-to-1 odds of achieving CPC status.
In other words, new hires with higher AT-SAT scores had
better odds of achieving CPC status at the first field facility
than new hires with lower AT-SAT scores.
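A sketch of the odds computation follows (Python); the slope of .049 is from the report, while the intercept below is backed out from the stated odds of about 1.5 at a score of 70 and is therefore an assumption, since the intercept itself is not reported:

import math

B0, B1 = -3.02, 0.049   # B1 from the report; B0 implied by odds of ~1.5 at score 70

def odds_cpc(at_sat_score):
    """Odds of achieving CPC status implied by the logistic regression equation."""
    return math.exp(B0 + B1 * at_sat_score)

for score in (70, 85, 100):
    print(score, round(odds_cpc(score), 2))   # ~1.5 at 70 and ~3.1 at 85, as reported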
The uncorrected correlation between AT-SAT and
achievement of CPC status in this study was small in
Cohen's (1988) frequently cited categorization of effect sizes.
In comparison, Bertua, Anderson, and Salgado (2005) reported average uncorrected correlations from .15 to .30
between various types of cognitive ability tests and criterion
measures. Other point estimates of the validity of cognitive
ability tests range from .29 to .51 (Bobko, Roth, & Potosky,
1999; Hunter & Hunter, 1984; Schmidt & Hunter, 1998).
In another meta-analysis, Robbins et al. (2004) reported an
average correlation of .121 between college admissions test
(ACT, SAT) scores and retention in 4-year college programs.
While the AT-SAT correlation with field training outcome
was low, it is within the range of values reported for other
cognitively-loaded selection instruments.
Moreover, AT-SAT predicted achievement of CPC status
several years after testing despite many intervening variables.
Both time and intervening variables attenuate predictorcriterion relationships (Barrett, Caldwell, & Alexander, 1989;
Barrett, Alexander, & Doverspike, 1992; Beier & Ackerman,
2012; Murphy, 1989; Van Iddekinge & Ployhart, 2008). The
average time between testing and completion of field training
or loss was 34 months (SD=10.9 months). It might also be
the case that not all of the field attrition was due to lack of
aptitude. For example, losses might be due to economic factors such as a lack of affordable housing and lifestyle factors
(e.g., lengthy commute or the availability of affordable and
flexible childcare). Losses for these reasons are unlikely to
be predictable from an aptitude test. Better information is
needed to understand and categorize losses in field training
for future investigations of the validity of AT-SAT.
Even though the correlation was modest and despite
the intervening variables, AT-SAT as a selection procedure
could have practical utility. ATCS selection is a large-scale,
high-stakes selection process. ATCS training is expensive,
with an estimated cost per developmental of $93,000 per
year (FAA, 2012). Selection of only applicants from the Well
Qualified score band would have increased the net success
rate to 82%, avoiding 77 unnecessary field failures in this
cohort. Reducing the field failures by 77 persons would have
avoided about $7M ($93,000 x 77 persons) in cumulative
lost costs in personnel compensation and benefits for this
sample of new hires.1
In closing, the current study provides additional empirical evidence that AT-SAT is a valid selection procedure
for the ATCS occupation. Persons with higher scores on
AT-SAT were more likely to successfully certify at their first
field facility. Field attrition among developmental controllers has often been framed as a problem in initial selection
and placement. However, only a small proportion of the
variance in achievement of CPC status was explained by
aptitude test scores collected two or three years earlier, as
evidenced by the small correlation between AT-SAT and
CPC status. There are several possible explanations for this
observation. First, achievement of CPC status is a binary
criterion representing minimally acceptable performance
at the completion of training. Binary criteria inherently
limit the value of any correlation as the distribution shifts
away from a 50/50 split (Ghiselli et al., 1981). In contrast,
multiple criterion measures were used in the concurrent,
criterion-related validation studies, measures that encompassed the broad range of controller work behaviors. Those
criterion measures assessed typical job performance on multiple dimensions from peer and supervisor perspectives and
maximal technical job performance on meaningful interval
scales. Further investigation of AT-SAT's validity in relation
to additional criterion measures such as performance in
FAA Academy initial qualifications training, organizational
citizenship behavior, counter-productive work behavior,
job knowledge, and post-CPC technical job performance
is recommended. This will require the development and
collection of psychometrically sound measures of individual
controller job performance. Second, the weights given
to the subtest scores might not be optimal for predicting achievement of CPC status. AT-SAT was originally
weighted to select those whose job performance would be
higher than average; a different weighting approach might
be required to predict CPC status, a far different criterion.
Finer-grained analyses of subtest scores and their weights
are recommended in continuing evaluations. Third, the
relationship of predictor and achievement of CPC status
might be attenuated by time and intervening variables.
Research on the training process itself, as delivered at field
facilities, and investigations into the reasons developmental
controllers do not achieve CPC status are recommended.
Careful attention must be given to the reasons why and when
new controllers leave field training in order to understand
what can be predicted from performance on an aptitude
test battery and what cannot.
1
The actual avoided costs depend on when each individual left field
training. The FAA estimates the cost of training at $93,000/year, or
$7,750/month. If 47 developmental controllers left training after 10
months, 25 at 20 months, and 5 at 30 months, the avoided lost costs
would be (47 x 10 x $7,750) + (25 x 20 x $7,750) + (5 x 30 x $7,750), or
$8,680,000. The $7M figure is a rough order-of-magnitude benchmark
estimate based on the assumption that attrition occurs in the first year.
REFERENCES
Barr, M., Brady, T., Koleszar, G., New, M., & Pounds, J. (September 22, 2011). FAA Independent Review Panel on the selection, assignment and training of air traffic control specialists. Washington, DC: Federal Aviation Administration. http://www.faa.gov/news/updates/media/IRP%20Report%20on%20Selection%20Assignment%20Training%20of%20ATCS%20FINAL%2020110922.pdf
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 256 of 342 Page ID
#:451
Norusis, M. J. (1990). SPSS advanced statistics user's guide. Chicago, IL: SPSS Inc.
Robbins, S. B., Lauver, K., Huy, L., Davis, D., Langley, R., & Carlstrom, A. (2004). Do psychosocial and study skill factors predict college outcomes? A meta-analysis. Psychological Bulletin, 130(2), 261-288.
Schmidt, F. L., & Hunter, J. E. (1998). The validity of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124, 262-274.
Van Iddekinge, C. H., & Ployhart, R. E. (2008). Developments in the criterion-related validation of selection procedures: A critical review and recommendations for practice. Personnel Psychology, 61, 871-925.
Waugh, G. (2001). Analysis of group differences and fairness. In R. A. Ramos, M. C. Heil, & C. A. Manning (Eds.), Documentation of validity for the AT-SAT computerized test battery, Volume II (pp. 43-47) (Report No. DOT/FAA/AM-01/6). Washington, DC: Federal Aviation Administration Office of Aviation Medicine.
EXHIBIT 11
4/13/2014
INFORUM | Fargo, ND
The Forum of Fargo-Moorhead
earned degrees could skip the first five weeks of a 12-week FAA-mandated training session at the Mike
Monroney Aeronautical Center in Oklahoma City.
Now, a person who has no experience in the field can take the course after passing an initial test to measure
things such as one's ability to handle stress.
But Molinaro said it still requires a certain skill set to pass all of the tests and work in ATC.
"We're looking at not just basic knowledge, we're looking at reaction time, working under stress,
multitasking, thinking in three dimensions, things like that," he said.
EXHIBIT 12
4/22/2016
AVIATOR :: Application Status
Application Status
Application Status for Jorge A Rojas on
announcement FAA-ATO-15-ALLSRCE-40166
Thank you for submitting your application for announcement
FAA-ATO-15-ALLSRCE-40166. Based upon your responses to the
Biographical Assessment, we have determined that you are NOT
eligible for this position as a part of the current vacancy
announcement.
The biographical assessment measures ATCS job applicant
characteristics that have been shown empirically to predict
success as an air traffic controller in the FAA. These
characteristics include factors such as prior general and ATC-specific work experience, education and training, work habits,
academic and other achievements, and life experiences among
other factors. This biographical assessment was independently
validated by outside experts.
Application Status Page viewed on: April 22, 2016 5:23 PM (Central Time).
https://jobs.faa.gov/aviator/Modules/MyApplications/ApplyStatus.aspx?vid=40166
EXHIBIT 13
Should you wish to inquire as to the status of your request, please contact the assigned FOIA coordinator(s).
Please refer to the above referenced number on all future correspondence regarding this request.
Sincerely,
Alan Billings
EXHIBIT 14
Jorge Rojas <[email protected]>
Rojas v. FAA (FOIA) - Case No. CV 15-5811 CBM (SS) - Status/26(f) Availability
Medrano, Alarice (USACAC) <[email protected]>
To: Jorge Rojas <[email protected]>
EXHIBIT 15
Federal Aviation
Administration
Memorandum
Date: FEB 11 2016
To:
From:
Subject:
The FAA is evaluating potential replacements for the AT-SAT, which has been used to hire Air Traffic Controllers for the past 14 years. The FAA has engaged APTMetrics, an external consulting firm, to assist the agency in this effort. We are asking randomly selected CPCs,
like you, to voluntarily complete a pilot version of these assessments to help us evaluate their
effectiveness as a future selection tool.
Your individual test data will be accessible only to APTMetrics and the third parties providing the software and proctoring of the assessments. APTMetrics will also require supervisors to complete performance ratings. Your individual performance ratings will only be available to APTMetrics and will be used for validation research purposes only. Neither the individual test results nor the individual performance ratings will be shared with the FAA or
impact the CPCs participating in the process.
Participants will complete the assessments at a nearby testing center run by PSI, an independent third party. The assessment should take approximately 6 hours to complete, including a 15-minute break and a 45-minute lunch break, and will be completed during duty time.
Your Frontline Manager will provide you with several options for dates when you can schedule your participation in the assessment. PSI has assessment centers nationwide. You will be provided with the locations upon scheduling for your assessment. You must bring one form of photo ID with you to the PSI location to securely check into the assessment. Please do not bring additional personal belongings such as backpacks or cell phones. To reiterate, your participation will be during duty time.

Assessments run from January 2016 through March 31, 2016.
Do NOT perform general internet searches in order to determine what PSI sites are
near you or what hours those sites are open. The FAA has made special arrangements
for the project, and you will only get correct information about what sites are available
(and when) by following the instructions above.
Do NOT call the testing sites directly. If you need to contact PSI, please call their
main customer support number: 800-733-9267
If you need additional assistance or have questions, please contact Suzanne Styc, Acting
Director of Resource Enterprise at 202-267-0556.
EXHIBIT 16
R. Lawrence Ashe Jr., Esq., Senior Counsel, Parker, Hudson, Rainer & Dobbs LLP
APTMetrics
APTMetrics I-O psychologists provide employment litigation support and frequently serve as expert
witnesses assisting both defendants' and plaintiffs' counsel in class-action employment discrimination, harassment, and wage-and-hour lawsuits.
Our litigation support services include: examining whether statistical evidence supports the filing of class-action employment-discrimination lawsuits; identifying relevant materials and information that need to be
evaluated to determine whether employment discrimination has taken place; reviewing relevant
documentation to determine whether a test or other employment procedure (e.g., performance appraisal
system) is valid and job-related according to legal and professional guidelines and standards; drafting
questions to be used by lawyers during depositions; conducting job analysis to determine if jobs meet
exemption criteria in wage-hour cases; memorializing findings and conclusions regarding validity evidence
in expert reports; and testifying in court about expert opinions and conclusions.
that leverages our firm's expertise in developing validated and legally defensible selection procedures.
Contact us today to learn how we can help you conduct criminal background checks without breaking the
law.
OFCCP Audit Support
We offer assistance to employers faced with OFCCP compliance evaluations. Our OFCCP audit support
services include: consulting with contractors and their legal counsel to assess risk, reviewing
documentation to determine whether a test or other employment procedure is valid and job-related,
researching adverse impact findings, conducting compensation analyses, establishing the validity of
employment practices, and assisting employment counsel in their negotiations with agency officials.
HR Process Audits
Our HR process audit services are designed to help our client organizations meet and sustain the goal of
providing consistent and fair treatment to their employees. We proactively assess areas where HR
processes can be improved to derive the most value from diverse talent. We use a multi-phase approach in
working with internal or external counsel and our clients' HR departments to make recommendations and
implement improvements to HR processes.
Job Analysis for Wage and Hour Issues
Under wage and hour law, the classification of employees as eligible for overtime (non-exempt) or ineligible
(exempt) is based on the type of tasks employees perform at work.
This focus on work performed makes job analysis the ideal tool for ensuring accurate and legally defensible
decisions regarding exemption status. Job analysis allows for the collection of structured verifiable data to
document work requirements and support exemption decisions. For employers that fail to conduct job
analysis on a proactive basis to make exemption decisions, we also conduct post hoc job analyses to
defend against legal challenges.
Consulting Solutions
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 275 of 342 Page ID
#:470
[Slide - About APTMetrics: a global talent management solutions provider comprised of:
- Ph.D. industrial/organizational psychologists
- Human resource consultants
- Information technology specialists
Professional integrity; evidence-based approach; technical expertise; customer service
Diversity supplier:
- Certified woman-owned business
- Certified woman-owned small business]
Global
Strategies
for Talent
Management.
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 276 of 342 Page ID
#:471
[Slide - Our Areas of Expertise:
- Assessment
- Employee Selection
- Litigation Support
- Diversity Strategy & Measurement
- Job Analysis
- Competency Modeling
- Performance Management
- Staffing for Mergers & Acquisitions
- Organizational Surveys]
Global
Strategies
for Talent
Management.
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 277 of 342 Page ID
#:472
[Slide - Our eSolutions Platform:
- Employee Selection System
- Leadership Assessment Suites
- Job Analysis System
- 360-degree Feedback System
- Organizational Survey System]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 278 of 342 Page ID
#:473
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 279 of 342 Page ID
#:474
[Slide - Our Background with Testing and Litigation:
- Expert witness testimony
- Court-appointed expert in settlement (engagements include Abercrombie & Fitch, Morgan Stanley, and the Coca-Cola Company)
- Invited testimony on testing ...
- Negotiations with ...
- Development and validation of new selection systems]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 280 of 342 Page ID
#:475
[Slide - Polling Question #1: Are you concerned about legal challenges to your tests or interviews?
- Yes
- No
- Don't know]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 281 of 342 Page ID
#:476
[Slide - Definition (of a test; body largely illegible)]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 282 of 342 Page ID
#:477
[Slide - What is a test? (list of assessment types; largely illegible)]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 283 of 342 Page ID
#:478
[Slide - Polling Question #2: Does your company currently use written tests?
- No - not at all
- No - but considering using them in the future
- Yes - for a small number of jobs
- Yes - for a wide range of jobs
- Have no idea!]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 284 of 342 Page ID
#:479
[Section divider - Validity and the Law]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 285 of 342 Page ID
#:480
[Slide - Why Are Tests Challenged?
- Adverse impact
- Validity: none, or inadequate
- Less adverse alternatives
- Inconsistent administration
- Reliance on studies outside the company]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 286 of 342 Page ID
#:481
[Slide - Adverse Impact Definition:
- Disproportionately fewer protected-group applicants pass the test than majority-group applicants
- Usually determined by:
  - the 4/5ths (80%) rule
  - a standard deviation test
- The presence of adverse impact may be a given for many types of tests]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 287 of 342 Page ID
#:482
[Slide - Example: 4 out of 5 whites passed the test - an 80% pass rate]
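For concreteness, the four-fifths arithmetic the slide illustrates can be sketched as follows. This is an illustrative sketch only: the 55 percent comparison-group pass rate is a hypothetical, not a figure from the record.

# Sketch of the 4/5ths (80%) rule illustrated on the slide.
# The protected-group pass rate below is hypothetical, not record data.

def impact_ratio(protected_rate: float, majority_rate: float) -> float:
    """Adverse impact ratio: protected-group selection rate divided by
    the majority (highest) group's selection rate."""
    return protected_rate / majority_rate

majority_rate = 4 / 5        # the slide's example: 4 of 5 whites pass (80%)
protected_rate = 0.55        # hypothetical comparison group

ratio = impact_ratio(protected_rate, majority_rate)
print(f"impact ratio = {ratio:.2f}")             # 0.69
print("adverse impact indicated:", ratio < 0.8)  # True under the 4/5ths rule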
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 288 of 342 Page ID
#:483
[Slide - Trends in Testing Law (partially legible):
- Greater emphasis on validity
- Adverse impact and passing scores:
  - Beyond the 80% rule
  - Practical and statistical significance
  - Sample size issues
- Disparate treatment testing cases:
  - Inconsistent administration
  - Selective use of tests
  - Jury trials and punitive damages
  - Acceptable performance, not applicant flow
- Plaintiffs' burden to identify less adverse alternatives:
  - Must acknowledge ...
  - Must demonstrate it would be less adverse
  - Must demonstrate substantially the same validity]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 289 of 342 Page ID
#:484
[Slide - Employers are successful when:
- Job analyses are conducted
- Procedures have validity
- They document the search for less adverse alternatives]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 290 of 342 Page ID
#:485
[Slide - Plaintiffs are successful when:
- Valid procedures are administered inconsistently
- Cut scores are set too high]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 291 of 342 Page ID
#:486
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 292 of 342 Page ID
#:487
[Slide - Other Survey Results:
- Assuming a job's tasks or work behaviors do not change, or change very little, what is the shelf life of a job analysis? That is, after how much time would one need to update the job analysis, even when there are little or no changes to the major tasks or work behaviors of the job?
  - Average = 5 to 6 years]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 293 of 342 Page ID
#:488
[Slide - Other Survey Results:
- Conditions that shorten the shelf life of a validation study:
  - Changes in the nature of the job duties
  - Conditions that result in the emergence of adverse impact or legal challenges
  - Changes in the applicant population
  - Changes in administration mode
  - Organizational changes (e.g., merger, downsizing)]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 294 of 342 Page ID
#:489
[Slide - And the Survey Says:
- Use professionally derived "rules of thumb"
- Test the findings against your professional judgment
- Seek additional professional judgment
- Seek legal input]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 295 of 342 Page ID
#:490
[Section divider - What Must You Do to Validate a Test]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 296 of 342 Page ID
#:491
[Figure - scatter plot of Job Performance (vertical axis) against Test Scores (horizontal axis): people who score high on the test are also high performers on the job; people who score low on the test are also low performers on the job]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 297 of 342 Page ID
#:492
[Slide - Validity and Selection Procedure Inferences:
- Selection procedures provide samples of behavior which allow us to make inferences about:
  - What abilities a person possesses
  - What the person knows
  - What the person can do
  - What a person is willing to do
  - How a person will behave in the future]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 298 of 342 Page ID
#:493
[Slide - Validation Defined:
- Validity refers to the degree to which test scores are job-related
- The process of validation involves accumulating evidence to provide a sound scientific basis for the proposed use of the test]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 299 of 342 Page ID
#:494
[Slide - Sources of Validity Evidence:
- Evidence based on test content (Content Validity): demonstration that the content of a test is representative of important aspects of performance on the job
- Evidence based on relations to other variables (Criterion-related Validity): statistical demonstration of a relationship between scores on a test and the job performance of a sample of employees
- Evidence based on internal structure (Construct Validity): demonstration that a test measures a construct (something believed to be an underlying human trait or characteristic, such as conscientiousness) and that the construct is important for successful job performance]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 300 of 342 Page ID
#:495
[Slide - Content Validity Study (flowchart): Job Analysis -> Test Development -> Validation -> Set Passing Scores]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 301 of 342 Page ID
#:496
[Slide - Key Issues in Content Validity:
- Comprehensive job analysis
- Competence in test construction
- Test content related to the job's content
- Test content representative of the job's content
- Examination of less adverse alternatives
- A passing score that selects those who can better perform the job]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 302 of 342 Page ID
#:497
[Slide - Criterion-related Validity Study (flowchart): Job Analysis -> Develop or Acquire Tests and Develop Performance Measures -> Try Out / Pilot Test -> Collect Test Data (Applicants or Employees) and Collect Performance Data -> Relate Test Scores to Performance Measures -> Establish Administrative Use]
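The "Relate Test Scores to Performance Measures" step in the flowchart above is, in substance, a correlation analysis. A minimal sketch of that computation, using purely illustrative numbers rather than anything from the record:

# Minimal sketch of criterion-related validity: correlating test scores
# with a job-performance criterion. All numbers here are illustrative.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between test scores and performance ratings."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

test_scores = [62, 71, 75, 80, 88, 93]        # hypothetical test scores
performance = [2.9, 3.1, 3.4, 3.3, 3.9, 4.2]  # hypothetical supervisor ratings

print(f"validity coefficient r = {pearson_r(test_scores, performance):.2f}")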
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 303 of 342 Page ID
#:498
[Slide - Key Issues in Criterion-related Validity:
- Adequacy of the job performance criteria
- The psychometric quality of the test and the criterion measure
- Degree of correlation necessary to establish validity
- Examination of less adverse alternatives
- Appropriateness of the passing score]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 304 of 342 Page ID
#:499
[Slide - Cannot Validate?
- No adverse impact
- Transporting validity from another job or location
- Generalizing validity from other studies of similar jobs]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 305 of 342 Page ID
#:500
[Slide - Responsibility for Validation:
- Validation is the joint responsibility of the test developer and the test user
- When the use of a test differs from that supported by the test developer, the test user bears special responsibility for validation]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 306 of 342 Page ID
#:501
[Slide - What Can You Test For?
- Job analysis will identify what to assess
- It is not necessary to measure everything important!
- It is necessary that every measure be important!]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 307 of 342 Page ID
#:502
[Slide - Job Analysis is the Foundation (diagram; legible labels: Work Activities Performed; Scope and Effect of Work; Technical Skills Required; Competencies Required; Education Requirements; Experience Needed; Structure of Interviews)]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 308 of 342 Page ID
#:503
[Slide - Polling Question #3: Does your company use formal job analyses as the basis for selection procedures?
- No
- Yes]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 309 of 342 Page ID
#:504
[Slide - Example of a Test Specification Matrix]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 310 of 342 Page ID
#:505
[Slide - Types of Tests to Consider:
- Cognitive ability testing
- Noncognitive measures
- Knowledge testing
- Performance assessment
- Interview]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 311 of 342 Page ID
#:506
[Section divider - Validity]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 312 of 342 Page ID
#:507
[Slide - Selecting a Test:
- "High-fidelity" measurement tools (work samples, video, assessment centers) are more acceptable to candidates
- Once test specifications have been developed, decide:
  - Custom design, or identify a commercially available test?
  - What is the appropriate testing medium?
  - High fidelity = more costly
- Whether custom or commercially available, the test must be validated for your jobs]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 313 of 342 Page ID
#:508
[Slide - Combining Assessments (diagram; legible labels include cognitive measures and structure of interview)]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 314 of 342 Page ID
#:509
[Slide - Criteria for Establishing Appropriate Cut-off Scores:
- Cut-off scores should:
  - Be consistent with normal expectations of proficiency within the workforce
  - Permit the selection of qualified applicants
  - Allow an organization to meet affirmative action goals
  - Have a documented rationale]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 315 of 342 Page ID
#:510
[Slide - What Makes Tests Fair and Defensible?
Validity:
- Based on job analysis
- Standardized
- Consistent
Implementation:
- Training
- Ongoing monitoring
- Appeals process
- Communication]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 316 of 342 Page ID
#:511
[Slide - Ongoing Monitoring:
- Adverse impact
- Test content
- Administration issues]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 317 of 342 Page ID
#:512
[Slide - Top Five Questions about Testing (partially legible):
1. ...
2. "The test has no adverse impact." (We have no problems, right?)
3. "The test is valid - trust us!" (We have no problems, right?)
4. ... (We have no problems, right?)
5. ...]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 318 of 342 Page ID
#:513
[Slide - Test Validity Survey: For a copy of the results of the recent test validity survey, email your request to info@aptmetrics.com]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 319 of 342 Page ID
#:514
[Slide - Contact Information:
One Thorndal Circle, Second Floor
Darien, CT 06820
203.655.7779
talentsolutions@aptmetrics.com
www.aptmetrics.com]
Global
Strategies
for Talent
Management.
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 320 of 342 Page ID
#:515
EXHIBIT 17
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 321 of 342 Page ID
#:516
U.S. Department
of Transportation
Federal Aviation
Administration
October 8, 2014
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 322 of 342 Page ID
2
#:517
Selection and Training (AT-SAT) exam, on which they also had to achieve a passing score.
Below is a more detailed description of the Interim Hiring Process, its purpose, and the
process used in its development.
Those who successfully complete the above five-stage assessment then are employed as
ATCS trainees. Trainees are required to pass a rigorous training program at the FAA
Academy located at the Mike Monroney Aeronautical Center in Oklahoma City, Oklahoma.
Successful completion of academy training within uniformly applicable time limits is
followed by assignment to an air traffic facility, where the ATCS trainee serves on-site in a
developmental training status until they achieve Certified Professional Controller (CPC)
status.
Prescreening applicants on the Biographical Assessment prior to allowing them to take the
AT-SAT resulted in considerable financial savings (over $7 million), shortened the hiring
cycle, and helped the Agency meet its goal of hiring the applicants most likely to succeed as
an ATCS.
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 323 of 342 Page ID
3
#:518
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 324 of 342 Page ID
4
#:519
the instrument, thereby giving them an unfair advantage in competing for a job under merit
principles.
While we are unable to share the specifics of the question weighting or individual scores, we
can share that the minimal passing score for the Biographical Assessment was based on the
professionally developed test-validation study and is set to predict that 84 percent of the
applicants who passed the Biographical Assessment would be expected to successfully
complete the FAA Academy and achieve CPC status.
Treatment of CTI Graduates Under Prior and the Interim Hiring Processes
The FAA created the AT-CTI program to establish partnerships with post-secondary
educational institutions to encourage interest in employment opportunities in the aviation
industry as a whole. The AT-CTI program was not designed or intended to serve only the
FAA to the exclusion of the employment opportunities in the aviation industry, nor was the
program designed or intended to be the FAA's only source of applicants for ATCS positions.
The FAA has always used the AT-CTI program in conjunction with other recruitment sources
when hiring ATCS. Because we implemented the Biographical Assessment as an initial
screening process for the 28,000 applicants for the ATCS position, not all AT-CTI students
that were eligible under prior vacancy announcements were found eligible under the February
vacancy announcement. It should be noted, however, that 65 percent (1,034 of the 1,591) of
individuals who received a tentative offer of employment had some combination of AT-CTI
schooling, veterans' preference, or some specific aviation-related work history and
experience. In addition, under the Interim Hiring Process, AT-CTI students and graduates
received conditional offers of employment at three times the rate of non-AT-CTI students and
graduates.
While your letter did not request demographic information about non-AT-CTI program
students and graduates, you may also be interested to know that of the approximately 1,591
applicants who received tentative offer letters during the interim hiring process, approximately
904 disclosed their demographic data (race, national origin, and gender). Of the 904,
approximately: 650 (71 percent) were male and 260 (29 percent) were female; 544 (60
percent) were White; 153 (17 percent) were Hispanic or Latino; 92 (10 percent) were Black or
African American; 57 (6 percent) were Asian; 48 (5 percent) were Multi-ethnic; 6 (1 percent)
were Native Hawaiian/Pacific Islander; 4 (.4 percent) were American Indian. Please note that
demographic data was not accessed or used during the selection process. Indeed, information
about a test-taker's demographic identity was not available to FAA decision makers involved
in the applicant assessment process under the Interim Hiring Process.
FAA's Continued Relationship with AT-CTI Programs
AT-CTI programs are an essential component of the FAA's multi-faceted program to ensure a
predictable supply of highly skilled air traffic controllers in the years to come. The programs
are important to the FAA and to the aviation industry. The FAA will continue to work with
AT-CTI schools to encourage interest in employment opportunities in the aviation industry
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 325 of 342 Page ID
5
#:520
generally and with the FAA, specifically. AT-CTI students and graduates are encouraged to
apply to FAA vacancy announcements for which they feel they are qualified.
In sum, we will continue to monitor our recruitment and assessment strategies to ensure we
hire the best qualified individuals into the ATCS profession. Our commitment to aviation
safety remains our top priority, and these changes to our hiring processes serve to enhance the
effort.
If I can be of further assistance, please contact me or Roderick D. Hall, Assistant
Administrator for Government and Industry Affairs, at (202) 267-3277.
Sincerely,
[Signature]
Human Resource Management
Enclosure
Transmitted Correspondence
cc: White House Office of
Presidential Correspondence
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 326 of 342 Page ID
#:521
03/01/2014
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 327 of 342 Page ID
#:522
only have one shot at passing the AT-SAT (a test which could not be retaken until after a
year and, if passed, your score is good for 3 years). This is significant due to the fact that
one must be hired by the FAA to be an air traffic controller before their 31st birthday. After
graduation I decided to stay living in western PA in hopes that I had performed well enough
in the CCBC program to actually be hired as an air traffic controller in the air traffic control
tower CCBC operates to train its students. Fortunately, after 6 months of waiting, that day
came in May of 2013. I have been able to work as an air traffic controller (although a non-FAA air traffic controller) and train students while waiting to be hired by the FAA. Recently,
I was also hired to teach one of the final ATC classes students will take at CCBC. Since I
graduated there have been two FAA CTI hiring announcements. The first was in August of
2013, for which my application was not considered. The FAA had decided to only consider
those applicants who had previously applied to hiring panels before this August 2013
panel, automatically disqualifying anyone who was applying for the first time. The second CTI
announcement came sometime in the first quarter of 2013. The FAA used the applications of
those who applied to the August 2013 panel (myself included) for this announcement.
However, due to sequestration/budget issues, again my group was not considered and that
panel was completely scrapped. That brings us to today. The FAA will now be coming out
with a public hiring announcement (anyone can apply). Furthermore, previous CTI students
like myself must re-take the AT-SAT. The clock is ticking for me in terms of age, as well as
for so many others. I believe my hard work, determination, and skill set should be considered
in the hiring process. However, I do not believe this new process will take into account
these facts, as well as the possibility of me aging out.
In brief, the new off-the-street hiring will not consider whether a person is a graduate of a
CTI school, and will not consider the applicant's score on an aptitude test (the AT-SAT)
which was specifically designed to determine - and has been shown to be an excellent
predictor of - the suitability of applicants. Rather, a "biographical questionnaire" is to be
introduced. These changes are pursuant to a Barrier Analysis which was conducted in
recent years - itself an odd notion. If you refer to Page 44 of the FAA's A Plan for the Future:
10-Year Strategy for the Air Traffic Control Workforce 2013-2022, you will find that the
FAA's goal is to maintain a pool of 2000-3000 applicants at any given time. At the end of
FY2012, that pool contained more than 5000 persons - many of them CTI graduates. What,
then, motivated the Barrier Analysis which prompted the new hiring protocols?
The Barrier Analysis was the latest attempt of many, over the years, to understand why the
air traffic workforce is less diverse than is ideal. In so attempting, the conclusion appears
to have been reached that the FAA should seek out new applicants explicitly on the basis of
race, et alia. To quote from page 152 of the Barrier Analysis itself, "[Race and National
Origin] and gender diversity should be explicitly considered when determining the sources
for the applicants ..."
510-140814-035
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 328 of 342 Page ID
#:523
This recommendation resulted from the analysis' finding that 4 of 7 hiring phases resulted
in adverse action against minorities. But adverse action is not a term to be used lightly. It
is, specifically, any action taken in the employment process which results in discriminatory
hiring practices. One of several mistakes made by Drs. Outtz and Hanges in the Analysis (a
report which concludes, on page 155, by stating that the Analysis "was rendered
unacceptable" due to extreme time limitations) was to confuse correlation for causation.
Yes, there are problems with diversity; no, the FAA's hiring process is not the cause of
them. As the Analysis itself shows, as Page 16 of the FAA Independent Review Panel on the
Selection, Assignment and Training of Air Traffic Control Specialists clearly expresses (an
air traffic control trainer urging, "Please do not send me any more public hires!"), and as can
be found in any of the investigations into the validity of the AT-SAT battery of aptitude
tests, the existing hiring process of utilizing CTI schools coupled with AT-SAT testing
produces highly successful and qualified candidates who invariably outperform off-the-street
hires and even persons with veterans referrals. It is quantifiably, unmistakably,
outstandingly clear that the CTI program is successful, that the AT-SAT is an outstanding
predictor of excellence, and that there are thousands of qualified candidates ready and
waiting to be hired from this combination.
And yet, the FAA has chosen, for all intents and purposes, to abandon all of this.
Sir, it is a noble goal to ensure diversity in the workplace. The new hiring program appears
to be a last-ditch effort at achieving this diversity against all odds. I do not know
why diversity is problematic, but I do know three things:
Firstly, the AT-SAT exams and CTI schools are not the causes of problems in diversity.
Attempt after attempt to modify hiring processes and reweight test scores has failed
because, in the final product, when it comes to only hiring those who, when all is said and
done, are most capable, the adverse impact still remains. Something cultural - something
fundamental - is the cause of these problems, not FAA hiring policy.
Secondly, I have spent tens of thousands of dollars going to school because it was made
very clear that that was the preferred, and at times only, method of becoming an air traffic
controller. The 36 CTI schools have invested millions of dollars in designing curricula,
hiring instructors (often former controllers), installing simulators and equipment, and
coordinating internships with ATC facilities. Now, all of that investment appears to have
been for naught.
Thirdly, "explicitly" considering applicants on the basis of race, national origin, or
gender - and especially when doing so instead of on the basis of their relevant educational
510-140814-035
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 329 of 342 Page ID
#:524
background or aptitude test scores - is not only discriminatory, but potentially dangerous,
insomuch as it aims to diversify a workforce by looking at non-relevant traits before, and
instead of, those which have been shown, over and over, to matter significantly.
I understand and appreciate your efforts to bring opportunities for all. I feel like I have
done everything I can and should have done to pursue my dream. My results of this new
hiring process just came in a few days ago. Regardless of my years of responsible,
progressive work experience, 140+ college credits and FAA CTI Associate's Degree, FAA
Control Tower Operator certificate, FAA multi-engine, commercial pilot, and instrument
ratings, and other qualifications that were asked on the BQ, I was denied. At this point I
feel helpless and could use some support. Please consider reviewing this matter in
preserving fairness for all. I've always viewed your Presidency as finally having someone
who can understand the issues that average Americans face. From reading your books and
hearing about your life, I feel comfortable sending this letter to you and knowing at the very
least your staff would see it. If it crosses your desk, even better.
Sincerely,
Kyle Nagle
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 330 of 342 Page ID
#:525
Please look into these matters, please encourage the FAA to provide preferential treatment
for CTI graduates as they have in the past, and please consider the following points in so
doing:
Off-the-street hiring has been shown, repeatedly and concretely, to be less effective
than hiring CTI graduates.
Off-the-street hiring is more expensive for the FAA, as it must train new hires
"from scratch", including costs sunk in those who fail the training program.
Without requiring a college degree, as the CTI program now does, the new hiring
scheme lowers standards in general. A person who failed out of college in the CTI program
is now eligible, provided they have three years of work experience.
The FAA's website has, for years, made very clear that the only paths into air
traffic control are prior experience or the CTI program. This clearly implies some
significance to the CTI program and has, thus, been enormously misleading for the
thousands of students who have invested in that program - most of them borrowing huge
sums of money from the federal government to finance their educations (in other words,
the result of this new hiring scheme is direct, quantifiable, and substantial harm to
thousands of young Americans).
The FAA misled the 36 CTI schools, who now find that substantial portions of
their educational frameworks serve no purpose. The cancellation of a program is not, in
itself, offensive. Doing so without advance notice, and for reasons not only dubious
but, indeed, proven to be ineffective, is strikingly unethical and distasteful.
The Barrier Analysis contains numerous mathematical and typographical errors,
likely due to the aforementioned acknowledgement by the analysis' authors that it was
rushed and that such hurrying compromised its usefulness.
The Barrier Analysis, in finding adverse impact, appears to omit several
enormously important considerations. It refers to and appears only to consider 4-year
degrees or 4-year schools, yet 15 of the 36 CTI schools offer two-year degrees (which the
FAA has found perfectly suitable for hiring) and many are community colleges. In other
words, the diversity represented in CTI schools is greater than the Analysis indicates.
510-140814-035
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 331 of 342 Page ID
#:526
The new hiring scheme was presented without consulting stakeholders. In itself,
this suggests underhanded dealings, as it is clear that had involved and invested parties
been participants in the discussion, the myriad concerns and problems mentioned here
would have been brought to light sooner.
It very much appears that some small contingency within the FAA, or some party
presenting external pressure, has influenced the decision-making process in an
irrational, irresponsible, and legally questionable manner. The new hiring scheme is
clearly targeted at meeting racial quotas, which, otherwise known as racism, is patently
immoral and quite likely impermissible.
510-140814-035
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 332 of 342 Page ID
#:527
[Stamp: RECEIVED MAR 12 2014 - OD Mail Operation]
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 333 of 342 Page ID
#:528
Phone:
Address:
Subject: AGL_Casework_50_07232014_095712.pdf
Message:
Comment:
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 334 of 342 Page ID
#:529
Correspondence
Cover
Sheet
510-140814-035
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 335 of 342 Page ID
#:530
510-140814-035
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 336 of 342 Page ID
#:531
EXHIBIT 18
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 337 of 342 Page ID
#:532
Attorneys at Law
814 W. Roosevelt
Phoenix, Arizona 85007
(602) 258-1000 Fax (602) 523-9000
WESTERN DIVISION
Defendant
Summary Judgment.
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 338 of 342 Page ID
#:533
4. The purpose of this civil action includes identifying the ability of the 2015
Biographical Assessment to identify characteristics needed for the Air Traffic Control
Specialist position.
5. Based on a review of the 2014 and 2015 Biographical Assessments, it is clear
that they are significantly different.
6. Based on the FAA's responses to FOIA requests, including requests 2015-008178 and 2016-000431, I calculated the pass and failure rates at the initial application
stage of the 2014 hiring announcement.
7. Based on the FAA's responses to FOIA requests, including requests 2015-007021 and 2015-009349, I calculated the pass and failure rates at the initial application
stage of the 2015 hiring announcement.
8. While proceeding pro se in this matter, I corresponded with Defendant's Counsel
Alarice M. Medrano, who indicated that the Agency remanded the appeal of the subject
FOIA request for action because the Agency had searched for 2014 records instead of
2015 records. Furthermore, it was made clear that the subject of the FOIA request was
the study proving that the testing instrument was valid.
9. I compiled the Adverse Impact Ratios using the FAA's FOIA responses to
requests for information concerning the demographics (race, ethnicity, and gender) for
the 2014 application for Air Traffic Control Specialists. Based on guidance from the
Equal Employment Opportunity Commission concerning the calculation of adverse
impact ratios, I found that adverse impact existed. I utilized FOIA responses from the
FAA, including 2015-008178 and 2016-000431, to compile the rates.
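For concreteness, the adverse-impact-ratio screen described in this paragraph can be sketched as follows; the group labels and counts are hypothetical placeholders, not the FOIA figures referenced above.

# Hedged sketch of the EEOC four-fifths adverse-impact-ratio computation.
# Counts are hypothetical placeholders, not the FAA's FOIA figures.

applicants = {            # group -> (number applied, number passed)
    "group_a": (1000, 400),
    "group_b": (800, 200),
}

rates = {g: passed / applied for g, (applied, passed) in applicants.items()}
benchmark = max(rates.values())          # highest group selection rate

for group, rate in rates.items():
    ratio = rate / benchmark             # adverse impact ratio
    flag = "adverse impact indicated" if ratio < 0.8 else "within 4/5ths rule"
    print(f"{group}: rate={rate:.2%}, ratio={ratio:.2f} ({flag})")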
10. Exhibit 1 of Plaintiff's Response to Defendant's Motion for Summary Judgment
(MSJ) is a true and correct copy of a letter dated December 8, 2015 from FAA
Administrator Michael Huerta to Kelly A. Ayotte, Chair of the Subcommittee on
Aviation Operations, Safety, and Security. This letter is available online. The letter
concerns the FAA's changes to the hiring process and the validation of the examination.
2
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 339 of 342 Page ID
#:534
1
2
3
4
5
6
7
8
9
11
Phoenix, Arizona 85007
12
814 W. Roosevelt Street
10
13
14
15
16
17
18
19
20
21
22
23
24
25
26
11. Exhibit 2 of the MSJ contains two documents, and is a true and correct copy of
portions of the 1) 2014 and 2) 2015 application processes for Air Traffic Control
Specialist applicants. The documents were provided to me by individual(s) impacted by the FAA's changes to the hiring process. The answer choices
individual(s) impacted by the FAAs changes to the hiring process. The answer choices
selected have been redacted. The 2014 examination is presented first and has the middle
pages removed. The 2015 examination has all pages except the first and last 2 removed
to protect the identity of those taking and/or providing me said documents.
12. Exhibit 3 of the MSJ is a true and correct copy of a transcript, prepared by a private
company, of a Telephonic Conference concerning FAA Hiring Practices, held
on January 4, 2014. The document is available online.
13. Exhibit 4 of the MSJ is a true and correct copy of an email sent by Joseph
Teixeira, former Vice President for Safety & Technical Training for FAA. The email
concerns the revisions to the hiring process, including the use of the biographical
questionnaire. The email was sent to institutions that are part of the AT-CTI program, and the
email is widely available online. The email was sent on or about December 30, 2013.
14. Exhibit 5 of the MSJ is a true and correct copy of a portion of a conversation
held between an individual impacted by the changes to the hiring process and Matthew
Borten, an FAA representative, concerning the use of the biographical questionnaire.
This conversation took place during an FAA-sanctioned and sponsored Virtual Career
Fair concerning the new hiring process for the 2014 cycle. The segment of the
conversation is available online.
15. Exhibit 6 of the MSJ is a true and correct copy of a presentation given by the
Federal Aviation Administration (FAA) to stakeholders affected by or briefed on the
changes to the Air Traffic Control Specialist hiring process. The document, dated on or
about January 2015, was provided to me by a member institution of the Association of
Collegiate Training Institutions.
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 340 of 342 Page ID
#:535
1
2
3
4
5
6
7
8
9
11
Phoenix, Arizona 85007
12
814 W. Roosevelt Street
10
13
14
15
16
17
18
19
20
21
22
23
24
25
26
16. Exhibit 7 of the MSJ is a true and correct copy of an e-mail sent by the FAA to
individuals impacted by the changes to the FAA hiring process. The document, dated
on or about January 27, 2014, was provided to me by a member of the Association of
Collegiate Training Institutions. Furthermore, the e-mail was widely distributed.
17. Exhibit 8 of the MSJ is a true and correct copy of the National Black Coalition
of Federal Aviation Employees (NBCFAE) Google group, NBCFAEinfoWESTPAC,
and provides information concerning the hiring announcement. The post was written
from the account of James Swanson, an NBCFAE member. The Exhibit is widely
available online and was initially posted on or about January 24, 2014.
18. Exhibit 9 of the MSJ is a true and correct copy of the FAA's website prior to
issuing the "off the street," open source vacancy announcement. The webpage
screenshot was widely distributed amongst the community of students impacted by the
changes.
19. Exhibit 10 of the MSJ is a true and correct copy of three documents available
from FAAs website concerning the validation study of the previous examination used
to test Air Traffic Control Specialist applicants. One document is titled
"Documentation of Validity for the AT-SAT Computerized Test Battery, Volume I,"
and is dated March 2001. The second document is Volume II of the same study. The
third document is a March 2013 report available from the FAA website concerning
"The Validity of the Air Traffic Selection and Training (AT-SAT) Test Battery in
Operational Use."
20. Exhibit 11 of the MSJ is a true and correct copy of an article published online by
Anna Burleson, dated March 5, 2014, concerning the changes to the ATCS hiring
program.
21. Exhibit 12 of the MSJ is a true and correct copy of the rejection notifications
received by applicants for the 2014 and 2015 vacancy announcements. The first page
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 341 of 342 Page ID
#:536
of the exhibit was provided by an individual impacted by the changes, and the second
page is my own.
22. Exhibit 13 of the MSJ is a true and correct copy of the acknowledgement letter
for FOIA request 2015-006130. This letter was sent to me by the FAA.
23. Exhibit 14 of the MSJ is a true and correct copy of an email between Alarice M.
Medrano, Assistant U.S. Attorney with the Department of Justice, and myself. The
email concerns the Agency's revised search for the documents at issue.
24. Exhibit 15 of the MSJ is a true and correct copy of a memorandum issued by
FAA Chief Operating Officer Teri Bristol to FAA employees, on February 11, 2016,
concerning the validation of the AT-SAT examination. This memorandum is widely
available online.
25. Exhibit 16 of the MSJ is a true and correct copy of content available on the APT
Metrics website, including their main home page, a section titled "Litigation Support,"
and a copy of the presentation "Testing the Test."
This information was retrieved on or about April 22, 2016.
26. Exhibit 17 of the MSJ is a true and correct copy of a letter sent by FAA
Administrator Huerta to Kyle Nagle, an individual interested in the changes to the
hiring process for Air Traffic Control Specialists. The letter, dated October 8, 2014, is
available amongst the community of those impacted. This letter was in response to Mr.
Nagle's letter to Vice President of the United States Joe Biden.
I swear or affirm under penalty of perjury under United States laws that my answers
on this form are true and correct. 28 U.S.C. sec. 1746; 18 U.S.C. sec. 1621.
///
5
Case 2:15-cv-05811-CBM-SS Document 27-1 Filed 04/25/16 Page 342 of 342 Page ID
#:537
Notary Public:
My Commission Expires: