Computer Science Curricula 2023
Version Gamma
August 2023
Steering Committee members:
ACM members:
Amruth N. Kumar, Ramapo College of NJ, Mahwah, NJ, USA (ACM Co-Chair)
Monica D. Anderson, University of Alabama, Tuscaloosa, AL, USA
Brett A. Becker, University College Dublin, Ireland
Richard L. Blumenthal, Regis University, Denver, CO, USA
Michael Goldweber, Denison University, Granville, OH, USA
Pankaj Jalote, Indraprastha Institute of Information Technology, Delhi, India
Susan Reiser, University of North Carolina Asheville, Asheville, NC, USA
Titus Winters, Google Inc., New York, NY, USA
IEEE-CS members:
Rajendra K. Raj, Rochester Institute of Technology (RIT), Rochester, NY, USA (IEEE-CS Co-Chair)
Sherif G. Aly, American University in Cairo, Egypt
Douglas Lea, SUNY Oswego, NY, USA
Michael Oudshoorn, High Point University, High Point, NC, USA
Marcelo Pias, Federal University of Rio Grande (FURG), Brazil
Christian Servin, El Paso Community College, El Paso, TX, USA
Qiao Xiang, Xiamen University, China
AAAI members:
Table of Contents
Executive Summary
Section 1 - Introduction
• Introduction
– History
– The Task Force
– The Principles
– The Process
Section 2 - Knowledge Model
• Introduction
– Changes since CS 2013
• Knowledge Areas
• Society, Ethics and Professionalism Knowledge Unit
• Topics Individually Referable
• Skill Levels
• Learning Outcomes
• Professional Dispositions
• Core Topics
• Core Hours
• Competency Areas
• Course Packaging
– The Revision Process
• How to adopt/adapt the knowledge model
• The Body of Knowledge
– Algorithmic Foundations (AL)
– Architecture and Organization (AR)
– Artificial Intelligence (AI)
– Data Management (DM)
– Foundations of Programming Languages (FPL)
– Graphics and Interactive Techniques (GIT)
– Human-Computer Interaction (HCI)
– Mathematical and Statistical Foundations (MSF)
– Networking and Communication (NC)
– Operating Systems (OS)
– Parallel and Distributed Computing (PDC)
– Security (SEC)
– Society, Ethics and Professionalism (SEP)
– Software Development Fundamentals (SDF)
– Software Engineering (SE)
– Specialized Platform Development (SPD)
– Systems Fundamentals (SF)
• Core Topics and Hours
• Course Packaging
– Software
– Systems
– Applications
• Curricular Packaging
• Definition of Competency
• Combined Knowledge and Competency (CKC) Model
• A Framework for Identifying Tasks
• Representative Tasks
– Software Competency Area
– Systems Competency Area
– Applications Competency Area
• A Format for Competency Specification
• Sample Competency Specifications
– Software Competency Area
– Systems Competency Area
– Applications Competency Area
• Adapting/Adopting the Competency Model
Executive Summary
CS2023 is an update of the curricular guidelines last published as CS2013 by a joint task force of the
ACM and IEEE Computer Society. Given the increasing importance of Artificial Intelligence, AAAI also
joined forces in the CS2023 task force. These curricular guidelines address computer science, one of
seven computing disciplines for which such guidelines have been issued to date.
The content of 17 of the 18 knowledge areas listed in the CS2013 report has been updated. Computational
Science has been dropped as a separate knowledge area (KA). Given the pervasive nature of
computing applications, the role of “Society, Ethics and Professionalism” has been elevated across the
curriculum. Mathematical requirements have been widened beyond Discrete Mathematics to also
include Probability and Statistics. The notion of core topics has been streamlined from Tier I and Tier II
in CS2013 to CS core (required topics) and KA core (recommended topics).
Given the increasing interest among educators in a competency model of the curriculum to aid in
assessment, a framework has been provided for adopting institutions to create their own competency
model tailored to local needs. As distinguished from the knowledge model, the model used in all previous
computer science curricular guidelines, the competency model is offered as a complement to, and not a
substitute for, the knowledge model. Adopting institutions may use either or both models when
designing their curriculum; steps have been listed for each. The knowledge model is presented in
Section 2 and a competency framework is presented in Section 3.
Finally, the CS2023 task force engaged area experts to provide guidelines to computer science
educators on social, professional, programmatic and pedagogical issues. Computer science is at a
crossroads where it would be remiss to just pay lip service to these issues. These guidelines are
provided in section 4.
The process used by the CS2023 task force has been collaborative (90+ KA committee members),
international (all five continents), data-driven (5 large and 70 small-scale surveys) and transparent
(csed.acm.org). Community engagement included numerous conference presentations and periodic
postings to over a dozen Special Interest Group mailing lists. The iterative process has included at least
two review and revise cycles for most knowledge areas.
Course designers may want to start with the “Course packaging” section of the corresponding knowledge
area in Section 2 as well as a separate section on “Course packaging” for courses that draw upon
multiple knowledge areas. Curriculum designers may want to start with one of several options provided
under “Curricular packaging” in Section 2. Curriculum assessment teams may want to start with the
steps enumerated in Section 3 for building a competency model tailored to local needs. Program
evaluators may want to use “Course packaging” and “Curricular packaging” (Section 2) and
the competency framework (Section 3) to compare computer science programs and facilitate credit
transfers between institutions. Computer science educators and researchers may want to visit Section
4 for additional guidance on important social, professional, programmatic and pedagogical issues of the
day.
Introduction
History
Several successive curricular guidelines for computer science have been published over the years as
the discipline has continued to evolve:
Curriculum 68 [1]: The first curricular guidelines were published by the Association for Computing
Machinery (ACM) over 50 years ago.
Curriculum 78 [2]: The curriculum was revised and presented in terms of core and elective courses.
Computing Curricula 1991 [3]: The ACM teamed up with the Institute of Electrical and Electronics
Engineers – Computer Society (IEEE-CS) for the first time to produce revised curricular guidelines.
Computing Curricula 2001 [4]: For the first time, the guidelines focused only on Computer Science,
with other disciplines such as computer engineering and software engineering being spun off into
their own distinct curricular guidelines.
Computer Science Curriculum 2008 [5]: This was presented as an interim revision of Computing
Curricula 2001.
Computer Science Curricula 2013 [6]: This was the most recent version of the curricula published
by the ACM and IEEE-CS.
CS2023 is the next revision of computer science curricula. It is a joint effort between the ACM, IEEE-CS, and, for the first time, the Association for the Advancement of Artificial Intelligence (AAAI).
All prior versions of computer science curricula focused on what is taught, referred to as a knowledge
model of curricula. In such a model, related topics are grouped into a knowledge unit, and related
knowledge units are grouped into a knowledge area. Computer Science Curricula 2013 [6] contained
163 knowledge units grouped into 18 knowledge areas. Learning outcomes were identified for each
knowledge unit. A distinction was made between core topics that every computer science graduate must
know and non-core topics that were considered to be optional.
Over the last decade, the focus of curricular design has been changing from what is taught to what is
learned. What is learned is referred to as a competency model of the curriculum. Some of the early
efforts to design a competency model of a curriculum were for Software Engineering (SWECOM) in
2014 [14] and Information Technology (IT2017 guidelines) [7]. These were followed by the Computing
Curricula (CC2020) report [8], which proposed a competency model for various computing disciplines,
Computer Science, Information Systems, and Data Science among them. On the heels of CC2020,
competency models of curricula were produced for Information Systems 2020 [9], Associate-degree
CyberSecurity [13] and Data Science 2021 [10]. The CS2023 task force set out to revise the knowledge
model from the CS2013 report [6] as well as build a competency model of computer science curricula,
while maintaining consistency between the two models.
It is appropriate here to acknowledge the computer science curricular work independently done by
other professional bodies. These include a model curriculum for undergraduate degrees in computer
science and engineering by the All India Council for Technical Education in 2022 [11], and the “101 plan” of
the Ministry of Education in China in 2023. Similarly, professional bodies have drafted curricular
guidelines on specific areas of computer science such as parallel and distributed computing [12].
This report limits itself to computer science curricula. But, a holistic view requires consideration of the
interrelatedness of computer science with other computing disciplines such as software engineering,
security, and data science. For an overview of the landscape of computing education, please see the
section “Computing Interrelationships” (pp 29-30) in the CC 2020 report [8].
The Task Force
The CS2023 task force consisted of a Steering Committee of 17 members and a committee for each of
the 17 knowledge areas.
The ACM and IEEE-Computer Society each appointed a co-chair in January 2021 and March 2021
respectively. The rest of the Steering Committee was put together as follows:
The requirements for Steering Committee members were that they be subject experts willing to
work on a volunteer basis, commit at least ten hours a month to CS2023 activities, attend at least
two in-person meetings a year, and be aligned with the CS2023 vision of both revising the CS2013
knowledge model and producing an appropriate competency model.
Each knowledge area committee was drawn from:
● Individuals who had nominated themselves in response to the Call for Participation in February
2021;
● Industry experts; and
● Other Steering Committee members who shared interest in the knowledge area.
Knowledge Area committee members met once a month to discuss curricular revision. While the
revision effort was in progress, additional subject experts who expressed interest in volunteering were
added to the committees.
The Principles
The principles that have guided the work of the CS2023 task force are:
Collaboration: Each knowledge area was revised by a committee of international experts from
academia and industry.
Data-driven: Data collected through surveys of academics and industry practitioners was used to
inform the work of the task force.
Community outreach: The work of the task force was continually posted and updated on the
website csed.acm.org. It was presented at multiple conferences including the annual SIGCSE
Technical Symposium. In addition, its work was publicized through repeated postings to over a
dozen ACM Special Interest Group (SIG) mailing lists.
Community input: Multiple channels were provided for the community to contribute, including
feedback forms and email addresses for knowledge areas and versions of the curricular guidelines.
Continuous review and revision: Each version of the curricular draft was anonymously reviewed
by multiple outside experts. Revision reports were produced to document how the reviews were
addressed in subsequent versions of the drafts.
Transparency: The work of CS2023 was documented for review and comments by the community
on the website: csed.acm.org. Available information included composition of knowledge area
committees, results of surveys, and the process used to form the task force.
The Process
In 2021, surveys were conducted on the current use of the CS2013 curricular guidelines and the
importance of various components of curricula. The surveys were filled out by 212 academics in the
United States, 191 academics from abroad and 865 industry respondents. The summaries of the
surveys were incorporated into curricular revision.
In May 2022, Version Alpha of the curriculum draft was released. It contained a revised version of
the CS 2013 knowledge model. It was publicized internationally and feedback solicited. The draft of
each knowledge area was sent out to reviewers suggested by the knowledge area committee. Their
reviews were incorporated into the subsequent version of the curricular draft.
In March 2023, Version Beta of the curriculum draft was released. It contained a preliminary
competency model. This draft was again sent out to reviewers suggested by the knowledge area
committee as well as educators who had nominated themselves through online forms. Their
reviews were incorporated into the subsequent version of the curricular draft.
In August 2023, Version Gamma, the pre-release version of CS2023, will be posted online for a final
round of comments and suggestions. It will contain course and curricular packaging information,
core topics and hours, a framework for identifying atomic tasks to build a competency model and
summaries of articles on curricular practices.
The report will be released in December 2023.
Section 2: Knowledge Model
Introduction
The CS 2013 report [6] provided a knowledge model of computer science curricula. In the model,
related topics were grouped together into 163 knowledge units which were in turn grouped together into
18 knowledge areas. The report listed learning outcomes for each knowledge unit and skill level for
each learning outcome. It identified core topics at two levels: Tier I accounting for 165 hours of
classroom instruction and Tier II accounting for 143 hours. It also included course and curriculum
exemplars from several institutions of higher education.
Apart from updating the content of the CS 2013 knowledge model, several changes were made to the
model, some cosmetic and others systemic, as detailed in this section.
Topics Individually Referable
The topics in each knowledge unit and knowledge area have been enumerated for the purpose of
making them individually referable. The enumeration should not be construed as implying any order or
dependency among the topics. The recommended syntax for referring to a topic is:
<Knowledge area abbreviation>-<Knowledge unit abbreviation>: Decimal.alphabet.roman
For example, AI-Search: 3.c.i is:
3. Heuristic graph search for problem-solving
c. Local minima and the search landscape
i. Local vs global solutions
Skill Levels
CS2013 used the following three skill levels:
Familiarity: “What do you know about this?”
Usage: “What do you know how to do?”
Assessment: “Why would you do that?”
Application is increasingly being emphasized in computer science education, especially in the context
of electronic books and online courseware. So, in CS2023, “Usage” was split into “Apply” and “Develop”
and four skill levels loosely aligned with the revised Bloom’s taxonomy [18] were adopted, as shown in
Table 1.
Table 1. The four CS2023 skill levels: Explain, Apply, Evaluate, and Develop.
It should be understood that Explain is the prerequisite skill for the other three levels. The verbs
corresponding to each of the four skill levels were adopted from the work of ACM and CCECC [19].
Learning Outcomes
Learning outcomes are associated with each knowledge unit. In CS2013, skill levels were associated
with each learning outcome. These skill levels were descriptive, not prescriptive, and hence, redundant.
In CS2023, the learning outcomes were retained and expanded, but no longer associated with skill
levels. In acknowledgment that the learning outcomes were at only one or some of the possible skills
levels for each topic, learning outcomes have been renamed Illustrative Learning Outcomes.
Professional Dispositions
Professional dispositions are malleable values, beliefs and attitudes that enable behaviors desirable in
the workplace, e.g., persistent, self-directed, etc. Whereas the CS 2013 guidelines [6] emphasized the
importance of dispositions in passing (Professional Practice, pp 15-16), any consideration of a
competency model of the curriculum demands a more integrated treatment of dispositions.
Dispositions are generic to knowledge areas. Some dispositions are more important at certain stages in
a student’s development than others, e.g., persistent is important in introductory courses, whereas self-directed is important in advanced courses. Collaborative applies to courses with group projects
whereas meticulous applies to mathematical foundations. So, associating dispositions with knowledge
areas as opposed to individual competency statements (e.g. [16, 17]) makes it easier for the instructor
to repeatedly and consistently promote dispositions during the accomplishment of tasks to which the
knowledge area contributes.
In CS2023, the most relevant professional dispositions have been listed for each knowledge area. One
of the sources of professional dispositions was the CC2020 report [8]. Professional dispositions serve
as one of the bridges between the knowledge model (Section 2) and competency model (Section 3) of
CS2023 curricular guidelines.
Core Topics
In CS2013 [6], core hours were defined along two tiers: Tier I (165 hours) and Tier II (143 hours).
Computer science programs were expected to cover 100% of Tier I core topics and at least 80% of Tier
II topics. While proposing this scheme, CS2013 was mindful that the number of core hours had been
steadily increasing in curricular recommendations, from 280 hours in CC2001 [4] to 290 hours in
CS2008 [5] and 308 hours in CS2013 [6]. Accommodating the increasing number of core hours poses a
challenge for computer science programs that may want to restrict the size of the program either by
design or due to necessity.
Figure 1: CS2013 Core Topics
In CS2023, a sunflower model of core topics was adopted. In it, core topics are designated as either
CS core (required topics) or KA core (recommended topics). In addition:
● Topics in the Mathematical and Statistical Foundations (MSF) knowledge area are designated as
KA core to indicate that they will be required for KA core topics in other knowledge areas, e.g.,
Statistics is needed for Machine Learning in Artificial Intelligence (AI) knowledge area. CS2023
guidelines do not purport to recommend any course packaging in Mathematical and Statistical
Foundations (MSF) knowledge area other than Discrete Structures.
● Multiple distinctive courses can be carved out of some knowledge areas such as Graphics and
Interactive Techniques (GIT) and Specialized Platform Development (SPD). In such cases, KA
cores have been proposed for each possible course, e.g., Animation, Visualization and Image
Processing arising out of Graphics and Interactive Techniques (GIT) each have their own KA
cores. Their respective KA core hours are not to be considered additive for the GIT knowledge
area.
Core Hours
Estimating the number of hours needed to cover core topics is a tradition of curricular guidelines. The
hours are those spent in the classroom imparting knowledge regardless of the pedagogy used, and do
not include the time needed for learners to develop skills or dispositions. The time needed to cover a
topic in class depends on the skill-level (Explain/Apply/Evaluate/Develop) to which it is taught. So, in
CS2023, skill levels were identified for each core topic in order to justify the estimation of core hours.
The skill levels and hours for core topics can be found in the “Core Topics and Hours” table later in this
section. The skill levels identified for core topics should be treated as recommended, not prescriptive.
Table 2 shows the change in the number of core hours from CS2013 to CS2023.
Most knowledge areas contain a knowledge unit on Society, Ethics and Professionalism (SEP) to
emphasize the pervasive importance of these issues. But, core topics and hours for SEP issues are
identified only in the SEP knowledge area and not in the SEP knowledge units of other knowledge
areas. This omission is meant to give educators the flexibility to decide how to cover the core SEP
topics among the courses arising out of the various knowledge units.
Competency Areas
Knowledge areas, when chosen coherently, will constitute the competency area(s) of a program. Some
competency areas are:
Software, consisting of the knowledge areas Software Development Fundamentals (SDF),
Algorithmic Foundations (AL), Foundations of Programming Languages (FPL) and Software
Engineering (SE).
Systems, consisting of some of the following knowledge areas: Systems Fundamentals (SF),
Architecture and Organization (AR), Operating Systems (OS), Parallel and Distributed Computing
(PDC), Networking and Communication (NC), Security (SEC) and Data Management (DM).
Applications, consisting of some of the following knowledge areas: Graphics and Interactive
Techniques (GIT), Artificial Intelligence (AI), Specialized Platform Development (SPD), Human-Computer Interaction (HCI), Security (SEC) and Data Management (DM).
Note that the Software competency area is in part a prerequisite of the other two competency areas. These
competency areas are another bridge between the knowledge model (Section 2) and competency
model (Section 3) of CS2023 curricular guidelines.
The above list of competency areas is meant to be neither prescriptive nor comprehensive. Programs
may choose to design their competency area(s) based on institutional mission and local needs. Some
other competency areas that have been suggested include computing for the social good, scientific
computing, and secure computing.
Course Packaging
A knowledge area is not a course. Many courses may be carved out of one knowledge area and one
course may contain topics from multiple knowledge areas. In CS2013, course and curricular exemplars
from various institutions were included in the curricular guidelines. But, adopting such exemplars from
one institution to another would be affected by institutional context, including the level of preparedness
of students, the availability of teaching expertise, the availability of pre- and co-requisite courses, etc.
Instead, in CS2023, canonical packaging of courses has been provided in terms of knowledge areas
and knowledge units, as was done in CC2001 [4].
Course packaging recommendations assume a course that meets for about 40 hours. The hours listed
against each knowledge unit represent the weight suggested for the knowledge unit in a typical course.
A course that meets for fewer or more hours may scale the hours accordingly and/or include
fewer/more knowledge units in its coverage. The hours correspond to classroom coverage of the
material regardless of the pedagogy used. Classroom coverage deals primarily with imparting
knowledge, not the development of skills or dispositions. Course packaging instructions are suggestive,
not prescriptive. They are offered as a basis for comparison of course offerings across institutions.
The Revision Process
Anonymous reviews were an integral part of the process. As summarized in Figure 1 in the Introduction
to Section 1, curricular drafts went through two review and revision cycles. Table 3 lists the number of
reviewers contacted and the number of reviews received for each knowledge area on its Alpha and
Beta versions. For Version Beta, two numbers are listed: the number of reviewers proposed by the
knowledge area committee, followed by the number of self-nominations received through online forms.
As is clear from the table, the level of community interest and involvement was not uniform across all
the knowledge areas. Often, only a small fraction of the contacted reviewers returned reviews. The
number of self-nominations also varied across knowledge areas. Nevertheless, knowledge area
committees completed the review loop by posting a revision report after each review cycle. The reports
are accessible from the respective knowledge area pages on the website csed.acm.org.
Knowledge Area                              | Alpha: contacted | Alpha: received | Beta: proposed/self-nominated | Beta: received
Artificial Intelligence (AI)                | 1  | 10 |      |
Data Management (DM)                        | 10 | 2  | /2   | 2
Foundations of Prog. Languages (FPL)        | 10 | 3  | 16/4 | 4
Graphics and Interactive Techniques (GIT)   | 3  | 3  | 3    | 3
Human-Computer Interaction (HCI)            | 9  | 3  | 15/2 | 2
Networking and Communication (NC)           | 9  | 1  | 10/3 | 4
Operating Systems (OS)                      | 7  | 3  | 6/1  | 2
Parallel and Distributed Computing (PDC)    | 4  | 4  | /1   | 1
Security (SEC)                              |    |    | 6/2  | 3
Society, Ethics and Professionalism (SEP)   | 8  |    | 7/1  | 3
Software Development Fundamentals (SDF)     | 4  | 2  | 10/2 | 8
Software Engineering (SE)                   | 4  | 3  | 10/1 | 4
Specialized Platform Development (SPD)      | 9  | 3  |      | 8
Systems Fundamentals (SF)                   | 5  | 1  | 5/2  | 2
Table 3. Summary of review and revision process.
The process for determining core topics and hours was as follows:
1. CS2013 Tier I core topics were converted to CS core topics, and Tier II core topics into KA core
topics;
2. Topics were moved among CS core, KA core and non-core by the knowledge area committee as
appropriate;
3. Skill levels were identified for each CS and KA core topic in order to justify the core hours dedicated
to the topic;
4. Core topics shared between knowledge areas were identified so that their hours would be counted
only once;
5. 70 CS core surveys were conducted wherein educators were asked whether topics identified as CS
core should remain in CS core;
6. Based on the results of the surveys filled out by 198 educators, CS core topics were whittled down.
Mathematical and Statistical Foundations topics were determined using multiple inputs: from the
mathematical requirements identified in other knowledge areas, from the computer science theory
community, from various reports (example: the Park City report on data science) and, critically, two
surveys, one distributed to faculty and one to industry practitioners. The first survey was issued to
computer science faculty (with nearly 600 faculty responding) across a variety of institutional types and
in various countries to obtain a snapshot of current practices in mathematical foundations and to solicit
opinion on the importance of particular topics beyond the traditional discrete mathematics. The second
survey was sent to industry employees (approximately 680 respondents) requesting their views on
curricular topics and components.
How to adopt/adapt the knowledge model
1. Select the competency area(s) of the program based on institutional mission and local needs;
2. Based on the selected competency area(s), select the knowledge areas of coverage while taking
into account availability of instructional expertise and coverage of CS core topics;
3. For each knowledge area identified in step 2, start with one or more course packaging suggestions.
For each course:
a. Add/subtract/scale knowledge units as appropriate;
Body of Knowledge
Artificial Intelligence (AI)
Preamble
Artificial intelligence (AI) studies problems that are difficult or impractical to solve with traditional
algorithmic approaches. These problems are often reminiscent of those considered to require human
intelligence, and the resulting AI solution strategies typically generalize over classes of problems. AI
techniques are now pervasive in computing, supporting everyday applications such as email, social
media, photography, financial markets, and intelligent virtual assistants (e.g., Siri, Alexa). These
techniques are also used in the design and analysis of autonomous agents that perceive their
environment and interact rationally with it, such as self-driving vehicles and other robots.
Traditionally, AI has included a mix of symbolic and subsymbolic approaches. The solutions it provides
rely on a broad set of general and specialized knowledge representation schemes, problem solving
mechanisms, and optimization techniques. These approaches deal with perception (e.g., speech
recognition, natural language understanding, computer vision), problem solving (e.g., search, planning,
optimization), acting (e.g., robotics, task-automation, control), and the architectures needed to support
them (e.g., single agents, multi-agent systems). Machine learning may be used within each of these
aspects, and can even be employed end-to-end across all of them. The study of Artificial Intelligence
prepares students to determine when an AI approach is appropriate for a given problem, identify
appropriate representations and reasoning mechanisms, implement them, and evaluate them with
respect to both performance and their broader societal impact.
Over the past decade, the term “artificial intelligence” has become commonplace within businesses,
news articles, and everyday conversation, driven largely by a series of high-impact machine learning
applications. These advances were made possible by the widespread availability of large datasets,
increased computational power, and algorithmic improvements. In particular, there has been a shift
from engineered representations to representations learned automatically through optimization over
large datasets. The resulting advances have put such terms as “neural networks” and “deep learning”
into everyday vernacular. Businesses now advertise AI-based solutions as value-additions to their
services, so that “artificial intelligence” is now both a technical term and a marketing keyword. Other
disciplines, such as biology, art, architecture, and finance, increasingly use AI techniques to solve
problems within their disciplines.
For the first time in our history, the broader population has access to sophisticated AI-driven tools,
including tools to generate essays or poems from a prompt, photographs or artwork from a description,
and fake photographs or videos depicting real people. AI technology is now in widespread use in stock
trading, curating our news and social media feeds, automated evaluation of job applicants, detection of
medical conditions, and influencing prison sentencing through recidivism prediction. Consequently, AI
technology can have significant societal impacts that must be understood and considered when
developing and applying it.
Changes since CS 2013
To reflect this recent growth and societal impact, the knowledge area has been revised from CS 2013
in the following ways:
● The name has changed from “Intelligent Systems” to “Artificial Intelligence,” to reflect the most
common terminology used for these topics within the field and its more widespread use outside
the field.
● An increased emphasis on neural networks and representation learning reflects the recent
advances in the field. Given its key role throughout AI, search is still emphasized but there is a
slight reduction in symbolic methods in favor of understanding subsymbolic methods and
learned representations. It is important, however, to retain knowledge-based and symbolic
approaches within the AI curriculum because these methods offer unique capabilities, are used
in practice, ensure a broad education, and because more recent neurosymbolic approaches
integrate both learned and symbolic representations.
● There is an increased emphasis on practical applications of AI, including a variety of areas (e.g.,
medicine, sustainability, social media, etc.). This includes explicit discussion of tools that employ
deep generative models (e.g., ChatGPT, DALL-E, Midjourney) and are now in widespread use,
covering how they work at a high level, their uses, and their shortcomings/pitfalls.
● The curriculum reflects the importance of understanding and assessing the broader societal
impacts and implications of AI methods and applications, including issues in AI ethics, fairness,
trust, and explainability.
● The AI knowledge area includes connections to data science through cross-connections with
the Data Management knowledge area.
● There are explicit goals to develop basic AI literacy and critical thinking in every computer
science student, given the breadth of interconnections between AI and other knowledge areas
in practice.
Core Hours
Knowledge Unit                                      | CS Core       | KA Core
Fundamental Issues                                  | 2             | 1
Search                                              | 2 + 3 (AL)†   | 4
Fundamental Knowledge Representation and Reasoning  | 1 + 1 (MSF)‡  | 2
Machine Learning                                    | 4             | 4
Planning                                            |               |
Agents                                              |               |
Robotics                                            |               |
Total                                               | 11            | 13
Knowledge Units
5. Nature of agents
a. Autonomous, semi-autonomous, mixed-initiative autonomy
b. Reflexive, goal-based, and utility-based
c. Decision making under uncertainty and with incomplete information
d. The importance of perception and environmental interactions
e. Learning-based agents
f. Embodied agents
i. sensors, dynamics, effectors
6. AI Applications, growth, and Impact (economic, societal, ethics)
KA Core:
7. Practice identifying problem characteristics in example environments
8. Additional depth on nature of agents with examples
9. Additional depth on AI Applications, growth, and Impact (economic, societal, ethics)
Non-Core:
10. Philosophical issues
11. History of AI
AI-Search: Search
CS Core:
1. State space representation of a problem
a. Specifying states, goals, and operators
b. Factoring states into representations (hypothesis spaces)
c. Problem solving by graph search
i. e.g., graphs as a space, and tree traversals as exploration of that space
ii. Dynamic construction of the graph (it is not given upfront)
2. Uninformed graph search for problem solving (See also: AL-Fundamentals: 12) (a sketch follows this list)
a. Breadth-first search
b. Depth-first search
i. With iterative deepening
c. Uniform cost search
3. Heuristic graph search for problem solving (See also: AL-Strategies)
a. Heuristic construction and admissibility
b. Hill-climbing
c. Local minima and the search landscape
i. Local vs global solutions
d. Greedy best-first search
e. A* search
4. Space and time complexities of graph search algorithms
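Where an instructor wants a concrete anchor for topics 1.c and 2 above, a few lines of code suffice. The following minimal sketch (illustrative only; the successor function and the numeric puzzle are invented for the example) runs breadth-first search over a graph constructed dynamically from a successor function rather than given upfront:

```python
from collections import deque

def bfs(start, is_goal, successors):
    """Breadth-first search; the graph is built on the fly via successors()."""
    frontier = deque([start])
    parents = {start: None}            # doubles as the visited set
    while frontier:
        state = frontier.popleft()
        if is_goal(state):
            path = []                  # reconstruct path via parent links
            while state is not None:
                path.append(state)
                state = parents[state]
            return path[::-1]
        for nxt in successors(state):
            if nxt not in parents:     # dynamic construction of the graph
                parents[nxt] = state
                frontier.append(nxt)
    return None                        # goal unreachable

# Shortest sequence of +1 / *2 moves from 0 to 10:
print(bfs(0, lambda s: s == 10, lambda s: [s + 1, s * 2]))
```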
KA Core:
5. Bidirectional search
6. Beam search
7. Two-player adversarial games
a. Minimax search
b. Alpha-beta pruning
i. Ply cutoff
8. Implementation of A* search (a sketch follows this list)
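One possible shape for topic 8 is sketched below; the grid problem, Manhattan-distance heuristic, and all function names are assumptions made for this illustration, not a prescribed implementation:

```python
import heapq
import itertools

def a_star(start, goal, neighbors, cost, h):
    """A* search: expand by f(n) = g(n) + h(n); h should be admissible."""
    counter = itertools.count()          # tie-breaker: never compare states
    open_heap = [(h(start), next(counter), 0, start, None)]
    parents = {}                         # state -> predecessor; also closed set
    while open_heap:
        _, _, g, state, parent = heapq.heappop(open_heap)
        if state in parents:             # already expanded with a cheaper g
            continue
        parents[state] = parent
        if state == goal:
            path = [state]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return path[::-1], g
        for nxt in neighbors(state):
            if nxt not in parents:
                g2 = g + cost(state, nxt)
                heapq.heappush(open_heap, (g2 + h(nxt), next(counter), g2,
                                           nxt, state))
    return None, float("inf")

# Example: 4-connected 4x4 grid, unit step cost, Manhattan-distance heuristic.
GOAL = (3, 3)
def nbrs(s):
    x, y = s
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx <= 3 and 0 <= y + dy <= 3]
path, g = a_star((0, 0), GOAL, nbrs, lambda a, b: 1,
                 lambda s: abs(s[0] - GOAL[0]) + abs(s[1] - GOAL[1]))
print(path, g)   # a 7-state path of cost 6
```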
Non-Core:
9. Understanding the search space
a. Constructing search trees
b. Dynamic search spaces
c. Combinatorial explosion of the search space
d. Search space topology (ridges, saddle points, local minima, etc.)
10. Local search
11. Constraint satisfaction
12. Tabu search
13. Variations on A* (IDA*, SMA*, RBFS)
14. Two-player adversarial games
a. The horizon effect
b. Opening playbooks / endgame solutions
c. What it means to “solve” a game (e.g., checkers)
15. Implementation of minimax search, beam search
16. Expectimax search (MDP-solving) and chance nodes
17. Stochastic search
a. Simulated annealing
b. Genetic algorithms
c. Monte-Carlo tree search
7. Design and implement a simulated annealing schedule to avoid local minima in a problem.
8. Design and implement A*/beam search to solve a problem, and compare it against other search
algorithms in terms of the solution cost, number of nodes expanded, etc.
9. Apply minimax search with alpha-beta pruning to prune search space in a two-player
adversarial game (e.g., connect four).
10. Compare and contrast genetic algorithms with classic search techniques, explaining when it is
most appropriate to use a genetic algorithm to learn a model versus other forms of optimization
(e.g., gradient descent).
11. Compare and contrast various heuristic searches vis-a-vis applicability to a given problem.
KA Core:
4. Random variables and probability distributions
a. Axioms of probability
b. Probabilistic inference
c. Bayes’ Rule (derivation; a worked example follows this list)
d. Bayesian inference (more complex examples)
5. Independence
6. Conditional Independence
7. Markov chains and Markov models
8. Utility and decision making
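A small numeric example can make Bayes’ rule (topic 4.c) concrete before more complex inference is attempted. In the sketch below, the prevalence and test accuracies are invented for the illustration:

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive) = P(+|D) P(D) / P(+), with P(+) by total probability."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# A 1%-prevalent condition with a 95%-sensitive, 5%-false-positive test:
print(posterior(0.01, 0.95, 0.05))   # ~0.16: a positive test is far from certain
```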
AI-ML: Machine Learning
CS Core:
1. Definition and examples of a broad variety of machine learning tasks
a. Supervised learning
i. Classification
ii. Regression
b. Reinforcement learning
c. Unsupervised learning
i. Clustering
2. Fundamental ideas:
a. No free lunch: no one learner can solve all problems; representational design decisions
have consequences
b. Sources of error and undecidability in machine learning
3. A simple statistics-based supervised learning method, such as linear regression or decision trees
(a sketch follows this CS core list)
a. Focus on how they work without going into mathematical or optimization details; enough
to understand and use existing implementations correctly
4. The overfitting problem / controlling solution complexity (regularization, pruning – intuition only)
a. The bias (underfitting) - variance (overfitting) tradeoff
5. Working with Data
a. Data preprocessing
i. Importance and pitfalls of
b. Handling missing values (imputing, flag-as-missing)
i. Implications of imputing vs flag-as-missing
c. Encoding categorical variables, encoding real-valued data
d. Normalization/standardization
e. Emphasis on real data, not textbook examples
6. Representations
a. Hypothesis spaces and complexity
b. Simple basis feature expansion, such as squaring univariate features
c. Learned feature representations
7. Machine learning evaluation
a. Separation of train, validation, and test sets
b. Performance metrics for classifiers
c. Estimation of test performance on held-out data
d. Tuning the parameters of a machine learning model with a validation set
e. Importance of understanding what your model is actually doing, where its
pitfalls/shortcomings are, and the implications of its decisions
8. Basic neural networks
a. Fundamentals of understanding how neural networks work and their training process,
without details of the calculations
b. Basic introduction to generative neural networks (large language models, etc.)
9. Ethics for Machine Learning (See also: SEP-Context)
a. Focus on real data, real scenarios, and case studies.
b. Dataset/algorithmic/evaluation bias
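Topics 3 and 7 above, and the optimization formulation in KA-core topic 10 below, can be grounded in a few lines of code. The following sketch is one possible illustration, not a prescribed implementation; the synthetic data, learning rate, and iteration count are arbitrary choices. It fits least-squares linear regression by gradient descent and reports error on a held-out test split:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=200)
y = 3.0 * X + 0.5 + rng.normal(0, 0.1, size=200)      # noisy line y = 3x + 0.5

# Topic 7.a: hold out data; never tune or report on the training set alone.
X_train, X_test, y_train, y_test = X[:150], X[150:], y[:150], y[150:]

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):                                   # gradient descent on MSE
    err = w * X_train + b - y_train
    w -= lr * 2 * np.mean(err * X_train)               # dMSE/dw
    b -= lr * 2 * np.mean(err)                         # dMSE/db

test_mse = np.mean((w * X_test + b - y_test) ** 2)
print(f"w={w:.2f}, b={b:.2f}, test MSE={test_mse:.4f}")
```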
KA Core:
10. Formulation of simple machine learning as an optimization problem, such as least squares
linear regression or logistic regression
a. Objective function
b. Gradient descent
c. Regularization to avoid overfitting (mathematical formulation)
11. Ensembles of models
a. Simple weighted majority combination
12. Deep learning
a. Deep feed-forward networks (intuition only, no math)
b. Convolutional neural networks (intuition only, no math)
c. Visualization of learned feature representations from deep nets
d. Other architectures (generative NN, recurrent NN, transformers, etc.)
13. Performance evaluation (a sketch follows this list)
a. Other metrics for classification (e.g., error, precision, recall)
b. Performance metrics for regressors
c. Confusion matrix
d. Cross-validation
i. Parameter tuning (grid/random search, via cross-validation)
14. Overview of reinforcement learning
15. Two or more applications of machine learning algorithms
a. E.g., medicine and health, economics, vision, natural language, robotics, game play
16. Ethics for Machine Learning
a. Continued focus on real data, real scenarios, and case studies (See also: SEP-Context)
b. Privacy (See also: SEP-Privacy)
c. Fairness (See also: SEP-Privacy)
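As one way to make the KA-core evaluation topics (topic 13) concrete, the sketch below computes precision and recall from confusion-matrix counts; the counts are invented for the illustration:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Invented counts: 40 true positives, 10 false positives, 20 false negatives.
p, r = precision_recall(40, 10, 20)
print(f"precision={p:.2f}, recall={r:.2f}")   # 0.80, 0.67
```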
Non-Core:
17. General statistical-based learning, parameter estimation (maximum likelihood)
18. Supervised learning
a. Decision trees
b. Nearest-neighbor classification and regression
c. Learning simple neural networks / multi-layer perceptrons
d. Linear regression
e. Logistic regression
f. Support vector machines (SVMs) and kernels
g. Gaussian Processes
19. Overfitting
a. The curse of dimensionality
b. Regularization (math computations, L2 and L1 regularization)
20. Experimental design
a. Data preparation (e.g., standardization, representation, one-hot encoding)
b. Hypothesis space
c. Biases (e.g., algorithmic, search)
d. Partitioning data: stratification, training set, validation set, test set
e. Parameter tuning (grid/random search, via cross-validation)
f. Performance evaluation
i. Cross-validation
ii. Metric: error, precision, recall, confusion matrix
iii. Receiver operating characteristic (ROC) curve and area under ROC curve
21. Bayesian learning (Cross-Reference AI/Reasoning Under Uncertainty)
a. Naive Bayes and its relationship to linear models
b. Bayesian networks
c. Prior/posterior
d. Generative models
22. Deep learning
a. Deep feed-forward networks
b. Neural tangent kernel and understanding neural network training
c. Convolutional neural networks
d. Autoencoders
e. Recurrent networks
f. Representations and knowledge transfer
g. Adversarial training and generative adversarial networks
23. Representations
a. Manually crafted representations
b. Basis expansion
c. Learned representations (e.g., deep neural networks)
24. Unsupervised learning and clustering
a. K-means
b. Gaussian mixture models
c. Expectation maximization (EM)
d. Self-organizing maps
25. Graph analysis (e.g., PageRank)
26. Semi-supervised learning
27. Graphical models (See also: AI/Probabilistic Representation and Reasoning)
28. Ensembles
a. Weighted majority
b. Boosting/bagging
c. Random forest
d. Gated ensemble
29. Learning theory
a. General overview of learning theory / why learning works
b. VC dimension
c. Generalization bounds
30. Reinforcement learning
a. Exploration vs. exploitation trade-off
b. Markov decision processes
c. Value and policy iteration
d. Policy gradient methods
e. Deep reinforcement learning
31. Explainable / interpretable machine learning
a. Understanding feature importance (e.g., LIME, Shapley values)
b. Interpretable models and representations
32. Recommender systems
33. Hardware for machine learning
a. GPUs / TPUs
34. Application of machine learning algorithms to:
a. Medicine and health
b. Economics
c. Education
d. Vision
e. Natural language
f. Robotics
g. Game play
h. Data mining (Cross-reference IM/Data Mining)
35. Ethics for Machine Learning
a. Continued focus on real data, real scenarios, and case studies (See also: SEP-Context)
b. In depth exploration of dataset/algorithmic/evaluation bias, data privacy, and fairness
(See also: SEP-Privacy, SEP-Context)
c. Trust / explainability
11. Visualize the training progress of a neural network through learning curves in a well-established
toolkit (e.g., TensorBoard) and visualize the learned features of the network.
12. Implement simple algorithms for supervised learning, reinforcement learning, and unsupervised
learning.
13. Determine which of the three learning styles is appropriate to a particular problem domain.
14. Compare and contrast each of the following techniques, providing examples of when each
strategy is superior: decision trees, logistic regression, naive Bayes, neural networks, and belief
networks.
15. Evaluate the performance of a simple learning system on a real-world dataset.
16. Characterize the state of the art in learning theory, including its achievements and its
shortcomings.
17. Explain the problem of overfitting, along with techniques for detecting and managing the
problem.
18. Explain the triple tradeoff among the size of a hypothesis space, the size of the training set, and
performance accuracy.
CS/KA Core: For the CS core, cover at least one application and an overview of the societal issues of
AI/ML. The KA core should go more in-depth with one or more additional applications, more in-depth on
deep generative models, and an analysis and discussion of the social issues.
1. Applications of AI to a broad set of problems and diverse fields, such as medicine, health,
sustainability, social media, economics, education, robotics, etc. (choose one for CS Core, at
least one additional for KA core)
a. Formulating and evaluating a specific application as an AI problem
b. Data availability and cleanliness
i. Basic data cleaning and preprocessing
ii. Data set bias
c. Algorithmic bias
d. Evaluation bias
2. Deployed deep generative models
a. High-level overview of deep image generative models (e.g. as of 2023, DALL-E,
Midjourney, Stable Diffusion, etc.), how they work, their uses, and their
shortcomings/pitfalls.
b. High-level overview of large language models (e.g. as of 2023, ChatGPT, Bard, etc.),
how they work, their uses, and their shortcomings/pitfalls.
3. Societal impact of AI
a. Ethics
b. Fairness
c. Trust / explainability
d. Privacy and usage of training data
e. Human autonomy and oversight
f. Sustainability
6. Compare and contrast the basic techniques for representing uncertainty.
7. Compare and contrast the basic techniques for qualitative representation.
8. Apply situation and event calculus to problems of action and change.
9. Explain the distinction between temporal and spatial reasoning, and how they interrelate.
10. Explain the difference between rule-based, case-based and model-based reasoning techniques.
11. Define the concept of a planning system and how it differs from classical search techniques.
12. Describe the differences between planning as search, operator-based planning, and
propositional planning, providing examples of domains where each is most applicable.
13. Explain the distinction between monotonic and non-monotonic inference.
AI-Planning: Planning
Non-Core:
1. Review of propositional and first-order logic
2. Planning operators and state representations
3. Total order planning
4. Partial-order planning
5. Plan graphs and GraphPlan
6. Hierarchical planning
7. Planning languages and representations
a. PDDL
8. Multi-agent planning
9. MDP-based planning
10. Interconnecting planning, execution, and dynamic replanning
a. Conditional planning
b. Continuous planning
c. Probabilistic planning
AI-Agents: Agents
(Cross-reference HCI/Collaboration and Communication)
Non-Core:
1. Agent architectures (e.g., reactive, layered, cognitive)
2. Agent theory (including mathematical formalisms)
3. Rationality, Game Theory
a. Decision-theoretic agents
i. Markov decision processes (MDP)
ii. Bandit algorithms
4. Software agents, personal assistants, and information access
a. Collaborative agents
b. Information-gathering agents
c. Believable agents (synthetic characters, modeling emotions in agents)
5. Learning agents
6. Cognitive architectures (e.g., ACT-R, SOAR, ICARUS, FORR)
a. Capabilities (perception, decision making, prediction, knowledge maintenance, etc.)
b. Knowledge representation, organization, utilization, acquisition, and refinement
c. Applications and evaluation of cognitive architectures
7. Multi-agent systems
a. Collaborating agents
b. Agent teams
c. Competitive agents (e.g., auctions, voting)
d. Swarm systems and biologically inspired models
e. Multi-agent learning
8. Human-agent interaction
a. Communication methodologies (verbal and non-verbal)
b. Practical issues
c. Applications
i. Trading agents, supply chain management
b. Transformers
c. Multi-modal embeddings (e.g., images + text)
d. Generative language models
AI-Robo: Robotics
(See also: SPD/Robot Platforms)
Non-Core:
1. Overview: problems and progress
a. State-of-the-art robot systems, including their sensors and an overview of their sensor
processing
b. Robot control architectures, e.g., deliberative vs. reactive control and Braitenberg
vehicles
c. World modeling and world models
d. Inherent uncertainty in sensing and in control
2. Sensors and effectors
a. Sensors: LIDAR, sonar, vision, depth, stereoscopic, event cameras, microphones,
haptics, etc.
b. Effectors: wheels, arms, grippers, etc.
3. Coordinate frames, translation, and rotation (2D and 3D) (a worked sketch follows this list)
4. Configuration space and environmental maps
5. Interpreting uncertain sensor data
6. Localization and mapping
7. Navigation and control
8. Forward and inverse kinematics
9. Motion path planning and trajectory optimization
10. Joint control and dynamics
11. Vision-based control
12. Multiple-robot coordination and collaboration
13. Human-robot interaction (See also: HCI)
a. Shared workspaces
b. Human-robot teaming and physical HRI
c. Social assistive robots
d. Motion/task/goal prediction
e. Collaboration and communication (explicit vs implicit, verbal or symbolic vs non-verbal or
visual)
f. Trust
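Topic 3 lends itself to a short worked example. The following sketch (the robot pose and sensor reading are invented for the illustration) expresses a point measured in a robot's local frame in the world frame using a 2D rotation followed by a translation:

```python
import numpy as np

def robot_to_world(points, heading, position):
    """Rigid 2D transform: rotate by the robot's heading, then translate."""
    R = np.array([[np.cos(heading), -np.sin(heading)],
                  [np.sin(heading),  np.cos(heading)]])
    return points @ R.T + position

# A reading 1 m straight ahead of a robot at (2, 3) facing +y (heading pi/2):
reading = np.array([[1.0, 0.0]])
print(robot_to_world(reading, np.pi / 2, np.array([2.0, 3.0])))  # ~[[2., 4.]]
```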
2. List at least three image-segmentation approaches, such as thresholding, edge-based and
region-based algorithms, along with their defining characteristics, strengths, and weaknesses.
3. Implement 2d object recognition based on contour- and/or region-based shape representations.
4. Distinguish the goals of sound-recognition, speech-recognition, and speaker-recognition and
identify how the raw audio signal will be handled differently in each of these cases.
5. Provide at least two examples of a transformation of a data source from one sensory domain to
another, e.g., tactile data interpreted as single-band 2d images.
6. Implement a feature-extraction algorithm on real data, e.g., an edge or corner detector for
images or vectors of Fourier coefficients describing a short slice of audio signal (a sketch follows this list).
7. Implement an algorithm combining features into higher-level percepts, e.g., a contour or polygon
from visual primitives or phoneme hypotheses from an audio signal.
8. Implement a classification algorithm that segments input percepts into output categories and
quantitatively evaluates the resulting classification.
9. Evaluate the performance of the underlying feature-extraction, relative to at least one alternative
possible approach (whether implemented or not) in its contribution to the classification task (8),
above.
10. Describe at least three classification approaches, their prerequisites for applicability, their
strengths, and their shortcomings.
11. Implement and evaluate a deep learning solution to problems in computer vision, such as object
or scene recognition.
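For outcome 6, an edge detector can be prototyped in a handful of lines. The sketch below is one possible illustration; the hand-rolled 3x3 Sobel convolution and the synthetic step-edge image are choices made for the example, not a prescribed solution:

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map from 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T                                  # y-gradient kernel
    h, w = img.shape
    mag = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            mag[i, j] = np.hypot(np.sum(patch * kx), np.sum(patch * ky))
    return mag

# Synthetic 8x8 image with a vertical step edge between columns 3 and 4:
img = np.zeros((8, 8))
img[:, 4:] = 1.0
print(sobel_edges(img).round(1))               # strong responses at the step
```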
Professional Dispositions
● Meticulousness: Attention must be paid to details when implementing AI and machine learning
algorithms, requiring students to be meticulous.
● Persistence: AI techniques often operate in partially observable environments and optimization
processes may have cascading errors from multiple iterations. Getting AI techniques to work
predictably takes trial and error, and repeated effort. These call for persistence on the part of
the student.
● Responsible: Applications of AI can have significant impacts on society, affecting both
individuals and large populations. This calls for students to understand the implications of work
in AI to society, and to make responsible choices for when and how to apply AI techniques.
Math Requirements
Required:
● Discrete Math:
o sets, relations, functions
o predicate and first-order logic, logic-based proofs
● Linear Algebra:
o Matrix operations, matrix algebra
o Basis sets
● Probability and Statistics:
o Basic probability theory, conditional probability, independence
o Bayes theorem and applications of Bayes theorem
o Expected value, basic descriptive statistics, distributions
o Basic summary statistics and significance testing
o All should be applied to real decision making examples with real data, not “textbook”
examples
Desirable:
● Calculus-based probability and statistics
● Other topics in probability and statistics
o Hypothesis testing, data resampling, experimental design techniques
● Optimization
Students should be able to select the proper machine learning algorithm for a problem, preprocess the data appropriately, apply proper
evaluation techniques, and explain how to interpret the resulting models, including the model's
shortcomings. They should be able to identify and compensate for biased data sets and other sources
of error, and be able to explain ethical and societal implications of their application of machine learning
to practical problems.
Committee
Members:
● Zachary Dodds, Harvey Mudd College, Claremont, CA, USA
● Susan L. Epstein, Hunter College and The Graduate Center of The City University of New York,
New York, NY, USA
● Laura Hiatt, US Naval Research Laboratory, Washington, DC, USA
● Amruth N. Kumar, Ramapo College of New Jersey, Mahwah, USA
● Peter Norvig, Google, Mountain View, CA, USA
● Meinolf Sellmann, GE Research, Niskayuna, NY, USA
● Reid Simmons, Carnegie Mellon University, Pittsburgh, PA, USA
Contributors:
● Claudia Schulz, Thomson Reuters, Zurich, Switzerland
Algorithmic Foundations (AL)
Preamble
Algorithms and data structures are fundamental to computer science and software engineering since
every theoretical computation and real-world program consists of algorithms that operate on data
elements possessing an underlying structure. Selecting appropriate computational solutions to real-
world problems benefits from understanding the theoretical and practical capabilities, and limitations, of
available algorithms and paradigms, including their impact on the environment and society. Moreover,
this understanding provides insight into the intrinsic nature of computation, computational problems,
and computational problem-solving as well as possible solution techniques independent of
programming language, programming paradigm, computer hardware, or other implementation aspects.
This knowledge area focuses on the nature of algorithmic computation including the concepts and skills
required to design and analyze algorithms for solving real-world computational problems. It
complements the implementation of algorithms and data structures found in the Software Development
Fundamentals (SDF) knowledge area. As algorithms and data structures are essential in all advanced
areas of computer science, this area provides the algorithmic foundations that every computer science
graduate is expected to know. Exposure to the breadth of these foundational AL topics is designed to
provide students with the basis for studying additional topics in algorithmic computation in more depth
and for learning advanced algorithms across a variety of knowledge areas and disciplines.
The increase of four CS core hours acknowledges the importance of this foundational area in the
computer science curriculum and returns it to the 2001 level. Despite this increase, there is a significant
overlap in hours with the Software Development Fundamentals (SDF) and Mathematical and Statistical
Foundations (MSF) areas. There is also a complementary nature of the units in this area since, for
example, linear search of an array covers topics in AL-Fundamentals and can be used to simultaneously
explain AL-Complexity (e.g., O(n)) and AL-Strategies (e.g., brute-force), as the sketch below illustrates.
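A minimal sketch of that linear-search example, with comments noting the unit each aspect illustrates (illustrative only):

```python
def linear_search(items, target):
    """Brute-force strategy (AL-Strategies) over a fundamental structure
    (AL-Fundamentals); worst case makes n comparisons, i.e., O(n)
    (AL-Complexity)."""
    for index, item in enumerate(items):
        if item == target:
            return index        # found after at most n comparisons
    return -1                   # not found after exactly n comparisons

print(linear_search([7, 3, 9, 1], 9))   # prints 2
```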
The KA hours primarily reflect topics studied in a stand-alone computational theory course and the
availability of additional hours when such a course is included in the curriculum.
Core Hours
Knowledge Unit         | CS Core | KA Core
Algorithmic Strategies | 6       |
Complexity Analysis    | 6       | 3
Total                  | 32      | 32
Knowledge Units
c. O(log_b n) (e.g., depth/breadth-first tree)
12. Sorting Algorithms (e.g., stable, unstable) (See also: SDF-Algorithms)
a. O(n^2) complexity (e.g., insertion, selection)
b. O(n log n) complexity (e.g., quicksort, merge, timsort) (a mergesort sketch follows this list)
13. Graph Algorithms
a. Shortest Path (e.g., Dijkstra’s, Floyd’s)
b. Minimal spanning tree (e.g., Prim’s, Kruskal’s)
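As one illustration of topic 12.b, a minimal stable mergesort is sketched below (illustrative only, not a prescribed implementation):

```python
def merge_sort(a):
    """Divide-and-conquer, stable, O(n log n) comparison sort."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= preserves order of equal keys
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 6, 1, 3]))    # [1, 2, 3, 4, 5, 6]
```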
KA Core:
1. Heaps and Priority Queues
2. Sorting Algorithms
a. O(n log n) heapsort
b. Pseudo O(n) complexity (e.g., bucket, counting, radix)
3. Graph Algorithms
a. Transitive closure (e.g., Warshall’s Algorithm)
b. Topological sort
4. Matching
a. Efficient String Matching (e.g., Boyer-Moore, Knuth-Morris-Pratt)
b. Longest common subsequence matching
c. Regular expression matching
Non Core:
5. Cryptography Algorithms (e.g., SHA-256) (See also: SE-Cryptography, MSF-Discrete: 5)
6. Parallel Algorithms (See also: PDC-Algorithms, FPL-Parallel)
7. Consensus algorithms (e.g., Blockchain) (See also: SE-Cryptography: 14)
a. Proof of work vs. proof of stake (See also: SEP-Sustainability: 3)
8. Quantum computing algorithms (See also: AR-Quantum: 6)
a. Oracle-based (e.g., Deutsch-Jozsa, Bernstein-Vazirani, Simon)
b. Superpolynomial speed-up via QFT (e.g., Shor’s algorithm)
c. Polynomial speed-up via amplitude amplification (e.g., Grover’s algorithm)
5. Explain how collision avoidance and collision resolution is handled in hash tables.
6. Discuss factors other than computational efficiency that influence the choice of algorithms, such as,
programming time, maintainability, and the use of application-specific patterns in the input data.
KA Core:
7. Describe the heap property and the use of heaps as an implementation of a priority queue.
8. For each of the algorithms and algorithmic approaches in the KA core topics:
a. Give a prototypical example of the algorithm,
b. Using a real-world example, show step-by-step how the algorithm operates.
AL-Strategies: Algorithmic Strategies
CS Core:
1. Paradigms
a. Brute-Force (e.g., linear search, selection sort, traveling salesperson, knapsack)
b. Decrease-and-Conquer
i. By a Constant (e.g., insertion sort, topological sort),
ii. By a Constant Factor (e.g., binary search),
iii. By a Variable Size (e.g., Euclid’s algorithm)
c. Divide-and-Conquer (e.g., Binary Search, Quicksort, Mergesort, Strassen’s)
d. Greedy (e.g., Dijkstra’s, Kruskal’s)
e. Transform-and-Conquer
i. Instance simplification (e.g. find duplicates via list presort)
ii. Representation change (e.g., heapsort)
iii. Problem reduction (e.g., least-common-multiple, linear programming)
iv. Dynamic Programming (e.g., Floyd’s)
f. Space vs. Time Tradeoffs (e.g., hashing) (See also: AL-Fundamentals)
2. Handling Exponential Growth (e.g., heuristics, A*, branch-and-bound, backtracking)
3. Iteration vs. Recursion (e.g., factorial; see the sketch below) (See also: MSF-Discrete: 2)
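A minimal illustration of topic 3, contrasting recursive and iterative formulations of factorial (illustrative only):

```python
def factorial_recursive(n):
    """Mirrors the mathematical definition; uses O(n) call-stack space."""
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

def factorial_iterative(n):
    """Same O(n) time, but O(1) extra space and no recursion-depth limit."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

assert factorial_recursive(10) == factorial_iterative(10) == 3628800
```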
KA Core:
4. Paradigms
a. Approximation Algorithms
b. Iterative improvement (e.g., Ford-Fulkerson, Simplex)
c. Randomized/Stochastic Algorithms (e.g., Max-Cut, Balls and Bins)
Non-Core:
5. Quantum computing (See also: AL-Fundamentals: 8, AL-Models: 8)
CS Core:
1. For each of the algorithmic paradigms in this unit:
a. Articulate its definitional characteristics,
b. Give an example that demonstrates the paradigm, explaining how the example satisfies the
paradigm’s characteristics.
2. For each of the algorithms in the AL-Fundamentals unit:
a. Describe the paradigm used by the algorithm and how it exemplifies this paradigm
3. Given an algorithm, describe the paradigm used by the algorithm and how it exemplifies this
paradigm
4. Given a real-world problem, determine appropriate algorithmic paradigms and algorithms from these
paradigms that address the problem, considering the tradeoffs among the paradigms and algorithms
selected.
5. Give an example of an iterative and a recursive algorithm that solve the same problem, explaining
the benefits and disadvantages of each approach.
6. Determine if a greedy approach leads to an optimal solution.
7. Explain at least one approach for addressing a computational problem whose algorithmic solution is
exponential.
AL-Complexity: Complexity
CS Core:
1. Complexity Analysis Framework
a. Best, average, and worst case performance of an algorithm
b. Empirical and Relative (Order of Growth) Measurements
c. Input Size and Primitive Operations
d. Time and Space Efficiency
2. Asymptotic complexity analysis (average and worst case bounds)
a. Big-O, Big-Omega, and Big-Theta formal notations
b. Foundational complexity classes and representative examples/problems
i. O(1) Constant (e.g., Array Access)
ii. O(log₂ n) Logarithmic (e.g., Binary Search)
iii. O(n) Linear (e.g., Linear Search)
iv. O(n log₂ n) Log Linear (e.g., Mergesort)
v. O(n²) Quadratic (e.g., Selection Sort)
vi. O(n³) Cubic (e.g., Gaussian Elimination)
vii. O(2ⁿ) Exponential (e.g., Knapsack, SAT, TSP, All Subsets)
viii. O(n!) Factorial (e.g., Hamiltonian Circuit, All Permutations)
3. Empirical measurements of performance (a timing sketch follows this list)
4. Tractability and Intractability
a. P, NP and NP-Complete complexity classes
b. NP-Complete problems (e.g., SAT, Knapsack, TSP)
c. Reductions
5. Time and space trade-offs in algorithms.
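The sketch below (ours; the input sizes are arbitrary) illustrates topic 3 by timing linear search
against binary search on sorted data, so measured growth can be compared with the asymptotic
prediction:

    import timeit
    from bisect import bisect_left

    for n in (1_000, 10_000, 100_000):
        data = list(range(n))
        target = n - 1   # worst case for linear search
        linear = timeit.timeit(lambda: data.index(target), number=100)
        binary = timeit.timeit(lambda: bisect_left(data, target), number=100)
        print(f"n={n:>7}: linear={linear:.4f}s  binary={binary:.4f}s")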
KA Core:
6. Little-o and Little-Omega notations
7. Recursive Analysis (e.g., recurrence relations, Master theorem, substitution; a worked example follows this list)
8. Amortized Analysis
9. Turing Machine-Based Models of Complexity
a. Time complexity (See also: AL-Models)
i. P, NP, NP-C, and EXP classes
ii. Cook-Levin Theorem
b. Space Complexity
i. NSPACE and PSPACE
ii. Savitch’s Theorem
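As a worked instance of topic 7 (our example, not an additional topic), mergesort’s recurrence falls
under case 2 of the Master theorem: with a = 2, b = 2, and f(n) = cn = Θ(n^(log_b a)) = Θ(n),

    T(n) = 2T(n/2) + cn  ⇒  T(n) = Θ(n log n).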
KA Core:
14. Use recurrence relations to determine the time complexity of recursively defined algorithms.
15. Solve elementary recurrence relations using some form of the Master Theorem.
16. Use Big-O notation to give upper bounds on the time/space complexity of algorithms.
17. Explain the Cook-Levin Theorem and the NP-Completeness of SAT.
18. Define the classes P and NP.
19. Prove that a problem is NP-Complete by reducing a classic known NP-C problem to it (e.g., 3SAT
and Clique)
20. Define the PSPACE class and its relation to the EXP class.
AL-Models: Computational Models and Formal Languages
CS Core:
1. Formal Automata (a finite-state sketch follows this list)
a. Finite State
b. Pushdown (See also: AL-Fundamentals: 5, SDF-ADT)
c. Linear Bounded
d. Turing Machine
2. Formal Languages, Grammars and Chomsky Hierarchy
(See also: FPL-Translation, FPL-Syntax)
a. Regular (Type-3)
i. Regular Expressions
b. Context-Free (Type-2)
c. Context-Sensitive (Type-1)
d. Recursively Enumerable (Type-0)
3. Relations among formal automata, languages, and grammars
4. Decidability, (un)computability, and halting
5. The Church-Turing Thesis
6. Algorithmic Correctness
a. Invariants (e.g., in: iteration, recursion, tree search)
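A minimal Python sketch (ours) of topic 1a: a deterministic finite automaton over the alphabet
{0, 1} accepting exactly the strings with an even number of 1s, a regular (Type-3) language:

    def accepts_even_ones(string):
        """DFA with states {'even', 'odd'}; start and accept state: 'even'."""
        transition = {
            ('even', '0'): 'even', ('even', '1'): 'odd',
            ('odd', '0'): 'odd',   ('odd', '1'): 'even',
        }
        state = 'even'
        for symbol in string:
            state = transition[(state, symbol)]   # one deterministic move per symbol
        return state == 'even'

    assert accepts_even_ones("1001") and not accepts_even_ones("10")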
KA Core:
1. Deterministic and nondeterministic automata
2. Pumping Lemma Proofs (See also: MSF-Discrete: 3)
a. Finite State/Regular
b. Pushdown Automata/Context-Free
3. Decidability
a. Arithmetization and Diagonalization (See also: MSF-Discrete: 1)
4. Reducibility and reductions
5. Time Complexity based on Turing Machine
6. Space Complexity (e.g., PSPACE, Savitch’s Theorem)
7. Equivalent Models of Algorithmic Computation
a. Turing Machines and Variations (e.g., multi-tape, non-deterministic)
b. Lambda Calculus (See also: FPL-Functional)
c. Mu-Recursive Functions
Non-Core:
8. Quantum Computation (See also: AR-Quantum)
a. Postulates of quantum mechanics
i. State Space
ii. State Evolution
iii. State Composition
iv. State Measurement
b. Column vector representations of Qubits
c. Matrix representations of quantum operations
d. Quantum Gates (e.g., XNOT, CNOT)
Non-Core:
10. For a quantum system give examples that explain the following postulates:
a. State Space: system state represented as a unit vector in Hilbert space,
b. State Evolution: the use of unitary operators to evolve system state,
c. State Composition: the use of tensor product to compose systems states,
d. State Measurement: the probabilistic output of measuring a system state.
11. Explain the operation of a quantum XNOT or CNOT gate, represented as a matrix, on a quantum bit
represented as a column vector.
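A hedged NumPy sketch (our choice of tool, not mandated here) of outcomes 10 and 11: states as
column vectors, gates as unitary matrices, and composition via the tensor product:

    import numpy as np

    ket0 = np.array([[1], [0]], dtype=complex)   # |0> as a column vector
    ket1 = np.array([[0], [1]], dtype=complex)   # |1>

    X = np.array([[0, 1],                        # the X (NOT) gate; called "XNOT" above
                  [1, 0]], dtype=complex)
    CNOT = np.array([[1, 0, 0, 0],               # flips the target iff the control is |1>
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)

    assert np.allclose(X @ ket0, ket1)           # X|0> = |1>
    state_10 = np.kron(ket1, ket0)               # state composition via tensor product
    state_11 = np.kron(ket1, ket1)
    assert np.allclose(CNOT @ state_10, state_11)   # CNOT|10> = |11>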
Professional Dispositions
Math Requirements
Required:
● MSF-Discrete
As depicted in the following figure, the committee envisions two common approaches for addressing
foundational AL topics in CS courses. Both approaches include required introductory Programming
(CS1) and Data Structures (CS2) courses. In the three-course approach, all CS Core topics are
covered. Alternatively, in the four-course approach, the AL-Models unit CS and KA core topics are
addressed in a computational-theory-focused course, which leaves room to address additional KA
topics in the third Algorithms course. Both approaches assume Big-O analysis is introduced in the Data
Structures (CS2) course and that graphs are taught in the third Algorithms course. The committee
recognizes that there are many different approaches for packaging AL topics into courses including, for
example, introducing graphs in CS2 Data Structures, Backtracking in an AI course, and AL-Models
topics in a theory course which also addresses FPL topics. The given example is simply one way to
cover the entire AL CS Core in three introductory courses.
Programming 1 (CS1)
● AL-Foundational: Fundamental Data Structures and Algorithms (2 hours)
○ Arrays and Strings
○ Linear Search
Note: the following AL topics are demonstrated in CS1, but not explicitly taught as such:
● AL-Strategies: Algorithmic Strategies
○ Brute Force (e.g., linear search)
○ Iteration (e.g., linear search)
● AL-Complexity: Complexity Analysis
○ O(1) and O(n) runtime complexities
Data Structures (CS2)
○ Multi-dimensional Arrays
○ Linked Lists
○ Hash Tables/Maps including conflict resolution strategies
○ Stacks, Queues, and Dequeues
○ Trees: Binary, Ordered, Breadth- and Depth-first search
○ An O(n²) sort (e.g., Selection Sort)
○ An O(n log n) sort (e.g., Quicksort, Mergesort)
Computation Theory
Committee
Chair: Richard Blumenthal, Regis University, Denver, Colorado, USA
Members:
● Cathy Bareiss, Bethel University, Mishawaka, Minnesota, USA
● Tom Blanchet, Hillman Companies Inc., Boulder, Colorado, USA
● Doug Lea, State University of New York at Oswego, Oswego, New York, USA
● Sara Miner More, Johns Hopkins University, Baltimore, Maryland, USA
● Mia Minnes, University of California San Diego, California, USA
● Atri Rudra, University at Buffalo, Buffalo, New York, USA
● Christian Servin, El Paso Community College, El Paso, Texas, USA
Architecture and Organization (AR)
Preamble
Computing professionals spend considerable time writing efficient code to solve a particular problem in
an application domain. Parallelism and heterogeneity at the hardware system level have been
increasingly utilized to meet performance requirements in almost all systems, including most commodity
hardware. This departure from sequential processing demands a more in-depth understanding of the
underlying computer architectures. Architecture can no longer be treated as a black box, with principles
from one system applied unchanged to another. Instead, programmers should look inside the black box
and exploit specific components to improve system performance and energy efficiency.
The Architecture and Organization (AR) Knowledge Area aims to develop a deeper understanding of the
hardware environments upon which almost all computing is based and the relevant interfaces provided
to higher software layers. The target hardware ranges from low-end embedded-system processors to
high-end enterprise multiprocessors.
The topics in this knowledge area will benefit students by enabling them to appreciate the fundamental
architectural principles of modern computer systems, including the challenge of harnessing parallelism
to sustain performance and energy improvements into the future. This KA will help computer science
students depart from the black box approach and become more aware of the underlying computer system
and the efficiencies that specific architectures can achieve.
Overview
Core Hours
Memory Hierarchy 6
Functional Organization 2
Heterogeneous Architectures 3
Quantum Architectures 3
Total 9 16
Knowledge Units
KA Core:
1. Comment on the progression of computer technology components from vacuum tubes to VLSI,
from mainframe computer architectures to the organization of warehouse-scale computers.
2. Comment on parallelism and data dependencies between and within components in a modern
heterogeneous computer architecture.
3. Explain how the “power wall” makes it challenging to harness parallelism.
4. Propose the design of basic building blocks for a computer: arithmetic-logic unit (gate-level),
registers (gate-level), central processing unit (register transfer-level), and memory (register transfer-
level).
5. Evaluate simple building blocks (e.g., arithmetic-logic unit, registers, movement between registers)
of a simple computer design.
6. Validate the timing diagram behavior of a pipelined processor, identifying data dependency issues.
3. Introduction to SIMD vs. MIMD and the Flynn taxonomy (See also: PDC-A: Programs and
Execution)
4. Shared memory multiprocessors/multicore organization
KA Core:
5. Instruction set architecture (ISA) (e.g., x86, ARM and RISC-V)
a. Fixed vs. variable-width instruction sets
b. Instruction formats
c. Data manipulation, control, I/O
d. Addressing modes
e. Machine language programming
f. Assembly language programming
6. Subroutine call and return mechanisms (See also: FPL-Translation)
7. I/O and interrupts
8. Heap, static, stack and code segments
KA Core:
4. Comment on how instructions are represented at the machine level and in the context of a
symbolic assembler.
5. Map an example of high-level language patterns into assembly/machine language notations.
6. Comment on different instruction formats, such as addresses per instruction and variable-length
vs fixed-length formats.
7. Follow a subroutine diagram to comment on how subroutine calls are handled at the assembly
level.
8. Comment on basic concepts of interrupts and I/O operations.
9. Code a simple assembly language program for string array processing and manipulation.
4. Latency, cycle time, bandwidth and interleaving
5. Cache memories (an address-mapping sketch follows this list)
a. Address mapping
b. Block size
c. Replacement and store policy
6. Multiprocessor cache coherence
7. Virtual memory (hardware support) (See also: OS-F: Memory Management)
8. Fault handling and reliability
9. Reliability (See also: SF-F: System Reliability)
a. Error coding
b. Data compression (See also: AL-Fundamentals.3)
c. Data integrity
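A minimal Python sketch (ours; the block size and line count are arbitrary illustrative parameters)
of address mapping in a direct-mapped cache (topic 5a):

    BLOCK_SIZE = 64    # bytes per cache block (hypothetical)
    NUM_LINES = 256    # lines in a direct-mapped cache (hypothetical)

    def map_address(addr):
        """Split a byte address into (tag, line index, block offset)."""
        offset = addr % BLOCK_SIZE
        block_number = addr // BLOCK_SIZE
        index = block_number % NUM_LINES      # which cache line the block maps to
        tag = block_number // NUM_LINES       # disambiguates blocks sharing a line
        return tag, index, offset

    # Two addresses one full cache apart map to the same line (a conflict):
    assert map_address(0x1234)[1] == map_address(0x1234 + BLOCK_SIZE * NUM_LINES)[1]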
KA Core:
10. Non-von Neumann Architectures
a. Processing In-Memory (PIM)
CS Core:
1. Follow an interrupt control diagram to comment on how interrupts are used to implement I/O
control and data transfers.
2. Enumerate various types of buses in a computer system.
3. List the advantages of magnetic disks and contrast them with the advantages of solid-state
disks.
KA Core:
1. Implementation of simple datapaths, including instruction pipelining, hazard detection and
resolution (e.g., stalls, forwarding)
2. Control unit
a. Hardwired implementation
b. Microprogrammed realization
3. Instruction pipelining
4. Introduction to instruction-level parallelism (ILP)
7. Alternative architectures, such as VLIW/EPIC, accelerators and other special-purpose
processors
8. Dynamic voltage and frequency scaling (DVFS)
9. Dark Silicon
4. Describe how you would determine when to use a domain-specific accelerator instead of a general-
purpose CPU.
5. Enumerate key differences in architectural design principles between a vector and scalar-based
processing unit.
6. List the advantages and disadvantages of PIM architectures.
6. Code Shor’s algorithm in a simulator and document your code, highlighting the classical
components and aspects of Shor’s algorithm.
7. Enumerate the specifics of each qubit modality (e.g., trapped ion, superconducting, silicon spin,
photonic, quantum dot, neutral atom, topological, color center, electron-on-helium, etc.).
8. Contextualize the differences between AQC and the gate model of quantum computation and
which kind of problems each is better suited to solve.
9. Comment on the statement: a QPU is a heterogeneous multicore architecture like an FPGA or a
GPU.
Dispositions
Math Requirements
Required:
● Discrete Math: Sets, Relations, Logical Operations, Number Theory, Boolean Algebra
● Linear Algebra: Arithmetic Operations, Matrix operations
● Logarithms, Limits
Desired:
● Math/Physics for Quantum Computing: basic probability, trigonometry, simple vector spaces,
complex numbers, Euler’s formula
● System performance evaluation: probability and factorial experiment design.
Pre-requisites:
● Discrete Math: Sets, Relations, Logical Operations, Number Theory, Basic Programming
Skill statement:
● A student who completes this course should be able to understand the fundamental
architectures of modern computer systems, including the challenges of memory caches, memory
management, and pipelining.
Pre-requisites:
● Discrete Math: Sets, Relations, Logical Operations, Number Theory
Skill statement:
● A student who completes this course should be able to appreciate the advanced architectural
aspects of modern computer systems, including the challenge of heterogeneous architectures
and the required hardware and software interfaces to improve the performance and energy
footprint of applications.
Pre-requisites:
● Discrete Math: Sets, Relations, Logical Operations, Number Theory, Probability, Linear Algebra
Skill statement:
● A student who completes this course should be able to appreciate how computer architectures
evolved into today’s heterogeneous systems and to what extent past design choices can
influence the design of future high-performance computing
Committee
Chair: Marcelo Pias, Federal University of Rio Grande (FURG), Rio Grande, RS, Brazil
Members:
● Brett A. Becker, University College Dublin, Dublin, Ireland
● Mohamed Zahran, New York University, New York, NY, USA
● Monica D. Anderson, University of Alabama, Tuscaloosa, AL, USA
● Qiao Xiang, Xiamen University, China
● Adrian German, Indiana University, Bloomington, IN, USA
Data Management (DM)
Preamble
Each area of computer science can be described as “the study of algorithms and data structures to ...”
In this case the blank is filled in with “deal with persistent data sets, frequently too large to fit in
primary memory.”
Since the mid-1970s this has meant an almost exclusive study of relational database systems.
Depending on institutional context, students have studied, in varying proportions:
- Data modeling and database design: e.g., E-R Data model, relational model, normalization theory
- Query construction: e.g., relational algebra, SQL
- Query processing: e.g., indices (B+tree, hash), algorithms (e.g., external sorting, select, project,
join), query optimization (transformations, index selection)
- DBMS internals: e.g., concurrency/locking, transaction management, buffer management
Today’s graduates are expected to possess DBMS user (as opposed to implementor) skills. These
primarily include data modeling and query construction: the ability to take an unorganized collection of
data, organize it using a DBMS, and access/update the collection via queries.
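As a hedged sketch of these user skills (the table and column names are invented), Python’s built-in
sqlite3 module can organize a small collection and access it via a query:

    import sqlite3

    rows = [("Ada", 1972), ("C", 1972), ("Python", 1991)]   # an unorganized collection

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE language (name TEXT PRIMARY KEY, year INTEGER)")
    con.executemany("INSERT INTO language VALUES (?, ?)", rows)

    # Access the collection via a query rather than file I/O:
    for name, year in con.execute(
            "SELECT name, year FROM language WHERE year < 1990 ORDER BY name"):
        print(name, year)
    con.close()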
Additionally, students need to study:
- The role data plays in an organization. This includes:
o The Data Life Cycle: Creation-Processing-Review/Reporting-Retention/Retrieval-Destruction.
o The social/legal aspects of data collections: e.g., scale, data privacy, database privacy
(compliance) by design, de-identification, ownership, reliability, database security, and intended
and unintended applications.
- Emerging and advanced technologies that are augmenting/replacing traditional relational systems,
particularly those used to support (big) data analytics: NoSQL (e.g., JSON, XML, key-value store
databases), cloud databases, MapReduce, and dataframes.
We recognize the existing and emerging roles for those involved with data management, which include:
● Product feature engineers: those who use both SQL and NoSQL operational databases.
● Analytical Engineers/Data Engineers: those who write analytical SQL, Python, and Scala code to
build data assets for business groups.
● Business Analysts: those who build/manage data most frequently with Excel spreadsheets.
● Data Infrastructure Engineers: those who implement a data management system (e.g., OLTP).
● “Everyone:” those who produce or consume data need to understand the associated social,
ethical, and professional issues.
One role that transcends all of the above categories is that of data custodian. Previously, data was seen
as a resource to be managed (Information Systems Management) just like other enterprise resources.
Today, data is seen in a larger context. Data about customers can now be seen as belonging to (or in
some national contexts, as owned by) those customers. There is now an accepted understanding that the
safe and ethical storage, and use, of institutional data is part of being a responsible data custodian.
Furthermore, we acknowledge the tension between a curricular focus on professional preparation versus
the study of a knowledge area as a scientific endeavor. This is particularly true with Data Management.
For example, proving (or at least knowing) the completeness of Armstrong’s Axioms is fundamental in
functional dependency theory. However, the vast majority of computer science graduates will never
utilize this concept during their professional careers. The same can be said for many other topics in the
Data Management canon. Conversely, if our graduates can only normalize data into Boyce-Codd normal
form (using an automated tool) and write SQL queries, without understanding the role that indices play
in efficient query execution, we have done a disservice.
To this end, the number of CS Core hours is small compared to the KA Core hours. Hopefully,
this will allow institutions with differing contexts to customize their curricula appropriately. For some,
the efficient storage and access of data is primary and independent of how the data is ultimately used -
institutional context with a focus on OLTP implementation. For others, what is “under the hood” is less
important than the programmatic access to already designed databases - institutional context with a
focus on product feature engineers/data scientists.
Regardless of how an institution manages this tension we wish to give voice to one of the ironies of
computer science curricula. Students typically spend the majority of their educational career reading
(and writing) data from a file or interactively, while outside of the academy the lion's share of data, by a
huge margin, comes from databases accessed programmatically. Perhaps in the not too distant future
students will learn programmatic database access early on and then continue this practice as they
progress through their curriculum.
Finally, we understand that while Data Management is orthogonal to Cybersecurity and SEP (Society,
Ethics, and Professionalism), it is also ground zero for these (and other) knowledge areas. When
designing persistent data stores, the question of what should be stored must be examined from both a
legal and ethical perspective. Are there privacy concerns? And finally, how well protected is the data?
Core Hours
Knowledge Unit                 CS Core    KA Core
Data Modeling                        2          3
Relational Databases                 1          3
Query Construction                   2          4
Query Processing                                4
DBMS Internals                                  4
NoSQL Systems                                   2
Data Analytics
Total                                9         21
Knowledge Units
8. Approaches for managing large volumes of data (e.g., NoSQL database systems, use of MapReduce)
(See also: PDC-Algorithms:2)
9. How to support CRUD-only applications
10. Distributed databases/cloud-based systems
11. Structured, semi-structured, and unstructured databases
KA Core:
12. Systems supporting structured and/or stream content
KA Core:
3. Conceptual models (e.g., entity-relationship, UML diagrams)
4. Semi-structured data models (expressed using DTD, XML, or JSON Schema, for example)
Non-Core:
5. Spreadsheet models
6. Object-oriented models (See also: FPL-OOP)
a. GraphQL
7. New features in SQL
8. Specialized Data Modeling topics
a. Time series data (aggregation and join)
b. Graph data (link traversal)
c. Techniques for avoiding inefficient raw data access (e.g., “avg daily price”): materialized
views and special data structures (e.g., Hyperloglog, bitmap)
d. Geo-Spatial data (e.g., GIS databases) (See also: SPD-Access)
KA Core:
3. Articulate the components of the E-R (or some other non-relational) data model.
4. Model a given environment using a conceptual data model.
5. Model a given environment using the document-based or key-value store-based data model.
KA Core:
3. Mapping conceptual schema to a relational schema
4. Physical database design: file and storage structures (See also: OS-Files)
5. Introduction to Functional dependency Theory
6. Normalization Theory
a. Decomposition of a schema; lossless-join and dependency-preservation properties of a
decomposition
b. Normal forms (BCNF)
c. Denormalization (for efficiency)
Non-Core:
7. Functional dependency Theory
a. Closure of a set of attributes
b. Canonical Cover
8. Normalization Theory
a. Multi-valued dependency (4NF)
b. Join dependency (PJNF, 5NF)
c. Representation theory
KA Core:
4. Compose a relational schema from a conceptual schema which contains 1:1, 1:n, and n:m
relationships.
5. Map appropriate file structure to relations and indices.
6. Articulate how functional dependency theory generalizes the notion of key.
7. Defend a given decomposition as lossless and/or dependency-preserving.
8. Determine which normal form a given decomposition yields.
9. Comment on reasons for denormalizing a relation.
KA Core:
2. Relational Algebra
3. SQL
a. Data definition including integrity and other constraints specification
b. Update sublanguage
Non-Core:
4. Relational Calculus
5. QBE and 4th-generation environments
6. Different ways to invoke non-procedural queries in conventional languages
7. Introduction to other major query languages (e.g., XPATH, SPARQL)
8. Stored procedures
KA Core:
4. Define, in SQL, a relation schema, including all integrity constraints and delete/update triggers.
5. Compose an SQL query to update a tuple in a relation.
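A hedged SQLite sketch of outcomes 4 and 5 (the schema, data, and trigger are invented; trigger
syntax varies across systems):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("PRAGMA foreign_keys = ON")   -- is not valid Python; see below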
KA Core:
1. Page structures
2. Index structures
a. B+ trees (See also: AL-Fundamentals)
b. Hash indices: static and dynamic (See also: AL-Fundamentals, SEC-Foundations)
c. Index creation in SQL
3. Algorithms for query operators
a. External Sorting (See also: AL-Fundamentals)
b. Selection
c. Projection; with and without duplicate elimination
d. Natural Joins: Nested loop, Sort-merge, Hash join (a hash-join sketch follows this list)
e. Analysis of algorithm efficiency (See also: AL-Complexity)
4. Query transformations
5. Query optimization
a. Access paths
b. Query plan construction
c. Selectivity estimation
d. Index-only plans
6. Parallel Query Processing (e.g., parallel scan, parallel join, parallel aggregation) (See also: PDC-
Algorithms)
7. Database tuning/performance
a. Index selection
b. Impact of indices on query performance (See also: SF-Performance:3, SEP-Sustainability)
c. Denormalization
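A minimal Python sketch (ours) of the hash join named in topic 3d: build a hash table on one relation,
then probe it with the other:

    def hash_join(left, right, key):
        """Equi-join two lists of dicts on `key` (build on left, probe with right)."""
        table = {}
        for row in left:                          # build phase: O(|left|)
            table.setdefault(row[key], []).append(row)
        for row in right:                         # probe phase: O(|right|)
            for match in table.get(row[key], []):
                yield {**match, **row}

    dept = [{"dept_id": 1, "dept": "Research"}]
    emp = [{"emp_id": 10, "name": "Codd", "dept_id": 1}]
    print(list(hash_join(dept, emp, "dept_id")))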
DM-Internals: DBMS Internals
This unit covers DBMS internals that are not directly involved in query execution.
KA Core:
1. DB Buffer Management (See also: SF-Resources)
2. Transaction Management (See also: PDC-Coordination:3)
a. Isolation Levels
b. ACID
c. Serializability
d. Distributed Transactions
3. Concurrency Control: (See also: OS-Concurrency)
a. 2-Phase Locking
b. Deadlocks handling strategies
c. Quorum-based consistency models
4. Recovery Manager
a. Relation with Buffer Manager
Non-Core:
5. Concurrency Control:
a. Optimistic CC
b. Timestamp CC
6. Recovery Manager
a. Write-Ahead logging
b. ARIES recovery system (Analysis, REDO, UNDO)
Non-Core:
3. Storage systems (e.g., Key-Value systems, Data Lakes)
4. Distribution Models (Sharding and Replication) (See also: PDC-Communication:4)
5. Graph Databases
6. Consistency Models (Update and Read, Quorum consistency, CAP theorem) (See also: PDC-
Communication:4)
7. Processing model (e.g., Map-Reduce, multi-stage map-reduce, incremental map-reduce) (See also:
PDC-Communication:4)
8. Case Studies: Cloud storage system (e.g., S3); Graph databases; “When not to use NoSQL” (See
also: SPD-Web: 7)
Non-Core:
9. Typical risk factors and prevention measures for ensuring data integrity
10. Ransomware and prevention of data loss and destruction
4. Data acquisition and governance
5. Data security and privacy considerations (See also: SEP-Security)
6. Data fairness and bias (See also: SEP-Security, AI-Ethics)
7. Data visualization techniques and their use in data analytics
8. Entity Resolution
ii. Inverted index and bitmap compression
iii. Space filling curve indexing for semi-structured geo-data
c. Query processing for OLTP and OLAP use cases
i. Insert, Select, update/delete trade-offs
ii. Case studies on Postgres/JSON, MongoDB and Snowflake/JSON
Professional Dispositions
● Meticulous: Those who either access or store data collections must be meticulous in fulfilling
data ownership responsibilities.
● Responsible: In conjunction with the professional management of (personal) data, it is equally
important that data be managed responsibly. Protection from unauthorized access as well as
prevention of irresponsible, though legal, use of data is paramount. Furthermore, data custodians
need to protect data not only from outside attack, but from crashes and other foreseeable dangers.
● Collaborative: Data managers and data users must behave in a collaborative fashion to ensure
that the correct data is accessed, and is used only in an appropriate manner.
● Responsive: The data that gets stored and is accessed is always in response to an institutional
need/request.
Math Requirements
Required:
● Discrete Mathematics
○ Set theory (union, intersection, difference, cross-product)
For those implementing a single course on Database Systems, there are a variety of options. As
described in [1], there are four primary perspectives from which to approach databases:
● Database design/modeling
● Database use
● Database administration
● Database development, which includes implementation algorithms
Course design proceeds by focusing on topics from each perspective in varying degrees according to
one’s institutional context. For example in [1], one of the courses described can be characterized as
design/modeling (20%), use (20%), development/internals (30%), and administration/tuning/advanced
topics (30%). The topics might include:
● DM-SEP: Society, Ethics, and Professionalism (3 hours)
● DM-Data: The Role of Data (1 hour)
● DM-Core: Core Database System Concepts (3 hours)
● DM-Modeling: Data Modeling (5 hours)
● DM-Relational: Relational Databases (4 hours)
● DM-Querying: Query Construction (6 hours)
● DM-Processing: Query Processing (5 hours)
● DM-Internals: DBMS Internals (5 hours)
● DM-NoSQL: NoSQL Systems (4 hours)
● DM-Security: Data Security and Privacy (3 hours)
● DM-Distributed: Distributed Databases/Cloud Computing (2 hours)
Possibly the more interesting question is how to cover the CS Core concepts in the absence of a
dedicated database course. Perhaps the key to accomplishing this is to normalize database access.
Starting with the introductory course, students could access a database, rather than use file I/O or
interactive data entry, to acquire the data needed for introductory-level programming. As students
progress through their curriculum, additional CS Core topics can be introduced. For example,
introductory students would be given the code to access the database along with the SQL query. By the
intermediate level, they could be writing their own queries. Finally, in a Software Engineering or
capstone course, they are practicing database design. One advantage of this databases-across-the-
curriculum approach is that it allows database-related SEP topics to also be spread across the
curriculum.
In a similar vein, one might have a whole course on the Role of Data from either a Security (SEC)
perspective or an Ethics (SEP) perspective.
[1] Michael Goldweber, Min Wei, Sherif Aly, Rajendra K. Raj, and Mohamed Mokbel. The 2022
Undergraduate Database Course in Computer Science: What to Teach? ACM Inroads, Volume 13,
Number 3, 2022.
Committee
Members:
● Sherif Aly, The American University in Cairo, Cairo, Egypt
● Sara More, Johns Hopkins University, Baltimore, MD, USA
● Mohamed Mokbel, University of Minnesota, Minneapolis, MN, USA
● Rajendra Raj, Rochester Institute of Technology, Rochester, NY, USA
● Avi Silberschatz, Yale University, New Haven, CT, USA
● Min Wei, Microsoft, Seattle, WA, USA
● Qiao Xiang, Xiamen University, Xiamen, China
Foundations of Programming Languages (FPL)
Preamble
This knowledge area provides a basis (rooted in discrete mathematics and logic) for the understanding
of complex modern programming languages: their foundations, implementation, and formal description.
Although programming languages vary according to the language paradigm and the problem domain and
evolve in response to both societal needs and technological advancement, they share an underlying
abstract model of computation and program development. This remains true even as processor
hardware and its interfaces with programming tools become increasingly intertwined and
progressively more complex. An understanding of the common abstractions and programming paradigms
enables faster learning of new programming languages.
The Foundations of Programming Languages Knowledge Area is concerned with articulating the
underlying concepts and principles of programming languages, the formal specification of a programming
language and the behavior of a program, explaining how programming languages are implemented,
comparing the strengths and weaknesses of various programming paradigms, and describing how
programming languages interface with entities such as operating systems and hardware. The concepts
covered in this area are applicable to many different languages and an understanding of these principles
assists in being able to move readily from one language to the next, and to be able to select a
programming paradigm and a programming language to best suit the problem at hand.
Two example courses are presented at the end of this knowledge area to illustrate how the content may
be covered. The first is an introductory course which covers the CS Core and KA Core content. This
course focuses on the different programming paradigms, ensuring familiarity with each to a level
sufficient to decide which paradigm is appropriate in which circumstances.
The second course is an advanced course focused on the implementation of a programming language
and the formal description of a programming language and a formal description of the behavior of a
program.
While these two courses have been the predominant way to cover this knowledge area over the past
decade, it is by no means the only way that the content can be covered. An institution could, for example,
choose to cover only the CS Core content (28 hours) in a shorter course, or in a course which combines
this CS Core content with Core content from another knowledge area such as Software Engineering.
Natural combinations are easily identifiable since they are the areas in which the Foundations of
Programming Languages knowledge area overlaps with other knowledge areas. A list of such overlap
areas is provided at the end of this knowledge area.
Programming languages are the medium through which programmers precisely describe concepts,
formulate algorithms, and reason about solutions. Over the course of a career, a computer scientist will
need to learn and work with many different languages, separately or together. Software developers must
understand the programming models, features, and constructs underlying different languages, and
make informed design choices in languages supporting multiple complementary
approaches. Computer scientists will often need to learn new languages and programming constructs
and must understand the principles underlying how programming language features are defined,
composed, and implemented to improve execution efficiency and long-term maintenance of developed
software. The effective use of programming languages and appreciation of their limitations also requires
a basic knowledge of programming language translation and program analysis, of run-time behavior, and
components such as memory management and the interplay of concurrent processes communicating
with each other through message-passing, shared memory, and synchronization. Finally, some
developers and researchers will need to design new languages, an exercise which requires familiarity
with basic principles.
In addition, some knowledge units from CS 2013 are renamed to more accurately reflect their content:
● Static Analysis is renamed to Program Analysis and Analyzers
● Concurrency and Parallelism is renamed to Parallel and Distributed Computing
● Program Representation is renamed to Program Abstraction and Representation
● Runtime Systems is renamed to Runtime Behavior and Systems
● Basic Type Systems and Type Systems were merged into a single topic and named Type
Systems
Seven new knowledge units have been added to reflect their continuing and growing importance as we
look toward the 2030s:
● Compiled vs Interpreted Languages +1 CS Core hour
● Scripting +2 CS Core hours
● Systems Execution and Memory Model +3 CS Core hours
● Formal Development Methodologies
● Design Principles of Programming Languages
● Quantum Computing
● Fundamentals of Programming Languages and Society, Ethics and Professionalism
Compared to CS 2013 which had a total of 24 CS core hours (tier-1 hours plus 80% of tier-2 hours), and
4 KA core hours (20% of tier-2 hours), the current recommendation has a total of 22 CS core hours and
20 KA core hours. Note that there is no requirement that each computer science graduate sees any KA
core hours – they are a recommendation of content to consider if a program chooses to offer greater
emphasis on this knowledge area.
Note:
● Several topics within this knowledge area either build on, or overlap with, content covered in other
knowledge areas such as the Software Development Fundamentals Knowledge Area in a
curriculum’s introductory courses. Curricula will differ on which topics are integrated in this fashion
and which are delayed until later courses on software development and programming languages.
● Different programming paradigms correspond to different problem domains. Most languages have
evolved to integrate more than one programming paradigm, such as imperative with OOP, functional
programming with OOP, logic programming with OOP, and event and reactive modeling with
OOP. Hence, the emphasis is not on just one programming paradigm but on a balance of all major
programming paradigms.
● While the number of CS core and KA core hours is identified for each major programming
paradigm (object-oriented, functional, logic), the distribution of hours across the paradigms may
differ depending on the curriculum and programming languages students have been exposed to
leading up to coverage of this knowledge area. This document makes the assumption that
students have exposure to an object-oriented programming language leading into this knowledge
area.
● Imperative programming is not listed as a separate paradigm to be examined; instead, it is treated
as a subset of the object-oriented paradigm.
● With multicore computing, cloud computing, and computer networking becoming commonly
available in the market, it has become imperative to understand the integration of “Distribution,
concurrency, parallelism” along with other programming paradigms as a core area. This paradigm
is integrated with almost all other major programming paradigms.
● With ubiquitous computing and real-time temporal computing becoming more prevalent in daily
human life (e.g., health, transportation, smart homes), it has become important to cover the software
development aspects of event-driven and reactive programming, as well as parallel and
distributed computing, under the programming languages knowledge area. Some of the topics
covered will require, and interface with, concepts covered in knowledge areas such as
Architecture and Organization, Operating Systems, and Systems Fundamentals.
● Some topics from the Parallel and Distributed Computing Knowledge Unit are likely to be
integrated within the curriculum with topics from the Parallel and Distributed Programming
Knowledge Area.
● There is an increasing interest in formal methods to prove program correctness and other
properties. To support this, increased coverage of topics related to formal methods is included,
but all of these topics are identified as non-core.
● When introducing these topics, it is also important that an instructor provides context for this
material including why we have an interest in programming languages, and what they do for us in
terms of providing a human readable version of instructions to a computer to execute for us.
Core Hours
Knowledge Units
a. Decomposition into objects carrying state and having behavior
b. Class-hierarchy design for modeling
3. Definition of classes: fields, methods, and constructors (See also: SDF-Fundamentals)
4. Subclasses, inheritance (including multiple inheritance), and method overriding
5. Dynamic dispatch: definition of method-call
6. Exception handling (See also: PDC-Coordination, SF-System-Reliability)
7. Object-oriented idioms for encapsulation
a. Privacy, data hiding, and visibility of class members
b. Interfaces revealing only method signatures
c. Abstract base classes, traits and mixins
8. Dynamic vs static properties
9. Composition vs inheritance
10. Subtyping
a. Subtype polymorphism; implicit upcasts in typed languages
b. Notion of behavioral replacement: subtypes acting like supertypes
c. Relationship between subtyping and inheritance
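A minimal Python sketch (ours) tying together several of the topics above: constructors and fields
(topic 3), overriding (topic 4), dynamic dispatch (topic 5), and subtype polymorphism with an implicit
upcast (topic 10):

    class Shape:
        def __init__(self, name):
            self.name = name            # field set in the constructor

        def area(self):
            raise NotImplementedError   # abstract behavior for subclasses to supply

    class Circle(Shape):                # Circle is a subtype of Shape
        def __init__(self, radius):
            super().__init__("circle")
            self.radius = radius

        def area(self):                 # method overriding
            return 3.14159 * self.radius ** 2

    shapes: list[Shape] = [Circle(1.0)]   # implicit upcast to the supertype
    print(shapes[0].area())               # dynamic dispatch selects Circle.area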
KA Core:
11. Collection classes, iterators, and other common library components
KA Core:
10. Use collection classes and iterators effectively to solve a problem.
KA Core:
5. Function closures (functions using variables in the enclosing lexical environment)
a. Basic meaning and definition - creating closures at run-time by capturing the environment
b. Canonical idioms: call-backs, arguments to iterators, reusable code via function arguments
c. Using a closure to encapsulate data in its environment
d. Lazy versus eager evaluation
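A minimal Python sketch (ours) of topics 5a-5c: a closure created at run time captures and
encapsulates data in its enclosing environment:

    def make_counter():
        count = 0                    # captured by the closure below
        def increment():
            nonlocal count           # mutate the enclosing environment
            count += 1
            return count
        return increment             # the environment escapes with the function

    counter = make_counter()
    assert counter() == 1 and counter() == 2   # state hidden inside the closure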
Non-Core:
6. Graph reduction machine and call-by-need
7. Implementing lazy evaluation
8. Integration with logic programming paradigm using concepts such as equational logic, narrowing,
residuation and semantic unification (See also: FPL-Logic)
9. Integration with other programming paradigms such as imperative and object-oriented
Understand both as defining a matrix of operations and variants. (See also: FPL-OOP)
KA Core:
4. Explain a simple example of a lambda expression being implemented using a virtual machine, such as
a SECD machine, showing storage and reclamation of the environment.
5. Correctly interpret variables and lexical scope in a program using function closures.
6. Use functional encapsulation mechanisms such as closures and modular interfaces.
7. Compare and contrast stateful vs stateless execution.
8. Define and use iterators and other operations on aggregates, including operations that take functions
as arguments, in multiple programming languages, selecting the most natural idioms for each
language. (See also: FPL-OOP)
Non-Core:
9. Illustrate graph reduction of a λ-expression containing a shared subexpression.
10. Illustrate the execution of a simple nested λ-expression using an abstract machine, such as an ABC
machine.
11. Illustrate narrowing, residuation and semantic unification using simple illustrative examples.
12. Illustrate the concurrency constructs using simple programming examples of known concepts such
as a buffer being read and written concurrently or sequentially. (See also: FPL-OOP)
Non-Core:
9. Memory overhead of variable copying in handling iterative programs
10. Programming constructs to store partial computation and pruning search trees
11. Mixing functional programming and logic programming using concepts such as equational logic,
narrowing, residuation and semantic unification (See also: FPL-Functional)
12. Higher-order, constraint and inductive logic programming (See also: AI-LRR)
13. Integration with other programming paradigms such as object-oriented programming
14. Advanced programming constructs such as difference-lists, creating user-defined data structures,
setof, etc.
Illustrative learning outcomes:
KA Core:
1. Use a logic language to implement a conventional algorithm.
2. Use a logic language to implement an algorithm employing implicit search using clauses, relations,
and cuts.
3. Use a simple illustrative example to show correspondence between First Order Predicate Logic
(FOPL) and logic programs using Horn clauses.
4. Use examples to illustrate the unification algorithm and its role in parameter passing during query reduction.
5. Use simple logic programs interleaving relations, functions, and recursive programming, such as
factorial, Fibonacci numbers, and simple and complex relationships between entities, and illustrate
execution and parameter passing using unification and backtracking.
Non-Core:
6. Illustrate the computation of simple programs such as Fibonacci, show the overhead of
recomputation, and then show how to reduce this overhead.
FPL-Scripting: Scripting
CS Core:
1. Error/exception handling
2. Piping (See also: AR-Organization, SF-Overview:6, OS-Process:5)
3. System commands (See also: SF-A)
a. Interface with operating systems (See also: SF-Overview, OS-Principles:3)
4. Environment variables (See also: SF-A)
5. File abstraction and operators (See also: SDF-Fundamentals, OS-Files, AR-IO, SF-Resource)
6. Data structures, such as arrays and lists (See also: AL-Fundamentals, SDF-Fundamentals, SDF-
DataStructures)
7. Regular expressions (See also: AL-Models)
8. Programs and processes (See also: OS-Process)
9. Workflow
Illustrative learning outcomes:
CS Core:
1. Create and execute automated scripts to manage various system tasks.
2. Solve various text processing problems through scripting.
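One hedged Python sketch of these outcomes (the log format, error pattern, and LOG_FILE variable
are invented): environment variables, file abstraction, and regular expressions combined in a small
text-processing script:

    import os
    import re

    log_path = os.environ.get("LOG_FILE", "app.log")   # environment variable
    error_re = re.compile(r"ERROR\s+(\w+)")            # regular expression

    counts = {}
    with open(log_path) as log:                        # file abstraction
        for line in log:
            match = error_re.search(line)
            if match:
                code = match.group(1)
                counts[code] = counts.get(code, 0) + 1

    for code, n in sorted(counts.items()):
        print(f"{code}\t{n}")                          # pipe-friendly output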
KA Core:
5. Using a reactive framework
a. Defining event handlers/listeners
b. Parameterization of event senders and event arguments
c. Externally-generated events and program-generated events
6. Separation of model, view, and controller
KA Core:
3. Define and use a reactive framework.
4. Describe an interactive system in terms of a model, a view, and a controller.
d. Termination (See also: PDC-Coordination)
2. Programming models (See also: PDC-Programs) – one or more of the following:
a. Actor models
b. Procedural and reactive models
c. Synchronous/asynchronous programming models
d. Data parallelism
3. Properties (See also: PDC-Programs, PDC-Coordination)
a. Order-based properties:
i. Commutativity
ii. Independence
b. Consistency-based properties:
i. Atomicity
ii. Consensus
4. Execution control (See also: PDC-Coordination, SF-B) (an async/await sketch follows this list)
a. Async await
b. Promises
c. Threads
5. Communication and coordination (See also: OS-Process:5, PDC-Communication, PDC-
Coordination)
a. Message-passing
b. Shared memory
c. cobegin-coend
d. Monitors
e. Channels
f. Threads
g. Guards
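A minimal sketch (ours) of topic 4a using Python’s asyncio: await suspends a coroutine at the marked
point so other tasks can run:

    import asyncio

    async def fetch(name, delay):
        await asyncio.sleep(delay)      # suspension point: yields control to other tasks
        return f"{name} done"

    async def main():
        # Run two coroutines concurrently and await both results:
        results = await asyncio.gather(fetch("a", 0.2), fetch("b", 0.1))
        print(results)                  # ['a done', 'b done'] (order follows the arguments)

    asyncio.run(main())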
KA Core:
6. Futures
7. Language support for data parallelism such as forall, loop unrolling, map/reduce
8. Effect of memory-consistency models on language semantics and correct code generation
9. Representational State Transfer Application Programming Interfaces (REST APIs)
10. Technologies and approaches: cloud computing, high performance computing, quantum computing,
ubiquitous computing
11. Overheads of message passing
12. Granularity of program for efficient exploitation of concurrency.
13. Concurrency and other programming paradigms (e.g., functional).
Illustrative learning outcomes:
CS Core:
1. Explain why programming languages do not guarantee sequential consistency in the presence of
data races and what programmers must do as a result.
2. Implement correct concurrent programs using multiple programming models, such as shared
memory, actors, futures, synchronization constructs, and data-parallelism primitives.
3. Use a message-passing model to analyze a communication protocol.
4. Use synchronization constructions such as monitor/synchronized methods in a simple program.
5. Model data dependencies using simple programming constructs involving variables, reads, and writes.
6. Model control dependencies using simple constructs such as selection and iteration.
KA Core:
7. Explain how REST APIs integrate applications and automate processes.
8. Explain benefits, constraints and challenges related to distributed and parallel computing.
KA Core:
7. Type equivalence: structural vs name equivalence
8. Complementary benefits of static and dynamic typing
a. Errors early vs. errors late/avoided
b. Enforce invariants during code development and code maintenance vs. postpone typing decisions
while prototyping and conveniently allow flexible coding patterns such as heterogeneous
collections
c. Typing rules
i. Rules for function, product, and sum types
d. Avoid misuse of code vs. allow more code reuse
e. Detect incomplete programs vs. allow incomplete programs to run
f. Relationship to static analysis
g. Decidability
Non-Core:
9. Compositional type constructors, such as product types (for aggregates), sum types (for unions),
function types, quantified types, and recursive types
10. Type checking
11. Subtyping (See also: FPL-OOP)
a. Subtype polymorphism; implicit upcasts in typed languages
b. Notion of behavioral replacement: subtypes acting like supertypes
c. Relationship between subtyping and inheritance
12. Type safety as preservation plus progress
13. Type inference
14. Static overloading
15. Propositions as types (implication as a function, conjunction as a product, disjunction as a sum) (See
also: FPL-Formalism)
16. Dependent types (universal quantification as dependent function, existential quantification as
dependent product) (See also: FPL-Formalism)
KA Core:
7. Explain how typing rules define the set of operations that are legal for a type.
8. List the type rules governing the use of a particular compound type.
9. Explain why undecidability requires type systems to conservatively approximate program behavior.
10. Define and use program pieces (such as functions, classes, methods) that use generic types,
including for collections.
11. Discuss the differences among generics, subtyping, and overloading.
12. Explain multiple benefits and limitations of static typing in writing, maintaining, and debugging
software.
Non-Core:
13. Define a type system precisely and compositionally.
14. For various foundational type constructors, identify the values they describe and the invariants they
enforce.
15. Precisely describe the invariants preserved by a sound type system.
16. Prove type safety for a simple language in terms of preservation and progress theorems.
17. Implement a unification-based type-inference algorithm for a simple language.
18. Explain how static overloading and associated resolution algorithms influence the dynamic behavior
of programs.
2. Explain how programming language implementations typically organize memory into global data, text,
heap, and stack sections and how features such as recursion and memory management map to this
memory model.
3. Investigate, identify, and fix memory leaks and dangling-pointer dereferences.
KA Core:
3. Run-time representation of core language constructs such as objects (method tables) and first-class
functions (closures)
4. Secure compiler development (See also: SEC-Foundation, SEC-Defense)
Non-Core:
7. Discuss the benefits and limitations of garbage collection, including the notion of reachability.
3. Components of a language
a. Definitions of alphabets, delimiters, sentences, syntax and semantics
b. Syntax vs. semantics
4. Program as a set of non-ambiguous meaningful sentences
5. Basic programming abstractions: constants, variables, declarations (including nested declarations),
command, expression, assignment, selection, definite and indefinite iteration, iterators, function,
procedure, modules, exception handling (See also: SDF-Fundamentals)
6. Mutable vs. immutable variables: advantages and disadvantages of reusing existing memory location
vs. advantages of copying and keeping old values; storing partial computation vs. recomputation
7. Types of variables: static, local, nonlocal, global; need and issues with nonlocal and global variables
8. Scope rules: static vs. dynamic; visibility of variables; side-effects
9. Side-effects induced by nonlocal variables, global variables and aliased variables
Non-Core:
10. L-values and R-values: mapping mutable variable-name to L-values; mapping immutable variable-
names to R-values (See also: SDF-A)
11. Environment vs. store and their properties
12. Data and control abstraction
13. Mechanisms for information exchange between program units such as procedures, functions and
modules: nonlocal variables, global variables, parameter passing, import-export between modules
14. Data structures to represent code for execution, translation, or transmission
15. Low level instruction representation such as virtual machine instructions, assembly language, and
binary representation (See also: AR-B, AR-C)
16. Lambda calculus, variable binding, and variable renaming (See also: AL-Models, FPL-Formalism)
17. Types of semantics: operational, axiomatic, denotational, behavioral; define and use abstract syntax
trees; contrast with concrete syntax
2. Scanning and parsing based on language specifications
3. Lexical analysis using regular expressions
4. Tokens and their use
5. Parsing strategies including top-down (e.g., recursive descent or LL) and bottom-up (e.g., LR or
GLR) techniques (a recursive-descent sketch follows this list).
a. Lookahead tables and their application to parsing
6. Language theory
a. Chomsky hierarchy (See also: AL-Models)
b. Left-most/right-most derivation and ambiguity
c. Grammar transformation
7. Parser error recovery mechanisms
8. Generating scanners and parsers from declarative specifications
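A minimal Python sketch (ours) of the top-down technique in topic 5: a recursive-descent parser for
the toy grammar E → T ('+' T)*, T → digit, with one token of lookahead, evaluating as it parses:

    def parse_sum(tokens):
        """Recursive descent for the grammar  E -> T ('+' T)* ;  T -> digit."""
        def parse_term(pos):
            if pos >= len(tokens) or not tokens[pos].isdigit():
                raise SyntaxError(f"digit expected at position {pos}")
            return int(tokens[pos]), pos + 1      # consume one token

        value, pos = parse_term(0)
        while pos < len(tokens) and tokens[pos] == "+":   # one token of lookahead
            rhs, pos = parse_term(pos + 1)
            value += rhs
        if pos != len(tokens):
            raise SyntaxError("trailing input")
        return value

    assert parse_sum(list("1+2+3")) == 6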
3. Describe semantic analyses using an attribute grammar.
Illustrative learning outcomes:
Non-Core:
1. Identify all essential steps for automatically converting source code into assembly or other low-level
languages.
2. Generate the low-level code for calling functions/methods in modern languages.
3. Discuss why separate compilation requires uniform calling conventions.
4. Discuss why separate compilation limits optimization because of unknown effects of calls.
5. Discuss opportunities for optimization introduced by naive translation and approaches for achieving
optimization, such as instruction selection, instruction scheduling, register allocation, and peephole
optimization.
3. Explain the use of metadata in run-time representations of objects and activation records, such as
class pointers, array lengths, return addresses, and frame pointers.
4. Compare and contrast static allocation vs. stack-based allocation vs. heap-based allocation of data
elements.
5. Explain why some data elements cannot be automatically deallocated at the end of a
procedure/method call (need for garbage collection).
6. Discuss advantages, disadvantages, and difficulties of just-in-time and dynamic recompilation.
7. Discuss the use of sandboxing in mobile code.
8. Identify the services provided by modern language run-time systems.
3. Discuss the different approaches of operational, denotational, and axiomatic semantics.
4. Use induction to prove properties of all programs in a language.
5. Use induction to prove properties of all programs in a language that are well-typed according to a
formally defined type system.
6. Use parametricity to establish the behavior of code given only its type.
2. Designing a language to fit a specific domain or problem
3. Interoperability between programming languages
4. Language portability
5. Formal description of a programming language
6. Green computing principles (See also: SEP-Sustainability)
Professional Dispositions
1. Meticulous: Students must demonstrate and apply the highest standards when using programming
languages and formal methods to build safe systems that are fit for their purpose.
2. Meticulous: Attention to detail is essential when using programming languages and applying formal
methods.
3. Inventive: Programming and the construction of formal proofs are inherently creative processes;
students must demonstrate innovative approaches to problem solving. Students are accountable for
their choices regarding the way a problem is solved.
4. Proactive: Programmers are responsible for anticipating all forms of user input and system behavior
and for designing solutions that address each one.
5. Persistent: Students must demonstrate perseverance since the correct approach is not always self-
evident and a process of refinement may be necessary to reach the solution.
Math Requirements
Required:
● Discrete Mathematics – Boolean algebra, proof techniques, digital logic, sets and set operations,
mapping, functions and relations, states and invariants, graphs and relations, trees, counting,
recurrence relations, finite state machine, regular grammar
● Logic – propositional logic (negations, conjunctions, disjunctions, conditionals, biconditionals),
first-order logic, logical reasoning (induction, deduction, abduction).
● Mathematics – complex numbers, matrices, linear transformation, probability, statistics
3 CS Core hours
● FPL-Translation: Language Translation and Execution 2 KA Core hours
● FPL-Abstraction: Program Abstraction and Representation 3 KA Core hours
● FPL-Quantum: Quantum Computing 2 CS Core hours
● FPL-SEP: FPL and SEP 1 Non-Core hour
Pre-requisites:
● Discrete Mathematics – Boolean algebra, proof techniques, digital logic, sets and set operations,
mapping, functions and relations, states and invariants, graphs and relations, trees, counting,
recurrence relations, finite state machine, regular grammar.
Pre-requisites:
● Discrete mathematics – Boolean algebra, proof techniques, digital logic, sets and set operations,
mapping, functions and relations, states and invariants, graphs and relations, trees, counting,
recurrence relations, finite state machine, regular grammar.
● Logic – propositional logic (negations, conjunctions, disjunctions, conditionals, biconditionals),
first-order logic, logical reasoning (induction, deduction, abduction).
● Introductory programming course.
● Programming proficiency in programming concepts such as:
● type declarations such as basic data types, records, indexed data elements such as arrays
and vectors, and class/subclass declarations, types of variables,
● scope rules of variables,
● selection and iteration concepts, function and procedure calls, methods, object creation
● Data structure concepts such as:
● abstract data types, sequence and string, stack, queues, trees, dictionaries
● pointer-based data structures such as linked lists, trees and shared memory locations
● Hashing and hash tables
● System fundamentals and computer architecture concepts such as:
● Digital circuits design, clocks, bus
● registers, cache, RAM and secondary memory
● CPU and GPU
● Basic knowledge of operating system concepts such as:
● Interrupts, threads and interrupt-based/thread-based programming
● Scheduling, including prioritization
● Memory fragmentation
● Latency
Committee
Chair: Michael Oudshoorn, High Point University, High Point, NC, USA
Members:
● Annette Bieniusa, TU Kaiserslautern, Kaiserslautern, Germany
● Brijesh Dongol, University of Surrey, Guildford, UK
● Michelle Kuttel, University of Cape Town, Cape Town, South Africa
● Doug Lea, State University of New York at Oswego, Oswego, NY, USA
● James Noble, Victoria University of Wellington, Wellington, New Zealand
● Mark Marron, Microsoft Research, Seattle, WA, USA
● Peter-Michael Osera, Grinnell College, Grinnell, IA, USA
● Michelle Mills Strout, University of Arizona, Tucson, AZ, USA
Contributors:
● Alan Dearle, University of St. Andrews, St. Andrews, Scotland
Graphics and Interactive Techniques (GIT)
Preamble
Computer graphics is the term used to describe the computer generation and manipulation of images
and can be viewed as the science of enabling visual communication through computation. Its
applications include machine learning; medical imaging; engineering; scientific, information, and
knowledge visualization; cartoons; special effects; simulators; and video games. Traditionally, graphics
at the undergraduate level focused on rendering, linear algebra, physics, the graphics pipeline,
interaction, and phenomenological approaches. Today’s graphics courses increasingly include physical
computing, animation, and haptics. At the advanced level, undergraduate institutions are increasingly
likely to offer one or more courses specializing in a specific graphics knowledge unit: e.g., gaming,
animation, visualization, tangible or physical computing, and immersive courses such as AR/VR/XR.
There is considerable overlap with other computer science knowledge areas: Algorithmic Foundations; Architecture and Organization; Artificial Intelligence; Human-Computer Interaction; Parallel and Distributed Computing; Specialized Platform Development; Software Engineering; and Society, Ethics, and Professionalism.
In order for students to become adept at the use and generation of computer graphics, many issues
must be addressed, such as human perception and cognition, data and image file formats, hardware
interfaces, and application program interfaces (APIs). Unlike other knowledge areas, knowledge units
or topics within Graphics and Interactive Techniques may be included in a variety of elective courses
ranging from a traditional Interactive Computer Graphics course to Gaming or Animation. Alternatively,
graphics topics may be introduced in an applied project in Human Computer Interaction, Embedded
Systems, Web Development, etc. Undergraduate computer science students who study the knowledge
units specified below through a balance of theory and applied instruction will be able to understand,
evaluate, and/or implement the related graphics and interactive techniques as users and developers.
Because technology changes rapidly, the Graphics and Interactive Techniques subcommittee
attempted to avoid being overly prescriptive. Where provided, examples of APIs, programs, and
languages should be considered as appropriate examples in 2023. In effect, this is a snapshot in time.
Graphics as a knowledge area has expanded and become pervasive since the CS2013 report. Machine
learning, computer vision, data science, artificial intelligence, and interfaces driven by embedded
sensors in everything from cars to coffee makers use graphics and interactive techniques. The now
ubiquitous smartphone has made the majority of the world’s population regular users and creators of
graphics, digital images, and the interactive techniques to manipulate them. Animations, games, visualizations, and immersive applications that ran on desktops in 2013 can now run on mobile devices. The amount of stored digital data has grown exponentially since 2013, and both data and visualizations are now published by myriad sources, including news media and scientific organizations.
Revenue from mobile video games now exceeds that of music and movies combined.1 Computer
Generated Imagery (CGI) is employed in almost all films.
It is critical that students and faculty confront the ethical questions and conundrums that have arisen
and will continue to arise because of applications in computer graphics—especially those that employ
machine learning, data science, and artificial intelligence. Today’s news unfortunately provides
examples of inequity and wrong-doing in autonomous navigation, deepfakes, computational
photography, and facial recognition.
In an effort to align CS2013's Graphics and Visualization area with SIGGRAPH, we have renamed it Graphics and Interactive Techniques (GIT). To capture the expanding footprint of the field, the following knowledge units have been added to the original list (Fundamental Concepts, Visualization, Basic Rendering (renamed Rendering), Geometric Modeling, Advanced Rendering (renamed Shading), and Computer Animation):
● Immersion (MR, AR, VR)
● Interaction
● Image Processing
● Tangible/Physical Computing
● Simulation
Core Hours
Knowledge Units CS Core KA Core
Fundamental Concepts 4
Rendering 18
Geometric Modeling 6
Shading 6
Visualization 6
Interaction 6
Image Processing 6
Simulation 6
Total 4

1 Jon Quast, Clay Bruning, and Sanmeet Deo. "Markets: This Opportunity for Investors Is Bigger Than Movies and Music Combined." Retrieved from https://www.nasdaq.com/articles/this-opportunity-for-investors-is-bigger-than-movies-and-music-combined-2021-10-03.
Knowledge Units
GIT-Fundamentals: Fundamental Concepts
CS Core:
1. Uses of graphics and interactive techniques and potential risks and abuses
a. Entertainment, business, and scientific applications: examples include visual effects, machine
learning, computer vision, user interfaces, video editing, games and game engines, computer-
aided design and manufacturing, data visualization, and virtual/augmented/mixed reality.
b. Intellectual property, deep fakes, facial recognition, privacy (See also: SEP-Privacy, SEP-IP,
SEP-Ethics)
2. Graphic output
a. displays (e.g., LCD)
b. printer
c. analog film
d. resolution
i. pixels for visual displays
ii. dots for laser printers
e. aspect ratio
f. frame rate
3. Human vision system
a. tristimulus reception (RGB)
b. eye as a camera (projection)
c. persistence of vision (frame rate, motion blur)
d. contrast (detection, Mach banding, dithering/aliasing)
e. non-linear response (dynamic range, tone mapping)
f. binocular vision (stereo)
g. accessibility (color deficiency, strobing, monocular vision, etc.) (See also: SEP-IDEA, HCI-User)
4. Standard image formats
a. raster
i. lossless (e.g., PNG, TIF)
ii. lossy (e.g., JPG)
b. vector (e.g. SVG, Adobe Illustrator)
5. Digitization of analog data
a. rasterization
b. resolution
c. sampling and quantization
6. Color models: additive (RGB), subtractive (CMYK), and color perception (HSV)
7. Tradeoffs between storing image data and re-computing image data.
8. Applied interactive graphics, e.g., graphics API, mobile app
9. Animation as a sequence of still images
Learning Outcomes:
CS Core:
1. Identify common uses of digital presentation to humans (e.g., computer graphics, sound).
2. Explain how analog signals can be reasonably represented by discrete samples, for example, how
images can be represented by pixels.
3. Compute the memory requirement for storing a color image given its resolution.
4. Create a graphic depicting how the limits of human perception affect choices about the digital
representation of analog signals.
5. Design a user interface and an alternative for persons with color-perception deficiency.
6. Construct a simple graphical user interface using a standard API.
7. Explain when each of the following common graphics file formats should be used, and why: JPG, PNG, MP3, MP4, and GIF.
8. Give an example of a lossy- and a lossless-image compression technique found in common
graphics file formats.
9. Describe color models and their use in graphics display devices.
10. Compute the memory requirements for a movie lasting n seconds, displayed at a framerate of f frames per second and a resolution of r pixels per frame.
11. Compare and contrast digital video to analog video.
12. Describe the basic process of producing continuous motion from a sequence of discrete frames
(sometimes called “flicker fusion”).
13. Describe a possible visual misrepresentation that could result from digitally sampling an analog
world.
14. Compute memory space requirements based on resolution and color coding.
15. Compute time requirements based on refresh rates and rasterization techniques.
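Several of these outcomes (e.g., 3, 10, and 14) reduce to simple arithmetic. The sketch below illustrates the intended computation; the resolutions, durations, and an uncompressed 24-bit RGB format are illustrative assumptions, not prescribed values.

    # Memory needed for one uncompressed 24-bit RGB frame (3 bytes per pixel).
    def frame_bytes(width, height, bytes_per_pixel=3):
        return width * height * bytes_per_pixel

    # Memory for a movie lasting `seconds` seconds at `fps` frames per second.
    def movie_bytes(width, height, seconds, fps, bytes_per_pixel=3):
        return frame_bytes(width, height, bytes_per_pixel) * seconds * fps

    print(frame_bytes(1920, 1080))          # 6220800 bytes, about 6.2 MB
    print(movie_bytes(1920, 1080, 10, 30))  # 1866240000 bytes, about 1.9 GB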
GIT-Visualization
KA Core:
1. Data Visualization and Information Visualization
2. Visualization of:
a. 2D/3D scalar fields
b. Vector fields and flow data
c. Time-varying data
d. High-dimensional data
e. Non-spatial data
3. Visualization techniques (color mapping, isosurfaces, dimension reduction, parallel coordinates, multi-variate, tree/graph-structured, text)
4. Direct volume data rendering: ray-casting, transfer functions, segmentation
5. Common data formats (e.g., HDF, netCDF, GeoTIFF, raw binary, CSV, ASCII)
6. Common visualization software and libraries (e.g., R, Processing, D3.js, GIS, Matlab, IDL, Python)
7. Perceptual and cognitive foundations that drive visual abstractions
a. Visual communication
b. Color theory
8. Visualization design
a. Purpose (discovery, outreach)
b. Audience (technical, general public)
c. Ethically responsible visualization
i. Avoid misleading visualizations (due to exaggeration, hole filling, smoothing, data cleanup).
ii. Even correct data can be misleading, e.g., aliasing that makes moving fan blades appear stopped or to move incorrectly.
9. Evaluation of visualization methods and applications
10. Visualization bias
11. Applications of visualization
GIT-Rendering
This section describes basic rendering and fundamental graphics techniques that nearly every
undergraduate course in graphics will cover and that are essential for further study in most graphics-
related courses.
1. Scene and object modeling:
a. Object representations: polygonal, parametric, etc.
b. Modeling transformations: affine and coordinate-system transformations
c. Scene representations: scene graphs
2. Camera and projection modeling
a. Pinhole cameras, similar triangles, and projection model
b. Camera models
c. Projective geometry
3. Radiometry and light models
a. Radiometry
b. Rendering equation
c. Rendering in nature – emission and scattering, etc.
4. Rendering
a. Simple triangle rasterization (see the sketch after this list)
b. Rendering with a shader-based API
c. Visibility and occlusion, including solutions to this problem (e.g., depth buffering, Painter’s
algorithm, and ray tracing)
d. Texture mapping, including minification and magnification (e.g., trilinear MIP mapping)
e. Application of spatial data structures to rendering.
f. Ray tracing
g. Sampling and anti-aliasing
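Topic 4.a can be illustrated with the standard edge-function (half-plane) test. The sketch below is a minimal, unoptimized version that tests every pixel center in a small raster; the vertex coordinates and raster size are illustrative.

    # Rasterize a 2D triangle: a pixel center is inside if the three edge
    # functions (2D cross products) all have a consistent sign.
    def edge(ax, ay, bx, by, px, py):
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

    def rasterize(v0, v1, v2, width, height):
        covered = []
        for y in range(height):
            for x in range(width):
                w0 = edge(*v1, *v2, x, y)
                w1 = edge(*v2, *v0, x, y)
                w2 = edge(*v0, *v1, x, y)
                if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
                   (w0 <= 0 and w1 <= 0 and w2 <= 0):
                    covered.append((x, y))
        return covered

    print(rasterize((1, 1), (8, 2), (4, 7), 10, 10))

Production rasterizers restrict the loops to the triangle's bounding box and evaluate the edge functions incrementally; the same edge values, normalized, yield barycentric coordinates for attribute interpolation.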
GIT-Modeling: Geometric Modeling
KA Core:
1. Basic geometric operations such as intersection calculation and proximity tests on 2D objects
2. Surface representation/model
a. Tessellation
b. Mesh representation, mesh fairing, and mesh generation techniques such as Delaunay
triangulation, and marching cubes/tetrahedrons
c. Parametric polynomial curves and surfaces
d. Implicit representation of curves and surfaces
e. Spatial subdivision techniques
3. Volumetric representation/model
a. Volumes, voxels, and point-based representations.
b. Signed Distance Fields
c. Sparse volumes (e.g., VDB)
d. Constructive Solid Geometry (CSG) representation
4. Procedural representation/model (See also: FPL-Translation 1.a)
a. Fractals
b. L-Systems
5. Multi-resolution modeling (See also: SPD-Game)
6. Reconstruction, e.g., 3D scanning, photogrammetry, etc.
GIT-Shading
Topics:
1. Solutions and approximations to the rendering equation, for example:
a. Distribution ray tracing and path tracing
b. Photon mapping
c. Bidirectional path tracing
d. Metropolis light transport
2. Time (motion blur), lens position (focus), and continuous frequency (color) and their impact on
rendering
3. Shadow mapping
4. Occlusion culling
5. Bidirectional Scattering Distribution function (BSDF) theory and microfacets
6. Subsurface scattering
7. Area light sources
8. Hierarchical depth buffering
9. Image-based rendering
10. Non-photorealistic rendering
11. GPU architecture (See also: AR-Heterogeneous Architectures 1 & 3)
12. Human visual systems including adaptation to light, sensitivity to noise, and flicker fusion (See also:
HCI-Accessibility, SEP-IDEA)
c. Blending motion capture and keyframe animation
d. Ethical considerations (e.g., accessibility and privacy)
i. Avoidance of “default” captures - there is no typical human walk cycle.
GIT-Simulation
Simulation has strong ties to Computational Science. In the graphics domain, however, simulation techniques are re-purposed to a different end. Rather than creating predictive models, the goal instead is to achieve a mixture of physical plausibility and artistic intention. To illustrate, the goals of "model surface tension in a liquid" and "produce a crown splash" are related, but different. Depending on the simulation goals, covered topics may vary as shown below.
● Grid-based smoke and fire
○ Semi-Lagrangian advection
○ Pressure projection
● Grid- and particle-based water
○ Particle-based water
○ Grid-based smoke and fire
KU Core:
1. Collision detection and response
a. Signed Distance Fields
b. Sphere/sphere
c. Triangle/point
d. Edge/edge
2. Procedural animation using noise
3. Particle systems
a. Integration methods (Forward Euler, Midpoint, Leapfrog; see the sketch after this list)
b. Mass/spring networks
c. Position-based dynamics
d. Rules (boids/crowds)
e. Rigid bodies
4. Grid-based fluids
a. Semi-Lagrangian advection
b. Pressure Projection
5. Heightfields
a. Terrain: transport, erosion
b. Water: ripples, shallow water
6. Rule-based systems, e.g., L-Systems, space-colonizing systems, Game of Life
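As referenced in topic 3.a, Forward Euler is the simplest of the listed integration methods. A minimal sketch for a single particle under gravity follows; the time step, initial state, and constants are illustrative.

    # Forward Euler integration of one particle under gravity:
    # x(t+dt) = x(t) + v(t)*dt;  v(t+dt) = v(t) + a*dt.
    def step(pos, vel, acc, dt):
        new_pos = [p + v * dt for p, v in zip(pos, vel)]
        new_vel = [v + a * dt for v, a in zip(vel, acc)]
        return new_pos, new_vel

    pos, vel = [0.0, 10.0], [3.0, 0.0]   # 10 m up, drifting sideways
    gravity = [0.0, -9.8]
    for _ in range(100):                 # one second at dt = 0.01
        pos, vel = step(pos, vel, gravity, 0.01)
    print(pos)  # roughly [3.0, 5.15]; the analytic answer is [3.0, 5.1]

Midpoint and Leapfrog reduce the integration error visible here, and the same step structure extends directly to mass/spring networks.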
GIT-Immersion
KU Core: (See also: SPD-Games, SPD-Mobile, HCI-Design)
1. Define and distinguish VR, AR, and MR
2. Define and distinguish immersion and presence
3. 360 Video
4. Stereoscopic display
a. Head-mounted displays
b. Stereo glasses
5. Viewer tracking
a. Inside-out vs. outside-in tracking
b. Head, body, and hand tracking
6. Time-critical rendering to achieve optimal motion-to-photon (MTP) latency
a. Multiple levels of details (LOD)
b. Image-based VR
c. Branching movies
7. Distributed VR, collaboration over computer network
8. Applications in medicine, simulation, training, and visualization
9. Safety in immersive applications
a. Motion sickness
b. VR obscures the real world, which increases the potential for falls and physical accidents
GIT-Interaction
Interactive computer graphics is a requisite part of real-time applications, ranging from utilitarian programs such as word processors to virtual and augmented reality applications. Students will learn the following topics in a graphics course or in a course that covers HCI/GUI construction and HCI programming.
KU Core:
1. Event-Driven Programming (See also: FPL-Event-Driven; a sketch follows this list)
a. Mouse or touch events
b. Keyboard events
c. Voice input
d. Sensors
e. Message passing communication
f. Network events
2. Graphical User Interface (Single Channel)
a. Window
b. Icons
c. Menus
d. Pointing Devices
3. Gestural Interfaces (See also: SPD-Games)
a. Touch screens
b. Game controllers
4. Haptic Interfaces
a. External actuators
b. Gloves
c. Exoskeletons
5. Multimodal Interfaces
6. Head-worn Interfaces (AI)
a. brainwave (EEG type electrodes)
b. headsets with embedded eye tracking
c. AR glasses
7. Natural Language Interfaces (See also: AI-NLP)
8. Accessibility (See also: SEP-IDEA)
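Topic 1's event-driven pattern can be sketched independently of any particular GUI framework as a handler registry plus a dispatcher; the event names and handlers below are illustrative.

    # Minimal event-driven pattern: handlers register for named events,
    # and a dispatcher routes each incoming event to its handlers.
    handlers = {}

    def on(event_type, handler):
        handlers.setdefault(event_type, []).append(handler)

    def dispatch(event_type, payload):
        for handler in handlers.get(event_type, []):
            handler(payload)

    on("mouse_click", lambda p: print("click at", p))
    on("key_press", lambda k: print("key", k))

    dispatch("mouse_click", (120, 45))  # -> click at (120, 45)
    dispatch("key_press", "Escape")     # -> key Escape

Real toolkits wrap this pattern in an event loop that blocks on input sources (mouse, keyboard, sensors, network) and dispatches continuously.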
4. Image restoration
a. Noise, degradation
b. Inpainting and other completion algorithms
c. Wiener filter
5. Image coding
a. Redundancy
b. Compression (e.g., Huffman coding; see the sketch after this list)
c. DCT, wavelet transform, Fourier transforms
d. Nyquist Theorem
e. Watermarks
6. Connections to deep learning (e.g., Convolutional Neural Networks) (See also: AI-ML 7)
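Topic 5.b names Huffman coding as an example of redundancy-based compression. The sketch below builds a prefix code from symbol frequencies; the symbols and counts are illustrative.

    import heapq

    # Huffman coding: repeatedly merge the two lowest-weight subtrees,
    # prefixing '0' to codes in one and '1' to codes in the other.
    def huffman(freqs):
        # Heap entries: [weight, tiebreak, [(symbol, code), ...]]
        heap = [[w, i, [(sym, "")]] for i, (sym, w) in enumerate(freqs.items())]
        heapq.heapify(heap)
        count = len(heap)
        while len(heap) > 1:
            lo = heapq.heappop(heap)
            hi = heapq.heappop(heap)
            merged = [(s, "0" + c) for s, c in lo[2]] + \
                     [(s, "1" + c) for s, c in hi[2]]
            heapq.heappush(heap, [lo[0] + hi[0], count, merged])
            count += 1
        return dict(heap[0][2])

    codes = huffman({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5})
    print(codes)  # 'a' gets a 1-bit code; the rarest symbols get 4-bit codes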
b. Show an example of instances of an object
c. Create a fabrication plan. Provide a cost estimate for materials and time. How will you fabricate
it?
d. Fabricate it. How closely did your actual fabrication process match your plan? Where did it
differ?
5. Write the G- and M-Code to construct a 3D maze, and use a CAD/CAM package to check your
work
6. If you were to design an IoT pill dispenser, would you use Ethernet, WiFi, Bluetooth, RFID/NFC, or something else for Internet connectivity? Why? Make one.
7. Distinguish between the different types of fabrication and describe when you would use each.
Professional Dispositions
Math Requirements
Required:
1. Linear Algebra:
a. Points (coordinate systems & homogeneous coordinates), vectors, and matrices
b. Vector operations: addition, scaling, dot and cross products
c. Matrix operations: addition, multiplication, determinants
d. Affine transformations
2. Calculus
a. Continuity
Desirable:
1. Linear Algebra
a. Eigenvectors and eigendecomposition
b. Gaussian elimination and LU (lower-upper) factorization
c. Singular Value Decomposition
2. Calculus
a. Quaternions
3. Probability
Shared Topics:
● Immersion in GIT and HCI
● Interaction in GIT and HCI (GUI programming)
● Graftals in GIT (Geometric Modeling) and FPL (Language Translation and Execution)
● Simulation
● Visualization in GIT, AI, and Specialized Platform Development Interactive Computing Platforms
(Data Visualization)
● Image Processing in GIT and Specialized Platform Development Interactive Computing
Platforms (Supporting Math Studies)
● Tangible Computing in GIT and Specialized Platform Development Interactive Computing
Platforms (Game Platforms)
● Tangible Computing in GIT and Specialized Platform Development Interactive Computing
Platforms (Embedded Platforms)
● Tangible Computing and Animation in GIT and Specialized Platform Development Interactive
Computing Platforms (Robot Platforms)
● Immersion in GIT and Specialized Platform Development Interactive Computing Platforms
(Mobile Platforms)
● Image Processing in GIT and Advanced Machine Learning (Graphical Models) in AI
● Image Processing and Physical Computing in GIT and Robotics Location and Mapping and
Navigation in AI
● Image Processing in GIT and Perception and Computer Vision in AI
● Core Graphics in GIT and Algorithms and Application Domains in PDC
● GIT and Interactive Computing Platforms in SPD
● GIT and Game Platforms in SPD
● GIT and Embedded Platforms in SPD
Crosscutting themes:
● Efficiency
● Ethics
● Modeling
● Programming
● Prototyping
● Usability
● Evaluation
A course in Interactive Computer Graphics could include the following:
● GIT-Rendering: 40 hours
● SEP-C: Professional Ethics: 4 hours
Pre-requisites:
● CS2
● Affine Transforms from Linear Algebra
● Trigonometry
Skill statement: A student who completes this course should understand and be able to create basic
computer graphics using an API. They should know how to position and orient models, the camera, and
distant and local lights.
Skill statement: A student who completes this course should be able to design and build circuits and
program a microcontroller. They will understand polarity, Ohm’s law, and how to work with electronics
safely.
Committee
Members:
● Erik Brunvand, University of Utah, Salt Lake City, USA
● Kel Elkins, NASA/GSFC Scientific Visualization Studio, Greenbelt, MD
● Jeff Lait, SideFX, Toronto, Canada
● Amruth Kumar, Ramapo College, Mahwah, USA
● Paul Mihail, Valdosta State University, Valdosta, USA
● Tabitha Peck, Davidson College, Davidson, USA
● Ken Schmidt, NOAA NCEI, Asheville, USA
● Dave Shreiner, UnityTechnologies & Sonoma State University, San Francisco, USA
Contributors:
● Ginger Alford, Southern Methodist University, TX, USA
● Christopher Andrews, Middlebury College, Middlebury, VT, USA
● AJ Christensen, NASA/GSFC Scientific Visualization Studio – SSAI, Champaign, IL
● Roger Eastman, University of Maryland, College Park, MD, USA
● Ted Kim, Yale University, New Haven, CT, USA
● Barbara Mones, University of Washington, Seattle, WA, USA
● Greg Shirah, NASA/GSFC Scientific Visualization Studio, Greenbelt, MD
● Beatriz Sousa Santos, University of Aveiro, Portugal
● Anthony Steed, University College, London, UK
Human-Computer Interaction (HCI)
Preamble
Computational systems not only enable users to solve problems, but also foster social connectedness
and support a broad variety of human endeavors. Thus, these systems should interact with their users
and solve problems in ways that respect individual dignity, social justice, and human values and
creativity. Human-computer interaction (HCI) addresses those issues from an interdisciplinary
perspective that includes psychology, business strategy, and design principles.
Each user is different and, from the perspective of HCI, the design of every system that interacts with
people should anticipate and respect that diversity. This includes not only accessibility but also cultural and societal norms, neurodiversity, modality, and the responses the system elicits in its users. An
effective computational system should evoke trust while it treats its users fairly, respects their privacy,
provides security, and abides by ethical principles.
These goals require design-centric engineering that begins with intention and with the understanding
that design is an iterative process, one that requires repeated evaluation of its usability and its impact
on its users. Moreover, technology evokes user responses, not only by its output, but also by the
modalities with which it senses and communicates. This knowledge area heightens the awareness of
these issues and should influence every computer scientist.
Driven by this broadened perspective, the HCI knowledge area has revised the CS 2013 document in
several ways:
● Knowledge units have been renamed and reformulated to reflect current practice and to anticipate
future technological development.
● There is increased emphasis on the nature of diversity and the centrality of design focused on the
user.
● Modality (e.g., text, speech) is still emphasized given its key role throughout HCI, but with a
reduced emphasis on particular modalities in favor of a more timely and empathetic approach.
● The curriculum reflects the importance of understanding and evaluating the impacts and
implications of a computational system on its users, including issues in ethics, fairness, trust, and
explainability.
● Given its extensive interconnections with other knowledge areas, we believe HCI is itself a cross-cutting knowledge area, with connections to Artificial Intelligence; Society, Ethics and Professionalism; Software Development Fundamentals; and Software Engineering.
Core Hours
Knowledge Units CS Core KA Core
Accountability and Responsibility in Design 2 2
System Design 1 5
Total Hours 8 16
Knowledge Units
HCI-User: Understanding the User: Individual goals and interactions with others
CS Core:
1. User-centered design and evaluation methods:
a. “you are not the users”
b. user needs-finding
c. formative studies
d. interviews
e. surveys
f. usability tests
KA Core:
2. User-centered design methodology: (See also: SE-Tools)
a. personas/persona spectrum
b. user stories/storytelling and techniques for gathering stories
c. empathy maps
d. needs assessment (techniques for uncovering needs and gathering requirements - e.g.,
interviews, surveys, ethnographic and contextual enquiry) (See also: SE-Requirements)
e. journey maps
f. evaluating the design (See also: HCI-Evaluation)
3. Physical & cognitive characteristics of the user:
a. physical capabilities that inform interaction design (e.g., color perception, ergonomics)
b. cognitive models that inform interaction design (e.g., attention, perception and recognition,
movement, memory)
c. topics in social/behavioral psychology (e.g., cognitive biases, change blindness)
4. Designing for diverse user populations: (See also: SEP-IDEA)
a. how differences (e.g., in race, ability, age, gender, culture, experience, and education)
impact user experiences and needs
b. Internationalization
c. designing for users from other cultures
d. cross-cultural design
e. challenges to effective design evaluation (e.g., sampling, generalization; disability and
disabled experiences)
f. universal design
g. See also: HCI-Accessibility.
5. Collaboration and communication (See also: AI-SEP 3.e, SE-Teamwork, SEP-Communication 3-5,
SPD-Game: 5.d)
a. understanding the user in a multi-user context
b. synchronous group communication (e.g., chat rooms, conferencing, online games)
c. asynchronous group communication (e.g., email, forums, social networks)
d. social media, social computing, and social network analysis
e. online collaboration
f. social coordination and online communities
g. avatars, characters, and virtual worlds
Learning Outcomes:
KA Core:
2. Compare and contrast the needs of users with those of designers.
3. Identify the representative users of a design and discuss who else could be impacted by it.
4. Describe empathy and evaluation as elements of the design process.
5. Carry out and document an analysis of users and their needs.
6. Construct a user story from a needs assessment.
7. Redesign an existing solution to a population whose needs differ from those of the initial target
population.
8. Contrast the different needs-finding methods for a given design problem.
9. Reflect on whether your design would benefit from low-tech or no-tech components.
Non-Core:
10. Recognize the implications of designing for a multi-user system/context.
HCI-Accountability: Accountability and Responsibility in Design
KA Core:
4. Value-sensitive design: identify direct and indirect stakeholders, determine and include diverse
stakeholder values and value systems.
5. Persuasion through design: assessing the persuasive content of a design, persuasion as a design
goal.
Learning Outcomes:
KA Core:
2. Identify the potential human factor elements in a design.
3. Identify and understand direct and indirect stakeholders.
4. Develop scenarios that consider the entire lifespan of a design, beyond the immediately planned
uses that anticipate direct and indirect stakeholders.
5. Identify and critique the potential factors in a design that impact direct and indirect stakeholders and
broader society (e.g., transparency, sustainability of the system, trust, artificial intelligence)
6. Assess the persuasive content of a design and its intent relative to user interests
7. Critique the outcomes of a design given its intent
8. Understand the impact of design decisions
HCI-Accessibility
KA Core:
5. Background
a. demographics and populations (permanent, temporary and situational disability)
b. international perspectives on disability
c. attitudes towards people with disabilities
6. Techniques
a. UX (user experience) design and research
b. software engineering practices that enable inclusion and accessibility.
7. Technologies: examples of accessibility-enabling features, such as conformance to screen readers
8. Inclusive Design Frameworks: creating inclusive processes such as participatory design;
designing for larger impact.
Non-Core:
9. Background
a. unlearning and questioning
b. disability studies
10. Technologies: the Return on Investment of inclusion
11. Inclusive Design Frameworks: user-sensitive inclusive design
12. Critical approaches to HCI:
a. critical race theory in HCI
b. feminist HCI
c. critical disability theory.
Learning Outcomes:
KA Core:
5. Apply inclusive frameworks to design, such as universal design and usability and ability-based
design, and demonstrate accessible design of visual, voice-based, and touch-based UIs.
6. Demonstrate understanding of laws and regulations applicable to accessible design
7. Demonstrate understanding of appropriate and inappropriate conduct during interactions with individuals from diverse populations
8. Analyze web pages and mobile apps for current standards of accessibility
Non-Core:
9. Biases towards disability, race, and gender have historically, either intentionally or unintentionally,
informed technology design
a. find examples
b. consider how those experiences might inform design.
10. Conceptualize user experience research to identify user needs and generate design insights.
HCI-Evaluation
KA Core:
2. Methods for evaluation with users (See also: SE-Validation)
a. qualitative methods (qualitative coding and thematic analysis)
b. quantitative methods (statistical tests)
c. mixed methods (e.g., observation, think-aloud, interview, survey, experiment)
d. presentation requirements (e.g., reports, personas)
e. user-centered testing
f. heuristic evaluation
g. challenges and shortcomings to effective evaluation (e.g., sampling, generalization)
3. Study planning
a. how to set study goals
b. hypothesis design
c. approvals from Institutional Review Boards and ethics committees (See also: SEP-Ethical-Analysis, SEP-Security, SEP-Privacy)
d. how to pre-register a study
e. within-subjects vs. between-subjects design
4. Implications and impacts of design with respect to the environment, material, society, security,
privacy, ethics, and broader impacts. (See also: SEC-Foundations)
Non-Core:
5. Techniques and tools for quantitative analysis
a. statistical packages
b. visualization tools
c. statistical tests (e.g., ANOVA, t-tests, post-hoc analysis, parametric vs non-parametric tests)
d. data exploration and visual analytics; how to calculate effect size.
6. Data management
a. data storage and data sharing (open science)
b. sensitivity and identifiability.
Learning Outcomes:
KA Core:
2. Select appropriate formative or summative evaluation methods at different points throughout the
development of a design
3. Discuss the benefits of using both qualitative and quantitative methods for evaluation
4. Evaluate the implications and broader impacts of a given design
5. Plan a usability evaluation for a given user interface, and justify its study goals, hypothesis design,
and study design
6. Conduct a usability evaluation of a given user interface and draw defensible conclusions given the
study design
Non-Core:
7. Select and run appropriate statistical tests on provided study data to test for significance in the
results
8. Pre-register a study design, with planned statistical tests
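Outcome 7 involves running statistical tests on study data. A minimal sketch using an independent-samples t-test follows; the task-completion times and the choice of SciPy are illustrative assumptions.

    from scipy import stats

    # Between-subjects comparison of task-completion times (seconds)
    # for two interface variants; the data values are illustrative.
    variant_a = [12.1, 9.8, 11.4, 10.2, 13.0, 9.5, 11.9, 10.8]
    variant_b = [14.2, 13.5, 12.9, 15.1, 13.8, 14.6, 12.4, 13.3]

    t_stat, p_value = stats.ttest_ind(variant_a, variant_b)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    # A small p-value (e.g., < 0.05) suggests the difference in means is
    # unlikely under the null hypothesis; report effect size as well.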
HCI-Design: System Design (See also: SE-Tools)
CS Core:
1. Prototyping techniques and tools: e.g., low-fidelity prototyping, rapid prototyping, throw-away
prototyping, granularity of prototyping
2. Design patterns
a. iterative design
b. universal design (See also: SEP-IDEA)
c. interaction design (e.g., data-driven design, event-driven design)
3. Design constraints
a. platforms (See also: SPD-Game 3.c)
b. devices
c. resources
KA Core:
4. Design patterns and guidelines
a. software architecture patterns
b. cross-platform design
c. synchronization considerations
5. Design processes
a. participatory design
b. co-design
c. double-diamond
d. convergence and divergence
6. Interaction techniques
a. input and output vectors (e.g., gesture, pose, touch, voice, force)
b. graphical user interfaces
c. controllers
d. haptics
e. hardware design
f. error handling
7. Visual UI design
a. Color
b. Layout
c. Gestalt principles
Non-Core:
8. Immersive environments
a. virtual reality
b. augmented reality, mixed reality
c. XR (which encompasses them)
d. spatial audio
9. 3D printing and fabrication
10. Asynchronous interaction models
11. Creativity support tools
12. Voice UI designs
Learning Outcomes:
KA Core:
4. Evaluate architectural design approaches in the context of project goals.
5. Identify synchronization challenges as part of the user experience in distributed environments.
6. Evaluate and compare the privacy implications behind different input techniques for a given
scenario
7. Explain the rationale behind a UI design based on visual design principles
Non-Core:
8. Evaluate the privacy implications within a VR/AR/MR scenario
HCI-SEP
KA Core:
6. Participatory and inclusive design processes
7. Evaluating the design: Implications and impacts of design: with respect to the environment,
material, society, security, privacy, ethics, and broader impacts (See also: SEC-Foundations, SEP-
Privacy)
Learning Outcomes:
KA Core:
2. Critique a recent example of a non-inclusive design choice, its societal implications, and propose
potential design improvements
3. Evaluating the design: Identify the implications and broader impacts of a given design.
Non-Core:
4. Evaluate the privacy implications within a VR/AR/MR scenario
Professional Dispositions
Math Requirements
Required:
● Basic statistics to support the evaluation and interpretation of results, including central
tendency, variability, frequency distribution
Suggested weekly topics:
1. Introduction to design (HCI-User, HCI-Design)
2. Thinking, Acting, and Evaluating (HCI-User, HCI-Evaluation, HCI-Design)
3. Memory and Mistakes (HCI-User, HCI-Accessibility, HCI-Evaluation)
4. Principles and Processes (HCI-Design)
5-6. Integrating Design Processes and Software Development (HCI-Design)
7. Design Thinking and Heuristic Evaluation (HCI-User, HCI-Evaluation)
8. Accessibility (HCI-Accountability, HCI-Accessibility, HCI-SEP)
9. Visual Design and Personas (HCI-User, HCI-Accountability, HCI-Accessibility, HCI-Evaluation,
HCI-Design)
10. Final Project: Empathy and Identification (HCI-User, HCI-Accountability, HCI-Accessibility, HCI-Evaluation, HCI-Design, HCI-SEP)
11. Final Project: Ideation and Low-Fidelity Prototyping (HCI-User, HCI-Evaluation, HCI-Design)
12-13. Final Project: Implementation (HCI-Design)
14. Final Project: Testing (HCI-Evaluation)
15. Final Project: Reporting (HCI-User, HCI-Accountability, HCI-Accessibility, HCI-Evaluation, HCI-Design, HCI-SEP)
Suggested topics: planning the usability study, defining goals, study participants, selecting tasks and
creating scenarios, deciding how to measure usability, preparing test materials, preparing the test
environment, conducting the pilot test, conducting the test, tabulating and analyzing data,
recommending changes, communicating the results, preparing the highlight tape, changing the product
and the process.
Learning outcomes:
● Design an appropriate test plan
● Recruit appropriate participants
● Conduct a usability test
● Analyze results and recommend changes
● Present results
● Write a report documenting the recommended improvements
Committee
Chair: Susan L. Epstein, Hunter College and The Graduate Center of The City University of New York,
New York, USA
Members:
● Sherif Aly, The American University of Cairo, Cairo, Egypt
● Jeremiah Blanchard, University of Florida, Gainesville, FL, USA
● Zoya Bylinskii, Adobe Research, Cambridge, MA, USA
● Paul Gestwicki, Ball State University, Muncie, IN, USA
● Susan Reiser, University of North Carolina at Asheville, Asheville, North Carolina, USA
● Amanda M. Holland-Minkley, Washington and Jefferson College, Washington, PA, USA
● Ajit Narayanan, Google, Mountain View, California, USA
● Nathalie Riche, Microsoft Research Lab, Redmond, WA, USA
● Kristen Shinohara, Rochester Institute of Technology, Rochester, New York, USA
● Olivier St-Cyr, University of Toronto, Toronto, Canada
Mathematical and Statistical Foundations (MSF)
Preamble
A strong mathematical foundation remains a bedrock of computer science education and infuses the
practice of computing whether in developing algorithms, designing systems, modeling real-world
phenomena, or computing with data. This Mathematical and Statistical Foundations knowledge (MSF)
area – the successor to the ACM CS 2013 curriculum's "Discrete Structures" area – seeks to identify
the mathematical and statistical material that undergirds modern computer science. The change of
name corresponds to a realization both that the broader name better describes some of the existing
topics from 2013 and that some growing areas of computer science, such as artificial intelligence,
machine learning, data science, and quantum computing, have continuous mathematics as their
foundations too. Because consideration of additional foundational mathematics beyond traditional
discrete structures is a substantial matter, the MSF sub-committee included members who have taught
courses in continuous mathematics.
The committee considered the following inputs in preparing its recommendations:
● A survey distributed to computer science faculty (nearly 600 respondents) across a variety of institutional types and in various countries;
● Math-related data collected from the survey of industry professionals (865 respondents);
● Math requirements stated by all the knowledge areas in this report;
● Direct input sought from the CS theory community; and
● A review of past reports, including recent reports on data science (e.g., the Park City report) and quantum computing education.
Core Hours
Thus, we are hesitant to recommend an all-encompassing set of mathematical topics as “every CS
degree must require.” Instead, we outline two sets of core requirements, a minimal “CS-core” set suited
to credit-limited majors and a more expansive “KA-core” set to align with technically focused programs.
The principle here is that, in light of the additional foundational mathematics needed for AI, data
science and quantum computing, programs ought to consider as much as possible from the more
expansive KA-core version unless there are sound institutional reasons for alternative requirements.
Knowledge Units CS Core KA Core
Discrete Mathematics 29 11
Probability 11 29
Statistics 10 30
Linear Algebra 5 35
Calculus 0 40
Total 55 200
Note on the KA core: the calculus hours roughly correspond to the typical Calculus-I course now
standard across the world. Based on our survey, most programs already require Calculus-I. However,
we have left out Calculus-II (an additional 40 hours) and leave it to programs to decide whether
Calculus-II should be added to program requirements. Programs could choose to require a more
rigorous calculus-based probability or statistics sequence, or non-calculus requiring versions. Similarly,
linear algebra can be taught as an applied course without a calculus prerequisite or as a more
advanced course.
Knowledge Units
MSF-Discrete: Discrete Mathematics
CS Core:
2. Recursive mathematical definitions
3. Proof techniques (induction, proof by contradiction)
4. Permutations, combinations, counting, pigeonhole principle
5. Modular arithmetic
6. Logic: truth tables, connectives (operators), inference rules, formulas, normal forms, simple
predicate logic
7. Graphs: basic definitions
d. Map real-world applications to appropriate counting formalisms, such as determining the
number of ways to arrange people around a table, subject to constraints on the seating
arrangement, or the number of ways to determine certain hands in cards (e.g., a full house).
5. Modular arithmetic
a. Perform computations involving modular arithmetic.
b. Explain the notion of greatest common divisor, and apply Euclid's algorithm to compute it (see the sketch after this list).
6. Logic
a. Convert logical statements from informal language to propositional and predicate logic
expressions.
b. Apply formal methods of symbolic propositional and predicate logic, such as calculating
validity of formulae, computing normal forms, or negating a logical statement.
c. Use the rules of inference to construct proofs in propositional and predicate logic.
d. Describe how symbolic logic can be used to model real-life situations or applications,
including those arising in computing contexts such as software analysis (e.g., program
correctness), database queries, and algorithms.
e. Apply formal logic proofs and/or informal, but rigorous, logical reasoning to real problems,
such as predicting the behavior of software or solving problems such as puzzles.
f. Describe the strengths and limitations of propositional and predicate logic.
g. Explain what it means for a proof in propositional (or predicate) logic to be valid.
7. Graphs
a. Illustrate by example the basic terminology of graph theory, and some of the properties and
special cases of types of graphs, including trees.
b. Demonstrate different traversal methods for trees and graphs, including pre-, post-, and in-
order traversal of trees, along with breadth-first and depth-first search for graphs.
c. Model a variety of real-world problems in computer science using appropriate forms of
graphs and trees, such as representing a network topology, the organization of a
hierarchical file system, or a social network.
d. Show how concepts from graphs and trees appear in data structures, algorithms, proof
techniques (structural induction), and counting.
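Outcome 5.b, flagged above, asks students to apply Euclid's algorithm; a minimal sketch:

    # Euclid's algorithm: gcd(a, b) = gcd(b, a mod b), stopping when b == 0.
    def gcd(a, b):
        while b != 0:
            a, b = b, a % b
        return a

    print(gcd(252, 105))  # 21, since 252 = 2^2 * 3^2 * 7 and 105 = 3 * 5 * 7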
MSF-Probability: Probability
CS Core:
1. Basic notions: sample spaces, events, probability, conditional probability, Bayes’ rule
2. Discrete random variables and distributions
3. Continuous random variables and distributions
4. Expectation, variance, law of large numbers, central limit theorem
5. Conditional distributions and expectation
6. Applications to computing
KA Core:
The recommended topics are the same between CS core and KA-core, but with far more hours, the
KA-core can cover these topics in depth and might include more computing-related applications.
Learning Outcomes:
1. Basic notions: sample spaces, events, probability, conditional probability, Bayes’ rule
a. Translate a prose description of a probabilistic process into a formal setting of sample
spaces, outcome probabilities, and events.
b. Calculate the probability of simple events.
c. Determine whether two events are independent.
d. Compute conditional probabilities, including through applying (and explaining) Bayes' Rule (see the sketch after this list).
2. Discrete random variables and distributions
a. Define the concept of a random variable and indicator random variable.
b. Determine whether two random variables are independent.
c. Identify common discrete distributions (e.g., uniform, Bernoulli, binomial, geometric).
3. Continuous random variables and distributions
a. Identify common continuous distributions (e.g., uniform, normal, exponential).
b. Calculate probabilities using cumulative distribution functions.
4. Expectation, variance, law of large numbers, central limit theorem
a. Define the concept of expectation and variance of a random variable.
b. Compute the expected value and variance of simple or common discrete/continuous random
variables.
c. Explain the relevance of the law of large numbers and central limit theorem to probability
calculations.
5. Conditional distributions and expectation
a. Explain the distinction between a joint distribution and a conditional distribution.
b. Compute conditional distributions from a full distribution, for both discrete and continuous
random variables.
c. Compute conditional expectations for both discrete and continuous random variables.
6. Applications to computing
a. Describe how probability can be used to model real-life situations or applications, such as
predictive text, hash tables, and quantum computation.
b. Apply probabilistic processes in solving computational problems, such as through
randomized algorithms or in security contexts.
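Outcome 1.d lends itself to a short worked computation. The sketch below applies Bayes' rule to a screening-test scenario; the prior, sensitivity, and false-positive rate are illustrative numbers.

    # Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B), where P(B) is expanded
    # by the law of total probability over A and not-A.
    p_disease = 0.01             # prior P(A)
    p_pos_given_disease = 0.99   # sensitivity P(B|A)
    p_pos_given_healthy = 0.05   # false-positive rate P(B|not A)

    p_pos = (p_pos_given_disease * p_disease
             + p_pos_given_healthy * (1 - p_disease))
    p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
    print(round(p_disease_given_pos, 3))  # 0.167: most positives are false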
MSF-Statistics: Statistics
CS Core:
1. Basic definitions and concepts: populations, samples, measures of central tendency, variance
2. Univariate data: point estimation, confidence intervals
KA Core:
3. Multivariate data: estimation, correlation, regression
4. Data transformation: dimension reduction, smoothing
5. Statistical models and algorithms
Learning Outcomes:
CS Core:
1. Basic definitions and concepts: populations, samples, measures of central tendency, variance
b. Display data graphically and interpret graphs (e.g. histograms)
c. Recognize, describe and calculate means, medians, quantiles (location of data)
d. Recognize, describe and calculate variances (spread of data)
2. Univariate data: point estimation, confidence intervals
a. Formulate maximum likelihood estimation (in linear-Gaussian settings) as a least-squares
problem
b. Calculate maximum likelihood estimates
c. Calculate maximum a posteriori estimates and make a connection with regularized least
squares
d. Compute confidence intervals as a measure of uncertainty
KA Core:
3. Multivariate data: estimation, correlation, regression
a. Formulate the multivariate maximum likelihood estimation problem as a least-squares
problem
b. Interpret the geometric properties of maximum likelihood estimates
c. Derive and calculate the maximum likelihood solution for linear regression (see the sketch after this list)
d. Derive and calculate the maximum a posteriori estimate for linear regression
e. Implement both maximum likelihood and maximum a posteriori estimates in the context of a
polynomial regression problem
f. Formulate and understand the concept of data correlation (e.g., in 2D)
4. Data transformation: dimension reduction, smoothing
a. Formulate and derive PCA as a least-squares problem
b. Geometrically interpret PCA (when solved as a least-squares problem)
c. Understand when PCA works well (one can relate back to correlated data)
d. Geometrically interpret the linear regression solution (maximum likelihood)
5. Statistical models and algorithms
a. Apply PCA to dimensionality reduction problems
b. Understand the trade-off between compression and reconstruction power
c. Apply linear regression to curve-fitting problems
d. Understand the concept of overfitting
e. Discuss and apply cross-validation in the context of overfitting and model selection (e.g.,
degree of polynomials in a regression context)
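Outcome 3.c, flagged above, reduces to least squares under a linear-Gaussian model. A NumPy sketch with illustrative data:

    import numpy as np

    # Fit y = w0 + w1*x by least squares: the solution of min ||Xw - y||^2,
    # which is the maximum likelihood estimate under Gaussian noise.
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])

    X = np.column_stack([np.ones_like(x), x])  # design matrix with bias column
    w, *_ = np.linalg.lstsq(X, y, rcond=None)  # numerically stable solver
    print(w)  # approximately [1.10, 1.96]: intercept ~1, slope ~2

Adding a squared-norm penalty on w gives the maximum a posteriori (ridge) estimate discussed in outcome 3.d.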
MSF-Linear: Linear Algebra
KA Core:
2. Matrices, matrix-vector equation, geometric interpretation, geometric transformations with matrices
3. Solving equations, row-reduction
4. Linear independence, span, basis
5. Orthogonality, projection, least-squares, orthogonal bases
6. Linear combinations of polynomials, Bezier curves
7. Eigenvectors and eigenvalues
8. Applications to computer science: PCA, SVD, page-rank, graphics
Learning Outcomes:
KA Core:
2. Matrices, matrix-vector equation, geometric interpretation, geometric transformations with matrices
a. Perform common matrix operations, such as addition, scalar multiplication, multiplication,
and transposition
b. Relate a matrix to a homogeneous system of linear equations
c. Recognize when two matrices can be multiplied
d. Relate various matrix transformations to geometric illustrations
3. Solving equations, row-reduction
a. Formulate, solve, apply, and interpret properties of linear systems
b. Perform row operations on a matrix
c. Relate an augmented matrix to a system of linear equations
d. Solve linear systems of equations using the language of matrices
e. Translate word problems into linear equations
f. Perform Gaussian elimination
4. Linear independence, span, basis
a. Define subspace of a vector space
b. List examples of subspaces of a vector space
c. Recognize and use basic properties of subspaces and vector spaces
d. Determine whether or not particular subsets of a vector space are subspaces
e. Discuss the existence of a basis of an abstract vector space
f. Describe coordinates of a vector relative to a given basis
g. Determine a basis and the dimension of a finite-dimensional space
h. Discuss spanning sets for vectors in R^n
i. Discuss linear independence for vectors in R^n
j. Define the dimension of a vector space
5. Orthogonality, projection, least-squares, orthogonal bases
a. Explain the Gram-Schmidt orthogonalization process
b. Define orthogonal projections
c. Define orthogonal complements
d. Compute the orthogonal projection of a vector onto a subspace, given a basis for the
subspace
e. Explain how orthogonal projections relate to least square approximations
6. Linear combinations of polynomials, Bezier curves
a. Identify polynomials as generalized vectors
b. Explain linear combinations of basic polynomials
c. Understand orthogonality for polynomials
d. Distinguish between basic polynomials and Bernstein polynomials
e. Apply Bernstein polynomials to Bezier curves
7. Eigenvectors and eigenvalues
a. Find the eigenvalues and eigenvectors of a matrix
b. Define eigenvalues and eigenvectors geometrically
c. Use characteristic polynomials to compute eigenvalues and eigenvectors
d. Use eigenspaces of matrices, when possible, to diagonalize a matrix
e. Perform diagonalization of matrices
f. Explain the significance of eigenvectors and eigenvalues
g. Find the characteristic polynomial of a matrix
h. Use eigenvectors to represent a linear transformation with respect to a particularly nice
basis
8. Applications to computer science: PCA, SVD, page-rank, graphics
a. Explain the geometric properties of PCA
b. Relate PCA to dimensionality reduction
c. Relate PCA to solving least-squares problems
d. Relate PCA to solving eigenvector problems
e. Apply PCA to reducing the dimensionality of a high-dimensional dataset (e.g., images)
f. Explain the page-rank algorithm and understand how it relates to eigenvector problems
g. Explain the geometric differences between SVD and PCA
h. Apply SVD to a concrete example (e.g., movie rankings)
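Outcomes 8.a-8.e connect PCA to eigenvector problems. A minimal NumPy sketch on illustrative 2D data:

    import numpy as np

    # PCA as an eigenvector problem: the principal components are the
    # eigenvectors of the data covariance matrix, ordered by eigenvalue.
    rng = np.random.default_rng(0)
    data = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])

    centered = data - data.mean(axis=0)
    cov = centered.T @ centered / (len(data) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    top = eigvecs[:, -1]                    # direction of greatest variance
    projected = centered @ top              # 1D reduction of the 2D data
    print(eigvals, top)

In practice the SVD of the centered data matrix is preferred for numerical stability, which is the geometric connection outcome 8.g asks about.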
MSF-Calculus
KA Core:
1. Sequences, series, limits
2. Single-variable derivatives: definition, computation rules (chain rule etc), derivatives of important
functions, applications
3. Single-variable integration: definition, computation rules, integrals of important functions,
fundamental theorem of calculus, definite vs indefinite, applications (including in probability)
4. Parametric and polar representations
5. Taylor series
6. Multivariate calculus: partial derivatives, gradient, chain-rule, vector valued functions, applications
to optimization, convexity, global vs local minima
7. ODEs: definition, Euler method, applications to simulation
Note: the calculus topics listed above are aligned with computer science goals rather than with
traditional calculus courses. For example, multivariate calculus is often a course by itself but computer
science undergraduates only need parts of it for machine learning.
Learning Outcomes:
1. Sequences, series, limits
a. Explain the difference between infinite sets and sequences
b. Explain the formal definition of a limit
c. Derive the limit for examples of sequences and series
d. Explain convergence and divergence
e. Apply L’Hospital’s rule and other approaches to resolving limits
2. Single-variable derivatives: definition, computation rules (chain rule etc), derivatives of important
functions, applications
a. Explain a derivative in terms of limits
b. Explain derivatives as functions
c. Perform elementary derivative calculations from limits
d. Apply sum, product and quotient rules
e. Work through examples with important functions
3. Single-variable integration: definition, computation rules, integrals of important functions,
fundamental theorem of calculus, definite vs indefinite, applications (including in probability)
a. Explain the definitions of definite and indefinite integrals
b. Apply integration rules to examples with important functions
c. Explore the use of the fundamental theorem of calculus
d. Apply integration to problems
4. Parametric and polar representations
a. Apply parametric representations of important curves
b. Apply polar representations
5. Taylor series
a. Derive Taylor series for some important functions
b. Apply the Taylor series to approximations
6. Multivariate calculus: partial derivatives, gradient, chain-rule, vector valued functions, applications
to optimization, convexity, global vs local minima
a. Compute partial derivatives and gradients
b. Work through examples with vector-valued functions with gradient notation
c. Explain applications to optimization
7. ODEs: definition, Euler method, applications to simulation
a. Apply the Euler method to integration
b. Apply the Euler method to a single-variable differential equation
c. Apply the Euler method to multiple variables in an ODE
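The Euler-method outcomes under topic 7 can be illustrated on the single-variable ODE y' = -y, whose exact solution is e^(-t); the step size below is an illustrative choice.

    import math

    # Forward Euler for y' = f(t, y): advance y by dt * f(t, y) each step.
    def euler(f, y0, t0, t1, dt):
        t, y = t0, y0
        while t < t1:
            y += dt * f(t, y)
            t += dt
        return y

    approx = euler(lambda t, y: -y, 1.0, 0.0, 1.0, 0.001)
    print(approx, math.exp(-1))  # ~0.36770 vs 0.36788; error shrinks with dt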
Professional Dispositions
We focus on dispositions helpful to students learning mathematics as well as professionals who need to
refresh previously learned mathematics or learn new topics.
● Growth mindset. Perhaps the most important of the dispositions: students should be persuaded that anyone can learn mathematics and that success is not dependent on innate ability.
● Practice mindset. Students should be educated about the nature of “doing” mathematics and
learning through practice with problems as opposed to merely listening or observing demonstrations
in the classroom.
● Delayed gratification. Most students are likely to learn at least some mathematics from
mathematics departments unfamiliar with computing applications; computing departments should
acclimate the students to the notion of waiting to see computing applications. Many of the new
growth areas such as AI or quantum computing can serve as motivation.
● Persistence. Student perceptions are often driven by frustration at being unable to solve hard problems that they see some peers tackle seemingly effortlessly; computing departments should promote the notion that eventual success through persistence is what matters.
Math Requirements
The intent of this section is to list the most important topics expected of students entering a computing program, typically corresponding to pre-calculus in high school. We recommend pre-calculus as a prerequisite for discrete mathematics.
Required:
● Algebra and numeracy:
o Numeracy: numbers, operations, types of numbers, fluency with arithmetic, exponent
notation, rough orders of magnitude, fractions and decimals.
o Algebra: rules of exponents, solving linear or quadratic equations with one or two variables,
factoring, algebraic manipulation of expressions with multiple variables.
● Precalculus:
o Coordinate geometry: distances between points, areas of common shapes
o Functions: function notation, drawing and interpreting graphs of functions
o Exponentials and logarithms: a general familiarity with the functions and their graphs
o Trigonometry: familiarity with basic trigonometric functions and the unit circle
Every department faces constraints in delivering content, which precludes merely requiring a long list of
courses covering every single desired topic. These constraints include content-area ownership, faculty
size, student preparation, and limits on the number of departmental courses a curriculum can require.
We list below some options for mathematical foundations, combinations of which might best fit any
particular institution:
● Traditional course offerings. With this approach, a computer science department can require
students to take math-department courses in any of the five broad mathematical areas listed above.
● A “Continuous Structures” analog of Discrete Structures. Many computer science departments
now offer courses that prepare students mathematically for AI and machine learning. Such courses
can combine just enough calculus, optimization, linear algebra and probability; yet others may split
linear algebra into its own course. These courses have the advantage of motivating students with
computing applications, and including programming as pedagogy for mathematical concepts.
● Integration into application courses. An application course, such as machine learning, can be
spread across two courses, with the course sequence including the needed mathematical
preparation taught just-in-time, or a single machine learning course can balance preparatory
material with new topics. This may have the advantage of mitigating turf issues and helping
students see applications immediately after encountering math.
● Specific course adaptations. For nearly a century, physics and engineering needs have driven
the structure of calculus, linear algebra, and probability. Computer science departments can
collaborate with their colleagues in math departments to restructure math-offered sections in these
areas that are driven by computer science applications. For example, calculus could be reorganized
to fit the needs of computing programs into two calculus courses, leaving a later third calculus
course for engineering and physics students.
Committee
Chair: Rahul Simha, The George Washington University, Washington DC, USA
Members:
● Richard Blumenthal, Regis University, Denver, CO, USA
● Marc Deisenroth, University College London, London, UK
● Michael Goldweber, Denison University, Granville, OH, USA
● David Liben-Nowell, Carleton College, Northfield, MN, USA
● Jodi Tims, Northeastern University, Boston, MA, USA
Networking and Communication (NC)
Preamble
Networking and communication play a central role in interconnected computer systems that are
transforming the daily lives of billions of people. The public Internet provides connectivity for networked
applications that serve ever-increasing numbers of individuals and organizations around the world.
Complementing the public sector, major proprietary networks leverage their global footprints to support
cost-effective distributed computing, storage, and content delivery. Advances in satellite networks expand
connectivity to rural areas. Device-to-device communication underlies the emerging Internet of things.
This knowledge area deals with key concepts in networking and communication, as well as their
representative instantiations in the Internet and other computer networks. Beside the basic principles of
switching and layering, the area at its core provides knowledge on naming, addressing, reliability, error
control, flow control, congestion control, domain hierarchy, routing, forwarding, modulation, encoding,
framing, and access control. The area also covers knowledge units in network security and mobility, such
as security threats, countermeasures, device-to-device communication, and multihop wireless
networking. In addition to the fundamental principles, the area includes their specific realization in the
Internet as well as hands-on skills in implementation of networking and communication concepts. Finally,
the area comprises emerging topics such as network virtualization and quantum networking.
As the main learning outcome, learners develop a thorough understanding of the role and operation of
networking and communication in networked computer systems. They learn how network structure and
communication protocols affect the behavior of distributed applications. The area covers not only key
principles but also their specific instantiations in the Internet, and equips the student with hands-on
implementation skills. While computer-system, networking, and communication technologies are
advancing at a fast pace, this fundamental knowledge enables the student to readily apply the
concepts in new technological settings.
Compared to the 2013 curricula, the knowledge area broadens its core tier-1 focus from the introduction
and networked applications to include reliability support, routing, forwarding, and single-hop
communication. Due to the enhanced core, learners acquire a deeper understanding of the impact that
networking and communication have on behavior of distributed applications. Reflecting the increased
importance of network security, the area adds a respective knowledge unit as a new elective. To track
the advancing frontiers in networking and communication knowledge, the area replaces the elective unit
on social networking with a new elective unit on emerging topics, such as middleboxes, software defined
networks, and quantum networking. Other changes consist of redistributing all topics from the old unit on
resource allocation among other units, in order to resolve the unnecessary overlap between the
knowledge units in the 2013 curricula.
Core Hours
Knowledge Units              CS Core    KA Core
Introduction                    3
Networked Applications          4
Reliability Support                        6
Routing and Forwarding                     4
Single-Hop Communication                   3
Mobility Support                           4
Network Security                           3
Emerging Topics                            4
Total                           7         24
Knowledge Units
NC-Introduction
CS Core:
1. Importance of networking in contemporary computing, and associated challenges. (See also: SEP-
Social Context, SEP-Privacy and Civil Liberties)
2. Organization of the Internet (e.g. users, Internet Service Providers, autonomous systems, content
providers, content delivery networks).
3. Switching techniques (e.g., circuit and packet).
4. Layers and their roles (application, transport, network, datalink, and physical).
5. Layering principles (e.g. encapsulation and hourglass model).
6. Network elements (e.g. routers, switches, hubs, access points, and hosts).
7. Basic queueing concepts (e.g., relationship with latency, congestion, and service levels; a minimal
sketch follows this list).
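To make the queueing-latency relationship concrete, here is a minimal illustrative sketch (ours, not part
of the guidelines) using the M/M/1 formula W = 1 / (mu - lambda), where mu is the service rate and
lambda the arrival rate:

```python
# Illustrative sketch: average time in system for an M/M/1 queue.
def mm1_time_in_system(arrival_rate: float, service_rate: float) -> float:
    """Average time a packet spends queued plus in service (seconds)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must stay below service rate")
    return 1.0 / (service_rate - arrival_rate)

# A link that can serve 1000 packets/s, under rising load:
for load in (100, 500, 900, 990):
    latency_ms = mm1_time_in_system(load, 1000) * 1000
    print(f"{load:>4} pkt/s -> average latency {latency_ms:6.2f} ms")
```

The output shows latency growing sharply as utilization approaches 1, which is the core intuition behind
congestion.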
Learning Outcomes:
1. Articulate the organization of the Internet.
2. List and define the appropriate network terminology.
3. Describe the layered structure of a typical networked architecture.
4. Identify the different types of complexity in a network (edges, core, etc.).
NC-Networked-Applications
CS Core:
1. Naming and address schemes (e.g., DNS and Uniform Resource Identifiers).
2. Distributed application paradigms (e.g. client/server, peer-to-peer, cloud, edge, and fog). (See also:
PDC-Communication, PDC-Coordination)
3. Diversity of networked application demands (e.g. latency, bandwidth, and loss tolerance). (See
also: PDC-Communication, SEP-Sustainability)
4. An explanation of at least one application-layer protocol (e.g. HTTP).
5. Interactions with TCP, UDP, and Socket APIs (a minimal sketch follows this list). (See also: PDC-Programs and Execution)
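As one minimal illustration of topic 5 (an illustrative sketch only; the port number and messages are
arbitrary choices, not mandated by the KU), a TCP exchange using Python's socket API:

```python
# Illustrative sketch: a one-shot TCP "server" and client using the Socket API.
import socket
import threading
import time

def serve_once(port: int) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            request = conn.recv(1024)      # read client bytes
            conn.sendall(request.upper())  # toy application-layer protocol: echo uppercased

threading.Thread(target=serve_once, args=(50007,), daemon=True).start()
time.sleep(0.2)  # crude wait for the server to start listening

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", 50007))
    cli.sendall(b"hello")
    print(cli.recv(1024))  # b'HELLO'
```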
NC-Reliability-Support
KA Core:
1. Unreliable delivery (e.g. UDP).
2. Principles of reliability (e.g. delivery without loss, duplication, or reordering).
3. Error control (e.g. retransmission, error correction).
4. Flow control (e.g. stop and wait, window based; sketched below).
5. Congestion control (e.g. implicit and explicit congestion notification).
6. TCP and performance issues (e.g. Tahoe, Reno, Vegas, Cubic).
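The reliability topics above can be illustrated with a minimal stop-and-wait sender over UDP (a sketch
under simplifying assumptions: a 1-bit sequence number, a fixed timeout, and bounded retries; real
transports such as TCP are far more elaborate):

```python
# Illustrative sketch: stop-and-wait with retransmission over UDP.
import socket

def stop_and_wait_send(dest, payloads, timeout=0.5, max_retries=5):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    for seq, payload in enumerate(payloads):
        packet = bytes([seq % 2]) + payload          # 1-bit alternating sequence number
        for _ in range(max_retries):
            sock.sendto(packet, dest)                # transmit (or retransmit)
            try:
                ack, _ = sock.recvfrom(16)           # wait for the receiver's ACK
                if ack and ack[0] == seq % 2:        # correct ACK: advance to next packet
                    break
            except socket.timeout:
                continue                             # assume loss: retransmit
        else:
            raise TimeoutError(f"packet {seq} was never acknowledged")
    sock.close()
```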
NC-Routing-and-Forwarding
KA Core:
1. Routing paradigms and hierarchy (e.g. intra/inter domain, centralized and decentralized, source
routing, virtual circuits, QoS).
2. Forwarding methods (e.g. forwarding tables and matching algorithms).
3. IP and Scalability issues (e.g. NAT, CIDR, BGP, different versions of IP).
Learning Outcomes:
1. Describe various routing paradigms and hierarchies.
2. Describe how packets are forwarded in an IP network.
3. Describe how the Internet tackles scalability challenges.
NC-Single-Hop-Communication
KA Core:
1. Introduction to modulation, bandwidth, and communication media.
2. Encoding and Framing.
3. Medium Access Control (MAC) (e.g. random access and scheduled access).
4. Ethernet and WiFi.
5. Switching (e.g. spanning trees, VLANS).
6. Local Area Network Topologies (e.g. data center, campus networks).
NC-Network-Security
KA Core:
1. General introduction to security (threats, vulnerabilities, and countermeasures). (See also: SEP-
Security, SEC-Foundations)
2. Network specific threats and attack types (e.g., denial of service, spoofing, sniffing and traffic
redirection, man-in-the-middle, message integrity attacks, routing attacks, ransomware, and
traffic analysis) (See also: SEC-Foundations)
3. Countermeasures (See also: SEC-Foundations, SEC-Cryptography)
o Cryptography (e.g. SSL, TLS, symmetric/asymmetric).
o Architectures for secure networks (e.g., secure channels, secure routing protocols,
secure DNS, VPNs, DMZ, Zero Trust Network Access, hyper network security,
anonymous communication protocols, isolation)
o Network monitoring, intrusion detection, firewalls, spoofing and DoS protection,
honeypots, tracebacks, BGP Sec, RPKI.
NC-Mobility
KA Core:
1. Principles of cellular communication (e.g. 4G, 5G).
2. Principles of Wireless LANs (mainly 802.11).
3. Device-to-device communication.
4. Multihop wireless networks (e.g. ad hoc, opportunistic, and delay-tolerant networks).
NC-Emerging-topics
KA Core:
1. Middleboxes (e.g. filtering, deep packet inspection, load balancing, NAT, CDN).
2. Network Virtualization (e.g. SDN, Data Center Networks).
3. Quantum Networking (e.g. Intro to the domain, teleportation, security, Quantum Internet).
4. Satellite, mmWave, Visible Light.
Professional Dispositions
Math Requirements
Required:
● Probability and Statistics
● Discrete Math
● Simple queuing theory concepts.
● Fourier and trigonometric analysis for physical layer.
Course Packaging Suggestions

Coverage of the concepts of networking includes, but is not limited to, the types of applications that use
the network, reliability, routing and forwarding, single-hop communication, security, and other emerging
topics. Note: both courses below cover the same KUs but with a different allocation of hours to each KU.
Introductory Course:
● NC-Introduction (9 hours)
● NC-Networked Applications (12 hours)
● NC-Reliability Support (6 hours)
● NC-Routing and Forwarding (4 hours)
● NC-Single-Hop Communication (3 hours)
● NC-Network Security (3 hours)
● NC-Mobility Support (3 hours)
● NC-Emerging Topics (2 hours)
Advanced Course:
● NC-Introduction (3 hours)
● NC-Networked Applications (4 hours)
● NC-Reliability Support (8 hours)
● NC-Routing and Forwarding (6 hours)
● NC-Single-Hop Communication (5 hours)
● NC-Network Security (5 hours)
● NC-Mobility Support (5 hours)
● NC-Emerging Topics (6 hours)
Committee
Members:
● Khaled Harras: Carnegie Mellon University, Pittsburgh, USA
● Moustafa Youssef: The American University in Cairo, Cairo, Egypt
● Sergey Gorinsky: IMDEA Networks Institute, Madrid, Spain
● Qiao Xiang: Xiamen University, China
Contributor:
● Alex (Xi) Chen: Huawei
Operating Systems (OS)
Preamble
An operating system is the collection of services needed to safely interface the hardware with
applications. Core topics focus on the mechanisms and policies needed to virtualize computation,
memory, and I/O. Overarching themes that recur at many levels in computer systems are well illustrated
by operating systems (e.g., polling vs. interrupts, caching, the overhead cost of flexibility, and scheduling
approaches that apply to processes and page replacement alike). An OS course should also show how
those concepts apply in other areas of CS: trust boundaries, concurrency, persistence, and safe
extensibility.
Operating systems remains an important computer science knowledge area in spite of how OS
functions may be redistributed into computer architecture or specialized platforms. A CS student needs
a clear mental model of everything from how a pipelined instruction executes to how data scope affects
where data resides in memory. Students can apply basic OS knowledge to domain-specific architectures
(machine learning with GPUs or other parallelized systems, mobile devices, embedded systems, etc.).
Since all software must leverage operating system services, students can reason about the efficiency,
required overhead, and tradeoffs inherent in any application or code implementation. The study of basic
OS algorithms and approaches provides a context against which students can evaluate more advanced
methods. Without an understanding of sandboxing and of how programs are loaded into processes and
executed, students are at a disadvantage when trying to understand or evaluate vulnerabilities and
vectors of attack.
The core of operating systems knowledge from CS2013 has been carried forward into the updated
CS2023 knowledge area. Changes from CS2013 include moving File Systems (now called File Systems
API and Implementation) and Device Management from elective to KA Core, and moving the
Performance and Evaluation knowledge unit to the Systems Fundamentals knowledge area. The
addition of persistent data storage and device I/O reflects the impact of file storage and device I/O
limitations on performance (e.g., of parallel algorithms). More advanced topics in File Systems API and
Implementation and Device Management were moved to a new knowledge unit, Advanced File Systems.
The Performance and Evaluation knowledge unit moved to Systems Fundamentals with the idea that
performance and evaluation approaches for operating systems are mirrored at other levels and are best
presented in that context.
Systems programming and creation of platform specific executables are operating systems related
topics as they utilize the interface provided by the operating system. These topics are listed as
knowledge units within the Foundations of Programming Languages (FPL) knowledge area because
they are also programming related and would benefit from that context.
Overview
“Role and purpose of operating systems” and “Principles of operating systems” provide a high-level
overview of traditional operating systems responsibilities. Required computer architecture mechanisms
for safe multitasking and resource management are presented. This provides a basis for application
services needed to provide a virtual processing environment. These items are in the CS Core because
they enable reasoning on possible security threat vectors and application performance bottlenecks.
“Concurrency” CS Core topics focus on programming paradigms that are needed to share resources
within and between operating systems and applications. The “Concurrency” KA Core topics provide enough
depth into concurrency primitives and solution design patterns so that students can evaluate, design,
and implement correct parallelized software components. Although many students may not become
operating systems developers, parallel components are widely used in specialized platforms and GPU-
based machine learning applications. Non-core topics focus on emerging concepts and examples
where there is more integration between architecture, operating systems functions and application
software to improve performance and safety.
“Protection and security” CS Core overlaps the dedicated Security Knowledge Area. However,
operating systems provide a unique perspective that considers the lower level mechanisms that must
be secured for safe system function. “Protection and security” KA Core extends the CS Core topics to
operating systems access and control functions that are available to applications and end-users. Non-
core focuses on advanced security mechanisms within specific operating systems as well as emerging
topics.
“Scheduling”, “Process model”, “Memory Management”, “Device management” and “File systems API
and Implementation” KA Core provide depth to the CS Core topics. They provide the basis for
virtualization and safe resource management. The placement of these topics in the KA Core does not
reduce their importance. It is expected that many of these topics will be covered along with the CS
Core topics. Non-core topics focus on emerging topics and provide additional depth to the KA Core
topics.
“Society, Ethics and Professionalism” KA Core focuses on open source and life cycle issues. These
software engineering issues are not the sole purview of operating systems as they also exist for
specialized platforms and applications level knowledge areas.
“Advanced File Systems”, “Virtualization”, “Real-time and Embedded Systems”, and “Fault tolerance”
KA Core and Non Core include advanced topics. These topics overlap with “Specialized Platform”,
“Architecture”, “Parallel and Distributed Systems” and “Systems Fundamentals” Knowledge Areas.
Core Hours
Knowledge Units                      CS Core    KA Core
Principles of Operating Systems         2
Concurrency                             2          1
Scheduling                                         1
Process Model                                      1
Memory Management                                  2
Device Management                                  1
Virtualization                                     3
Fault Tolerance                                    3
Knowledge Units
b. Operating-system-enforced security can be defeated by infiltrating the boot layer before the
operating system is loaded
c. Process isolation can be subverted by inadequate authorization checking at API boundaries
d. Vulnerabilities in system firmware can provide attack vectors that bypass the operating system
entirely
e. Improper isolation of virtual machine memory, computing, and hardware can expose the host
system to attacks from guest systems
f. The operating system may need to mitigate exploitation of hardware and firmware
vulnerabilities, leading to potential performance reductions (e.g. Spectre and Meltdown
mitigations)
7. Exposure of operating systems functions in shells and systems programming (See also: FPL-
Scripting)
Illustrative Learning Outcomes
CS Core:
1. Understand the objectives and functions of modern operating systems
2. Evaluate the design issues in different usage scenarios (e.g. real time OS, mobile, server, etc)
3. Understand the functions of a contemporary operating system with respect to convenience,
efficiency, and the ability to evolve
4. Understand how evolution and stability are desirable and mutually antagonistic in operating
systems function
a. Timer interrupts for implementing timeslices
b. I/O interrupts for putting blocking threads to sleep without polling
7. Concept of user/system state and protection, transition to kernel mode using system calls (See
also: AR-C: Assembly Level Machine Organization)
8. Mechanisms for invoking system calls, the corresponding mode and context switch, and return
from interrupt (See also: AR-C: Assembly Level Machine Organization)
9. Performance costs of context switches and associated cache flushes when performing process
switches in Spectre-mitigated environments
OS-Concurrency: Concurrency
CS Core:
1. Thread abstraction relative to concurrency
2. Race conditions, critical regions (role of interrupts if needed; a minimal sketch follows this unit)
(See also: PDC-A: Programs and Execution)
3. Deadlocks and starvation
4. Multiprocessor issues (spin-locks, reentrancy)
5. Multiprocess concurrency vs. multithreading
KA Core:
6. Thread creation, states, structures (See also: SF-B: Basic Concepts)
7. Thread APIs
8. Deadlocks and starvation (necessary conditions/mitigations)
9. Implementing thread-safe code (semaphores, mutex locks, condition variables) (See also: AR-G:
Performance and Energy Efficiency, SF-E: Performance Evaluation)
10. Race conditions in shared memory (See also: PDC-A: Programs and Execution)
Non-Core:
11. Managing atomic access to OS objects. Example concepts: big kernel lock vs. many small locks
vs. lockless data structures such as lists
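A minimal sketch of the race-condition and mutual-exclusion topics above (illustrative only; the thread
and increment counts are arbitrary):

```python
# Illustrative sketch: a data race on a shared counter, and its repair with a lock.
import threading

N_THREADS, N_INCREMENTS = 4, 100_000

def run(body) -> int:
    threads = [threading.Thread(target=body) for _ in range(N_THREADS)]
    for t in threads: t.start()
    for t in threads: t.join()
    return counter

counter = 0
def unsafe() -> None:
    global counter
    for _ in range(N_INCREMENTS):
        tmp = counter        # read ...
        counter = tmp + 1    # ... then write: another thread can interleave here

print("unlocked:", run(unsafe))   # often far less than 400000: lost updates

counter = 0
lock = threading.Lock()
def safe() -> None:
    global counter
    for _ in range(N_INCREMENTS):
        with lock:           # critical region: one thread at a time
            counter += 1

print("locked:  ", run(safe))     # always exactly 400000
```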
OS-Protection: Protection and Safety
CS Core:
1. Overview of operating system security mechanisms (See also: SEC-A: Foundational Security)
2. Attacks and antagonism (scheduling, etc) (See also: SEC-A: Foundational Security)
3. Review of major vulnerabilities in real operating systems (See also: SEC-A: Foundational
Security)
4. Operating systems mitigation strategies such as backups (See also: SF-F: System Reliability)
KA Core:
5. Policy/mechanism separation (See also: SEC-F-Security Governance)
6. Security methods and devices (See also: SEC-F-Security Governance)
Example concepts:
a. Rings of protection (history from Multics to virtualized x86)
b. x86_64 rings -1 and -2 (hypervisor and ME/PSP)
7. Protection, access control, and authentication (See also: SEC-F-Security Governance)
OS-Scheduling: Scheduling
KA Core:
1. Preemptive and non-preemptive scheduling
2. Schedulers and policies (a round-robin sketch follows this unit). Example concepts: first come
first served, shortest job first, priority, round robin, and multilevel (See also: SF-C: Resource
Allocation and Scheduling)
3. Concepts of SMP/multiprocessor scheduling and cache coherence (See also: AR-C: Assembly
Level Machine Organization)
4. Timers (e.g. building many timers out of finite hardware timers) (See also: AR-C: Assembly Level
Machine Organization)
5. Fairness and starvation
Non-Core:
6. Subtopics of operating systems such as energy-aware scheduling and real-time scheduling (See
also: AR-G: Performance and Energy Efficiency, SPD-Embedded, SPD-Mobile)
7. Cooperative scheduling, such as Linux futexes and userland scheduling
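A minimal round-robin simulation illustrating the policies above (a sketch, not an OS interface; the
process names and quantum are arbitrary):

```python
# Illustrative sketch: round-robin scheduling with a fixed timeslice (quantum).
from collections import deque

def round_robin(bursts: dict[str, int], quantum: int) -> list[str]:
    """Return the order in which processes hold the CPU."""
    ready = deque(bursts.items())
    timeline = []
    while ready:
        name, remaining = ready.popleft()
        timeline.append(name)
        if remaining > quantum:                  # timeslice expires: requeue the rest
            ready.append((name, remaining - quantum))
    return timeline

print(round_robin({"A": 5, "B": 2, "C": 4}, quantum=2))
# ['A', 'B', 'C', 'A', 'C', 'A']
```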
OS-Process: Process Model
KA Core:
1. Processes and threads relative to virtualization: protected memory, process state, memory
isolation, etc.
2. Memory footprint/segmentation (stack, heap, etc.) (See also: AR-C: Assembly Level Machine
Organization)
3. Creating and loading executables and shared libraries (See also: FPL-H: Language Translation
and Execution or Systems Interaction)
Examples:
a. Dynamic linking, GOT, PLT
b. Structure of modern executable formats like ELF
4. Dispatching and context switching (See also: AR-C: Assembly Level Machine Organization)
5. Interprocess communication (See also: PDC-B: Communication)
Example concepts: Shared memory, message passing, signals, environment variables, etc
OS-Memory: Memory Management
KA Core:
1. Review of physical memory, address translation, and memory management hardware (See also:
AR-D: Memory Hierarchy)
2. Impact of memory hierarchy including cache concept, cache lookup, etc on operating system
mechanisms and policy (See also: AR-D: Memory Hierarchy, SF-D: System Performance)
Example concepts:
a. CPU affinity and per-CPU caching is important for cache-friendliness and performance on
modern processors
3. Logical and physical addressing, address space virtualization (See also: AR-D: Memory
Hierarchy)
4. Concepts of paging, page replacement, thrashing, and allocation of pages and frames (a page-
replacement sketch follows this unit)
5. Allocation/deallocation/storage techniques (algorithms and data structure) performance and
flexibility
Example concepts:
a. Arenas, slab allocators, free lists, size classes, heterogeneously sized pages (hugepages)
6. Memory caching and cache coherence, and the effect of flushing the cache to avoid speculative
execution vulnerabilities (See also: AR-F: Functional Organization, AR-D: Memory Hierarchy,
SF-D: System Performance)
7. Security mechanisms and concepts in memory management including sandboxing, protection, isolation,
and relevant vectors of attack
Non-Core:
8. Virtual Memory: leveraging virtual memory hardware for OS services and efficiency
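A minimal sketch of topic 4 comparing two page-replacement policies on the same reference string
(illustrative; real kernels only approximate LRU):

```python
# Illustrative sketch: page faults under FIFO and LRU replacement.
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    resident, order, faults = set(), deque(), 0
    for page in refs:
        if page not in resident:
            faults += 1
            if len(resident) == frames:
                resident.discard(order.popleft())  # evict the oldest arrival
            resident.add(page)
            order.append(page)
    return faults

def lru_faults(refs, frames):
    resident, faults = OrderedDict(), 0
    for page in refs:
        if page in resident:
            resident.move_to_end(page)             # mark as most recently used
        else:
            faults += 1
            if len(resident) == frames:
                resident.popitem(last=False)       # evict least recently used
            resident[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), lru_faults(refs, 3))   # 9 10 on this workload
```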
OS-Devices: Device Management
KA Core:
1. Buffering strategies (See also: AR-E: Interfacing and Communication)
2. Direct memory access, polled I/O, and memory-mapped I/O. Example concept: DMA
communication protocols (e.g., ring buffers) (See also: AR-E: Interfacing and Communication)
3. Historical and contextual: persistent storage device management (magnetic, SSD, etc.)
Non-Core:
4. Device interface abstractions, HALs
5. Device driver purpose, abstraction, implementation and testing challenges
6. High-level fault tolerance in device communication
OS-Files: File Systems API and Implementation
KA Core:
1. Concept of a file including Data, Metadata, Operations and Access-mode
2. File system mounting
3. File access control
4. File sharing
5. Basic file allocation methods including linked, allocation table, etc
6. File system structures comprising file allocation, including various directory structures and methods
for uniquely identifying files (name, identifier, or metadata storage location)
7. Allocation/deallocation/storage techniques (algorithms and data structures) and their impact on
performance and flexibility (e.g., internal and external fragmentation and compaction)
8. Free space management such as using bit tables vs linking
9. Implementation of directories to segment and track file location
4. Understand differences in finding and addressing bugs under various operating system payment
models
OS-Virtualization: Virtualization
KA Core:
1. Using virtualization and isolation to achieve protection and predictable performance (See also: SF-
D-System Performance)
2. Advanced paging and virtual memory
3. Virtual file systems and virtual devices
4. Containers (See also: SF-D-System Performance)
Example concepts: Emphasizing that containers are not virtual machines, since they do not contain
their own operating systems (where the operating system is narrowly defined as the kernel)
5. Thrashing
a. Popek and Goldberg requirements for recursively virtualizable systems
Non-core:
6. Types of virtualization (including Hardware/Software, OS, Server, Service, Network) (See also:
SF-D-System Performance)
7. Portable virtualization; emulation vs. isolation (See also: SF-D-System Performance)
8. Cost of virtualization (See also: SF-D-System Performance, SF-E: Performance Evaluation)
9. VM and container escapes, dangers from a security perspective (See also: SF-D-System
Performance, SEC-Engineering)
10. Hypervisors: hardware virtual machine extensions
Example concepts:
a. Hypervisor monitor w/o a host operating system
b. Host OS with kernel support for loading guests, e.g. QEMU KVM
OS-Real-time: Real-time/embedded
KA Core:
1. Process and task scheduling
2. Deadlines and real-time issues (See also: SPD-Embedded)
3. “Low-latency/soft real-time” vs. “hard real-time” (See also: SPD-Embedded, FPL-S: Embedded
Computing and Hardware Interface)
Non-Core:
4. Memory/disk management requirements in a real-time environment
5. Failures, risks, and recovery
6. Special concerns in real-time systems (safety)
4. Understand specific real-time operating system features and mechanisms
Professional Dispositions
Math Requirements
Required:
● Discrete math
Course Packaging Suggestions
● OS-Purpose: Role and Purpose of Operating Systems (3 hours)
● OS-Principles: Principles of Operating Systems (3 hours)
● OS-Concurrency: Concurrency (7 hours)
● OS-Scheduling: Scheduling (3 hours)
● OS-Process: Process Model (3 hours)
● OS-Memory: Memory Management (4 hours)
● OS-Protection: Protection and Safety (4 hours)
● OS-Devices: Device Management (2 hours)
● OS-Files: File Systems API and Implementation (2 hours)
● OS-Virtualization: Virtualization (3 hours)
● OS-AdvFiles: Advanced File Systems (2 hours)
● OS-Real-time: Real-time and Embedded Systems (1 hour)
● OS-Faults: Fault Tolerance (1 hour)
● OS-SEP: Social, Ethical and Professional topics (4 hours)
Pre-requisites:
● Assembly Level Machine Organization from Architecture
● Memory Management from Architecture
● Software Reliability from Architecture
● Interfacing and Communication from Architecture
● Functional Organization from Architecture
Skill statement: A student who completes this course should understand the impact and implications
of operating system resource management in terms of performance and security. A student should
understand and implement interprocess communication mechanisms safely. A student should
differentiate between the use and evaluation of open source and/or proprietary operating systems. A
student should understand virtualization as a feature of safe modern operating systems
implementation.
Committee
Members:
● Renzo Davoli, University of Bologna, Italy
● Avi Silberschatz, Yale University, New Haven, CT, USA
● Marcelo Pias, Federal University of Rio Grande (FURG), Brazil
● Mikey Goldweber, Xavier University, Cincinnati, USA
● Qiao Xiang, Xiamen University, China
Parallel and Distributed Computing (PDC)
Preamble
Parallel and distributed programming arranges and controls multiple computations occurring at the
same time across different places. The ubiquity of parallelism and distribution is an inevitable
consequence of increasing numbers of gates in processors, processors in computers, and computers
everywhere, all of which may be used to improve performance compared to sequential programs, while
also coping with the intrinsic interconnectedness of the world and the possibility that some components
or connections fail or misbehave. Parallel and distributed programming removes the restrictions of
sequential programming that require computational steps to occur in a serial order in a single place,
revealing further distinctions, techniques, and analyses applying at each layer of computing systems.
In most conventional usage, “parallel” programming focuses on arranging that multiple activities co-
occur, “distributed” programming focuses on arranging that activities occur in different places, and
“concurrent” programming focuses on interactions of ongoing activities with each other and the
environment. However, all three terms may apply in most contexts. Parallelism generally implies some
form of distribution because multiple activities occurring without sequential ordering constraints happen
in multiple physical places (unless relying on context-switching schedulers or quantum effects). And
conversely, actions in different places need not bear any particular sequential ordering with respect to
each other in the absence of communication constraints.
PDC has evolved from a diverse set of advanced topics into a central body of knowledge and practice,
permeating almost every other aspect of computing. Growth of the field has occurred irregularly across
different subfields of computing, sometimes with different goals, terminology, and practices, masking
the considerable overlap of basic ideas and skills that are the main focus of this KA. Nearly every
problem with a sequential solution also admits parallel and/or distributed solutions; additional problems
and solutions arise only in the context of existing concurrency. And nearly every application domain of
parallel and distributed computing is a well-developed area of study and/or engineering too large to
enumerate.
Overview
The PDC KA is divided into five KUs, each with CS Core and KA Core components that extend but do
not overlap CS Core coverage appearing in other KAs. They cover: the nature of parallel and distributed
Programs and their execution; Communication (via channels, memory, or shared data stores);
Coordination among parallel activities to achieve common outcomes; Evaluation with respect to
specifications; and Algorithms across multiple application domains.
CS Core topics span approaches to parallel and distributed computing, but restrict coverage to those
applying to nearly all of them. Learning Outcomes include developing small programs (in a choice of
several styles) with multiple activities and analyzing basic properties. The topics and hours do not
include coverage of particular languages, tools, frameworks, systems, and platforms needed as a basis
for implementing and evaluating concepts and skills. They also avoid reliance on specifics that may
vary widely (for example, GPU programming vs. cloud container deployment scripts). Prerequisites for
PDC CS Core coverage include:
● SDF-Fundamentals: programs vs. executions, specifications vs. implementations, variables,
arrays, sequential control flow, procedural abstraction and invocation, IO.
● SF-Overview: Layered systems, State machines, Reliability
● AR-Assembly, AR-Memory: Von Neumann architecture, Memory hierarchy
● MSF-Discrete: Logic, discrete structures including directed graphs.
KA Core topics in each unit are of the form “One or more of the following” for a la carte topics
extending associated core topics. Any selection of KA-core topics meeting the KA Core hour
requirement constitutes fulfillment of the KA Core. These permit variation in coverage depending on the
focus of any given course. See below for examples. Depth of coverage of any KA Core subtopic is
expected to vary according to course goals. For example, shared-memory coordination is a central
topic in multicore programming, but much less so in most heterogeneous systems, and conversely for
bulk data transfer. Similarly, fault tolerance is central to the design of distributed information systems,
but much less so in most data-parallel applications.
Core Hours
Knowledge Units    CS Core    KA Core
Programs              2          2
Communication         2          6
Coordination          2          6
Evaluation            1          3
Algorithms            2          9
Total                 9         26
Knowledge Units
PDC-Programs
CS Core:
1. Fundamental concepts
a. Ordering
i. Declarative parallelism: Determining which actions may be performed in parallel, at
the level of instructions, functions, closures, composite actions, sessions, tasks,
services
ii. Defining order: happens-before relations, series/parallel directed acyclic graphs
representing programs
iii. Independence: determining when ordering doesn’t matter, in terms of commutativity,
dependencies, preconditions
iv. Ensuring ordering among otherwise parallel actions when necessary, for example
locking, safe publication; and orderings imposed by communication: sending a
message happens before receiving it
v. Nondeterministic execution of unordered actions
b. Places
i. Devices executing actions include hardware components, remote hosts (See also
AR-IO)
ii. One device may time-slice or otherwise emulate multiple parallel actions by fewer
processors (See also OS-Scheduling)
iii. May include external, uncontrolled devices, hosts, and human users
c. Deployment
i. Arranging that actions be performed (eventually) at places, with options ranging from
hardwiring to configuration scripts, or reliance on automated provisioning and
management by platforms
ii. Establishing communication and resource management (See also SF-Resources)
iii. Naming or identifying actions as parties (for example thread IDs)
d. Consistency
i. Agreement among parties about values and predicates
ii. Races, atomicity, consensus
iii. Tradeoffs of consistency vs progress in decentralized systems
e. Faults
i. Handling failures in parties or communication, including (Byzantine) misbehavior due
to untrusted parties and protocols
ii. Degree of fault tolerance and reliability may be a design choice
2. Programming new activities (a minimal sketch follows this list)
a. The first step of PDC-Coordination techniques, expressed differently across languages,
platforms, contexts
b. Procedural: Enabling multiple actions to start at a given program point; for example, starting
new threads, possibly scoping or otherwise organizing them in possibly-hierarchical groups
c. Reactive: Enabling upon an event by installing an event handler, with less control of when
actions begin or end
d. Dependent: Enabling upon completion of others; for example, sequencing sets of parallel
actions
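A minimal procedural sketch of item 2 (illustrative; the worker function and thread count are arbitrary):
multiple activities are started at one program point and then joined, a simple completion dependence:

```python
# Illustrative sketch: procedurally starting and joining parallel activities.
import threading

results = {}

def work(name: str, n: int) -> None:
    results[name] = sum(range(n))    # each activity computes independently

threads = [threading.Thread(target=work, args=(f"t{i}", 10_000)) for i in range(4)]
for t in threads:
    t.start()                        # enable multiple actions at this program point
for t in threads:
    t.join()                         # completion dependence: wait for all to finish
print(results)
```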
KA Core:
3. Mappings and mechanisms across layered systems. One or more of:
a. CPU data- and instruction-level- parallelism (See also AR-Organization)
b. SIMD and heterogeneous data parallelism (See also AR-Heterogeneity)
c. Multicore scheduled concurrency, tasks, actors (See also OS-Scheduling)
d. Clusters, clouds; elastic provisioning (See also SPD-Common)
e. Networked distributed systems (See also NC-Networked-Applications)
f. Emerging technologies such as quantum computing and molecular computing
PDC-Communication
CS Core:
1. Media
a. Varieties: channels (message passing or IO), shared memory, heterogeneous, data stores
b. Reliance on the availability and nature of underlying hardware, connectivity, and protocols;
language support, emulation (See also AR)
2. Channels
a. Explicit party-to-party communication; naming channels
b. APIs: sockets (See also NC-Introduction), architectural and language-based constructs
c. IO channel APIs
3. Memory
a. Parties directly communicate only with memory at given addresses, with extensions to
heterogeneous memory supporting multiple memory stores with explicit data transfer across
them; for example, GPU local and shared memory, DMA
b. Consistency: Bitwise atomicity limits, coherence, local ordering
c. Memory hierarchies: Multiple layers of sharing domains, scopes and caches; locality:
latency, false-sharing
4. Data Stores
a. Cooperatively maintained structured data implementing maps and related ADTs
b. Varieties: Owned, shared, sharded, replicated, immutable, versioned
5. Programming with communication (a message-passing sketch follows this list)
a. Using channel, socket, and/or remote procedure call APIs
b. Using shared memory constructs in a given language
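A minimal message-passing sketch for item 5 (illustrative; it uses Python's multiprocessing.Queue as the
channel, one of many possible media):

```python
# Illustrative sketch: two processes communicating over a channel rather than
# shared memory; sending a message happens-before receiving it.
from multiprocessing import Process, Queue

def producer(ch: Queue) -> None:
    for i in range(3):
        ch.put(f"msg-{i}")   # send
    ch.put(None)             # end-of-stream marker

def consumer(ch: Queue) -> None:
    while (msg := ch.get()) is not None:   # receive until end-of-stream
        print("received", msg)

if __name__ == "__main__":
    ch: Queue = Queue()
    p, c = Process(target=producer, args=(ch,)), Process(target=consumer, args=(ch,))
    p.start(); c.start()
    p.join(); c.join()
```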
KA Core:
6. Properties and Extensions. One or more of:
a. Media
i. Topologies: Unicast, Multicast, Mailboxes, Switches; Routing via hardware and
software interconnection networks
ii. Concurrency properties: Ordering, consistency, idempotency, overlapping
communication with computation
iii. Performance properties: Latency, bandwidth (throughput), contention (congestion),
responsiveness (liveness), reliability (error and drop rates), protocol-based progress
(acks, timeouts, mediation)
iv. Security properties: integrity, privacy, authentication, authorization (See also SEC)
v. Data formats
vi. Applications of Queuing Theory to model and predict performance
b. Channels
i. Policies: Endpoints, Sessions, Buffering, Saturation response (waiting vs dropping),
Rate control
ii. Program control for sending (usually procedural) vs. receiving (usually reactive or
RPC-based)
iii. Formats, marshaling, validation, encryption, compression
iv. Multiplexing and demultiplexing many relatively slow IO devices or parties;
completion-based and scheduler-based techniques; async-await, select and polling
APIs.
v. Formalization and analysis; for example using CSP
c. Memory
i. Memory models: sequential and release/acquire consistency
ii. Memory management; including reclamation of shared data; reference counts and
alternatives
iii. Bulk data placement and transfer; reducing message traffic and improving locality;
overlapping data transfer and computation; impact of data layout such as array-of-
structs vs struct-of-arrays
iv. Emulating shared memory: distributed shared memory, RDMA
d. Data Stores
i. Consistency: atomicity, linearizability, transactionality, coherence, causal ordering,
conflict resolution, eventual consistency, blockchains
ii. Faults, partitioning, and partial failures; voting; protocols such as Paxos and Raft
iii. Design tradeoffs among consistency, availability, partition (fault) tolerance;
impossibility of meeting all at once
iv. Security and trust: Byzantine failures, proof of work and alternatives
PDC-Coordination
CS Core:
1. Dependencies
a. Initiation or progress of one activity may be dependent on other activities, so as to avoid
race conditions, ensure termination, or meet other requirements
b. Ensuring progress by avoiding dependency cycles, using monotonic conditions, removing
inessential dependencies
2. Control constructs
a. Completion-based: Barriers, joins
b. Data-enabled: Queues, Producer-Consumer designs
c. Condition-based: Polling, retrying, backoffs, helping, suspension, signaling, timeouts
d. Reactive: enabling and triggering continuations
3. Atomicity
a. Atomic instructions, enforced local access orderings
b. Locks and mutual exclusion; lock granularity
c. Deadlock avoidance: ordering, coarsening, randomized retries; encapsulation via lock
managers
d. Common errors: failing to lock or unlock when necessary, holding locks while invoking
unknown operations
e. Avoiding locks: replication, read-only, ownership, and nonblocking constructions
4. Programming with coordination
a. Controlling termination
b. Using locks, barriers, and other synchronizers in a given language; maintaining liveness
without introducing races (a barrier sketch follows this list)
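A minimal barrier sketch for item 4 (illustrative; the phase structure and thread count are arbitrary): no
thread begins phase 2 until all threads complete phase 1:

```python
# Illustrative sketch: completion-based coordination with a barrier.
import threading

N = 4
barrier = threading.Barrier(N)

def worker(i: int) -> None:
    print(f"thread {i}: phase 1")
    barrier.wait()            # all N threads must arrive before any proceeds
    print(f"thread {i}: phase 2")

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
```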
KA Core:
5. Properties and extensions. One or more of:
a. Progress
i. Properties including lock-free, wait-free, fairness, priority scheduling; interactions
with consistency, reliability
ii. Performance: contention, granularity, convoying, scaling
iii. Non-blocking data structures and algorithms
b. Atomicity
i. Ownership and resource control
ii. Lock variants and alternatives: sequence locks, read-write locks; RCU, reentrancy;
tickets; controlling spinning versus blocking
iii. Transaction-based control: Optimistic and conservative
iv. Distributed locking: reliability
c. Interaction with other forms of program control
i. Alternatives to barriers: Clocks; Counters, Virtual clocks; Dataflow and continuations;
Futures and RPC; Consensus-based, Gathering results with reducers and collectors
ii. Speculation, selection, cancellation; observability and security consequences
iii. Resource-based: Semaphores and condition variables
iv. Control flow: Scheduling computations, Series-parallel loops with (possibly elected)
leaders, Pipelines and Streams, nested parallelism.
v. Exceptions and failures. Handlers, detection, timeouts, fault tolerance, voting
Illustrative Learning Outcomes
6. Write a program that speculatively searches for a solution by multiple activities, terminating
others when one is found.
7. Write a program in which a numerical exception (such as divide by zero) in one activity causes
termination of others
8. Write a program for multiple parties to agree upon the current time of day; discuss its limitations
compared to protocols such as NTP
9. Write a service that creates a thread (or other procedural form of activation) to return a
requested web page to each new client
PDC-Evaluation
CS Core:
1. Safety and liveness requirements (See also FPL-PDC:1)
a. Temporal logic constructs to express “always” and “eventually”
2. Identifying, testing for, and repairing violations
a. Common forms of errors: failure to ensure necessary ordering (race errors), atomicity
(including check-then-act errors), or termination (livelock).
3. Performance requirements
a. Metrics for throughput, responsiveness, latency, availability, energy consumption, scalability,
resource usage, communication costs, waiting and rate control, fairness; service level
agreements (See also SF-Performance)
4. Performance impacts of design and implementation choices
a. Granularity, overhead, energy consumption, and scalability limitations (See also: SEP-
Sustainability)
b. Estimating scalability limitations, for example using the Amdahl and Gustafson laws
(computed in the sketch below) (See also: SF-Evaluation)
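A minimal sketch of item 4b (illustrative): the Amdahl and Gustafson estimates, where p is the
parallelizable fraction of the work and n the number of processors:

```python
# Illustrative sketch: two classic scalability estimates.

def amdahl_speedup(p: float, n: int) -> float:
    """Fixed problem size: speedup = 1 / ((1 - p) + p / n)."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson_speedup(p: float, n: int) -> float:
    """Problem size scales with n: speedup = (1 - p) + p * n."""
    return (1.0 - p) + p * n

for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 1), round(gustafson_speedup(0.95, n), 1))
# Amdahl saturates near 20x for p = 0.95; Gustafson keeps growing with n.
```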
KA Core:
5. Methods and tools. One or more of:
a. Formal Specification
i. Extensions of sequential requirements such as linearizability; protocol, session, and
transactional specifications
ii. Use of tools such as UML, TLA, program logics
iii. Security: safety and liveness in the presence of hostile or buggy behaviors by other
parties; required properties of communication mechanisms (for example lack of
cross-layer leakage), input screening, rate limiting
b. Static Analysis
i. Applied to correctness, throughput, latency, resources, energy
ii. dag model analysis of algorithmic efficiency (work, span, critical paths)
c. Empirical Evaluation
i. Testing and debugging; tools such as race detectors, fuzzers, lock dependency
checkers, unit/stress/torture tests, visualizations, continuous integration, continuous
deployment, and test generators.
ii. Measuring and comparing throughput, overhead, waiting, contention,
communication, data movement, locality, resource usage, behavior in the presence
of too many events, clients, threads.
d. Application domain specific analyses and evaluation techniques
Illustrative Learning Outcomes
CS Core:
1. Revise a specification to enable parallelism and distribution without violating other essential
properties or features
2. Explain how concurrent notions of safety and liveness extend their sequential counterparts
3. Specify a set of invariants that must hold at each bulk-parallel step of a computation
4. Write a test program that can reveal a data race error; for example, missing an update when two
activities both try to increment a variable.
5. In a given context, explain the extent to which introducing parallelism in an otherwise sequential
program would be expected to improve throughput and/or reduce latency, and how it may impact
energy efficiency
6. Show how scaling and efficiency change for sample problems without and with the assumption of
problem size changing with the number of processors; further explain whether and how scalability
would change under relaxations of sequential dependencies.
KA Core:
7. Specify and measure behavior when a service is requested by too many clients
8. Identify and repair a performance problem due to sequential bottlenecks
9. Empirically compare throughput of two implementations of a common design (perhaps using an
existing test harness framework).
10. Identify and repair a performance problem due to communication or data latency
11. Identify and repair a performance problem due to resource management overhead
12. Identify and repair a reliability or availability problem
PDC-Algorithms
CS Core:
1. Expressing and implementing algorithms (See also FPL-PDC)
a. Implementing concepts in given languages and frameworks to initiate activities (for example
threads), use shared memory constructs, and channel, socket, and/or remote procedure call
APIs.
b. Basic examples: map/reduce (a minimal sketch follows the table below)
2. Survey of common application domains
(with reference to the following table)
[Table of common application domains; the one legible row reads: Reactive: handlers, threads; IO
channels; services, real-time; latency.]
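A minimal map/reduce sketch for topic 1b (illustrative; the squaring task and pool size are arbitrary):

```python
# Illustrative sketch: data-parallel map followed by a sequential reduce.
from functools import reduce
from multiprocessing import Pool

def square(x: int) -> int:          # the "map" function, applied in parallel
    return x * x

if __name__ == "__main__":
    with Pool(4) as pool:
        mapped = pool.map(square, range(1_000))   # parallel map over the input
    total = reduce(lambda a, b: a + b, mapped, 0) # reduce the partial results
    print(total)                                  # sum of squares of 0..999
```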
KA Core:
3. Algorithmic Domains. One or more of:
a. Linear Algebra: Vector and Matrix operations, numerical precision/stability, applications in
data analytics and machine learning
b. Data processing: sorting, searching and retrieval, concurrent data structures
c. Graphs, search, and combinatorics: Marking, edge-parallelization, bounding, speculation,
network-based analytics
d. Modeling and simulation: differential equations; randomization, N-body problems, genetic
algorithms
e. Computational Logic: SAT, concurrent logic programming
f. Graphics and computational geometry: Transforms, rendering, ray-tracing
g. Resource Management: Allocating, placing, recycling and scheduling processors, memory,
channels, and hosts. Exclusive vs shared resources. Static, dynamic and elastic algorithms;
Real-time constraints; Batching, prioritization, partitioning, Decentralization via work-stealing
and related techniques;
h. Services: Implementing Web APIs, Electronic currency, transaction systems, multiplayer
games.
Illustrative Learning Outcomes
6. Design, implement, analyze, and evaluate a component or application for X operating in a given
context, where X is in one of the listed domains; for example a genetic algorithm for factory floor
design.
7. Critique the design and implementation of an existing component or application, or one developed
by classmates
8. Compare the performance and energy efficiency of multiple implementations of a similar design; for
example multicore versus clustered versus GPU.
Professional Dispositions
Math Requirements
Course Packaging Suggestions

Parallel Computing
PDC-A: Programs and Execution (4 hours)
PDC-B: Communication (6 hours)
PDC-C: Coordination (6 hours)
PDC-D: Evaluation (4 hours)
PDC-E: Algorithms and Application Domains (6 hours)
Distributed Computing
PDC-A: Programs and Execution (4 hours)
PDC-B: Communication (3 hours)
PDC-C: Coordination (3 hours)
PDC-D: Evaluation (3 hours)
PDC-E: Algorithms and Application Domains (6 hours)
High-performance Computing
HPC without prerequisite:
PDC-A: Programs and Execution (4 hours)
PDC-B: Communication (6 hours)
PDC-C: Coordination (6 hours)
PDC-D: Evaluation (5 hours)
PDC-E: Algorithms and Application Domains (11 hours)
HPC with PDC prerequisite:
PDC-A: Programs and Execution (1 hour)
PDC-B: Communication (2 hours)
PDC-C: Coordination (2 hours)
PDC-D: Evaluation (2 hours)
PDC-E: Algorithms and Application Domains (6 hours)
More extensive examples and guidance for courses focusing on HPC are provided by the NSF/IEEE-
TCPP Curriculum Initiative on Parallel and Distributed Computing (http://tcpp.cs.gsu.edu/curriculum/).
Committee
Chair: Doug Lea, State University of New York at Oswego, Oswego, USA
Members:
● Sherif Aly, American University of Cairo, Cairo, Egypt
● Michael Oudshoorn, High Point University, High Point, NC, USA
● Qiao Xiang, Xiamen University, China
● Dan Grossman, University of Washington, Seattle, USA
● Sebastian Burckhardt, Microsoft Research
● Vivek Sarkar, Georgia Tech, Atlanta, USA
● Maurice Herlihy, Brown University, Providence, USA
● Sheikh Ghafoor, Tennessee Tech, USA
● Chip Weems, University of Massachusetts, Amherst, USA
Contributors:
● Paul McKenney, Meta, Beaverton, OR, USA
● Peter Buhr, University of Waterloo, Waterloo, Ontario, Canada
Software Development Fundamentals (SDF)
Preamble
Fluency in the process of software development is fundamental to the study of computer science. In
order to use computers to solve problems most effectively, students must be competent at reading and
writing programs. Beyond programming skills, however, they must be able to select and use
appropriate data structures and algorithms, and use modern development and testing tools.
The SDF knowledge area brings together fundamental concepts and skills related to software
development, focusing on concepts and skills that should be taught early in a computer science
program, typically in the first year. This includes fundamental programming concepts and their
effective use in writing programs, use of fundamental data structures which may be provided by the
programming language, basics of programming practices for writing good quality programs, reading
and understanding programs, and some understanding of the impact of algorithms on the performance
of the programs. The 43 hours of material in this knowledge area may be augmented with core material
from other knowledge areas as a student progresses to mid- and upper-level courses.
This knowledge area assumes a contemporary programming language with good built-in support for
common data types including associative data types like dictionaries/maps as the vehicle for
introducing students to programming (e.g. Python, Java). However, this is not to discourage the use of
older or lower-level languages for SDF — the knowledge units below can be suitably adapted for the
actual language used.
The emergence of generative AI / LLMs, which can generate programs for many programming tasks,
will undoubtedly affect the programming profession and consequently the teaching of many CS topics.
However, we feel that to be able to effectively use Generative AI in programming tasks, a programmer
must have a good understanding of programs, and hence must still learn the foundations of
programming and develop basic programming skills - which is the aim of SDF. Consequently, we feel
that the desired outcomes for SDF should remain the same, though different instructors may now give
more emphasis to program understanding, documenting, specifications, analysis, and testing. (This is
similar to teaching students multiplication and tables, addition, etc. even though calculators can do all
this).
Overview
This Knowledge Area has five Knowledge Units. These are:
1. SDF-Fundamentals: Fundamental Programming Concepts and Practices: This knowledge unit
aims to develop understanding of basic concepts and the ability to fluently use basic language
constructs as well as modularity constructs. It also aims to familiarize students with the concept
of common libraries and frameworks, including those that facilitate API-based access to
resources.
2. SDF-DataStructures: Fundamental Data Structures: This knowledge unit aims to develop core
concepts relating to Data Structures and associated operations. Students should understand the
important data structures available in the programming language or as libraries, and how to use
them effectively, including choosing appropriate data structures while designing solutions for a
given problem.
3. SDF-Algorithms: Algorithms: This knowledge unit aims to develop the foundations of
algorithms and their analysis. The KU should also empower students in selecting suitable
algorithms for building modest-complexity applications.
4. SDF-Practices: Software Development Practices: This knowledge unit develops the core
concepts relating to modern software development practices. Its aim is to develop student
understanding and basic competencies in program testing, enhancing readability of programs,
and using modern methods and tools including some general-purpose IDE.
5. SDF-SEP: Society, Ethics and Professionalism: This knowledge unit aims to develop an initial
understanding of some of the ethical issues related to programming, professional values
programmers need to have, and the responsibility to society that programmers have. This
knowledge unit is a part of the SEP Knowledge Area.
Core Hours
Knowledge Units                                                      CS Core
SDF-Fundamentals: Fundamental Programming Concepts and Practices        20
SDF-Algorithms: Algorithms                                               6
Total                                                                   43
Knowledge Units
SDF-DataStructures: Fundamental Data Structures
CS Core: (See also: AL-Fundamentals)
1. Standard abstract data types such as lists, stacks, queues, sets, and maps/dictionaries, and
operations on them.
2. Selecting and using appropriate data structures.
3. Performance implications of choice of data structure(s) (illustrated in the sketch after this list).
4. Strings and string processing.
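A minimal sketch of topic 3 (illustrative; the sizes are arbitrary): membership tests on a list are O(n)
while a set is roughly O(1), so the choice of structure changes observed performance:

```python
# Illustrative sketch: the same membership query on two data structures.
import timeit

items = list(range(100_000))
as_list, as_set = items, set(items)

list_time = timeit.timeit(lambda: 99_999 in as_list, number=100)  # linear scan
set_time = timeit.timeit(lambda: 99_999 in as_set, number=100)    # hash lookup
print(f"list lookup: {list_time:.4f}s   set lookup: {set_time:.6f}s")
```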
Illustrative Learning Outcomes
1. Write programs that use each of the key abstract data types / data structures provided in the
language (e.g., arrays, tuples/records/structs, lists, stacks, queues, and associative data types
like sets, dictionaries/maps).
2. Select the appropriate data structure for a given problem.
3. Explain how the performance of a program may change when using different data structures or
operations.
4. Write programs that work with text by using string processing capabilities provided by the
language.
SDF-Algorithms: Algorithms
CS Core: (See also: AL-Fundamentals)
1. Concept of algorithm and notion of algorithm efficiency.
2. Some common algorithms (e.g., sorting, searching, tree traversal, graph traversal).
3. Impact of algorithms on time/space efficiency of programs (contrast the two searches sketched
after this list).
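A minimal sketch for topic 3 (illustrative) contrasting linear search, O(n), with binary search on sorted
data, O(log n); both return the index of the target or -1:

```python
# Illustrative sketch: two algorithms for the same searching problem.

def linear_search(xs: list[int], target: int) -> int:
    for i, x in enumerate(xs):          # examine every element in turn
        if x == target:
            return i
    return -1

def binary_search(xs: list[int], target: int) -> int:
    lo, hi = 0, len(xs) - 1
    while lo <= hi:                     # halve the search range each step
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = list(range(0, 1000, 2))          # sorted even numbers
assert linear_search(data, 498) == binary_search(data, 498) == 249
```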
Illustrative Learning Outcomes
1. Explain the role of algorithms for writing programs.
2. Demonstrate how a problem may be solved by different algorithms, each with different
properties.
3. Explain some common algorithms (e.g., sorting, searching, tree traversal, graph traversal).
4. Explain the impact on space/time performance of some algorithms.
SDF-Practices: Software Development Practices
Illustrative Learning Outcomes
1. Develop tests for modules, and apply a variety of strategies to design test cases.
2. Explain some limitations of testing programs.
3. Build, execute and debug programs using a modern IDE and associated tools such as visual
debuggers.
4. Apply basic programming style guidelines to aid readability of programs such as comments,
indentation, proper naming of variables, etc.
5. Write specifications of a module as module comment describing its functionality.
Professional Dispositions
● Self-Directed. Seeking out solutions to issues on their own (e.g., using technical forums, FAQs,
discussions).
● Experimental. Practical experimentation characterized by experimenting with language features
to understand them, quickly prototyping approaches, and using the debugger to understand why
a bug is occurring.
● Technical curiosity. Characterized by, for example, interest in understanding how programs are
executed, how programs and data are stored in memory.
● Technical adaptability. Characterized by willingness to learn and use different tools and
technologies that facilitate software development.
● Perseverance. To continue efforts until, for example, a bug is identified, a program is robust and
handles all situations, etc.
● Systematic. Characterized by attention to detail and use of orderly processes in practice.
Math Requirements
As SDF focuses on the first year and is foundational, it assumes only basic math knowledge that
students acquire in school.
Shared Topics:
● Topics 1, 2, 3: SDF-Algorithms: Algorithms :: AL-A
● Topics 1, 2, 3: SDF-Practices: Software Development Practices :: SE-Construction
Course Packaging Suggestions

The SDF KA will generally be covered in introductory courses, often called CS1 and CS2. How much of
the SDF KA can be covered in CS1, and how much is left for CS2, is likely to depend on the choice of
programming language for CS1. For languages like Python or Java, CS1 can cover all of the
Fundamental Programming Concepts and Practices and Software Development Practices knowledge
units, and some of the Fundamental Data Structures knowledge unit. It is desirable that these be further
strengthened in CS2. The topics of the Algorithms knowledge unit and some topics of the Fundamental
Data Structures knowledge unit can be covered in CS2. In case CS1 uses a language with fewer built-in
data structures, much of the Fundamental Data Structures knowledge unit and some aspects of the
Fundamentals knowledge unit may also need to be covered in CS2. With the former approach, the
introductory course in programming can include the following:
● Design, code, test, and debug a modest-sized object-oriented program using classes and
objects.
● Design, code, test, and debug a modest-sized program that uses language provided libraries
and frameworks (including accessing data from the web through APIs).
● Read and explain given code including tracing the flow of control during execution.
● Write specifications of a program or a module in natural language explaining what it does.
● Build, execute and debug programs using a modern IDE and associated tools such as visual
debuggers.
● Explain the key concepts relating to programming like parameter passing, recursion, runtime
exceptions and exception handling.
Committee
Software Engineering (SE)
Preamble
As far back as the early 1970s, Brian Randell allegedly said, “Software engineering is the multi-person
construction of multi-version programs.” This is an essential insight: while programming is the skill that
governs our ability to write a program, software engineering is distinct in two dimensions: time and
people.
First, a software engineering project is a team endeavor. Being a solitary programming expert is
insufficient. Skilled software engineers will additionally demonstrate expertise in communication and
collaboration. Programming may be an individual activity, but software engineering is a collaborative
one, deeply tied to issues of professionalism, teamwork, and communication.
Second, a software engineering project is usually “multi-version.” It has an expected lifespan; it needs
to function properly for months, years, or decades. Features may be added or removed to meet product
requirements. The technological context will change, as our computing platforms evolve, programming
languages change, dependencies upgrade, etc. This exposure to matters of time and change is novel
when compared to a programming project: it isn’t enough to build a thing that works, instead it must
work and stay working. Many of the most challenging topics in tech share “time will lead to change” as
a root cause: backward compatibility, version skew, dependency management, schema changes,
protocol evolution.
Software engineering presents a particularly difficult challenge for learning in an academic setting.
Given that the major differences between programming and software engineering are time and
teamwork, it is hard to generate lessons that require successful teamwork and that faithfully present the
risks of time. Additionally, some topics in software engineering will be more authentic and more relevant
if and when our learners experience collaborative and long-term software engineering projects in vivo
rather than in the classroom. Regardless of whether that happens as an internship, involvement in an
open source project, or full-time engineering role, a month of full-time hands-on experience has more
available hours than the average software engineering course.
Thus, a software engineering curriculum should focus primarily on ideas that are needed by a majority
of new-grad hires, and that either are novel for those who are trained primarily as programmers, or that
are abstract concepts that may not get explicitly stated/shared on the job. Such topics include, but are
not limited to:
● Testing
● Teamwork, collaboration
● Communication
● Design
● Maintenance and Evolution
● Software engineering tools
Some such material is reasonably suited to a standard lecture or lecture+lab course. Discussing
theoretical underpinnings of version control systems, or branching strategies in such systems, can be
an effective way to familiarize students with those ideas. Similarly, a theoretical discussion can highlight
the difference between static and dynamic analysis tools, or may motivate discussion of diamond
dependency problems in dependency networks.
On the other hand, many of the fundamental topics of software engineering are best experienced in a
hands-on fashion. Historically, project-oriented courses have been a common vehicle for such learning.
We believe that such experience is valuable but also bears some interesting risks: students may form
erroneous notions about the difficulty / complexity of collaboration if their only exposure is a single
project with teams formed of other novice software engineers. It falls to instructors to decide on the right
balance between theoretical material and hands-on projects - neither is a perfect vehicle for this
challenging material. We strongly encourage instructors of project courses to aim for iteration and fast
feedback - a few simple tasks repeated (e.g., in an Agile-structured project) is better than singular high-
friction introductions to many types of tasks. Programs with real-world industry partners and clients are
also particularly encouraged. If long-running project courses are not an option, anything that can
expose learners to the collaborative and long-term aspects of software engineering is valuable: adding
features to an existing codebase, collaborating on distinct parts of a larger whole, pairing up to write an
encoder and decoder, etc.
All evidence suggests that the role of software in our society will continue to grow for the foreseeable
future, and yet the era of “two programmers in a garage” seems to have drawn to a close. Most
important software these days is clearly a team effort, building on existing code and leveraging existing
functionality. The study of software engineering skills is a deeply important counterpoint to the everyday
experience of computing students - we must impress on them the reality that few software projects are
managed by writing from scratch as a solo endeavor. Communication, teamwork, planning, testing, and
tooling are far more important as our students move on from the classroom and make their mark on the
wider world.
Overview
SE-Teamwork: Because of the nature of learning programming, most students in introductory SE have
little or no exposure to the collaborative nature of SE. Practice (for instance, in project work) may help,
but lecture and discussion time spent on the value of clear, effective, and efficient communication and
collaboration is essential for software engineering.
SE-Tools: Industry reliance on SE tools has exploded in the past generation, with version control
becoming ubiquitous, testing frameworks growing in popularity, increased reliance on static and
dynamic analysis in practice, and near-ubiquitous use of continuous integration systems. Increasingly
powerful IDEs provide code searching and indexing capabilities, as well as small scale refactoring tools
and integration with other SE tools. An understanding of the nature of these tools is broadly valuable -
especially version control systems.
SE-Requirements: Knowing how to build something is of little help if we do not know what to build.
Product Requirements (aka Requirements Engineering, Product Design, Product Requirements
elicitation, PRDs, etc.) introduces students to the processes surrounding the specification of the broad
requirements governing development of a new product or feature.
SE-Design: While Product Requirements focuses on the user-facing functionality of a software system,
Software Design focuses on the engineer-facing design of internal software components. This
encompasses large design concerns such as software architecture, as well as small-scale design
choices like API design.
SE-Construction: Software Construction focuses on practices that influence the direct production of
software: use of tests, test driven development, coding style. More advanced topics extend into secure
coding, dependency injection, work prioritization, etc.
SE-Validation: Software Verification and Validation focuses on how to improve the value of testing -
understanding the role of testing, failure modes, and the differences between good tests and poor ones.
SE-Refactoring: Refactoring and Code Evolution focuses on refactoring and maintenance strategies,
incorporating code health, use of tools, and backwards compatibility considerations.
SE-Reliability: Software Reliability aims to improve understanding of and attention to error cases,
failure modes, redundancy, and reasoning about fault tolerance.
SE-FormalMethods: Formal Methods provides mathematically rigorous mechanisms to apply to
software, from specification to verification. (Prerequisites: Substantial dependence on core material
from the Discrete Structures area, particularly knowledge units DS/Basic Logic and DS/Proof
Techniques.)
Core Hours
Knowledge Units CS Core KA Core
Teamwork 2 2
Product Requirements 2
Software Design 1 4
Software Construction 1 3
Software Reliability 2
Formal Methods
Total 6 21
Knowledge Units
KA Core:
7. Interfacing with stakeholders, as a team
a. Management & other non-technical teams
b. Customers
c. Users
8. Risks associated with physical, distributed, hybrid and virtual teams
a. Including communication, perception, structure, points of failure, mitigation and recovery,
etc.
Illustrative Learning Outcomes:
CS Core:
1. Follow effective team communication practices.
2. Articulate the sources of, hazards of, and potential benefits of team conflict - especially focusing on
the value of disagreeing about ideas or proposals without insulting people.
3. Facilitate a conflict resolution and problem solving strategy in a team setting.
4. Collaborate effectively in cooperative development/programming.
5. Propose and delegate necessary roles and responsibilities in a software development team.
6. Compose and follow an agenda for a team meeting.
7. Facilitate, through involvement in a team project, the central elements of team building, establishing
a healthy team culture, and team management, including creating and executing a team work plan.
8. Promote the importance and benefits of diversity and inclusivity in a software development team.
KA Core:
9. Explain the importance of, and strategies for, interfacing as a team with stakeholders outside the
team on both technical and non-technical levels.
10. Enumerate the risks associated with physical, distributed, hybrid and virtual teams and possible
points of failure and how to mitigate against and recover/learn from failures.
KA Core:
4. Describe how available static and dynamic test tools can be integrated into the software
development environment.
5. Understand the use of CI systems as a ground truth for the state of the team’s shared code (build
and test success).
6. Describe the issues that are important in selecting a set of tools for the development of a particular
software system, including tools for requirements tracking, design modeling, implementation, build
automation, and testing.
7. Demonstrate the capability to use software tools in support of the development of a software
product of medium size.
Non-core:
7. Prototyping
a. A tool for both eliciting and validating/confirming requirements
8. Product evolution
a. When requirements change, how to understand what effect that has and what changes need to
be made
9. Effort estimation
a. Learning techniques for better estimating the effort required to complete a task
b. Practicing estimation and comparing to how long tasks actually take
c. Effort estimation is quite difficult, so students’ estimates are likely to be far off in many cases, but
seeing the process play out with their own work is valuable
Non-core:
7. Create a prototype of a software system to validate a set of requirements. (Building a mock-up,
MVP, etc.)
8. Estimate the time to complete a set of tasks, then compare estimates to the actual time taken.
9. Determine an implementation sequence for a set of tasks, adhering to dependencies between
them, with a goal to retire risk as early as possible.
10. Write a requirement specification for a simple software system.
KA Core:
5. API design principles (illustrated in the sketch after this list)
a. Consistency
i. Consistent APIs are easier to learn and less error-prone
ii. Consistency is both internal (between different portions of the API) and external (following
common API patterns)
b. Composability
c. Documenting contracts
i. API operations should describe their effect on the system, but not generally their
implementation
ii. Preconditions, postconditions, and invariants
d. Expandability
e. Error reporting
i. Errors should be clear, predictable, and actionable
ii. Input that does not match the contract should produce an error
iii. Errors that can be reliably managed without reporting should be managed
6. Identifying and codifying data invariants and time invariants
7. Structural and behavioral models of software designs
8. Data design (See also: IM/Data Modeling)
a. Data structures
b. Storage systems
9. Requirement traceability
a. Understanding which requirements are satisfied by a design
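To make the contract and error-reporting principles in topic 5 concrete, here is a minimal Python sketch (the Inventory class and all names in it are hypothetical, invented for illustration) of an API whose docstrings document preconditions, postconditions, and an invariant rather than implementation details, and whose errors are clear, predictable, and actionable:

class OutOfStockError(Exception):
    """Raised when a reservation asks for more stock than is available."""

class Inventory:
    """Tracks quantities of items on hand.

    Invariant: available(item) is never negative.
    """

    def __init__(self):
        self._stock = {}  # item name -> quantity on hand

    def add_stock(self, item: str, quantity: int) -> None:
        """Increase the stock of item.

        Precondition: quantity > 0.
        Postcondition: available(item) increases by exactly quantity.
        """
        if quantity <= 0:  # input outside the contract produces an error
            raise ValueError(f"quantity must be positive, got {quantity}")
        self._stock[item] = self._stock.get(item, 0) + quantity

    def reserve(self, item: str, quantity: int) -> None:
        """Reserve quantity units of item.

        Precondition: 0 < quantity <= available(item).
        Postcondition: available(item) decreases by exactly quantity.
        """
        if quantity <= 0:
            raise ValueError(f"quantity must be positive, got {quantity}")
        if self._stock.get(item, 0) < quantity:
            raise OutOfStockError(
                f"only {self._stock.get(item, 0)} of {item!r} available")
        self._stock[item] -= quantity

    def available(self, item: str) -> int:
        """Return the quantity of item currently available (always >= 0)."""
        return self._stock.get(item, 0)

Note that the docstrings describe each operation’s effect on the system, not how it is implemented, which leaves the implementation free to change without breaking callers.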
Non-Core:
10. Design modeling, for instance with class diagrams, entity relationship diagrams, or sequence
diagrams
11. Measurement and analysis of design quality
12. Principles of secure design and coding (See also: SEC/Security Analysis and Engineering)
a. Principle of least privilege
b. Principle of fail-safe defaults
c. Principle of psychological acceptability
13. Evaluating design tradeoffs (e.g., efficiency vs. reliability, security vs. usability)
b. Implementation documentation should focus on tricky and non-obvious pieces of code, whether
because the code is using advanced language features or the behavior of the code is complex.
(Do not add comments that re-state common/obvious operations and simple language features.)
i. Clarify dataflow, computation, etc., focusing on what the code is doing
ii. Identify subtle/tricky pieces of code and refactor to be self-explanatory if possible, or provide
appropriate comments to clarify.
KA Core:
3. Coding style (See also: SDF/Software Development Practices)
a. Style guides
b. Commenting
c. Naming
4. “Best Practices” for coding: techniques, idioms/patterns, mechanisms for building quality programs
(See also: SEC/Defensive Programming, SDF/Software Development Practices)
a. Defensive coding practices
b. Secure coding practices and principles
c. Using exception handling mechanisms to make programs more robust and fault-tolerant (see the sketch after this list)
5. Debugging (See also: SDF/Software Development Practices)
6. Logging
7. Use of libraries and frameworks developed by others (See also: SDF/Software Development
Practices)
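As one illustration of topic 4c, the following minimal Python sketch (the record format and function names are hypothetical) combines defensive input checks with exception handling so that a single malformed record is skipped and counted instead of crashing the whole run:

def parse_temperature(line: str) -> float:
    """Parse a 'sensor,celsius' record; raises ValueError on bad input."""
    sensor, _, value = line.partition(",")
    if not sensor or not value:
        raise ValueError(f"malformed record: {line!r}")
    return float(value)  # float() also raises ValueError on a bad number

def average_temperature(lines):
    """Average the valid records, skipping (and counting) the bad ones."""
    readings, skipped = [], 0
    for line in lines:
        try:
            readings.append(parse_temperature(line))
        except ValueError:
            skipped += 1  # in production code, also log the bad record
    if not readings:
        raise ValueError("no valid readings")
    return sum(readings) / len(readings), skipped

print(average_temperature(["a,21.5", "b,19.0", "garbage"]))  # (20.25, 1)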
Non-Core:
8. Larger-scale testing
a. Test doubles (stubs, mocks, fakes)
b. Dependency injection (see the sketch after this list)
9. Work sequencing, including dependency identification, milestones, and risk retirement
a. Dependency identification: Identifying the dependencies between different tasks
b. Milestones: A collection of tasks that serve as a marker of progress when completed. Ideally,
the milestone encompasses a useful unit of functionality.
c. Risk retirement: Identifying what elements of a project are risky and prioritizing completing tasks
that address those risks
10. Potential security problems in programs (See also: SEC/Defensive Programming)
a. Buffer and other types of overflows
b. Race conditions
c. Improper initialization, including choice of privileges
d. Input validation
11. Documentation (autogenerated)
12. Development context: “green field” vs. existing code base
a. Change impact analysis
b. Change actualization
13. Release management
14. DevOps practices
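The following minimal Python sketch (RateLimiter and FakeClock are hypothetical names) illustrates dependency injection and a test double from topics 8-9: because the time source is injected through the constructor, a test can substitute a fake clock and advance time deterministically:

import time

class RateLimiter:
    """Allows at most max_per_second calls in any one-second window."""

    def __init__(self, max_per_second: int, clock=time.monotonic):
        self._max = max_per_second
        self._clock = clock              # injected dependency
        self._window_start = clock()
        self._count = 0

    def allow(self) -> bool:
        now = self._clock()
        if now - self._window_start >= 1.0:   # start a new window
            self._window_start, self._count = now, 0
        if self._count < self._max:
            self._count += 1
            return True
        return False

class FakeClock:
    """Test double: a clock the test advances explicitly."""
    def __init__(self):
        self.t = 0.0
    def __call__(self):
        return self.t

clock = FakeClock()
limiter = RateLimiter(2, clock=clock)
assert limiter.allow() and limiter.allow() and not limiter.allow()
clock.t += 1.0                 # advance fake time: the window resets
assert limiter.allow()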
KA Core:
3. Describe techniques, coding idioms and mechanisms for implementing designs to achieve desired
properties such as reliability, efficiency, and robustness.
4. Write robust code using exception handling mechanisms.
5. Describe secure coding and defensive coding practices.
6. Select and use a defined coding standard in a small software project.
7. Compare and contrast integration strategies including top-down, bottom-up, and sandwich
integration.
8. Describe the process of analyzing and implementing changes to code base developed for a specific
project.
9. Describe the process of analyzing and implementing changes to a large existing code base.
Non-Core:
10. Rewrite a simple program to remove common vulnerabilities, such as buffer overflows, integer
overflows and race conditions.
11. Write a software component that performs some non-trivial task and is resilient to input and run-
time errors.
KA Core:
6. Test planning and generation
a. Test case generation, from formal models, specifications, etc.
b. Test coverage
i. Test matrices
ii. Code coverage (how much of the code is tested)
iii. Environment coverage (how many hardware architectures, OSes, browsers, etc. are tested)
c. Test data and inputs
7. Test development
a. Test-driven development (see the sketch after this list)
b. Object oriented testing, mocking, and dependency injection
c. Black-box and white-box testing techniques
d. Test tooling, including code coverage, static analysis, and fuzzing
8. Verification and validation in the development cycle
a. Code reviews
b. Test automation, including automation of tooling
c. Pre-commit and post-commit testing
d. Trade-offs between test coverage and throughput/latency of testing
e. Defect tracking and prioritization
i. Reproducibility of reported defects
9. Domain specific verification and validation challenges
a. Performance testing and benchmarking
b. Asynchrony, parallelism, and concurrency
c. Safety-critical
d. Numeric
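The following minimal Python sketch (slugify is a hypothetical example function, not drawn from this report) shows the style of small, behavior-named unit tests that topic 7 points toward; under test-driven development each test would be written, and seen to fail, before the code that makes it pass:

import re
import unittest

def slugify(title: str) -> str:
    """Lowercase a title and replace runs of non-alphanumerics with '-'."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

class TestSlugify(unittest.TestCase):
    def test_replaces_spaces_with_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_leading_and_trailing_punctuation(self):
        self.assertEqual(slugify("  Hello, World!  "), "hello-world")

    def test_empty_input_gives_empty_slug(self):
        self.assertEqual(slugify(""), "")

if __name__ == "__main__":
    unittest.main()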
Non-Core:
10. Verification and validation tooling and automation
a. Static analysis
b. Code coverage
c. Fuzzing
d. Dynamic analysis and fault containment (sanitizers, etc.)
e. Fault logging and fault tracking
11. Test planning and generation
a. Fault estimation and testing termination including defect seeding
b. Use of random and pseudo random numbers in testing
12. Performance testing and benchmarking
a. Throughput and latency
b. Degradation under load (stress testing, FIFO vs. LIFO handling of requests)
c. Speedup and scaling (see the sketch after this list)
i. Amdahl’s law
ii. Gustafson’s law
iii. Strong and weak scaling
d. Identifying and measuring figures of merit
e. Common performance bottlenecks
i. Compute-bound
ii. Memory-bandwidth bound
iii. Latency-bound
f. Statistical methods and best practices for benchmarking
i. Estimation of uncertainty
ii. Confidence intervals
g. Analysis and presentation (graphs, etc.)
h. Timing techniques
13. Testing asynchronous, parallel, and concurrent systems
14. Verification and validation of non-code artifacts (documentation, training materials)
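For the speedup and scaling topics in item 12c, the following minimal Python sketch evaluates the two classic laws under their usual formulations, where p is the parallelizable fraction of the work and n the number of processors (Amdahl: S = 1 / ((1 - p) + p/n); Gustafson: S = (1 - p) + p*n):

def amdahl_speedup(p: float, n: int) -> float:
    """Fixed-size speedup on n processors when fraction p parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson_speedup(p: float, n: int) -> float:
    """Scaled speedup when the parallel portion grows with n."""
    return (1.0 - p) + p * n

# With p = 0.95, Amdahl's law caps the speedup below 1 / (1 - p) = 20 no
# matter how many processors are added; Gustafson's scaled speedup grows.
for n in (8, 64, 1024):
    print(n, amdahl_speedup(0.95, n), gustafson_speedup(0.95, n))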
KA Core:
5. Describe techniques for creating a test plan and generating test cases.
6. Create a test plan for a medium-size code segment which includes a test matrix and generation of
test data and inputs.
7. Implement a test plan for a medium-size code segment.
8. Identify the fundamental principles of test-driven development methods and explain the role of
automated testing in these methods.
9. Discuss issues involving the testing of object-oriented software.
10. Describe mocking and dependency injection and their application.
11. Undertake, as part of a team activity, a code review of a medium-size code segment.
12. Describe the role that tools can play in the validation of software.
13. Automate testing in a small software project.
14. Explain the roles, pros, and cons of pre-commit and post-commit testing.
15. Discuss the tradeoffs between test coverage and test throughput/latency and how this can impact
verification.
16. Use a defect tracking tool to manage software defects in a small software project.
17. Discuss the limitations of testing in certain domains.
Non-Core:
18. Describe and compare different tools for verification and validation.
19. Automate the use of different tools in a small software project.
20. Explain how and when random numbers should be used in testing.
21. Describe approaches for fault estimation.
22. Estimate the number of faults in a small software application based on fault density and fault
seeding (see the worked sketch after this list).
23. Describe throughput and latency and provide examples of each.
24. Explain speedup and the different forms of scaling and how they are computed.
25. Describe common performance bottlenecks.
26. Describe statistical methods and best practices for benchmarking software.
27. Explain techniques for and challenges with measuring time when constructing a benchmark.
28. Identify the figures of merit, construct and run a benchmark, and statistically analyze and visualize
the results for a small software project.
29. Describe techniques and issues with testing asynchronous, concurrent, and parallel software.
30. Create a test plan for a medium-size code segment which contains asynchronous, concurrent,
and/or parallel code, including a test matrix and generation of test data and inputs.
31. Describe techniques for the verification and validation of non-code artifacts.
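As a worked illustration of fault estimation by defect seeding (outcomes 21-22 above), the following minimal Python sketch applies the standard capture-recapture (Lincoln index) estimate; all numbers are invented for illustration:

def estimate_native_faults(seeded: int, seeded_found: int,
                           native_found: int) -> float:
    """Estimate the total native faults from the seeded-fault recovery rate.

    If testing recovered seeded_found of the seeded planted faults while
    also finding native_found real faults, the estimated real total is
    native_found * seeded / seeded_found.
    """
    if seeded_found == 0:
        raise ValueError("no seeded faults were found; cannot estimate")
    return native_found * seeded / seeded_found

# 10 faults seeded, 8 of them recovered, and 24 real faults found: the
# estimate is 24 * 10 / 8 = 30 real faults, so about 6 remain undetected.
print(estimate_native_faults(seeded=10, seeded_found=8, native_found=24))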
d. Value of refactoring as a remedy for technical debt
4. Versioning
a. Semantic Versioning (SemVer) (see the sketch below)
b. Trunk-based development
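The following minimal Python sketch illustrates the Semantic Versioning convention named in item 4a: a candidate release is a drop-in upgrade only if it keeps the same major version. It deliberately ignores pre-release and build-metadata tags, and the function names are hypothetical:

def parse_semver(version: str) -> tuple:
    """Split 'MAJOR.MINOR.PATCH' into a comparable tuple of ints."""
    major, minor, patch = (int(part) for part in version.split("."))
    return (major, minor, patch)

def is_compatible_upgrade(current: str, candidate: str) -> bool:
    """True if candidate should be a drop-in upgrade for current: the
    same major version (no intentionally breaking changes) and not a
    downgrade."""
    cur, cand = parse_semver(current), parse_semver(candidate)
    return cand[0] == cur[0] and cand >= cur

assert is_compatible_upgrade("2.3.1", "2.4.0")      # minor bump: compatible
assert not is_compatible_upgrade("2.3.1", "3.0.0")  # major bump: breaking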
Non-Core:
5. “Large Scale” Refactoring - techniques when a refactoring change is too large to commit safely
(large projects), or when it is impossible to synchronize change between provider + all consumers
(multiple repositories, consumers with private code).
a. Express both old and new APIs so that they can co-exist
b. Minimize the size of behavior changes
c. Why these techniques are required (e.g., “API consumers I can see” vs. “consumers I can’t
see”)
Non-Core:
6. Plan a complex multi-step refactoring to change default behavior of an API safely.
Non-Core:
7. Software reliability models
8. Software fault tolerance techniques and models
a. Contextual differences in fault tolerance (e.g., crashing a flight critical system is strongly
avoided, crashing a data processing system before corrupt data is written to storage is highly
valuable)
9. Software reliability engineering practices - including reviews, testing, practical model checking
10. Identification of dependent and independent failure domains, and their impact on system reliability
11. Measurement-based analysis of software reliability - telemetry, monitoring and alerting,
dashboards, release qualification metrics, etc.
Non-Core:
4. Demonstrate the ability to apply multiple methods to develop reliability estimates for a software
system.
5. Identify methods that will lead to the realization of a software architecture that achieves a specified
level of reliability.
6. Identify ways to apply redundancy to achieve fault tolerance.
7. Identify single-point-of-failure (SPF) dependencies in a system design.
Professional Dispositions
● Collaborative: Software engineering is increasingly described as a “team sport” - successful
software engineers are able to work with others effectively. Humility, respect, and trust underpin
the collaborative relationships that are essential to success in this field.
● Professional: Software engineering produces technology that has the chance to influence
literally billions of people. Awareness of our role in society, strong ethical behavior, and
commitment to respectful day-to-day behavior outside of one’s team are essential.
● Communicative: No single software engineer on a project is likely to know all of the project
details. Successful software projects depend on engineers communicating clearly and regularly
in order to coordinate effectively.
● Meticulous: Software engineering requires attention to detail and consistent behavior from
everyone on the team. Success in this field is clearly influenced by a meticulous approach -
comprehensive understanding, proper procedures, and a solid avoidance of cutting corners.
● Accountable: The collaborative aspects of software engineering also highlight the value of
accountability. Failing to take responsibility, failing to follow through, and failing to keep others
informed are all classic causes of team friction and bad project outcomes.
Math Requirements
Desirable:
● Introductory statistics (performance comparisons, evaluating experiments, interpreting survey
results, etc.)
Skill statement: A student who completes this course should be able to perform a good-quality code
review for colleagues (especially focusing on professional communication and teamwork), read and
write unit tests, use basic software tools (IDEs, version control, static analysis tools), and perform the
basic activities expected of a new hire on a software team.
Committee
● Bryce Adelstein Lelbach, NVIDIA, New York City, NY, USA
● Patrick Servello, CIWRO, Norman, OK, USA
● Pankaj Jalote, IIIT-Delhi, Delhi, India
● Christian Servin, El Paso Community College, El Paso, TX, USA
Contributors:
● Hyrum Wright, Google, Pittsburgh, PA, USA
● Olivier Giroux, Apple, Cupertino, CA, USA
● Gennadiy Civil, Google, New York City, NY, USA
Security (SEC)
Preamble
The world increasingly relies on computing infrastructure to support nearly every facet of modern
critical infrastructure: transportation, communication, healthcare, education, energy generation and
distribution, just to name a few. In recent years, with rampant attacks on and breaches of this critical
computing infrastructure, it has become clearer that computer science graduates have an increased role
in designing, implementing, and operating software systems that are secure and can keep information
private.
In CS2023, the Security (SEC) Knowledge Area (KA) focuses on developing a security mindset into the
overall ethos of computer science graduates so that security is inherent in all of their work products. The
Security title choice was intentional to serve as a one-word umbrella term for this KA, which also
includes concepts such as privacy, cryptography, secure system design, principles of modularity, and
others that are imported from the other KAs. Reasons for this choice are discussed below; see also
Figure 1.
The SEC KA also relies on shared concepts pervasive in all the other areas of CS2023. Additionally, the
six cross-cutting themes of cybersecurity, as defined in Cybersecurity Curricula 2017 (CSEC2017)2 -
confidentiality, integrity, availability, risk assessment, systems thinking, and adversarial thinking -
viewed with a computer science lens, are relevant here. In addition, the SEC KA adds a seventh cross-
cutting theme: human-centered thinking, emphasizing that humans are also a link in the overall chain of
security, a theme that needs to be inculcated into computer science students, along with risk assessment
and adversarial thinking, which are not typically covered in other Computer Science Knowledge Areas
(KAs). Students also need to learn security concepts such as authentication, authorization, and non-
repudiation. They need to learn about system vulnerabilities and understand threats against software
systems.
Principles of protecting systems (also in the Software Development Fundamentals and Software
Engineering KAs) include security-by-design, privacy-by-design, and defense in depth. Another concept
important in the SEC KA is the notion of assurance: an attestation that security mechanisms comply
with the security policies that have been defined for data, processes, and systems.
Assurance is tied to the concepts of verification and validation in the SE KA. With the increased use
of computing systems and data sets in modern society, the issues of privacy, especially its technical
aspects not covered in the Society, Ethics and Professionalism KA, become essential to computer
science students.
2 Joint Task Force on Cybersecurity Education. 2017. Cybersecurity Curricula 2017. ACM, IEEE-CS, AIS SIGSEC, and IFIP WG 11.8. https://doi.org/10.1145/3184594
Changes since CS 2013
The Security KA is an “updated” name for CS2013’s Information Assurance and Security (IAS)
knowledge area. Since 2013, Information Assurance and Security has been rebranded as Cybersecurity,
which has become a new computing discipline: the CSEC2017 curricular guidelines for this discipline
have been developed by a Joint Task Force of the ACM, IEEE Computer Society, AIS and IFIP.
Moreover, since 2013, other curricular recommendations for cybersecurity beyond CS2013 and CSEC
2017 have become available. In the US, the Centers of Academic Excellence program has Cyber Defense and Cyber
Operations designations for institutions whose cybersecurity programs meet the CAE curriculum
requirements, which are highly granular.
Technologies (NIST) has developed and revised the National Initiative for Cybersecurity Education
(NICE) Workforce Framework for Cybersecurity (NICE Framework), which identifies competencies
(knowledge and skills) needed to perform cybersecurity work. The European Cybersecurity Skills
Framework (ECSF) includes a standard ontology to describe cybersecurity tasks, roles, and types, and
to address the cybersecurity skills shortage in EU member countries. The computer science aspects of
these guidelines also informed the content of this draft of the SEC KA.
Building on CS2013’s recognition of the pervasiveness of security in computer science, the CS2023
SEC KA focuses on ensuring that students develop the security mindset so that they are prepared for the
continual changes occurring in computing. One noteworthy addition is the knowledge unit for security
analysis and engineering to support the concepts of security-by-design and privacy-by-design.
Feedback to earlier drafts of the SEC KA showed the need to clarify the differences between CS2023
SEC KA and the young computing-based discipline of cybersecurity. CS2023’s SEC KA, which is
informed by the notion of a computer science disciplinary lens mentioned in CSEC 2017, focuses on
those aspects of security, privacy, and related concepts important for computer science students. It
builds primarily on security concepts already included in other CS2023 KAs. In short, the major goal of
the SEC KA is to ensure that computer science graduates can design and develop more secure code,
ensure data security and privacy, and apply the security mindset to their daily activities.
Protecting what happens within the perimeter is a core competency of computer science graduates.
Although the computer science and cybersecurity knowledge units have overlaps, the demands upon
cybersecurity graduates typically are to protect the perimeter. Cybersecurity is a highly interdisciplinary
field of study that covers eight knowledge areas (data, software, component, connection, system, human,
organizational, and societal security) and prepares its students for both technical and managerial
positions. The first five knowledge areas are technical and have overlaps with the CS2023 SEC KA, but
the intent of coverage is substantially different.
For instance, consider the data security knowledge unit. The computer science student will need to view
this knowledge unit using the lens of computer science, as an extension of the material covered in
CS2023’s Data Management KA while the cybersecurity student will need to view data security in the
overall context of cybersecurity goals. These viewpoints are not totally distinct and have overlaps, but
the lenses used to examine and present the content are different, as shown in Figure 1. Similar
diagrams apply to the CS2023 SEC KA’s overlaps with the CSEC 2017 KAs.
Core Hours
Knowledge Units CS Core KA Core
Foundational Security 2 6
Defensive Programming 2 5
Cryptography 1 4
Digital Forensics 0 6
Security Governance 0 3
Total 6 33
The SEC KA also relies on CS Core and KA Core hours from the other KAs, as discussed below in the
Shared Concepts and Crosscutting Themes section. At least 28 CS Core hours from the other
KAs are needed, either to provide the basis for the SEC KA or to complement its content shown here.
Knowledge Units
SEC-Foundations: Foundational Security
CS Core:
1. Developing a security mindset, including crosscutting concepts: confidentiality, integrity,
availability, risk assessment, systems thinking, adversarial thinking, human-centered thinking
2. Vulnerabilities, threats, and attack vectors
3. Denial of Service (DoS) and Distributed Denial of Service (DDoS)
4. Principles and practices of protection, e.g., least privilege, open design, fail-safe defaults, and
defense in depth; and how they can be implemented
5. Principles and practices of privacy
6. Authentication and authorization
7. Tensions between security, privacy, performance, and other design goals
8. Applicability of laws and regulations on security and privacy
9. Ethical considerations for designing secure systems and maintaining privacy
KA Core:
10. Cryptographic building blocks, e.g., symmetric encryption, asymmetric encryption, hashing, and
message authentication
11. Hardware considerations in security
12. Access control, e.g., discretionary, mandatory, role-based, and attribute-based
13. Intrusion detection systems
14. Principles of usable security and human-centered computing
15. Concepts of trust and trustworthiness
16. Applications of security mindset: web, cloud, and mobile devices.
17. Internet of Things (IoT) security and privacy
18. Newer access control approaches
Illustrative Learning Outcomes:
CS Core:
1. Evaluate a system for possible attacks that can be launched by any adversary
2. Design and develop approaches to protect a system from a set of identified threats
3. Design and develop a system designed to protect individual privacy
KA Core:
4. Evaluate a system for trustworthiness
5. Develop a system that incorporates various principles of security
6. Design and develop a web application ensuring data security and privacy
7. Evaluate a system for compliance to a given law
8. Show a system has been designed to avoid harm to user privacy
SEC-Defense: Defensive Programming
CS Core Topics
1. Common vulnerabilities and weaknesses
2. Input validation and data sanitization
3. Type safety and type-safe languages
4. Buffer overflows, stack smashing, and integer overflows
5. SQL injection and other injection attacks (illustrated in the sketch following these topics)
6. Security issues due to race conditions
KA Core Topics
7. Using third-party components securely
8. Assurance: testing (including fuzzing), verification and validation
9. Static and dynamic analyses
10. Preventing information flow attacks
11. Offensive security: what, why, where, and how
12. Malware: varieties, creation, and defense against them
13. Ransomware and its prevention
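To make topics 2 and 5 concrete, the following minimal Python sketch (using the standard sqlite3 module purely for illustration) contrasts a query built by splicing user input into the SQL text, which is injectable, with a parameterized query that the driver treats strictly as data:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # VULNERABLE: attacker-controlled input becomes part of the SQL text,
    # so a name like "' OR '1'='1" changes the meaning of the query.
    return conn.execute(
        "SELECT * FROM users WHERE name = '%s'" % name).fetchall()

def find_user_safe(name: str):
    # SAFER: the value is passed separately from the SQL text, so the
    # input can never be interpreted as SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # every row returned: injection succeeded
print(find_user_safe(payload))    # []: the input was treated as data only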
Illustrative Learning Outcomes:
CS Core
1. Explain the problems underlying provided examples of common weaknesses (e.g., from a published
enumeration of common weaknesses) and how they can be circumvented
2. Explain the importance of defensive programming in showing compliance to various laws
3. Apply input validation and data sanitization techniques to enhance security of a program
4. Rewrite a program originally written in an unsafe programming language (e.g., C/C++) in a type-safe
language (e.g., Java or Rust)
5. Evaluate a program for possible buffer overflow attacks and rewrite to prevent such attacks
6. Evaluate a set of related programs for possible race conditions and prevent an adversary from
exploiting them
7. Evaluate and prevent SQL injection attacks on a database application.
KA Core
8. Explain the risks with misusing interfaces with third-party code and how to correctly use third-party
code
9. Discuss the need to update software to fix security vulnerabilities and the lifecycle management of
the fix
10. List examples of information flows and prevent unauthorized flows
11. Demonstrate how programs are tested for input handling errors
12. Use static and dynamic tools to identify programming faults
13. Describe different kinds of malicious software and how to prevent them from occurring in a system
14. Explain what ransomware is and implement preventive techniques to reduce its occurrence
SEC-Cryptography
CS Core Topics
1. Mathematical preliminaries: modular arithmetic, Euclidean algorithm, probabilistic independence,
linear algebra basics, number theory, finite fields, complexity, asymptotic analysis
2. Differences between algorithmic, applied, and math views of cryptography
3. History and real-world applications, e.g., electronic cash, secure channels between clients and
servers, secure electronic mail, entity authentication, device pairing, voting systems
4. Classical cryptosystems, such as shift, substitution, transposition ciphers, code books, machines.
5. Basic cryptography: symmetric key and public key cryptography
6. Kerckhoffs’ principle and use of vetted libraries
KA Core Topics
7. Additional mathematical foundations: primality and factoring; elliptic curve cryptography
8. Private-key cryptosystems: substitution-permutation networks, linear cryptanalysis, differential
cryptanalysis, DES, AES
9. Public-key cryptosystems: Diffie-Hellman, RSA
10. Data integrity and authentication: hashing, digital signatures
11. Cryptographic protocols: challenge-response authentication, zero-knowledge protocols,
commitment, oblivious transfer, secure 2-party or multi-party computation, secret sharing, and
applications
12. Attacker capabilities: chosen-message attack (for signatures), birthday attacks, side channel attacks,
fault injection attacks.
13. Quantum cryptography
14. Blockchain and cryptocurrencies
Illustrative Learning Outcomes:
CS Core
1. Describe the role of cryptography in supporting security and privacy
2. Describe the dangers of inventing one’s own cryptographic methods
3. Describe the role of cryptography in supporting confidentiality and privacy
4. Discuss the importance of prime numbers in cryptography and explain their use in cryptographic
algorithms
5. Implement and cryptanalyze classical ciphers (see the sketch below)
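As a minimal illustration of outcome 5, the following Python sketch implements a classical shift (Caesar) cipher and cryptanalyzes it by exhaustive search over all 26 keys; in a larger exercise a letter-frequency score, rather than an exact-match check, would pick out the plaintext:

def shift_encrypt(text: str, key: int) -> str:
    """Shift each letter by key positions; non-letters pass through."""
    return "".join(
        chr((ord(c) - ord("a") + key) % 26 + ord("a")) if c.isalpha() else c
        for c in text.lower())

def brute_force(ciphertext: str):
    """Try every possible key and return all candidate decryptions."""
    return [(k, shift_encrypt(ciphertext, -k)) for k in range(26)]

ciphertext = shift_encrypt("attack at dawn", 3)   # 'dwwdfn dw gdzq'
for key, guess in brute_force(ciphertext):
    if guess == "attack at dawn":
        print("recovered key:", key)              # recovered key: 3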
KA Core
6. Describe modern private-key cryptosystems and ways to cryptanalyze them
7. Describe modern public-key cryptosystems and ways to cryptanalyze them
8. Compare different algorithms in their support for security
9. Explain key exchange protocols and show approaches to reduce their failure (see the toy sketch after this list)
10. Describe real-world applications of cryptographic primitives and protocols
11. Describe quantum cryptography and the impact of quantum computing on cryptographic algorithms
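As a toy illustration of a key exchange protocol (outcome 9 above), the following Python sketch runs Diffie-Hellman over a deliberately tiny prime; the parameters and private keys are invented for illustration and are far too small for real use, where vetted libraries and much larger parameters are required:

p, g = 23, 5                   # public parameters: prime modulus, generator

a = 6                          # Alice's private key (kept secret)
b = 15                         # Bob's private key (kept secret)
A = pow(g, a, p)               # Alice publishes g^a mod p = 8
B = pow(g, b, p)               # Bob publishes g^b mod p = 19

shared_alice = pow(B, a, p)    # Alice computes (g^b)^a mod p
shared_bob = pow(A, b, p)      # Bob computes (g^a)^b mod p
assert shared_alice == shared_bob == 2   # both derive the same secret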
SEC-Engineering: Security Analysis and Engineering
KA Core Topics
5. Security Analysis, covering security requirements analysis; security controls analysis; threat
analysis; and vulnerability analysis
6. Security Attack Domains and Attack Surfaces, e.g., communications and networking, hardware,
physical, social engineering, software, and supply chain
7. Security Attack Modes, Techniques and Tactics, e.g., authentication abuse; brute force; buffer
manipulation; code injection; content insertion; denial of service; eavesdropping; function bypass;
impersonation; integrity attack; interception; phishing; protocol analysis; privilege abuse; spoofing;
and traffic injection
8. Security Technical Controls: identity and credential subsystems; access control and authorization
subsystems; information protection subsystems; monitoring and audit subsystems; integrity
management subsystems; cryptographic subsystems
Illustrative Learning Outcomes:
CS Core
1. Create a threat model for a system or system design
2. Apply situational analysis to develop secure solutions under a specified scenario
3. Evaluate a given scenario for tradeoff analysis of system performance, risk assessment, and costs
KA Core
4. Design a set of technical security controls, countermeasures and information protections to meet the
security requirements and security objectives for a system
5. Develop a system that incorporates various principles of security
6. Evaluate the effectiveness of security functions, technical controls and componentry for a system.
7. Identify security vulnerabilities and weaknesses in a system
8. Mitigate threats, vulnerabilities and weaknesses in a system
SEC-Forensics: Digital Forensics
CS Core Topics
1. Not applicable
KA Core Topics
2. Basic principles and methodologies for digital forensics.
3. System design for forensics
4. Rules of evidence – general concepts and differences between jurisdictions
5. Legal issues: digital evidence protection and management, chains of custody, reporting, serving as
an expert witness
6. Forensics in different situations: operating systems, file systems, application forensics, web
forensics, network forensics, mobile device forensics, use of database auditing
7. Attacks on forensics and preventing such attacks
Illustrative Learning Outcomes:
CS Core
1. Not applicable
KA Core
2. Explain what a digital investigation is and how it can be implemented
3. Design and implement software to support forensics
4. Describe legal requirements for using seized data and its usage
5. Describe and implement end-to-end chain of custody from initial digital evidence seizure to
evidence disposition
6. Extract data from a hard drive to comply with the law
7. Describe a person’s professional responsibility and liability when testifying as a forensics expert
8. Recover data based on a given search term from an imaged system
9. Reconstruct data and events from an application history, or a web artifact, or a cloud database, or a
mobile device
10. Capture and interpret network traffic
11. Discuss the challenges associated with mobile device forensics
12. Apply forensics tools to investigate security breaches
13. Identify and mitigate anti-forensic methods
SEC-Governance: Security Governance
CS Core Topics
1. Not applicable
KA Core Topics
2. Protecting critical assets from threats
3. Security governance: organizational objectives and general risk assessment
4. Security management: achieve and maintain appropriate levels of confidentiality, integrity,
availability, accountability, authenticity, and reliability
5. Approaches to identifying and mitigating risks to computing infrastructure
6. Security controls: management, operational and technical controls
7. Policies for data collection, backups, and retention; cloud storage and services; breach disclosure
Illustrative Learning Outcomes:
CS Core
1. Not applicable
KA Core
2. Describe critical assets and how they can be protected
3. Differentiate between security governance, management, and controls, giving examples of each
4. Describe a technical control and implement it
5. Identify and assess risk of programs and database applications causing breaches
6. Design and implement appropriate backups, given a policy
7. Discuss a breach disclosure policy based on legal requirements and implement the policy
8. Identify the risks and benefits of outsourcing to the cloud.
CS Core Topics
1. TBD
KA Core Topics
3. TBD
CS Core
14. TBD
KA Core
15. TBD
Expand the following as knowledge units?
CyberAnalytics
Professional Dispositions
● Meticulous: students need to pay careful attention to details to ensure the protection of real-world
software systems.
● Self-directed: students must be ready to deal with the many novel and easily unforeseeable ways in
which adversaries might launch attacks.
● Collaborative: students must be ready to collaborate with others, as collective knowledge and skills
will be needed to prevent attacks, protect systems and data during attacks, and plan for the future
after the immediate attack has been mitigated.
● Responsible: students need to show responsibility when designing, developing, deploying, and
maintaining secure systems, as their enterprise and society are constantly at risk.
● Accountable: students need to know that, as future professionals, they will be held accountable if
a system or data breach were to occur, which should strengthen their resolve to prevent such
breaches from occurring in the first place.
Math Requirements
Required:
Desired:
Course Packaging
There are two suggestions for course packaging.
The first is to infuse the CS Core hours of the SEC KA into appropriate places in other coursework that
covers related security topics in the following knowledge units, as mentioned in the Shared Concepts
section above. It seems reasonable to assume that, as the CS Core hours of the SEC KA total only 6,
one or more of the KUs being covered could accommodate the additional hours.
The second approach is to create an additional course that packages the following:
● FPL-H: Language Translation and Execution – 1 hour
● FPL-N: Runtime Behavior and Systems – 1 hour
● FPL-G: Type Systems – 1 hour
● HCI/Human Factors and Security – 1 hour
● NC-F: Network Security – 3 hours
● OS-G: Protection and Safety – 3 hours
● PDC-B: Communication – 1 hour
● PDC-D: Software Engineering – 2 hours
● SDF-A: Fundamental Programming Concepts and Practices – 1 hour
● SDF-D: Software Development Practices – 1 hour
● SE-F: Software Verification and Validation – 2 hours
● SEP-E: Privacy and Civil Liberties – 1 hour
● SEP-J: Security Policies, Laws and Computer Crime – 2 hours
● SF-G: Systems Security – 2 hours
● SPD-A: Common Aspects/Shared Concerns – 2 hours
● SPD-C: Mobile Platforms – 2 hours
● SPD-B: Web Platforms – 2 hours
The coverage exceeds 45 lecture hours, and so in a typical 3-credit semester course, instructors would
need to decide what topics to emphasize and what not to cover without losing the perspective that the
course should help students develop the security mindset.
Prerequisites:
Skill statement:
● A student who completes this course should develop the security mindset and be ready to
apply this mindset to problems in securing software and systems
Committee
Members:
● Vijay Anand, University of Missouri – St. Louis, MO, USA
● Diana Burley, American University, Washington, DC, USA
● Sherif Hazem, Central Bank of Egypt, Egypt
● Michele Maasberg, United States Naval Academy, Annapolis, MD, USA
● Sumita Mishra, Rochester Institute of Technology, Rochester, NY, USA
● Nicolas Sklavos, University of Patras, Patras, Greece
● Blair Taylor, Towson University, MD, USA
● Jim Whitmore, Dickinson College, Carlisle, PA, USA
Contributors:
● Markus Geissler, Cosumnes River College, CA, USA
● Daniel Zappala, Brigham Young University, UT, USA
Society, Ethics and Professionalism (SEP)
Preamble
While technical issues dominate the computing curriculum, they do not constitute a complete
educational program in the broader context. Students must also be exposed to the larger societal
context of computing to develop an understanding of the relevant social, ethical, legal and professional
issues. This need to incorporate the study of these non-technical issues into the ACM curriculum was
formally recognized in 1991, as can be seen from the following excerpt from CS1991 [1]:
Undergraduates also need to understand the basic cultural, social, legal, and ethical issues
inherent in the discipline of computing. They should understand where the discipline has been,
where it is, and where it is heading. They should also understand their individual roles in this
process, as well as appreciate the philosophical questions, technical problems, and aesthetic
values that play an important part in the development of the discipline.
Students also need to develop the ability to ask serious questions about the social impact of
computing and to evaluate proposed answers to those questions. Future practitioners must be
able to anticipate the impact of introducing a given product into a given environment. Will that
product enhance or degrade the quality of life? What will the impact be upon individuals,
groups, and institutions?
Finally, students need to be aware of the basic legal rights of software and hardware vendors
and users, and they also need to appreciate the ethical values that are the basis for those
rights. Future practitioners must understand the responsibility that they will bear, and the
possible consequences of failure. They must understand their own limitations as well as the
limitations of their tools. All practitioners must make a long-term commitment to remaining
current in their chosen specialties and in the discipline of computing as a whole.
As technological advances (more specifically, how these advances are used by humans) continue to
significantly impact the way we live and work, social and ethical issues and professional practice
continue to increase in importance and consequence. The ways humans use
computer-based products and platforms, while hopefully providing opportunities, also introduce ever
more challenging problems each year. A recent example is the emergence of Generative AI including
large language models that generate code. A 2020 Communications of the ACM article [2] stated:
“because computing as a discipline is becoming progressively more entangled within the human and
social lifeworld, computing as an academic discipline must move away from engineering-inspired
curricular models and integrate the analytic lenses supplied by social science theories and
methodologies.”
In parallel to a heightened awareness of the social consequences computing has on the world,
computing communities have become much more aware - and active - in areas of Inclusion, Diversity,
Equity and Accessibility. All computing students deserve an inclusive, diverse, equitable, and
accessible learning environment. However, computing students have a unique duty to ensure that,
when put into practice, their skills, knowledge, and competencies are applied in a similarly inclusive
fashion, ethically and professionally, in the society they are in. For these reasons, inclusion, diversity,
equity, and accessibility
are inherently a part of Society, Ethics, and Professionalism, and a new knowledge unit has been
added that addresses this.
Computer science educators may opt to deliver the material in this knowledge area integrated into the
context of traditional technical and theoretical courses, in dedicated courses (ideally a combination of
both) and as special units as part of capstone, project, and professional practice courses. The material
in this knowledge area is best covered through a combination of all the above. It is too commonly held
that many topics in knowledge units listed as CS Core may not readily lend themselves to being
covered in other, more traditional computer science courses. However, many of these topics naturally
arise, others can be included with minimal effort, and the benefits of exposing students to these
topics within the context of those traditional courses are most valuable. Nonetheless, institutional
challenges will present barriers; for instance, some of these traditional courses may not be offered at a
given institution, and in such cases it is difficult to cover these topics appropriately without a dedicated
course. However, if social, ethical and professional considerations are covered only in a dedicated
course and not in the context of others, it could reinforce the false notion that technical processes are
devoid of these important aspects, or that they are more isolated than they are in reality. Because of the
broad relevance of these knowledge units, it is important that as many traditional courses as possible
include aspects such as case studies that analyze ethical, legal, social and professional considerations
in the context of the technical subject matter of those courses. Courses in areas such as software
engineering, databases, computer graphics, computer networks, information assurance and security,
and introduction to computing provide obvious context for analysis of such issues. However, an ethics-
related module could be developed for almost any course in the curriculum. It would be explicitly
against the spirit of these recommendations to have only a dedicated course. Further, these topics
should be covered in courses starting from year 1. Presenting them as advanced topics in later courses
only creates an artificial perception that SEP topics are only important at a certain level or complexity.
While it is true that the importance and consequence of SEP topics increases with level and
complexity, introductory topics are not devoid of SEP topics. Further, many SEP topics are best
presented early to lay a foundation for more intricate topics later in the curriculum.
Running through all the issues in this area is the need to speak to the computing practitioner’s
responsibility to proactively address these issues by both ethical and technical actions. Today it is
important not only for the topics in this knowledge area, but for students’ knowledge in general, that the
ethical issues discussed in any course should be directly related to - and arise naturally from - the
subject matter of that course. Examples include a discussion in a database course of the societal,
ethical and professional aspects of data aggregation or data mining, or a discussion in a software
engineering course of the potential conflicts between obligations to the customer and users as well as
all others affected by their work. Computing faculty who are unfamiliar with the content and/or
pedagogy of applied ethics are urged to take advantage of the considerable resources from ACM,
IEEE-CS, SIGCAS (ACM Special Interest Group on Computers and Society), and other organizations.
Additionally, it is the educator’s responsibility to impress upon students that this area is just as
important as - and in some ways more important than - technical areas. The societal, ethical, and professional
knowledge gained in studying topics in this knowledge area will be used throughout one’s career and
is transferable between projects, jobs, and even industries, particularly as one’s career progresses
into project leadership and management.
The ACM Code of Ethics and Professional Conduct [3], the IEEE Code of Ethics [4], and the AAAI
Code of Ethics and Professional Conduct [5] provide guidance that serves as the basis for the conduct of
all computing professionals in their work. The ACM Code emphasizes that ethical reasoning is not an
algorithm to be followed, and that computer professionals are expected to treat the impact of their work
on the public good as the primary consideration. It falls to computing educators to highlight the domain-
specific role of these topics for our students, but programs should certainly be willing to lean heavily on
complementary courses from the humanities and social sciences.
We observe that computing educators are not moral philosophers. Yet CS2023, as with past CS
curricular recommendations, indicates the need for ethical analysis. CS2023, along with all previous CS
curricular reports, is quite clear on the required mathematical foundations that students are expected
to gain and the courses from mathematics departments that provide such training. Yet, the same is not
true of moral philosophy. No one would expect a student to be able to provide a proof by induction until
after having successfully completed a course in discrete mathematics. Yet the parallel with respect to
ethical analyses is somehow absent. We seemingly do expect our students to perform ethical analysis
without having the appropriate prerequisite knowledge from philosophy.
The lack of such prerequisite training has facilitated graduates operating with a certain ethical egoism
(e.g., ‘Here's what I believe/think/feel is right’). However, regardless of how well intentioned, one might
conclude that this is what brought us to this point in history where computer crimes, hacks, scandals,
data breaches, and the general misuse of computing technology (including the data it consumes and
produces) are frequent occurrences. Certainly, computing graduates who have learned how to apply the
various ethical frameworks or lenses proposed through the ages would only serve to improve this
situation. In retrospect, to ignore the lessons from moral philosophy, which have been debated and
refined for millennia, on what it means to act justly, or work for the common good, appears as hubris.
[1] ACM/IEEE-CS Joint Curriculum Task Force, Computing Curricula 1991 (1991), ACM Press and
IEEE Computer Society Press.
[2] Randy Connolly. 2020. Why computing belongs within the social sciences. Commun. ACM 63, 8
(August 2020), 54–59. https://doi.org/10.1145/3383444
[3] ACM Code of Ethics and Professional Conduct. https://www.acm.org/code-of-ethics
[4] IEEE Code of Ethics. https://www.ieee.org/about/corporate/governance/p7-8.html
[5] AAAI Code of Professional Ethics and Conduct. https://aaai.org/Conferences/code-of-ethics-and-conduct.php
Core Hours
Knowledge Units CS Core KA Core
Social Context 3 2
Professional Ethics 2 2
Intellectual Property 1 1
Communication 2 1
Sustainability 1 1
History 1 1
Economies of Computing 0 1
Total 18 14
Knowledge Units
SEP-Context: Social Context
CS Core:
1. Social implications (e.g., political and cultural ideologies) in a hyper-networked world where the
capabilities and impact of social media, artificial intelligence and computing in general are rapidly
evolving
2. Impact of computing applications (e.g. social media, artificial intelligence applications) on individual
well-being, and safety of all kinds (e.g., physical, emotional, economic)
3. Consequences of involving computing technologies, particularly artificial intelligence, biometric
technologies and algorithmic decision-making systems, in civic life (e.g., facial recognition
technology, biometric tags, resource distribution algorithms, policing software)
4. How deficits in diversity and accessibility in computing affect society and what steps can be taken to
improve diversity and accessibility in computing
KA Core:
5. Growth and control of the internet, computing, and artificial intelligence
6. Differences in access to digital technology resources - often referred to as the digital divide - and
the resulting ramifications for gender, class, ethnicity, geography, and/or underdeveloped countries
7. Accessibility issues, including legal requirements and dark patterns
8. Context-aware computing
Illustrative Learning Outcomes:
CS Core:
1. Describe different ways that computer technology (networks, mobile computing, cloud computing)
mediates social interaction at the personal and social group level.
2. Identify developers’ assumptions and values embedded in hardware and software design,
especially as they pertain to usability for diverse populations including under-represented
populations and the disabled.
3. Interpret the social context of a given design and its implementation.
4. Evaluate the efficacy of a given design and implementation using empirical data.
5. Articulate the implications of social media use for different identities, cultures, and communities.
KA Core:
6. Explain the internet’s role in facilitating communication between citizens, governments, and each
other.
7. Analyze the effects of reliance on computing in the implementation of democracy (e.g., delivery of
social services, electronic voting).
8. Describe the impact of the under-representation of people from historically minoritized populations
in the computing profession (e.g., industry culture, product diversity).
9. Explain the implications of context awareness in ubiquitous computing systems.
10. Explain how access to the internet and computing technologies affect different societies.
11. Discuss why/how internet access can be viewed as a human right.
Ethical theories and principles are the foundations of ethical analysis because they are the viewpoints
which can provide guidance along the pathway to a decision. Each theory emphasizes different
assumptions and methods for determining the ethicality of a given action. It is important for students to
recognize that decisions in different contexts may require different ethical theories to arrive at ethically
acceptable outcomes, and what constitutes ‘acceptable’ depends on a variety of factors such as
cultural context. Applying methods for ethical analysis requires both an understanding of the underlying
principles and assumptions guiding a given tool and an awareness of the social context for that
decision. Traditional ethical frameworks as provided by western philosophy can be useful, but they are
not all-inclusive. Effort must be taken to include decolonial, indigenous and historically marginalized
ethical perspectives whenever possible. No theory will be universally applicable to all contexts, nor is
any single ethical framework the ‘best’. Engagement across various ethical schools of thought is
important for students to develop the critical thinking needed in judiciously applying methods for ethical
analysis of a given situation.
CS Core:
1. Avoiding fallacies and misrepresentation in argumentation
2. Ethical theories and decision-making (philosophical and social frameworks)
3. Recognition of the role culture plays in our understanding, adoption, design, and use of computing
technology
4. Why ethics is important in computing, and how ethics is similar to, and different from, laws and
social norms
KA Core:
5. Professional checklists
6. Evaluation rubrics
7. Stakeholder analysis
8. Standpoint theory
9. Introduction to ethical frameworks (e.g., consequentialism such as utilitarianism, non-
consequentialism such as duty, rights or justice, agent-centered such as virtue or feminism,
contractarianism, ethics of care) and their use for analyzing an ethical dilemma
KA Core:
7. Evaluate all stakeholder positions in relation to their cultural context in a given situation.
8. Evaluate the potential for introducing or perpetuating ethical debt (deferred consideration of ethical
impacts or implications) in technical decisions.
9. Discuss the advantages and disadvantages of traditional ethical frameworks.
10. Analyze ethical dilemmas related to the creation and use of technology from multiple perspectives using ethical frameworks.
KA Core:
8. The role of the computing professional and professional societies in public policy
9. Maintaining awareness of consequences
10. Ethical dissent and whistle-blowing
11. The relationship between regional culture and ethical dilemmas
12. Dealing with harassment and discrimination
13. Forms of professional credentialing
14. Ergonomics and healthy computing environments
15. Time to market and cost considerations versus quality professional standards
CS Core:
1. Identify ethical issues that arise in software design, development practices, and software deployment.
2. Demonstrate how to address ethical issues in specific situations.
3. Explain the ethical responsibility of ensuring software correctness, reliability and safety including
from where this responsibility arises (e.g., ACM/IEEE/AAAI Codes of Ethics, laws and regulations,
organizational policies).
4. Describe the mechanisms that typically exist for a professional to keep up-to-date in ethical
matters.
5. Describe the strengths and weaknesses of relevant professional codes as expressions of
professionalism and guides to decision-making.
6. Analyze a global computing issue, observing the role of professionals and government officials in
managing this problem.
KA Core:
8. Describe ways in which professionals and professional organizations may contribute to public
policy.
9. Describe the consequences of inappropriate professional behavior.
10. Be familiar with whistleblowing and know where to find guidance for navigating such an incident.
11. Provide examples of how regional culture interplays with ethical dilemmas.
12. Discuss forms of harassment and discrimination and avenues of assistance.
13. Examine various forms of professional credentialing.
14. Explain the relationship between ergonomics in computing environments and people’s health.
15. Describe issues associated with industries’ push to focus on time to market versus enforcing
quality professional standards.
KA Core:
6. Philosophical foundations of intellectual property
7. Forms of intellectual property (e.g., copyrights, patents, trade secrets, trademarks) and the rights
they protect
8. Limitations on copyright protections, including fair use and the first sale doctrine
9. Intellectual property laws and treaties that impact the enforcement of copyrights
10. Software piracy and technical methods for enforcing intellectual property rights, such as digital
rights management and closed source software as a trade secret
11. Moral and legal foundations of the open source movement
12. Systems that use others’ data (e.g., large language models)
CS Core:
1. Describe and critique legislation and precedent aimed at digital copyright infringements.
2. Identify contemporary examples of intangible digital intellectual property.
3. Select an appropriate software license for a given project.
4. Justify legal and ethical uses of copyrighted materials.
5. Interpret the intent and implementation of software licensing.
6. Determine whether a use of copyrighted material is likely to be fair use.
7. Evaluate the ethical issues inherent in various plagiarism detection mechanisms.
8. Identify multiple forms of plagiarism beyond verbatim copying of text or software (e.g., intentional
paraphrasing, authorship misrepresentation, and improper attribution).
KA Core:
9. Discuss the philosophical bases of intellectual property in an appropriate context (e.g., country,
etc.).
10. Weigh the conflicting issues involved in securing software patents.
11. Characterize and contrast the protections and obligations of copyright, patent, trade secret, and
trademarks.
12. Explain the rationale for the legal protection of intellectual property in the appropriate context (e.g.,
country, etc.).
13. Evaluate the use of copyrighted work under the concepts of fair use and the first sale doctrine.
14. Identify the goals of the open source movement and its impact on fields beyond computing, such
as the right-to-repair movement.
15. Characterize the global nature of software piracy.
16. Critique the use of technical measures of digital rights management (e.g., encryption,
watermarking, copy restrictions, and region lockouts) from multiple stakeholder perspectives.
17. Discuss the nature of anti-circumvention laws in the context of copyright protection.
Companies that provide platforms for user-generated content are under increasing pressure to perform governance tasks, potentially facing liability for their decisions.
CS Core:
1. Privacy implications of widespread data collection including but not limited to transactional
databases, data warehouses, surveillance systems, and cloud computing
2. Conceptions of anonymity, pseudonymity, and identity
3. Technology-based solutions for privacy protection (e.g., end-to-end encryption and differential
privacy)
4. Civil liberties and cultural differences
KA Core:
5. Philosophical and legal conceptions of the nature of privacy
6. Legal foundations of privacy protection in relevant jurisdictions (e.g., GDPR in the EU)
7. Privacy legislation in areas of practice (e.g., HIPAA in the US)
8. Freedom of expression and its limitations
9. User-generated content, content moderation, and liability
KA Core:
6. Discuss the philosophical basis for the legal protection of personal privacy in an appropriate
context (e.g., country, etc.).
7. Critique the intent, potential value and implementation of various forms of privacy legislation.
8. Identify strategies to enable appropriate freedom of expression.
SEP-Communication
Computing is an inherently collaborative and social discipline, making communication an essential aspect of the field. Much, but not all, of this communication occurs in professional settings, where communication styles, expectations, and norms differ from other contexts in which similar technology, such as email or messaging, might be used. Both professional and informal communication conveys information to various audiences who may have very different goals and needs for that information. It is also important to note that computing professionals are not just communicators, but are also listeners who must be able to hear and thoughtfully make use of feedback received from various stakeholders. Effective communication skills are not something one ‘just knows’; they are developed and can be learned. Communication skills are best taught in context throughout the undergraduate curriculum.
CS Core:
1. Interpreting, summarizing, and synthesizing technical material, including source code and
documentation
2. Writing effective technical documentation and materials (tutorials, reference materials, API
documentation)
3. Identifying, describing, and employing clear, polite, and concise oral, written, and electronic communication in teams and groups
4. Understanding and enacting awareness of audience in communication by communicating
effectively with different stakeholders such as customers, leadership, or the general public
5. Appropriate and effective team communication including utilizing collaboration tools and conflict
resolution
6. Recognizing and avoiding the use of rhetorical fallacies when resolving technical disputes
7. Understanding accessibility and inclusivity requirements for addressing professional audiences
KA Core:
8. Demonstrating cultural competence in written and verbal communication
9. Using synthesis to concisely and accurately convey tradeoffs among competing values driving software projects, including technology, structure/process, quality, people, market, and finance
10. Using writing to solve problems or make recommendations in the workplace, such as raising ethical concerns or addressing accessibility issues
CS Core:
1. Understand the importance of writing concise and accurate technical documents following well-
defined standards for format and for including appropriate tables, figures, and references.
2. Evaluate written technical documentation for technical accuracy, concision, lack of ambiguity, and
awareness of audience.
3. Develop and deliver an audience aware, accessible, and organized formal presentation.
4. Plan interactions (e.g., virtual, face-to-face, shared documents) with others in ways that invite
inclusive participation, model respectful consideration of others’ contributions, and explicitly value
diversity of ideas.
5. Recognize and describe qualities of effective communication (e.g., virtual, face-to-face, intragroup,
shared documents).
6. Understand how to effectively and appropriately communicate as a member of a team including
conflict resolution techniques.
KA Core:
7. Discuss ways to influence performance and results in diverse and cross-cultural teams.
8. Evaluate personal strengths and weaknesses to work remotely as part of a team drawing from
diverse backgrounds and experiences.
SEP-Sustainability
Sustainability is defined by the United Nations as “development that meets the needs of the present without compromising the ability of future generations to meet their own needs.” [1] Alternatively, it is the “balance between the environment, equity and economy.” [2] As computing extends into more and more aspects of human existence, estimates already suggest that 10% of global electricity usage is spent on computing, and that percentage will continue to grow. Further, electronics contribute to the demand for rare earth elements, to mineral extraction, and to countless e-waste concerns. Students should be prepared to engage with computing with a background that recognizes these global and environmental costs and their potential long-term effects on the environment and local communities.
[1] https://www.un.org/en/academic-impact/sustainability
[2] https://www.sustain.ucla.edu/what-is-sustainability
CS Core:
1. Being a sustainable practitioner by taking into consideration environmental, social, and cultural
impacts of implementation decisions (e.g., sustainability goals, algorithmic bias/outcomes,
economic viability, and resource consumption)
2. Local/regional/global social and environmental impacts of computing systems' use and disposal (e.g., carbon footprints, e-waste) in hardware (e.g., data centers) and software (e.g., blockchain, AI model training and use)
3. Tradeoffs involved in proof-of-work and proof-of-stake algorithms
KA Core:
4. Guidelines for sustainable design standards
5. Systemic effects of complex computer-mediated phenomena (e.g., social media, offshoring, remote
work)
6. Pervasive computing: Information processing that has been integrated into everyday objects and
activities, such as smart energy systems, social networking and feedback systems to promote
sustainable behavior, transportation, environmental monitoring, citizen science and activism
7. Applications of computing to environmental issues, such as energy, pollution, resource usage, recycling and reuse, and food management and production
8. How the sustainability of software systems is interdependent with social systems, including the knowledge and skills of their users, organizational processes and policies, and their societal context (e.g., market forces, government policies)
CS Core:
1. Identify ways to be a sustainable practitioner.
2. For any given project (software artifact, hardware, etc.), evaluate the environmental impacts of its deployment (e.g., energy consumption, contribution to e-waste, impact of manufacturing).
3. Illustrate global social and environmental impacts of computer use and disposal (e-waste).
4. List the sustainability effects of modern practices and activities (e.g., remote work, online commerce, cryptocurrencies, data centers).
KA Core:
5. Describe the environmental impacts of design choices within the field of computing that relate to
algorithm design, operating system design, networking design, database design, etc.
6. Investigate the social and environmental impacts of new system designs.
7. Identify guidelines for sustainable IT design or deployment.
8. Investigate pervasive computing in areas such as smart energy systems, social networking,
transportation, agriculture, supply-chain systems, environmental monitoring and citizen activism.
9. Assess computing applications in respect to environmental issues (e.g., energy, pollution, resource
usage, recycling and reuse, food management and production).
SEP-History
History is important because it provides a mechanism for understanding why our computing systems
operate the way they do, the societal contexts in which these approaches arose, and how those
continue to echo through the discipline today. The history of computing is taught to provide a sense of
how the rapid change in computing impacts society on a global scale. It is often taught in context with
foundational concepts, such as system fundamentals and software development fundamentals.
CS Core:
1. The history of computing: hardware, software, and human/organizational aspects, and the role of this history in present social contexts
KA Core:
2. Age I: Prehistory—the world before ENIAC (1946): Ancient analog computing (Stonehenge,
Antikythera mechanism, Salisbury Cathedral clock, etc.), human-calculated number tables, Euclid,
Lovelace, Babbage, Gödel, Church, Turing, pre-electronic (electro-mechanical and mechanical)
hardware
3. Age II: Early modern (digital) computing - ENIAC, UNIVAC, Bombes (Bletchley Park
codebreakers), computer companies (e.g., IBM), mainframes, etc.
4. Age III: Modern (digital) computing - PCs, modern computer hardware and software, Moore’s Law
5. Age IV: Internet - networking, internet architecture, browsers and their evolution, standards, big
players (Google, Amazon, Microsoft, etc.), distributed computing
6. Age V: Cloud - smartphones (Apple, Android, and others), cloud computing, remote servers,
software as a service (SaaS), security and privacy, social media
7. Age VI: Emerging AI-assisted technologies including decision making systems, recommendation
systems, generative AI and other machine learning driven tools and technologies
CS Core:
1. Understand the relevance and impact of computing history on recent events, present context, and possible future outcomes, ideally from more than one cultural perspective.
KA Core:
2. Identify significant trends in the history of the computing field.
3. Identify the contributions of several pioneering individuals or organizations (research labs,
computer companies, government offices) in the computing field.
4. Discuss the historical context for important moments in history of computing, such as the move
from vacuum tubes to transistors (TRADIC), early seminal operating systems (e.g., OS 360), Xerox
PARC and the first Apple computer with a GUI, the creation of specific programming language
paradigms, the first computer virus, the creation of the internet, the creation of the WWW, the dot
com bust, Y2K, the introduction of smartphones, etc.
5. Compare daily life before and after the advent of personal computers and the Internet.
KA Core:
1. Summarize concerns about monopolies in tech, walled gardens vs open environments, etc.
2. Identify several ways in which the information technology industry and users are affected by
shortages in the labor supply.
3. Outline the evolution of pricing strategies for computing goods and services.
4. Explain the social effects of the knowledge and attention economies.
5. Summarize the consequences of globalization and nationalism in the computing industry.
6. Describe the effects of automation on society, and job markets in particular.
7. Detail how computing has changed the corporate landscape.
8. Outline how computing has changed personal finance and the consequences of this, both positive
and negative.
KA Core:
6. Investigate measures that can be taken by both individuals and organizations including
governments to prevent or mitigate the undesirable effects of computer crimes and identity theft.
7. Draft a company-wide security policy, which includes procedures for managing passwords and
employee monitoring.
8. Understand how legislation from one region may affect activities in another (e.g., how the EU GDPR applies globally when EU persons are involved).
CS2023’s sponsoring organizations are ACM, IEEE CS, and AAAI. Each of those organizations
[https://www.acm.org/diversity-inclusion/about#DEIPrinciples, https://www.ieee.org/about/diversity-index.html, https://aaai.org/Organization/diversity-statement.php] place a high value on inclusion,
diversity, equity, and accessibility; and our computer science classrooms should promote and model
those principles. We should welcome and seek diversity—the gamut of human differences including
gender, gender identity, race, politics, ability and attributes, religion, nationality, etc.—in our
classrooms, departments and campuses. We should strive to make our classrooms, labs, and curricula
accessible and to promote inclusion; the sense of belonging we feel in a community where we are
respected and wanted. To achieve equity, we must allocate resources, promote fairness, and check our
biases to ensure persons of all identities achieve success. Accessibility should be considered and
implemented in all computing activities and products.
Explicitly infusing inclusion, diversity, equity, and accessibility across the computer science curriculum demonstrates its importance for the department, institution, and our field—all of which likely have an IDEA statement and/or initiatives. This emphasis on IDEA is ethically important and a bellwether issue of our time. Many professionals in computing already recognize attention to diversity, equity, inclusion, and accessibility as an integral part of disciplinary practice. Regardless of the degree to which IDEA values appear in any one computer science class, research suggests that a lack of attention to IDEA will result in inferior designs. Not only does data indicate that diverse teams outperform homogeneous ones, but more diverse teams might have prevented well-publicized technology failures such as facial recognition misuse and airbag-related injuries and deaths.
CS Core:
1. How identity impacts and is impacted by computing environments (academic and professional) and
technologies
2. The benefits of diverse development teams and the impacts of teams that are not diverse.
3. Inclusive language and charged terminology, and why their use matters
4. Inclusive behaviors and why they matter
5. Designing and developing technology with accessibility in mind
6. Designing for accessibility
7. How computing professionals can influence and impact inclusion, diversity, equity, and accessibility
both positively and negatively, not only through the software they create.
KA Core:
8. Highlight experts (practitioners, graduates, and upper level students) who reflect the identities of
the classroom and the world
9. Benefits of diversity and harms caused by a lack of diversity
10. Historic marginalization due to technological supremacy and global infrastructure challenges to
equity and accessibility
KA Core:
9. Highlight experts (practitioners, graduates, and upper level students—current and historic) who
reflect the identities of the classroom and the world.
10. Identify examples of the benefits that diverse teams can bring to software products, and those
where a lack of diversity have costs.
11. Give examples of systemic changes that could positively address diversity, equity, and inclusion in a familiar context (e.g., in an introductory computing course).
Professional Dispositions
● Critical Self-reflection - Being able to inspect one’s own actions, thoughts, biases, privileges, and motives helps in discovering places where professional activity falls short of current standards. Understand both conscious and unconscious bias and continuously work to counteract them.
● Responsiveness - Ability to quickly and accurately respond to changes in the field and adapt in a
professional manner, such as shifting from in-person office work to remote work at home. These
shifts require us to rethink our entire approach to what is considered “professional”.
● Proactiveness - Being professional in the workplace means finding new trends (e.g., in accessibility
or inclusion) and understanding how to implement them immediately for a more professional working
environment.
● Cultural Competence - Prioritize cultural competence—the ability to work with people from cultures
different from your own—by using inclusive language, watching for and counteracting conscious and
unconscious bias, and encouraging honest and open communication.
● Advocacy - Thinking, speaking and acting in ways that foster and promote inclusion, diversity,
equity and accessibility in all ways including but not limited to teamwork, communication, and
developing products (hardware and software).
In computing, Societal, Ethical, and Professional topics arise in all other knowledge areas and therefore
should arise in the context of other computing courses, not just siloed in an “SEP course”. These topics
should be covered in courses starting from year 1 (the only likely exception is SEP-Ethical-Analysis: Methods for Ethical Analysis, which could be delivered as part of a first-year course or via a seminar or an online independent study).
Presenting SEP topics as advanced topics only covered in later courses could create the incorrect
perception that SEP topics are only important at a certain level or complexity. While it is true that the
importance and consequence of SEP topics increases with level and complexity, introductory topics are
not devoid of SEP topics. Further, many SEP topics are best presented early to lay a foundation for
more intricate topics later in the curriculum.
Who should teach these topics is a complex question. When SEP topics arise in other courses, they are naturally taught by the instructor of that course, although at times bringing in expert educators from other disciplines (e.g., law, ethics) could be advantageous. Stand-alone courses in SEP could be taught by a team drawn from CS and other disciplines; although more logistically complicated, this may be a better approach than delivery by a single CS instructor. Regardless, who teaches SEP topics and/or courses warrants careful consideration.
At a minimum, the SEP CS Core learning outcomes* are best covered in the context of courses covering other knowledge areas; ideally, the SEP KA Core hours are as well. *With the likely exception of SEP-Ethical-Analysis: Methods for Ethical Analysis, which could be delivered as discussed above.
At some institutions an in-depth dedicated course at the mid- or advanced-level may be offered
covering all recommended topics in both the CS Core and KA Core knowledge units in close
coordination with learning outcomes best covered in the context of courses covering other knowledge
areas. Such a course could include:
● SEP-Context (5 hours)
● SEP-Ethical-Analysis: Methods for Ethical Analysis (3 hours)
● SEP-Professional-Ethics: Professional Ethics (4 hours)
● SEP-IP (2 hours)
● SEP-Privacy: Privacy and Civil Liberties (3 hours)
● SEP-Communication (3 hours)
● SEP-Sustainability (2 hours)
● SEP-History (2 hours)
● SEP-Economies: Economies of Computing (1 hour)
● SEP-Security: Security Policies, Laws and Computer Crimes (3 hours)
● SEP-IDEA: Inclusion, Diversity, Equity, and Accessibility (4 hours)
At some institutions a dedicated minimal course may be offered covering the CS Core knowledge
units in close coordination with learning outcomes best covered in the context of courses covering other
knowledge areas. Such a course could include:
● SEP-Context (3 hours)
● SEP-Ethical-Analysis: Methods for Ethical Analysis (2 hours)
● SEP-Professional-Ethics (2 hours)
● SEP-IP (1 hour)
● SEP-Privacy: Privacy and Civil Liberties (2 hours)
● SEP-Communication (2 hours)
● SEP-Sustainability (1 hour)
● SEP-History (1 hour)
● SEP-Security: Security Policies, Laws and Computer Crimes (2 hours)
● SEP-IDEA: Inclusion, Diversity, Equity, and Accessibility (2 hours)
References
1. Emanuelle Burton, Judy Goldsmith, Nicholas Mattei, Cory Siler, and Sara-Jo Swiatek. 2023.
Teaching Computer Science Ethics Using Science Fiction. In Proceedings of the 54th ACM
Technical Symposium on Computer Science Education V. 2 (SIGCSE 2023). Association for
Computing Machinery, New York, NY, USA, 1184. https://doi.org/10.1145/3545947.3569618
2. Randy Connolly. 2020. Why computing belongs within the social sciences. Commun. ACM 63, 8
(August 2020), 54–59. https://doi.org/10.1145/3383444
3. Casey Fiesler. Tech Ethics Curricula: A Collection of Syllabi Used to Teach Ethics in Technology Across Many Universities. https://cfiesler.medium.com/tech-ethics-curricula-a-collection-of-syllabi-3eedfb76be18
4. Casey Fiesler. Tech Ethics Readings: A Spreadsheet of Readings Used to Teach Ethics in Technology.
5. Stanford Embedded EthiCS, Embedding Ethics in Computer Science.
https://embeddedethics.stanford.edu/
6. Jeremy Weinstein, Rob Reich, and Mehran Sahami. System Error: Where Big Tech Went Wrong and How We Can Reboot. Hodder Paperbacks, 2023.
7. Ronald Baecker. Computers and Society: Modern Perspectives. Oxford University Press, 2019.
8. Embedded EthiCS @ Harvard: bringing ethical reasoning into the computer science curriculum.
https://embeddedethics.seas.harvard.edu/about
Systems Fundamentals (SF)
Preamble
A computer system is a set of hardware and software infrastructures upon which applications are
constructed. Computer systems have become a pillar of people's daily lives. As such, learning how computer systems work, gaining the skills to use and design them, and understanding their fundamental rationale and principles are essential to equip students with the competencies needed for a career in computer science.
In the computer science curriculum, the study of computer systems typically spans multiple courses, including, but not limited to, operating systems, parallel and distributed systems, communication networks, computer architecture and organization, and software engineering. The Systems Fundamentals knowledge area, as suggested by its name, focuses on the fundamental
concepts in computer systems that are shared by these courses within their respective cores. The goal
of this knowledge area is to present an integrative view of these fundamental concepts in a unified
albeit simplified fashion, providing a common foundation for the different specialized mechanisms and
policies appropriate to the particular domain area. These concepts include an overview of computer
systems, basic concepts such as state and state transition, resource allocation and scheduling, and so
on.
Among the changes since CS 2013:
11. Renamed the unit of reliability through redundancy to system reliability.
Core Hours
Knowledge Units
Learning Outcomes:
1. Describe the basic building blocks of computers and their role in the historical development of
computer architecture.
2. Design a simple logic circuit using the fundamental building blocks of logic design to solve a simple
problem (e.g., adder).
3. Use tools for capture, synthesis, and simulation to evaluate a logic circuit design.
4. Describe how computing systems are constructed of layers upon layers, based on separation of
concerns, with well-defined interfaces, hiding details of low layers from the higher layers.
5. Describe how hardware, the OS, virtual machines, and applications form additional layers of interpretation/processing.
6. Describe the mechanisms of how errors are detected, signaled back, and handled through the
layers.
7. Construct a simple program (e.g., a TCP client/server) using methods of layering, error detection and recovery, and reflection of error status across layers (a minimal sketch follows this list).
8. Find bugs in a layered program by using tools for program tracing, single stepping, and debugging.
9. Understand the concepts of strong vs. weak scaling, i.e., how performance is affected by the scale of the problem vs. the scale of resources to solve the problem. This can be motivated by simple, real-world examples.
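For illustration, outcome 7 might be addressed with a minimal Python sketch such as the following, which layers an ASCII application protocol over TCP and reflects an error status back across the layer boundary; the address, port, and message are illustrative assumptions, not prescriptions of this knowledge unit.

    import socket
    import threading

    HOST, PORT = "127.0.0.1", 50007  # assumed free local address/port

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((HOST, PORT))
    srv.listen(1)                    # listen before the client connects, avoiding a race

    def serve_one():
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)              # transport layer delivers raw bytes
            try:
                text = data.decode("ascii")     # application layer interprets them
                conn.sendall(b"OK " + text.upper().encode("ascii"))
            except UnicodeDecodeError:          # error detected at this layer...
                conn.sendall(b"ERR not ascii")  # ...and reflected back to the peer

    t = threading.Thread(target=serve_one)
    t.start()
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as c:
        c.connect((HOST, PORT))
        c.sendall(b"hello layers")
        print(c.recv(1024).decode("ascii"))     # prints: OK HELLO LAYERS
    t.join()
    srv.close()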
Learning Outcomes:
1. Describe the differences between digital and analog systems, and between discrete and continuous systems, and give real-world examples of each.
2. Describe computations as a system characterized by a known set of configurations with transitions
from one unique configuration (state) to another (state).
3. Describe the distinction between systems whose output is only a function of their input (stateless)
and those with memory/history (stateful).
4. Develop state machine descriptions for simple problem statement solutions (e.g., traffic light sequencing, pattern recognizers); a minimal sketch follows this list.
5. Describe a computer as a state machine that interprets machine instructions.
6. Explain how a program or network protocol can also be expressed as a state machine, and that
alternative representations for the same computation can exist.
7. Derive time-series behavior of a state machine from its state machine representation (e.g., TCP
connection management state machine).
8. Write a simple sequential program and a simple parallel version of the same program.
9. Evaluate the performance of simple sequential and parallel versions of a program with different
problem sizes, and be able to describe the speed-ups achieved.
10. Demonstrate on an execution timeline that parallel events and operations can take place simultaneously (i.e., at the same time), and explain how work can be performed in less elapsed time if this parallelism can be exploited.
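For illustration, outcome 4 could be demonstrated with a minimal Python sketch of a traffic-light sequencer; the state names and number of steps are illustrative only.

    # States and a pure transition function: the next state depends only on
    # the current state, so the machine's time-series behavior can be derived.
    TRANSITIONS = {"green": "yellow", "yellow": "red", "red": "green"}

    def step(state):
        return TRANSITIONS[state]

    state = "red"
    for _ in range(6):
        print(state, end=" -> ")
        state = step(state)
    print(state)   # red -> green -> yellow -> red -> green -> yellow -> red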
SF-C: Resource Allocation and Scheduling
CS Core:
1. Different types of resources (e.g., processor share, memory, disk, net bandwidth)
2. Common scheduling algorithms (e.g., first-come-first-serve scheduling, priority-based scheduling, fair scheduling, and preemptive scheduling)
KA Core:
1. Advantages and disadvantages of common scheduling algorithms
Learning Outcomes:
1. Define how finite computer resources (e.g., processor share, memory, storage and network
bandwidth) are managed by their careful allocation to existing entities.
2. Describe how common scheduling algorithms work.
3. Describe the pros and cons of common scheduling algorithms.
4. Implement common scheduling algorithms and evaluate their performance (a minimal sketch follows this list).
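For illustration, outcomes 2-4 might be approached with a minimal Python sketch of first-come-first-serve (FCFS) scheduling evaluated by average waiting time; the process names and burst times are illustrative assumptions.

    def fcfs(processes):
        """Run (name, burst) pairs in arrival order; return waiting times."""
        clock, waits = 0, {}
        for name, burst in processes:   # non-preemptive, arrival order
            waits[name] = clock         # time spent waiting before running
            clock += burst
        return waits

    procs = [("P1", 24), ("P2", 3), ("P3", 3)]
    waits = fcfs(procs)
    print(waits)                             # {'P1': 0, 'P2': 24, 'P3': 27}
    print(sum(waits.values()) / len(waits))  # average wait: 17.0
    # Running the shortest jobs first instead drops the average wait to 3.0,
    # illustrating one tradeoff among scheduling policies.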
Learning Outcomes:
1. Explain the importance of locality in determining system performance.
2. Calculate average memory access time (AMAT) and describe the tradeoffs in memory hierarchy performance in terms of capacity, miss/hit rate, and access time (a worked example follows this list).
3. Explain why it is important to isolate and protect the execution of individual programs and
environments that share common underlying resources.
4. Describe how the concept of indirection can create the illusion of a dedicated machine and its
resources even when physically shared among multiple programs and environments.
5. Measure the performance of two application instances running on separate virtual machines,
and determine the effect of performance isolation.
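For illustration, outcome 2 reduces to a short calculation; the latencies and miss rate below are illustrative assumptions for a single cache level in front of main memory.

    # AMAT = hit_time + miss_rate * miss_penalty
    hit_time = 1        # cycles on a cache hit
    miss_rate = 0.05    # fraction of accesses that miss the cache
    miss_penalty = 100  # cycles to fetch from main memory

    amat = hit_time + miss_rate * miss_penalty
    print(amat)         # 6.0 cycles: even a 5% miss rate dominates access time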
1. Performance figures of merit
2. Workloads and representative benchmarks, and methods of collecting and analyzing performance
figures of merit
3. CPI (Cycles per Instruction) equation as a tool for understanding tradeoffs in the design of instruction sets, processor pipelines, and memory system organizations.
4. Amdahl’s Law: the part of the computation that cannot be sped up limits the effect of the parts that
can
5. Order of magnitude analysis (Big O notation)
6. Analysis of slow and fast paths of a system
7. Events and their effects on performance (e.g., instruction stalls, cache misses, page faults)
KA Core:
1. Analytical tools to guide quantitative evaluation
2. Understanding layered systems, workloads, and platforms, their implications for performance, and
the challenges they represent for evaluation
3. Microbenchmarking pitfalls
Learning Outcomes:
1. Explain how the components of system architecture contribute to improving its performance.
2. Explain the circumstances in which a given system performance metric is useful.
3. Explain the usage and inadequacies of benchmarks as a measure of system performance.
4. Describe Amdahl’s law and discuss its limitations (a worked example follows this list).
5. Use limit studies or simple calculations to produce order-of-magnitude estimates for a given
performance metric in a given context.
6. Use software tools to profile and measure program performance.
7. Design and conduct a performance-oriented experiment of a common system (e.g., an OS
and Spark).
8. Conduct a performance experiment on a layered system to determine the effect of a system
parameter on system performance.
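For illustration, outcome 4 can be made concrete with a short Python sketch of Amdahl's Law; the 0.9 parallel fraction is an illustrative assumption.

    def amdahl_speedup(parallel_fraction, s):
        """Overall speedup when a fraction of the work is sped up by factor s."""
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / s)

    for s in (2, 10, 100, 1_000_000):
        print(s, round(amdahl_speedup(0.9, s), 2))
    # Even as s grows without bound, speedup approaches 1 / (1 - 0.9) = 10:
    # the serial fraction caps overall gains, which is the law's key limitation.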
KA Core:
Other approaches to reliability
Learning Outcomes:
1. Explain the distinction between program errors, system errors, and hardware faults (e.g., bad memory) and exceptions (e.g., attempt to divide by zero).
2. Articulate the distinction between detecting, handling, and recovering from faults, and the methods for their implementation.
3. Describe the role of error correction codes in providing error checking and correction techniques in memories, storage, and networks.
4. Apply simple algorithms for exploiting redundant information for the purposes of data correction (a minimal sketch follows this list).
5. Compare different error detection and correction methods for their data overhead, implementation complexity, and relative execution time for encoding, detecting, and correcting errors.
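For illustration, outcome 4 might be demonstrated with a 3x repetition code and majority-vote decoding; real systems use far denser codes (e.g., Hamming, Reed-Solomon), so this sketch is illustrative only.

    def encode(bits):
        return [b for b in bits for _ in range(3)]    # repeat each bit 3 times

    def decode(coded):
        out = []
        for i in range(0, len(coded), 3):
            out.append(1 if sum(coded[i:i + 3]) >= 2 else 0)  # majority vote
        return out

    coded = encode([1, 0, 1])
    coded[4] ^= 1            # flip one bit to simulate a transient fault
    print(decode(coded))     # [1, 0, 1]: the single error is corrected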
Learning Outcomes:
1. Describe some common system security issues and give examples.
2. Describe some countermeasures against system security issues.
KA Core:
1. Designs of representative systems (e.g., Apache web server, Spark and Linux)
Learning Outcomes:
1. Describe common criteria of system design.
2. Given the functionality requirements of a system and its key design criteria, provide a high-level
design of this system.
3. Describe the design of some representative systems.
Professional Dispositions
● Meticulousness: students must pay attention to detail and to different perspectives when learning about and evaluating systems.
● Adaptiveness: students must be flexible and adaptive when designing systems. Different
systems have different requirements, constraints and working scenarios. As such, they require
different designs. Students must be able to make appropriate design decisions accordingly.
Math Requirements
Required:
● Discrete Math:
o Sets and relations
o Basic graph theory
o Basic logic
● Linear Algebra:
o Basic matrix operations
● Probability and Statistics
o Random variable
o Bayes theorem
o Expectation and variance
o Cumulative distribution function and probability density function
Desirable:
● Basic queueing theory
● Basic stochastic processes
Shared Topics:
● Networking and communication (NC)
● Operating system (OS)
● Architecture and organization (AR)
● Parallel and distributed computing (PDC)
● SF-G: System Security - 6 hours
● SF-H: System Design - 6 hours
Pre-requisites:
● Sets and relations, basic graph theory and basic logic from Discrete Math
● Basic matrix operations from Linear Algebra
● Random variable, Bayes theorem, expectation and variance, cumulative distribution function and probability density function from Probability and Statistics
Skill statement: A student who completes this course should be able to (1) understand the fundamental
concepts in computer systems; (2) understand the key design principles, in terms of performance,
reliability and security, when designing computer systems; (3) deploy and evaluate representative
complex systems (e.g., MySQL and Spark) based on their documentation; and (4) design and
implement simple computer systems (e.g., an interactive program, a simple web server, and a simple
data storage system).
Committee
Members:
● Doug Lea, State University of New York at Oswego, Oswego, NY, USA
● Monica D. Anderson, University of Alabama, Tuscaloosa, AL, USA
● Matthias Hauswirth, University of Lugano, Lugano, Switzerland
● Ennan Zhai, Alibaba Group, Hangzhou, China
● Yutong Liu, Shanghai JiaoTong University, Shanghai, China
Contributors:
● Michael S. Kirkpatrick, James Madison University, Harrisonburg, VA, USA
● Linghe Kong, Shanghai JiaoTong University, Shanghai, China
Specialized Platform Development (SPD)
Preamble
The Specialized Platform Development (SPD) Knowledge Area (KA) addresses the creation of software platforms that serve specific needs and requirements in particular domains. Specialized platforms, such as those used by healthcare providers, financial institutions, or transportation companies, are tailored to meet specific needs. Developing a specialized platform typically involves several key stages, i.e., requirements, design, development, deployment, and maintenance.
particular KA is often called software platform development, since the specialized development takes place in the software stages of multi-platform development.
● Increase of Computer Science Core Hours: Based on the already mentioned needs, the SPD
beta version has increased the number of computer science course hours from 0 to 9. The KA
subsets the web and mobile knowledge units (often the most closely related units in CS Core) into
foundations and specialized platforms core hours to provide flexibility and adaptability. This division
allows programs at different institutions to offer different interests, concentrations, or degrees that
focus on different application areas, where many of these concepts intersect the CS core.
Therefore, the common aspects of web and mobile foundations appear in many CS programs' cores. Finally, the rest of the knowledge units permit the curriculum to have an extended and flexible
number of KA core hours.
● Renamed old knowledge units and incorporated new ones: in the spirit of capturing the future
technology and societal responsibilities, the Robotics, Embedded Systems, and Society, Ethics,
and Professionalism (SEP) knowledge units were introduced in this version. These KUs work harmoniously with other KAs, consistent with the topics and concepts covered in those KAs’ knowledge units.
Specialized platform development provides a deep understanding of a particular user group’s needs and requirements, and draws on other knowledge areas to design and build a platform that meets those needs. In doing so, SPD helps to streamline workflows, improve efficiency, and drive innovation across the recommended curriculum discussed in CS2023.
Core Hours
Knowledge Units            CS Core    KA Core
SPD/Common Aspects         3          *
SPD/Web Platforms                     *
SPD/Mobile Platforms                  *
SPD/Robot Platforms                   *
SPD/Embedded Platforms                *
SPD/Game Platforms                    *
Total                      3
Knowledge Units
SPD-Common: Common Aspects
CS Core:
1. Overview of development platforms (i.e., web, mobile, game, robotics, embedded, and interactive)
a. Input/sensors/control devices/haptic devices
b. Resource constraints
i. Computational
ii. Data storage
iii. Memory
iv. Communication
c. Requirements
i. security, uptime availability, fault tolerance (See also: SE-Reliability)
d. Output/actuators/haptic devices
2. Programming via platform-specific Application Programming Interface (API) vs traditional
application construction
3. Overview of platform languages (e.g., Python, Swift, Lua, Kotlin)
4. Programming under platform constraints and requirements (e.g., available development tools, security considerations)
5. Techniques for learning and mastering a platform-specific programming language.
Non-Core:
6. Analyzing requirements for web applications
7. Computing services (see also, DM-NoSQL: 8)
a. Cloud Hosting
b. Scalability (e.g., Autoscaling, Clusters)
c. How to estimate costs for these services (based on requirements or time usage)
8. Data management (See also: DM-Core)
a. Data residency (where the data is located and what paths can be taken to access it)
b. Data integrity: guaranteeing data is accessible and guaranteeing that data is deleted when
required
9. Architecture
a. Monoliths vs. Microservices
b. Micro-frontends
c. Event-Driven vs. RESTful architectures: advantages and disadvantages
d. Serverless, cloud computing on demand
10. Storage solutions (See also: DM-Relational/ DM-SQL)
a. Relational Databases
b. NoSQL databases
SPD-Mobile: Mobile Platforms
Non-Core:
4. Development
a. Native versus cross-platform development
b. Software design/architecture patterns for mobile applications (See also: SE-Design)
5. Mobile platform constraints
a. Responsive user interface design (see also: HCI-Accessibility and Inclusive Design)
b. Diversity and mobility of devices
c. User experiences differences (e.g., between mobile and web-based applications)
d. Power and performance tradeoff
6. Mobile computing affordances
a. Location-aware applications
b. Sensor-driven computing (e.g., gyroscope, accelerometer, health data from a watch)
c. Telephony and instant messaging
d. Augmented reality (See also: GIT-Immersion)
7. Specification and testing (See also: SDF: Software Development Practices, SE-Validation)
8. Asynchronous computing (See also: PDC); a minimal sketch follows this list
a. Difference from traditional synchronous programming
b. Handling success via callbacks
c. Handling errors asynchronously
d. Testing asynchronous code and typical problems in testing
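For illustration, topics 8b-8d might be demonstrated with a minimal Python asyncio sketch; fetch() and its URLs are illustrative stand-ins for real network calls, not part of any prescribed API.

    import asyncio

    async def fetch(url):
        await asyncio.sleep(0.1)       # simulated network latency
        if "bad" in url:
            raise ConnectionError(f"cannot reach {url}")
        return f"payload from {url}"

    async def main():
        results = await asyncio.gather(
            fetch("https://example.com/ok"),
            fetch("https://example.com/bad"),
            return_exceptions=True,    # errors are delivered as values
        )
        for r in results:
            if isinstance(r, Exception):   # handle errors asynchronously
                print("error:", r)
            else:
                print("success:", r)

    asyncio.run(main())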
SPD-Embedded: Embedded Platforms
This Knowledge unit considers embedded computing platforms and their applications. Embedded
platforms cover knowledge ranging from sensor technology to ubiquitous computing applications.
Non-Core:
1. Introduction to the unique characteristics of embedded systems
a. Hard real-time vs. soft real-time and non-real-time systems
b. Resource constraints, such as memory profiles, deadlines (See also: AR-Memory: 2)
2. API for custom architectures
a. GPU technology (See also: AR-Heterogeneity: 1, GIT-Interfaces (AI))
b. Field Programmable Gate Arrays (See also: AR-Logic: 2 )
c. Cross platform systems
3. Embedded Systems
a. Microcontrollers
b. Interrupts and feedback
c. Interrupt handlers in high-level languages (See also: SF-Overview: 5); a minimal sketch follows this list
d. Hard and soft interrupts and trap-exits (See also: OS-Principles: 6)
e. Interacting with hardware, actuators, and sensors
f. Energy efficiency
g. Loosely timed coding and synchronization
h. Software adapters
4. Real-time systems
a. Hard real-time systems vs. soft real-time systems (See also: OS-Real-time: 3)
b. Timeliness
c. Time synchronization/scheduling
d. Prioritization
e. Latency
f. Compute jitter
5. Memory management
a. Mapping programming construct (variable) to a memory location (See also: AR-Memory)
b. Shared memory (See also: OS-Memory)
c. Manual memory management
d. Garbage collection (See also: FPL-Language Translation)
6. Safety considerations and safety analysis (See also: SEP-Context, SEP-Professional)
7. Sensors and actuators
8. Embedded programming
9. Real-time resource management
10. Analysis and verification
11. Application design
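For illustration, topics 3b-3d might be previewed in a high-level language using POSIX signals as a software analogue of hardware interrupts; the 100 ms period is an illustrative assumption, and on a microcontroller the handler would instead be registered on an interrupt vector (this sketch requires a Unix-like system).

    import signal
    import time

    ticks = 0

    def on_alarm(signum, frame):
        """Runs asynchronously to the main loop, like an interrupt service routine."""
        global ticks
        ticks += 1                # keep handlers short and non-blocking

    signal.signal(signal.SIGALRM, on_alarm)          # register the handler
    signal.setitimer(signal.ITIMER_REAL, 0.1, 0.1)   # fire every 100 ms

    while ticks < 5:              # foreground work proceeds between interrupts
        time.sleep(0.01)
    signal.setitimer(signal.ITIMER_REAL, 0)          # disarm the timer
    print("handled", ticks, "timer interrupts")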
3. Interface with sensors/actuators
4. Debug a problem with an existing embedded platform
5. Identify different types of embedded architectures
6. Evaluate which architecture is best for a given set of requirements
7. Design and develop software to interact with and control hardware
8. Design methods for real-time systems
9. Evaluate real-time scheduling and schedulability analysis
10. Evaluate formal specification and verification of timing constraints and properties
Non-Core:
1. Historical and contemporary platforms for games (See also: AR-A: Digital Logic and Digital Systems; AR-A: Interfacing and Communication)
a. Evolution of Game Platforms
(e.g., Brown Box to Metaverse and beyond; Improvement in Computing Architectures
(CPU and GPU); Platform Convergence and Mobility)
b. Typical Game Platforms
(e.g., Personal Computer; Home Console; Handheld Console; Arcade Machine;
Interactive Television; Mobile Phone; Tablet; Integrated Head-Mounted Display;
Immersive Installations and Simulators; Internet of Things enabled Devices; CAVE
Systems; Web Browsers; Cloud-based Streaming Systems)
c. Characteristics and Constraints of Different Game Platforms
(e.g., Features (local storage, internetworking, peripherals); Run-time performance
(GPU/CPU frequency, number of cores); Chipsets (physics processing units, vector co-
processors); Expansion Bandwidth (PCIe); Network throughput (Ethernet); Memory
types and capacities (DDR/GDDR); Maximum stack depth; Power consumption; Thermal
design; Endianness)
d. Typical Sensors, Controllers, and Actuators (See also: GIT-Interaction)
(e.g., typical control system designs—peripherals (mouse, keypad, joystick), game
controllers, wearables, interactive surfaces; electronics and bespoke hardware;
computer vision, inside-out tracking, and outside-in tracking; IoT-enabled electronics and
i/o).
e. Esports Ecosystems
(e.g., evolution of gameplay across platforms; games and esports; game events such as
LAN/arcade tournaments and international events such as the Olympic Esports Series;
LAN/arcade tournaments and international events such the Olympic Esports Series;
streamed media and spectatorship; multimedia technologies and broadcast
management; professional play; data and machine learning for coaching and training)
2. Real-time Simulation and Rendering Systems
a. CPU and GPU architectures: (See also, AR-Heterogeneity: 2)
(e.g., Flynn’s taxonomy; parallelization; instruction sets; common components—graphics
compute array, graphics memory controller, video graphics array basic input/output
system; bus interface; power management unit; video processing unit; display interface)
b. Pipelines for physical simulations and graphical rendering: (See also, GIT-Rendering)
(e.g., tile-based, immediate-mode)
c. Common Contexts for Algorithms, Data Structures, and Mathematical Functions: (See
also, MSF:?, AL-Fundamentals)
(e.g., game loops; spatial partitioning, viewport culling, and level of detail; collision detection and resolution; physical simulation; behavior for intelligent agents; procedural content generation); a minimal game-loop sketch follows this topic list
d. Media representations (See also, GIT-Fundamentals: 8, GIT-Geometric: 3)
(e.g., i/o, and computation techniques for virtual worlds: audio; music; sprites; models
and textures; text; dialogue; multimedia (e.g., olfaction, tactile)).
3. Game Development Tools and Techniques
a. Programming Languages:
(e.g., C++; C#; Lua; Python; JavaScript)
b. Shader Languages (e.g., HLSL; GLSL; ShaderGraph)
c. Graphics Libraries and APIs (See also, GIT-Rendering, HCI-System Design: 3)
(e.g., DirectX; SDL; OpenGL; Metal; Vulkan; WebGL)
d. Common Development Tools and Environments: (See also: SDF-Practices: 2, SE-Tools)
(e.g., IDEs; Debuggers; Profilers; Version Control Systems including those handling
binary assets; Development Kits and Production/Consumer Kits; Emulators)
4. Game Engines
a. Open Game Engines
(e.g., Unreal; Unity; Godot; CryEngine; Phyre; Source 2; Pygame and Ren’Py; Phaser;
Twine; SpringRTS)
b. Techniques (See also: AR-Performance and Energy Efficiency; SE-Requirements)
(e.g., Ideation; Prototyping; Iterative Design and Implementation; Compiling Executable
Builds; Development Operations and Quality Assurance—Play Testing and Technical
Testing; Profiling; Optimization; Porting; Internationalization and Localization;
Networking)
5. Game Design
a. Vocabulary
(e.g., game definitions; mechanics-dynamics-aesthetics model; industry terminology;
experience design; models of experience and emotion)
b. Design Thinking and User-Centered Experience Design (See also: SE-?)
(e.g., methods of designing games; iteration, incrementing, and the double-diamond;
phases of pre- and post-production; quality assurance, including alpha and beta testing;
stakeholder and customer involvement; community management)
c. Genres
(e.g., adventure; walking simulator; first-person shooter; real-time strategy; multiplayer
online battle arena (MOBA); role-playing game (rpg))
d. Audiences and Player Taxonomies (See also: HCI-Understanding the User: 5)
(e.g., people who play games; diversity and broadening participation; pleasures, player
types, and preferences; Bartle, Yee)
e. Proliferation of digital game technologies to domains beyond entertainment: (See also:
AI-E: Applications and Societal Impact)
(e.g., Education and Training; Serious Games; Virtual Production; Esports; Gamification;
Immersive Experience Design; Creative Industry Practice; Artistic Practice; Procedural
Rhetoric.)
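For illustration, the game-loop context in topic 2c might be sketched as a fixed-timestep simulation decoupled from rendering; the constants and the update/render stubs are illustrative assumptions, not drawn from any particular engine.

    import time

    DT = 1 / 60.0                        # simulate physics at 60 Hz

    def update(state, dt):
        state["x"] += state["vx"] * dt   # one physical simulation step
        return state

    def render(state):
        print(f"x = {state['x']:.3f}")   # a real engine would draw a frame here

    state = {"x": 0.0, "vx": 1.0}
    accumulator, previous = 0.0, time.perf_counter()
    for _ in range(10):                  # a real loop runs until the player quits
        now = time.perf_counter()
        accumulator += now - previous
        previous = now
        while accumulator >= DT:         # catch up in fixed steps
            state = update(state, DT)
            accumulator -= DT
        render(state)
        time.sleep(DT)                   # stand-in for frame pacing / vsync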
b. Design tools requiring low-latency feedback loops
i. rendering tools
ii. graphic design tools
3. Programming by Prompt
a. How generative AI (e.g., OpenAI’s ChatGPT, OpenAI’s Codex, GitHub’s Copilot) and LLMs are accessed and interacted with
b. Quantum Platforms (See also: AR-Quantum, FPL-Quantum: Quantum Computing)
i. Program quantum logic operators in quantum machines
ii. Use API for available quantum services
iii. Signal analysis / Fourier analysis / Signal processing (for music composition,
audio/RF analysis) (See also: GIT-Image)
SPD-SEP/Mobile
Topics
1. Privacy and data protection
2. Accessibility in mobile design
3. Security and cybersecurity
4. Social impacts of mobile technology
5. Ethical use of AI and algorithms
3. Demonstrate proficiency in secure coding practices to mitigate risks associated with various
security threats in mobile development.
4. Analyze the broader social impacts of mobile technology, including its influence on
communication patterns, relationships, and mental health.
5. Comprehend the ethical considerations related to the use of AI in mobile applications, ensuring
algorithms are unbiased and fair.
SPD-SEP/Web
Topics
1.
Illustrative Learning Outcomes
1. Understand how mobile computing impacts communications and the flow of information within
society
2. Design mobile apps that have made daily tasks easier/faster
3. Recognize how the ubiquity of mobile computing has affected work-life balance
4. Understand how mobile computing impacts health monitoring and healthcare services
5. Comprehend how mobile apps are used to educate on and help achieve UN sustainability goals
SPD-SEP/Game
Topics
1. Intellectual Property Rights in Creative Industries:
a. Intellectual Property Ownership: copyright; trademark; design right; patent; trade secret;
civil versus criminal law; international agreements; procedural content generation and
the implications of generative artificial intelligence.
b. Licensing: usage and fair usage exceptions; open-source license agreements;
proprietary and bespoke licensing; enforcement.
2. Fair Access to Play:
a. Game Interface Usability: user requirements; affordances; ergonomic design; user
research; experience measurement; heuristic evaluation methods for games.
b. Game Interface Accessibility: forms of impairment and disability; means to facilitate
access to games; universal design; legislated requirements for game platforms;
compliance evaluation; challenging game mechanics and access.
3. Game-Related Health and Safety:
a. Injuries in Play: ways of mitigating common upper body injuries, such as repetitive strain
injury; exercise psychology and physiotherapy in esports.
b. Risk Assessment for Events and Manufacturing: control of substances hazardous to
health (COSHH); fire safety; electrical and electronics safety; risk assessment for games
and game events; risk assessment for manufacturing.
c. Mental Health: motivation to play; gamification and gameful design; game psychology—
internet gaming disorder.
4. Platform Hardware Supply Chain and Sustainability:
a. Platform Lifecycle: platform composition—materials, assembly; mineral excavation and
processing; power usage; recycling; planned obsolescence.
b. Modern Slavery: supply-chains; forced labour and civil rights; working conditions; detection and remission; certification bodies and charitable endeavours.
5. Representation in the Media and Industry:
a. Inclusion: identity and identification; exclusion of characters diverse audiences identify
with; media representation and its effects; media literacy; content analysis; stereotyping;
sexualization.
b. Equality: histories and controversies, such as gamergate; quality of life in the industry;
professional discourse and conduct in business contexts; pathways to game
development careers; social mobility; experience of developers from different backgrounds and identities; gender and technology.
SPD-SEP/Robotics
SPD-SEP/Interactive
This knowledge unit captures the society, ethics, and professionalism aspects from the specialized
platform development viewpoint. Every stage from the software development perspective impacts the
SEP knowledge unit.
Topics
1. Augmented technology and societal impact
2. Robotic design
3. Graphical user interface considerations for DEI
4. Recognizing data privacy and implications
5. LLMs and global compliance regulations, such as copyright law
6. Mobile development and global equality
Professional Dispositions
Math Requirements
Required:
Desired:
● Equations and Basic Algebra
● Geometry (e.g., 2d and 3d coordinate systems—cartesian and polar—points, lines, and angles)
● Trigonometry
● Vectors, Matrices, and Quaternions—linear transformations, affine transformations, perspective
projections, exponential maps, rotation, etc.
● Geometric primitives
● Rigid body kinematics and Newtonian physics
● Signal processing
● Coordinate-space transformations
● Parametric curves
● Binary and Hexadecimal Number Systems
● Logic and Boolean Algebra
● Calculus
● Linear Algebra
● Probability/Statistics (e.g., dynamic systems; visualization, such as algorithmically generated Tuftian-style displays)
● Discrete Math/Structures (e.g., graphs for process control and path search)
Courses Common to CS and KA Core Approaches
Specialized Platform Development
● SPD-Common
● SPD-Web
● SPD-Mobile
● SPD-Robotic
● SPD-Interactive
CS for Non-Majors
● SPD-Common
Mobile Development 8 Week Course
● SPD-Mobile
○ API Design and Development
○ User-Centered Design and the Mobile Platform
○ Software Engineering Applications in Mobile Computing (covers design patterns, testing,
async programming, and the like in a mobile context)
○ Challenges with Mobile Computing Security
○ Societal Impact of Mobile Computing
○ Mobile Computing Capstone Course
Committee
Chair: Christian Servin (El Paso Community College, El Paso, TX, USA)
Members:
● Sherif G. Aly, The American University in Cairo, Egypt
● Yoonsik Cheon, The University of Texas at El Paso, El Paso, Texas, USA
● Eric Eaton, University of Pennsylvania, Philadelphia, PA, USA
● Claudia L. Guevara, Jochen Schweizer mydays Holding GmbH, Munich, Germany
● Larry Heimann, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA
● Amruth N. Kumar, Ramapo College of New Jersey, Mahwah, NJ, USA
● R. Tyler Pirtle, Google
● Michael James Scott, Falmouth University, UK
Contributors:
● Sean R. Piotrowski, Rider University, USA
● Mark O’Neil, Blackboard Inc., USA
● John DiGennaro, Qwickly
● Rory K. Summerley, London South Bank University, UK.
Core Topics and Hours
9. Additional depth on AI applications, growth, and impact (economic, societal, ethics)
AI / Fundamental Knowledge Representation and Reasoning (Explain; CS Core; 2 hours):
7. Types of representations
    a. Symbolic, logical
        i. Creating a representation from a natural language problem statement
    b. Learned subsymbolic representations
    c. Graphical models (e.g., naive Bayes, Bayes net)
8. Review of probabilistic reasoning, Bayes theorem (cross-reference with DS/Discrete Probability); a worked example follows
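For illustration, topic 8's review might include a worked Bayes-theorem calculation such as the following Python sketch; the prior, sensitivity, and false-positive rate are made-up numbers.

    p_d = 0.01                   # prior P(disease)
    p_pos_given_d = 0.95         # P(positive | disease): test sensitivity
    p_pos_given_not_d = 0.05     # P(positive | no disease): false-positive rate

    p_pos = p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d)
    p_d_given_pos = p_pos_given_d * p_d / p_pos      # Bayes theorem
    print(round(p_d_given_pos, 3))   # 0.161: the posterior remains small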
3. Simple statistical-based supervised learning such as Naive Bayes, decision trees (Apply/Develop/Evaluate)
    a. Focus on how they work without going into mathematical or optimization details; enough to understand and use existing implementations correctly (a minimal sketch follows this list)
4. The overfitting problem and controlling solution complexity (regularization, pruning – intuition only)
    a. The bias (underfitting) vs. variance (overfitting) tradeoff
5. Working with data
    a. Data preprocessing: importance and pitfalls
    b. Handling missing values (imputing, flag-as-missing)
        i. Implications of imputing vs. flag-as-missing
    c. Encoding categorical variables, encoding real-valued data
    d. Normalization/standardization
    e. Emphasis on real data, not textbook examples
6. Representations
    a. Hypothesis spaces and complexity
    b. Simple basis feature expansion, such as squaring univariate features
    c. Learned feature representations
7. Machine learning evaluation
    a. Separation of train, validation, and test sets
    b. Performance metrics for classifiers
    c. Estimation of test performance on held-out data
    d. Tuning the parameters of a machine learning model with a validation set
    e. Importance of understanding what your model is actually doing, where its pitfalls/shortcomings are, and the implications of its decisions
8. Basic neural networks
    a. Fundamentals of understanding how neural networks work and their training process, without details of the calculations
262
■ Parameter tuning
(grid/random search, via
cross-validation)
● Overview of reinforcement learning
● Two or more applications of machine
learning algorithms
○ E.g., medicine and health,
economics, vision, natural language,
robotics, game play
○ High-level overview of deep generative models (e.g., Diffusion, etc.), how they work, their uses, and their shortcomings/pitfalls
○ High-level overview of large language models (e.g., as of 2023, ChatGPT, Bard, etc.), how they work, their uses, and their shortcomings/pitfalls
● Societal impact of AI
○ Ethics
○ Fairness
○ Trust / explainability
AL Foundational 11a. Search O(n) (e.g., linear search of an array); Complexity 2b iii. Foundational complexity classes: Linear O(n); Strategies 1a. Brute force (Apply/Evaluate/Explain, CS core, 0.5 hours)
AL Foundational 12a. Sorting O(n²) (e.g., selection sort of an array); Complexity 2b v. Foundational complexity classes: Quadratic O(n²); Strategies 1a. Brute force (Apply/Evaluate/Explain, CS core, 0.5 hours)
AL Foundational 11b. Search O(log₂ n) (e.g., binary search of an array); Complexity 2b ii. Foundational complexity classes: Logarithmic; Strategies 1b ii. Decrease and conquer (Apply/Evaluate/Explain, 1 hour) (see the sketch below)
AL Foundational 12b. Sorting O(n log n) (e.g., Quick, Merge, Tim sort of an array); Complexity 2b iv. Foundational complexity classes: Log-linear; Strategies 1c. Divide-and-conquer (Apply/Evaluate/Explain, 1 hour)
AL Complexity 2b i. Foundational complexity classes: Constant O(1) (Explain); Strategies 1f. Time vs. space tradeoff (Explain, 1 hour)
AL Complexity 4. Tractability and intractability (4 hours)
● Foundational complexity classes: Exponential O(2ⁿ)
● P, NP and NP-complete complexity classes
● Reductions
● Problems: Traveling Salesperson, Knapsack, SAT
Strategies 1. Paradigms: exhaustive brute force, dynamic programming
Complexity 2b viii. Factorial complexity class: Factorial O(n!)
● All permutations, Hamiltonian circuit
● High-level synthesis
○ Register transfer notation
○ Hardware description language
(e.g. Verilog/VHDL/Chisel)
● System-on-chip (SoC) design flow
● Physical constraints
○ Gate delays
○ Fan-in and fan-out
○ Energy/power [Shared with
SPD]
○ Speed of light
AR Memory Hierarchy (CS core, 6 hours):
● Main memory organization and operations
● Persistent memory (e.g., SSD, standard disks) [Shared with OS]
● Latency, cycle time, bandwidth and interleaving
● Virtual memory (hardware support, cross-reference OS/Virtual Memory) [Shared with OS]
● Fault handling and reliability [Shared with OS]
● Reliability (cross-reference SF/Reliability through Redundancy)
  ○ Error coding
  ○ Data compression
  ○ Data integrity
● Non-von Neumann architectures
  ○ Processing-in-Memory (PIM)
● Introduction to instruction-level
parallelism (ILP)
● What is a Qubit? Superposition
and measurement. Photons as
qubits.
● Systems of two qubits.
Entanglement. Bell states. The
No-Signaling theorem.
● Axioms of QM: superposition principle,
measurement axiom, unitary evolution
● Single qubit gates for the circuit model
of quantum computation: X, Z, H.
● Two qubit gates and tensor products.
Working with matrices.
● The No-Cloning Theorem. The
Quantum Teleportation protocol.
● Algorithms
  ○ Simple quantum algorithms: Bernstein-Vazirani, Simon’s algorithm
  ○ Implementing Deutsch-Jozsa with Mach-Zehnder interferometers
  ○ Quantum factoring (Shor’s algorithm)
  ○ Quantum search (Grover’s algorithm)
● Implementation aspects
  ○ The physical implementation of qubits (there are currently nine qubit modalities)
  ○ Classical control of a Quantum Processing Unit (QPU)
  ○ Error mitigation and control; NISQ and beyond
● Emerging applications
  ○ Post-quantum encryption
  ○ The Quantum Internet
  ○ Adiabatic quantum computation (AQC) and quantum annealing
DM Core Database System Concepts (Explain, CS core, 2 hours):
● Purpose and advantages of database systems
● Components of database systems
● Design of core DBMS functions (e.g., query mechanisms, transaction management, buffer management, access methods)
● Database architecture, data independence, and data abstraction
● Transaction management
● Normalization
● Approaches for managing large volumes of data (e.g., NoSQL database systems, use of MapReduce) [crosslist PD]
● Distributed databases/cloud-based systems
● Structured, semi-structured, and unstructured databases
○ Decomposition of a schema; lossless-join and dependency-preservation properties of a decomposition
○ Normal forms (BCNF)
○ Denormalization (for efficiency)
(Explain, KA core, 4 hours)
○ ACID
○ Serializability
● Concurrency control [crosslist PD]
  ○ Two-phase locking
  ○ Deadlock handling strategies
● Recovery manager
  ○ Relation with buffer manager
DM Data Analytics: tbd
DM Data Security & Privacy: tbd
Object-Oriented Programming (Develop, CS core, 5 hours):
1. Imperative programming as a subset of object-oriented programming
2. Object-oriented design
   a. Decomposition into objects carrying state and having behavior
   b. Class-hierarchy design for modeling
3. Definition of classes: fields, methods, and constructors
4. Subclasses, inheritance (including multiple inheritance), and method overriding
5. Dynamic dispatch: definition of method-call (see the sketch after this list)
6. Exception handling
7. Object-oriented idioms for encapsulation
   a. Privacy, data hiding, and visibility of class members
   b. Interfaces revealing only method signatures
   c. Abstract base classes, traits and mixins
8. Dynamic vs. static properties
9. Composition vs. inheritance
10. Subtyping
   a. Subtype polymorphism; implicit upcasts in typed languages
   b. Notion of behavioral replacement: subtypes acting like supertypes
   c. Relationship between subtyping and inheritance
Functional Programming (Develop, CS core, 4 hours):
1. Lambda expressions and evaluation
   a. Variable binding and scope rules
   b. Parameter passing
   c. Nested lambda expressions and reduction order
2. Effect-free programming
   a. Function calls have no side effects, facilitating compositional reasoning
   b. Immutable variables and data copying vs. reduction
   c. Use of recursion vs. loops vs. pipelining (map/reduce)
3. Processing structured data (e.g., trees) via functions with cases for each data variant
   a. Functions defined over compound data in terms of functions applied to the constituent pieces
   b. Persistent data structures
4. Using higher-order functions (taking, returning, and storing functions) (see the sketch after this list)
Logic Programming (Explain, KA core, 3 hours):
1. Universal vs. existential quantifiers
2. First-order predicate logic vs. higher-order logic
3. Expressing complex relations using logical connectives and simpler relations
4. Definitions of Horn clause, facts, goals, and subgoals
5. Unification and the unification algorithm; unification vs. assertion vs. expression evaluation (see the sketch after this list)
6. Mixing relations with functions
7. Cuts, backtracking and non-determinism
8. Closed-world vs. open-world assumptions
5. Using a reactive framework (Develop, KA core, 2 hours)
a. Defining event handlers/listeners
b. Parameterization of event senders
and event arguments
c. Externally-generated events and
program-generated events
6. Separation of model, view, and
controller
6. Futures (Explain, KA core, 2 hours) (see the sketch after this list)
7. Language support for data parallelism
such as forall, loop unrolling,
map/reduce
8. Effect of memory-consistency models
on language semantics and correct
code generation
9. Representational State Transfer
Application Programming Interfaces
(REST APIs)
10. Technologies and approaches: cloud
computing, high performance
computing, quantum computing,
ubiquitous computing
11. Overheads of message passing
12. Granularity of program for efficient
exploitation of concurrency.
13. Concurrency and other programming
paradigms (e.g., functional).
Type Systems (Develop, CS core, 3 hours):
1. A type as a set of values together with a set of operations
   a. Primitive types (e.g., numbers, Booleans)
   b. Compound types built from other types (e.g., records, unions, arrays, lists, functions, references) using set operations
2. Association of types to variables, arguments, results, and fields
3. Type safety as an aspect of program correctness
4. Type safety and errors caused by using values inconsistently given their intended types
5. Statically-typed vs. dynamically-typed programming languages
6. Goals and limitations of static and dynamic typing
   a. Detecting and eliminating errors as early as possible
7. Generic types (parametric polymorphism) (see the sketch after this list)
   a. Definition and advantages of polymorphism: parametric, subtyping, overloading and coercion
   b. Comparison of monomorphic and polymorphic types
   c. Comparison with ad hoc polymorphism (overloading) and subtype polymorphism
   d. Generic parameters and typing
   e. Use of generic libraries such as collections
   f. Prescriptive vs. descriptive polymorphism
   g. Implementation models of polymorphic types
   h. Subtyping
8. Type equivalence: structural vs. name equivalence (Develop, KA core, 4 hours)
9. Complementary benefits of static and
dynamic typing
a. Errors early vs. errors late/avoided
b. Enforce invariants during code
development and code
maintenance vs. postpone typing
decisions while prototyping and
conveniently allow flexible coding
patterns such as heterogeneous
collections
c. Typing rules
i. Rules for function, product,
and sum types
d. Avoid misuse of code vs. allow
more code reuse
e. Detect incomplete programs vs.
allow incomplete programs to run
f. Relationship to static analysis
g. Decidability
Systems Programming (Develop, CS core, 3 hours):
1. Data structures for translation, execution, and code mobility, such as stack, heap, aliasing (sharing using pointers), indexed sequence and string
2. Direct, indirect, and indexed access to
memory location
3. Run-time representation of data
abstractions such as variables, arrays,
vectors, records, pointer-based data
elements such as linked-lists and trees,
and objects
4. Abstract low-level machine with simple
instruction, stack and heap to explain
translation and execution
5. Run-time layout of memory: activation
record (with various pointers), static
data, call-stack, heap
a. Translating selection and iterative
constructs to control-flow diagrams
b. Translating control-flow diagrams
to low level abstract code
c. Implementing loops, recursion, and
tail calls
   d. Translating function/procedure calls and returns from calls, including different parameter-passing mechanisms, using an abstract machine
6. Memory management
a. Low level allocation and accessing
of high-level data structures such
as basic data types, n-dimensional
array, vector, record, and objects
b. Return from procedure as
automatic deallocation mechanism
for local data elements in the stack
c. Manual memory management:
allocating, de-allocating, and
reusing heap memory
d. Automated memory management:
garbage collection as an
automated technique using the
notion of reachability
Language Translation and Execution (Explain, CS core, 4 hours):
1. Interpretation vs. compilation to native code vs. compilation to portable intermediate representation
   a. BNF and extended BNF representation of context-free grammar
   b. Parse tree using a simple sentence such as an arithmetic expression or if-then-else statement
   c. Execution as native code or within a virtual machine
2. Language translation pipeline: syntax analysis, parsing, optional type-checking, translation/code generation and optimization, linking, loading, execution (see the sketch after this list)
Program Abstraction and Representation (Explain, KA core, 3 hours):
1. BNF and regular expressions
2. Programs that take (other) programs as input, such as interpreters, compilers, type-checkers, documentation generators
3. Components of a language
a. Definitions of alphabets, delimiters,
sentences, syntax and semantics
b. Syntax vs. semantics
4. Program as a set of non-ambiguous
meaningful sentences
5. Basic programming abstractions:
constants, variables, declarations
(including nested declarations),
command, expression, assignment,
selection, definite and indefinite
iteration, iterators, function, procedure,
modules, exception handling
6. Mutable vs. immutable variables:
advantages and disadvantages of
reusing existing memory location vs.
advantages of copying and keeping old
values; storing partial computation vs.
recomputation
7. Types of variables: static, local,
nonlocal, global; need and issues with
nonlocal and global variables
8. Scope rules: static vs. dynamic;
visibility of variables; side-effects
9. Side-effects induced by nonlocal
variables, global variables and aliased
variables
Core (Explain, CS core, 4 hours):
● Applications
● Human vision system
● Digitization of analog data
● Standard media formats
● Color Models
● Tradeoffs between storing data and re-
computing data
● Animation as a sequence of still
images
● SEP related to graphics
Visualization KA Core:
● Visualization of: (Explain and Implement, KA core, 3 hours)
  ○ 2D/3D scalar fields: color mapping
  ○ Time-varying data
● Visualization techniques (color mapping, dimension reduction) (Explain and Implement, KA core, 2 hours)
Shading KA Core (Explain and Use, KA core, 6 hours):
● Time (motion blur) and lens position (focus) and their impact on rendering
● Shadow mapping
● Occlusion culling
● Area light sources
● Hierarchical depth buffering
● Non-photorealistic rendering
(Explain and Implement, KA core, 3 hours)
● Stereoscopic display
● Viewer tracking
  ○ Inside-out vs. outside-in
  ○ Head/body/hand tracking
● Visibility computation
(Explain, Evaluate, and Implement an example; KA core, 3 hours)
● Connection to physical artifacts
  ○ Computer-aided design
  ○ Computer-aided manufacturing
  ○ Fabrication
    ■ Prototyping (shared with HCI)
    ■ Additive (3D printing)
    ■ Subtractive (CNC milling)
    ■ Forming (vacuum forming)
○ Hourglass model
● Error control (Evaluate)
  ○ Retransmission
  ○ Error correction
● Ethernet (Explain)
● Switching (Apply)
● Local area network topologies (e.g., data center networks) (Explain)
NC Emerging Topics (Explain, KA core, 4 hours):
● Middleboxes (e.g., filtering, deep packet inspection, load balancing, NAT, CDN)
operating system functions [Shared with AR]
● Protection of resources means protecting some machine instructions/functions [Shared with AR]
● Leveraging interrupts from hardware level: service routines and implementations [Shared with AR]
● Concept of user/system state and protection, transition to kernel mode using system calls [Shared with AR]
● Mechanism for invoking system calls, the corresponding mode and context switch, and return from interrupt [Shared with AR]
OS Concurrency (non-core topics not listed):
● Thread abstraction relative to concurrency (Explain, CS core, 3 hours)
● Race conditions, critical sections (role of interrupts if needed)
● Deadlocks and starvation
● Multiprocessor issues (spin-locks, reentrancy)
● Thread creation, states, structures (Apply, KA core, 3 hours)
● Thread APIs
● Deadlocks and starvation (necessary conditions/mitigations)
● Implementing thread-safe code (semaphores, mutex locks, condition variables)
● Race conditions in shared memory [Shared with PD]
OS Scheduling (non-core topics not listed) (Explain, KA core, 3 hours):
● Preemptive and non-preemptive scheduling
● Timers (e.g., building many timers out of finite hardware timers)
● Schedulers and policies
● Concepts of SMP/multiprocessor scheduling [Shared with AR]
OS Process Model (Explain, KA core, 3 hours):
● Processes and threads relative to virtualization: protected memory, process state, memory isolation, etc.
● Memory footprint/segmentation (stack, heap, etc.)
● Creating and loading executables and shared libraries
● Dispatching and context switching
● Interprocess communication
OS Memory Management (non-core topics not listed) (Explain, KA core, 4 hours):
● Review of physical memory, address translation and memory management hardware
● Impact of memory hierarchy, including cache concept, cache lookup, etc., on operating system mechanisms and policy
● Logical and physical addressing
● Concepts of paging, page replacement, thrashing and allocation of pages and frames
● Allocation/deallocation/storage techniques (algorithms and data structures): performance and flexibility
● Memory caching and cache coherence
● Security mechanisms and concepts in memory management, including sandboxing, protection, isolation, and relevant vectors of attack
OS Protection and Safety (overlap with Security):
● Overview of operating system security mechanisms (Apply, CS core, 3 hours)
● Attacks and antagonism (scheduling, etc.)
● Review of major vulnerabilities in real operating systems
● Operating systems mitigation strategies such as backups
● Policy/mechanism separation (Apply, KA core, 1 hour)
● Security methods and devices
● Protection, access control, and authentication
OS Device Management (non-core not listed) [Shared memory in AR] (Explain, KA core, 2 hours):
● Buffering strategies
● Direct memory access and polled I/O, memory-mapped I/O
● Historical and contextual: persistent storage device management (magnetic, SSD, etc.)
OS File Systems API and Implementation (historical significance but may play a decreasing role moving forward) (Explain, KA core, 2 hours):
● Concept of a file
● File system mounting
● File access control
● File sharing
● Basic file allocation methods
● File system structures comprising file allocation, including various directory structures and methods for uniquely identifying files (name, identifier or metadata storage location)
● Allocation/deallocation/storage techniques (algorithms and data structures): impact on performance and flexibility
● Free space management
● Implementation of directories to segment and track file location
OS Advanced File Systems (non-core topics not listed) (Explain, KA core, 2 hours):
● File systems: partitioning, mount/unmount, virtual file systems
● In-depth implementation techniques
● Memory-mapped files
● Special-purpose file systems
● Naming, searching, access, backups
● Journaling and log-structured file systems
OS Virtualization (non-core topics not listed) (Explain, KA core, 2 hours):
● Using virtualization and isolation to achieve protection and predictable performance
● Advanced paging and virtual memory
● Virtual file systems and virtual devices
● Thrashing
● Containers
OS Real-time and Embedded Systems (non-core topics not listed) (Explain, KA core, 1 hour):
● Process and task scheduling
● Deadlines and real-time issues
● Low-latency/soft real-time vs. hard real-time [shared with PL]
OS Fault Tolerance (non-core topics not listed) (Explain, KA core, 1 hour):
● Reliable and available systems
● Software and hardware approaches to address tolerance (RAID)
OS Social, Ethical and Professional Topics (Explain, KA core, 4 hours):
● Open source in operating systems
● End-of-life issues with sunsetting operating systems [Covered in SE]
● Procedural: Enabling multiple actions to
start at a given program point
● Reactive: Enabling upon an event
● Dependent: Enabling upon completion
of others
○ Multiple layers of sharing
domains, scopes and caches
● Data Stores
○ Cooperatively maintained
structured data implementing
maps, sets, and related ADTs
○ Varieties: Owned, shared,
sharded, replicated, immutable,
versioned
receiving (usually reactive or RPC-based)
○ Formats, marshaling, validation, encryption, compression
○ Multiplexing and demultiplexing
in contexts with many relatively
slow IO devices or parties;
completion-based and scheduler-
based techniques; async-await,
select and polling APIs.
○ Formalization and analysis; for
example using CSP
● Memory
○ Memory models: sequential and
release/acquire consistency
○ Memory management; including
reclamation of shared data;
reference counts and alternatives
○ Bulk data placement and
transfer; reducing message
traffic and improving locality;
overlapping data transfer and
computation; impact of data
layout such as array-of-structs vs
struct-of-arrays
○ Emulating shared memory:
distributed shared memory,
RDMA
● Data Stores
○ Consistency: atomicity,
linearizability, transactionality,
coherence, causal ordering,
conflict resolution, eventual
consistency, blockchains,
○ Faults and partial failures;
voting; protocols such as Paxos
and Raft
○ Security and trust: Byzantine
failures, proof of work and
alternatives
○ Execution control when one
activity’s initiation or progress
depends on actions of others
○ Completion-based: Barriers,
joins
○ Data-enabled: Produce-
Consumer designs
○ Condition-based: Polling,
retrying, backoffs, helping,
suspension, queueing, signaling,
timeouts
○ Reactive: enabling and
triggering continuations
● Progress
○ Dependency cycles and
deadlock; monotonicity of
conditions
● Atomicity
○ Atomic instructions, enforced
local access orderings
○ Locks and mutual exclusion
○ Deadlock avoidance: ordering,
coarsening, randomized retries;
encapsulation via lock managers
○ Common errors: failing to lock
or unlock when necessary,
holding locks while invoking
unknown operations, deadlock
○ Performance: contention,
granularity, convoying, scaling
○ Non-blocking data structures
and algorithms
● Atomicity
○ Ownership and resource control
○ Lock variants: sequence locks,
read-write locks; reentrancy;
tickets
○ Transaction-based control:
Optimistic and conservative
○ Distributed locking: reliability
● Interaction with other forms of program
control
○ Alternatives to barriers: Clocks;
Counters, virtual clocks;
Dataflow and continuations;
Futures and RPC; Consensus-
based, Gathering results with
reducers and collectors
○ Speculation, selection,
cancellation; observability and
security consequences
○ Resource-based: Semaphores
and condition variables
○ Control flow: Scheduling
computations, Series-parallel
loops with (possibly elected)
leaders, Pipelines and Streams,
nested parallelism.
○ Exceptions and failures.
Handlers, detection, timeouts,
fault tolerance, voting
(including check-then-act errors), or
termination (livelock).
constructs, and channel, socket, and/or
remote procedure call APIs
● Basic constructs such as assignment
statements, conditional and iterative
structures and flow of control
● Key modularity constructs such as
functions/methods and classes, and related
concepts like parameter passing, scope,
abstraction, data encapsulation, etc.
● Input and output using files, console, and
APIs
● Structured data types available in the
chosen programming language like
sequences
● Libraries and frameworks provided by the
language (when/where applicable)
● Recursion
SDF Software Development Practices (Evaluate, CS core, 1 hour):
● Programming style that improves readability [Shared with SE]
(Explain, KA core, 2 hours)
Tools and Environments (Explain, KA core, 3 hours):
analysis tools
● Software process automation
● Design and communication tools (docs, diagrams, common forms of design diagrams like UML)
● Tool integration concepts and mechanisms
● Use of modern IDE facilities: debugging, refactoring, searching/indexing, etc.
SE Software Verification and Validation (Explain, CS core, 1 hour):
● Verification and validation concepts
● Why testing matters
● Testing objectives
● Test kinds
● Stylistic differences between tests and production code
2. Impact of social media and artificial
intelligence on individual well-being,
political ideology, and cultural ideology
3. Impact of involving computing
technologies, particularly artificial
intelligence, biometric technologies and
algorithmic decision-making systems, in
civic life (e.g., facial recognition
technology, biometric tags, resource
distribution algorithms, policing software)
4. Professional certification, codes of ethics,
conduct, and practice, such as the
ACM/IEEE-CS, SE, AITP, IFIP and
international societies
5. Accountability, responsibility and liability
(e.g., software correctness, reliability and
safety, as well as ethical confidentiality of
cybersecurity professionals)
6. Introduction to theories describing the
human creation and use of technology
including instrumentalism, sociology of
technological systems, disability justice,
neutrality thesis, pragmatism,
utilitarianism, and decolonial theories
7. Develop strategies for recognizing and
reporting designs, systems, software, and
professional conduct (or their outcomes)
that may violate law or professional codes
of ethics
SEP Intellectual Property:
2. Copyrights, patents, trade secrets, trademarks
3. Plagiarism
4. Foundations of the open source movement
5. Software piracy
2. Using synthesis to concisely and
accurately convey tradeoffs in competing
values driving software projects including
technology, structure/process, quality,
people, market and financial
3. Use writing to solve problems or make
recommendations in the workplace, such
as raising ethical concerns or addressing
accessibility issues
SEP History (Explain, KA core, 1 hour):
1. Age I: Prehistory—the world before ENIAC (1946): Ancient analog computing
(Stonehenge, Antikythera mechanism,
Salisbury Cathedral clock, etc.), Euclid,
Lovelace, Babbage, Gödel, Church,
Turing, pre-electronic (electro-mechanical
and mechanical) hardware
2. Age II: Early modern (digital) computing -
ENIAC, UNIVAC, Bombes (Bletchley
Park codebreakers), mainframes, etc.
3. Age III: Modern (digital) computing - PCs,
modern computer hardware, Moore’s Law
4. Age IV: Internet - networking, internet
architecture, browsers and their evolution,
standards, big players (Google, Amazon,
Microsoft, etc.), distributed computing
5. Age V: Cloud - smart phones (Apple,
Android, and minor ones), cloud
computing, remote servers, software as a
service (SaaS), security and privacy, social
media
6. Age VI: Emerging AI-assisted
technologies including decision making
systems, recommendation systems,
generative AI and other machine learning
driven tools and technologies
9. Economies of scale, startups,
entrepreneurship, philanthropy
10. How computing is changing personal
finance: Blockchain and cryptocurrencies,
mobile banking and payments, SMS
payment in developing regions, etc.
SEP Equity and Accessibility:
2. Benefits of diversity and harms caused by a lack of diversity
3. Historic marginalization due to technological supremacy and global infrastructure challenges to equity and accessibility
SF Resource Allocation and Scheduling (Explain; CS core 1 hour / KA core 2 hours):
● Different types of resources (e.g., processor share, memory, disk, net bandwidth)
● Common scheduling algorithms (e.g., first-come-first-served scheduling, priority-based scheduling, fair scheduling and preemptive scheduling)
● Advantages and disadvantages of common scheduling algorithms
● Amdahl’s Law: the part of the computation that cannot be sped up limits the effect of the parts that can (see the sketch after this list)
● Analytical tools to guide quantitative
evaluation
● Order of magnitude analysis (Big O
notation)
● Analysis of slow and fast paths of a
system
● Events and their effect on performance (e.g., instruction stalls, cache misses, page faults)
● Understanding layered systems,
workloads, and platforms, their
implications for performance, and the
challenges they represent for evaluation
● Microbenchmarking pitfalls
KA KU Topic Skill Core Hours
Game Platforms (Apply, KA core, 4 hours):
● Historic and contemporary platforms for games
● Social, legal, and ethical considerations for game platforms
● Real-time simulation and rendering systems
● Game development tools and techniques
● Game design
SEP: TBD (KA core, 3 hours)
Course Packaging by Competency Area
Software
Courses that span Software Development Fundamentals (SDF), Algorithms and Complexity
(AL), Programming Languages (PL) and Software Engineering (SE).
Systems
Courses that span Systems Fundamentals (SF), Architecture and Organization (AR),
Operating Systems (OS), Parallel and Distributed Computing (PDC), Networking and
Communication (NC), Security (SEC) and Data Management (DM).
Applications
Courses that span Graphics and Interactive Techniques (GIT), Artificial Intelligence (AI),
Specialized Platform Development (SPD), Human-Computer Interaction (HCI), Security (SEC)
and Data Management (DM).
Introduction to Data Science:
● GIT-Visualization (8 hours): types of visualization, libraries, foundations
● GIT-SEP (2 hours): ethically responsible visualization
● DM-Core: Parallel and distributed processing (MapReduce, cloud frameworks, etc.)
● DM-Modeling: Graph representations, entity resolution
● DM-Querying (4 hours): SQL, query formation
● DM-NoSQL (2 hours): Graph DBs, data lakes, data consistency
● DM-Security: privacy, personally identifying information and its protection
● DM-Analytics (2 hours)
● DM-SEP (2 hours): Data provenance
● AI-ML (17 hours): Data preprocessing, missing data imputation, supervised/semi-
supervised/unsupervised learning, text analysis, graph analysis and PageRank, experimental
methodology, evaluation, and ethics
● AI-SEP (3 hours): Applications specific to data science, interspersed throughout the course
● MSF-Statistics: Statistical analysis, hypothesis testing, experimental design
● AI-Robo: Robotics: (25 hours)
● SPD-D: Robot Platforms: (4 hours) (focusing on hardware, constraints/considerations, and
software architectures; some other topics in SPD/Robot Platforms overlap with AI/Robotics)
● AI-Search: Search: (4 hours) (selected topics well-integrated with robotics, e.g., A* and path
search)
● AI-ML: (6 hours) (selected topics well-integrated with robotics, e.g., neural networks for object
recognition)
● AI-SEP: (3 hours) (should be integrated throughout the course; robotics is already a huge
application, so this really should focus on societal impact and specific robotic applications)
Prerequisites:
● CS2
● Linear algebra
Skill statement: A student who completes this course should be able to understand and use robotic
techniques to perceive the world using sensors, localize the robot based on features and a map, and
plan paths and navigate in the world in simple robot applications. They should understand and be able
to apply simple computer vision, motion planning, and forward and inverse kinematics techniques.
GIT-Foundations (4 hours)
GIT-Rendering (6 hours)
GIT-Interaction (3 hours)
HCI-User (8 hours)
HCI-Accessibility (3 hours)
HCI-SEP (4 hours)
SE-Testing (4 hours)
SPD-Web/SPD-Game/SPD-Mobile (8 hours)
Mobile Computing
● DM-Modeling (2 hours)
● DM-Querying (2 hours)
● SPD-Common: Overview of development platforms (3 hours)
● SPD-Common: Considerations and Requirements (4 hours)
● SPD-Mobile: Data Management (3 hours)
● SPD-Mobile: Development (6 hours)
● SPD-Mobile: Mobile Platform Constraints (4 hours)
● SPD-Mobile: Access (3 hours)
● SPD-Mobile: Architecture (3 hours)
● SPD-Mobile: Storage Solutions (4 hours)
● SPD-Mobile: Specification and testing (3 hours)
● SPD-Mobile: Asynchronous computing (3 hours)
● SPD-SEP/Mobile:
Curricular Packaging
8 Course Model
This is a minimal course configuration that covers all the CS core topics, but it does not leave much room for exploration:
1. CS I (SDF, SEP)
2. CS II (SDF, AL, SEP, Generic KA)
3. Math and Statistical Foundations (MSF)
4. Algorithms (AL, PDC, SEP)
5. Introduction to Systems (SF, OS, AR, NC, SEP)
6. Formal Methods (FPL, AL-Formal, PDC, SEP)
7. Introduction to Applications (SEC, AI, HCI, GIT, SPD, DM, SEP)
8. Capstone (SE, SEP)
The capstone course is expected to provide the opportunity to cover any CS core topics not covered
elsewhere in the curriculum.
12 Course Model
16 Course Model
Three different models are offered here, each with its own benefits.
Model 1:
1. CS I (SDF, SEP)
2. CS II (SDF, AL, DM, SEP)
3. Math and Statistical Foundations (MSF)
4. Algorithms (AL, SEP)
5. Introduction to Systems (SF, SEP)
6. Programming Languages (FPL, AL, PDC, SEP)
7. Theory of Computation (AL, SEP)
8. Software Engineering (SE, HCI, GIT, PDC, SPD, DM, SEP)
9. Operating Systems (OS, PDC, SEP)
10. Computer Architecture (AR, SEP)
11. Parallel and Distributed Computing (PDC, SEP)
12. Networking (NC, SEP)
13. Pick one of:
a. Introduction to Artificial Intelligence (AI, SEP)
b. Machine Learning (AI, SEP)
c. Robotics (AI, SPD, SEP)
14. Pick one of:
a. Graphics (GIT, SEP)
b. Human-Centered Design (GIT, SEP)
c. Animation (GIT, SEP)
d. Virtual Reality (GIT, SEP)
15. Security (SEC, SEP)
16. Capstone (SE, SEP)
Model 2:
1. CS I (SDF, SEP)
2. CS II (SDF, AL, DM, SEP)
3. Math and Statistical Foundations (MSF, AI, DM)
4. Algorithms (AL, SEP)
5. Introduction to Systems (SF, SEP)
6. Programming Languages (FPL, AL, PDC, SEP)
7. Theory of Computation (AL, SEP)
8. Software Engineering (SE, HCI, GIT, PDC, SPD, DM, SEP)
9. Operating Systems (OS, PDC, SEP)
10. Two electives from:
a. Computer Architecture (AR, SEP)
b. Parallel and Distributed Computing (PDC, SEP)
c. Networking (NC, SEP)
d. Network Security (NC, SEC, SEP)
e. Security (SEC, SEP)
11. Pick three of:
a. Introduction to Artificial Intelligence (AI, SEP)
b. Machine Learning (AI, SEP)
c. Deep Learning (AI, SEP)
d. Robotics (AI, SPD, SEP)
e. Data Science (AI, DM, GIT)
f. Graphics (GIT, SEP)
g. Human-computer interaction (HCI, SEP)
h. Human-Centered Design (GIT, HCI, SEP)
i. Animation (GIT, SEP)
j. Virtual Reality (GIT, SEP)
k. Physical Computing (GIT, SPD, SEP)
12. Society, Ethics and Professionalism (SEP)
13. Capstone (SE, SEP)
Model 3:
1. CS I (SDF, SEP)
2. CS II (SDF, AL, DM, SEP)
3. Math and Statistical Foundations (MSF)
4. Algorithms (AL, AI, SEC, SEP)
5. Introduction to Systems (SF, OS, AR, NC)
6. Programming Languages (FPL, AL, PDC, SEP)
7. Software Engineering (SE, HCI, GIT, PDC, SPD, DM, SEP)
8. Two from Systems electives:
a. Operating Systems (OS, PDC)
b. Computer Architecture (AR)
c. Parallel and Distributed Computing (PDC)
d. Networking (NC, SEC, SEP)
e. Databases (DM, SEP)
9. Two electives from Applications:
a. Artificial Intelligence (AI, SPD, SEP)
b. Graphics (GIT, HCI, SEP)
c. Application Security (SEC, SEP)
d. Human-Centered Design (HCI, GIT, SEP)
10. Three open CS electives
11. Society, Ethics and Professionalism (SEP) course
12. Capstone (SE, SEP)
Section 3
A Competency Framework
A Competency Model Framework
Definition of Competency
Competency was defined as the sum of knowledge, skills and dispositions in IT2017 [7]. Dispositions
are defined as cultivable behaviors desirable in the workplace [15].
Competency = Knowledge + Skills + Dispositions in context
In CC2020 [8], competency was further elaborated as the sum of the three within the performance of a task. Instead of the additive model of IT2017, CC2020 defined competency as an intersection of the three:
Competency = Knowledge ∩ Skills ∩ Dispositions
In CS2023, competency is treated as a point in a 3D space with knowledge, skills and dispositions as
the three axes of the space (Figure 1) [15]: all three are required for proper execution of a task.
Knowledge is covered by topics enumerated in knowledge units and knowledge areas; skills are
identified as one or some of Explain, Apply, Evaluate and Develop. In the knowledge model (Section 2),
appropriate dispositions were identified for each knowledge area.
In the specification of a competency, the task is the sole independent variable. The knowledge, skills
and dispositions needed to complete a task depend on the task and vary from one task to another. So,
a competency model of a curriculum should necessarily start with identification of the targeted tasks. To
this end, in this section:
1. A framework is proposed for systematically identifying tasks, and representative tasks are identified;
2. A format is introduced for competency specification;
3. Competency specifications are provided for selected tasks identified in step 1 using the format
in step 2; and
4. An algorithm is provided for educators to build a competency model that is tailored to their local
needs.
A Framework for Identifying Tasks
Computer science is a versatile discipline: the range of the tasks for which it prepares graduates is
vast. In order to keep the task of identifying tasks tractable:
The list will be restricted to atomic tasks that can be combined in infinite ways to create
compound tasks to suit local needs;
Instead of exhaustively listing all the tasks, a framework will be proposed for systematically
identifying atomic tasks.
The framework for systematically identifying atomic tasks consists of three dimensions: component,
activity and constraint. In a task statement, the component is typically the noun, the activity the verb
and the constraint either an adjective or adverb.
The framework is tailored to the three competency areas proposed in the knowledge model:
Software: Specifications that span Software Development Fundamentals (SDF), Algorithmic
Foundations (AL), Foundations of Programming Languages (FPL) and Software Engineering
(SE).
Systems: Specifications that span Systems Fundamentals (SF), Architecture and Organization
(AR), Operating Systems (OS), Parallel and Distributed Computing (PDC), Networking and
Communication (NC), Security (SEC) and Data Management (DM).
Applications: Specifications span Graphics and Interactive Techniques (GIT), Artificial
Intelligence (AI), Specialized Platform Development (SPD), Human-Computer Interaction (HCI),
Security (SEC) and Data Management (DM).
The following is an initial list of components in these three competency areas:
● Software: Program, algorithm, and language/paradigm.
● Systems: Processor, storage, communication, architecture, I/O, data, and service.
● Applications: Input, computation, output and platform.
A representative set of activities applicable to the competency areas are design, develop, document,
evaluate, maintain, improve, humanize and research. While most of the activities are self-explanatory,
humanize refers to activities that address society, ethics and professionalism issues and research
refers to activities that study theoretical underpinnings.
The components, activities and constraints listed above are representative, not prescriptive or
comprehensive. All three axes use a nominal scale, with no ordinality implied.
Each atomic task is a point in the three-dimensional space of component x activity x constraint as
shown in Figure 1. At the bottom-right of the figure are the following three tasks mapped on software
competency area:
Develop a program for an open-ended problem (blue star);
Evaluate the efficiency of a parallel algorithm (green star);
Research language features for writing secure code (red star).
Figure 1: Software competency area (top-left); Systems competency area (top-right); Applications
competency area (bottom-left) and three tasks mapped on software competency area (bottom-right)
The framework is offered as a starting point for identifying atomic tasks – it is meant to be used to
generate tasks. One may want to add other components, activities and constraints to the framework as
appropriate for their local needs. It is expected that most competency specifications will be written for
compound tasks created by combining two or more atomic tasks, e.g., “Design, implement and
document a parallelized scheduling program.”
Representative Tasks
Most of the representative tasks listed in this section are atomic in nature. The tasks are not restricted
to CS or KA core topics only.
● Design test cases to determine if a program is functionally
correct
● Identify appropriate tools to assist in developing a program.
● Design an API for a service
Program Develop ● Write a program for a given problem.
● Develop a program that leverages libraries and APIs.
● Automate testing of new code under development.
● Work in a team effectively to solve a problem.
Program Document ● Document a program.
● Consistently format source code.
Program Evaluate Evaluate an existing application (open source or proprietary)
as a whole or partial solution for meeting a defined
requirement
Program Maintain ● Refactor a program.
● Perform code review to evaluate the quality of code
Program Humanize ● Defend the design/choices made for a program.
● Ensure fair and equitable access in a program
● Document the accountability, responsibility and liability an
individual/company assumes when releasing a given
service/software/product
● Develop a strategy for a team to keep up to date with ethical,
legal, and professional issues in relation to company strategy
● Incorporate legal and ethical privacy requirements into a given
service/software/product’s development cycle
● Convey the benefits of diverse development teams and user
bases on company culture and the services/software/products
the company provides, as well as the impacts that a lack of
diversity can have on these
Program Improve ● Debug a program
Program Research ● Compute the running time of a program
● Formally prove the correctness of code
● Write a program using multiple languages and have the
components interact effectively and efficiently with each other.
Language / Document ● Justify the choice of a paradigm/language for a program.
Paradigm ● Write a white paper to describe how a program is translated
into machine code and executed.
● Write a white paper explaining how a program executes in an efficient manner with respect to memory and CPU utilization.
Language / Evaluate ● Evaluate the appropriateness of a language/paradigm for an
Paradigm application.
● Explain the benefits and challenges of converting an
application into parallel/distributed version.
● Write a white paper explaining how a program effectively utilizes language features to make it safe and secure
Language / Maintain
Paradigm
Language / Humanize
Paradigm
Language / Improve
Paradigm
Language / Research
Paradigm
Storage Design
Storage Develop
Storage Document
Storage Evaluate ● Assess the performance implications of cache memories in
your application
● Apply knowledge of operating systems to assess page
faults in CPU-GPU memory management and their
performance impact on the accelerated application.
Storage Maintain
Storage Humanize ●
Storage Improve ●
Storage Research ●
● Design a system to meet functional/non-functional
specifications.
● Document a system’s design choices and proposed system
hardware and software architecture.
Architecture Develop ● Deploy a system in a cloud environment
● Deploy an application component on a virtualized container
Architecture Document
Architecture Evaluate ● Evaluate the performance of a given system
● Find the performance bottleneck of a given system
● Choose among different parallel/distributed designs for
components of a given system
Architecture Maintain
Architecture Humanize
Architecture Improve ● Find and fix bugs in a system
Architecture Research
Design
Develop
Document
Evaluate
Maintain
Humanize
Improve
Research
Applications Competency Area
Computation Design ● Specify the operators and partial-order planning graph to solve
a logistics problem, showing all ordering constraints.
Computation Develop ● Implement an agent to play a two-player complete information
board game.
● Implement an agent to play a two-player incomplete information
board game.
● Write a program that uses Bayes rule to predict the probability
of disease given the conditional probability table and a set of
observations.
● Train and evaluate a neural network for playing a video game
(e.g., Mario, Atari).
● Develop a tool for identifying the sentiment of social media
posts.
● Write a program to solve a puzzle or gridworld
Computation Document
Computation Evaluate ● Compare the performance of three supervised learning models
on a dataset
● Explain some of the pitfalls of deep generative models for
image or text and how this can affect their use in an application.
Computation Maintain
Computation Humanize ● Write an essay on the effects of data set bias and how to
mitigate them.
Computation Improve
Computation Research
Output Design
Output Develop
Output Document
Output Evaluate
Output Maintain
Output Humanize
Output Improve
Output Research
Platform Design ● Determine whether to develop an app natively or using cross-
platform tools.
Platform Develop ● Create a mobile app that provides a consistent user experience
across various devices, screen sizes, and operating systems.
● Build a secure web page for evolving business needs
● Develop application programming interfaces (APIs) to support
mobile functionality.
Platform Document
Platform Evaluate ● Analyze people's experience using a novel peripheral for an
immersive system, with attention to usability and accessibility
specifications
Platform Maintain
Platform Humanize
Platform Improve ● Optimize a secure web page for evolving business needs
Platform Research
Design
Develop
Document
Evaluate
Maintain
Humanize
Improve
Research
This format differs from earlier proposals (e.g., [8, 16, 17]) in some key respects:
The task is separated from the competency statement, since the task is the independent
component of the specification, with all the other components depending on it. The task is typically
written in layman's terms, whereas technical details for completing the task are included in the competency statement.
Dispositions are listed in knowledge areas and not in competency specifications. A competency
specification inherits its dispositions from the knowledge areas listed in it.
The reader is invited to adopt/adapt the format that best meets their local needs and suits their
preferences.
The following are some sample competency specifications that draw upon various knowledge areas of
computer science. They illustrate a range of competencies across all three competency areas
(Software, Systems and Applications), multiple competency units/activities (Design, Develop,
Document, Evaluate, Maintain, Humanize, Improve, Research) and all four skill levels (Explain, Apply,
Evaluate and Develop) at the undergraduate level. Some draw upon a single knowledge area while
others span multiple knowledge areas.
● Competency unit/activity: Design, Develop, Document
● Required knowledge areas and knowledge units:
○ SDF-Development Methods
○ SE-Software Construction
● Required skill level: Apply, Develop
● Desirable professional dispositions:
Task FPL2: Effectively use a programming language’s type system to develop safe
and secure software.
Competency statement: Apply knowledge of static and dynamic type rules for a
language to ensure an application is safe, secure, and correct.
Competency area: Software, Application
Competency unit/activity: Develop
Required knowledge areas and knowledge units:
FPL-Type Systems
Required skill level: Develop
Desirable professional dispositions:
● Task SEP1: Produce a white paper assessing the social and ethical implications of
collecting and storing the data from a new (or existing) application.
● Competency statement: Identify the stakeholders and evaluate the potential long-
term consequences for the collection and retention of data objects. Consider both
potential harm from unintended data use and from data breaches.
● Competency area: Systems
● Competency unit/activity: Evaluate, Humanize.
● Required knowledge areas and knowledge units:
○ SEP-Social Context
○ SEP-Methods for Ethical Analysis
○ SEP-Privacy and Civil Liberties
○ SEP-Professional Ethics
○ SEP-Security Policies, Laws and Computer Crimes
○ SEP-Equity, Diversity and Inclusion
○ DM-The Role of Data
○ SEC-Foundational Security
● Required skill level: Evaluate, Explain
● Desirable professional dispositions:
● Task DM1: Secure data from unauthorized access.
● Competency statement: Create database views to ensure data access is
appropriately limited.
● Competency area: Systems
● Competency unit/activity: Maintain
● Required knowledge areas and knowledge units:
○ DM-The Role of Data
○ DM-Relational Databases
○ DM-Query Processing
○ SEP-Security Policies, Laws and Computer Crimes
○ SEP-Professional Ethics
○ SEP-Privacy and Civil Liberties
○ SEC-Foundational Security
● Required skill level: Develop
● Desirable professional dispositions:
○ DM-DBMS Internals
○ SEP-Social Context
○ SEP-Methods for Ethical Analysis
○ SEP-Privacy and Civil Liberties
○ SEP-Professional Ethics
○ SEP-Security Policies, Laws and Computer Crimes
○ SEP-Equity, Diversity and Inclusion
○ SEC-Foundational Security
● Required skill level: Develop
● Desirable professional dispositions:
● Competency unit/activity: Evaluate, Maintain, Improve
● Required knowledge areas and knowledge units:
○ OS-Role and Purpose of Operating Systems
○ OS-Principles of Operating Systems
○ OS-Concurrency
○ OS-Scheduling
○ OS-Process Model
○ OS-Memory Management
○ OS-Protection and Safety
○ AR-Assembly Level Machine Organization
● Required skill level: Apply
● Desirable professional dispositions:
Applications Competency Area
● Task AI1: Implement an agent to make strategic decisions in a two-player adversarial game
with uncertain actions (e.g., a board game, strategic stock purchasing).
● Competency statement: Use minimax with alpha-beta pruning, and possible chance nodes
(expectiminimax), and heuristic move evaluation (at a particular depth) to solve a two-player
zero-sum game.
● Competency area: Applications
● Competency unit/activity: Design, Develop
● Required knowledge areas and knowledge units:
○ AI-Search
○ AI-Fundamental Knowledge Representation and Reasoning
● Required skill level: Apply, Develop
● Desirable professional dispositions:
● Task AI2: Analyze tabular data (e.g., customer purchases) to identify trends and predict
variables of interest.
● Competency statement: Use machine learning libraries, data preprocessing, training
infrastructures, and evaluation methodologies to create a basic supervised learning pipeline.
● Competency area: Applications
● Competency unit/activity: Design, Develop, Evaluate
● Required knowledge areas and knowledge units:
○ AI-Machine Learning
○ AI-Applications and Societal Impact
● Required skill level: Apply, Develop
● Desirable professional dispositions:
● Task AI3: Critique a deployed machine learning model in terms of potential bias and correct
the issues.
● Competency statement: Understand, recognize, and evaluate issues of data set bias in AI,
the types of bias, and algorithmic strategies for mitigation.
● Competency area: Applications
● Competency unit/activity: Document
● Required knowledge areas and knowledge units:
○ AI-Machine Learning
○ AI-Applications and Societal Impact
● Required skill level: Explain
● Desirable professional dispositions:
○ HCI-Accessibility
○ HCI-Evaluation
○ HCI-Design
○ HCI-SEP
● Required skill level: Evaluate
● Desirable professional dispositions:
● Task SEP1: Assess the ethical and societal implications of deploying a given AI-
powered service/software/product.
● Competency statement: Determine who will be affected and how.
● Competency area: Applications
● Competency unit/activity: Evaluate, Humanize
● Required knowledge areas and knowledge units:
○ AI-Fundamental Issues
○ SEP-Social Context
○ SEP-Methods for Ethical Analysis
○ SEP-Privacy and Civil Liberties
○ SEP-Justice, Equity, Diversity and Inclusion
● Required skill level: Explain, Evaluate
● Desirable professional dispositions:
● Required knowledge areas and knowledge units:
○ SE-Tools and Environments
○ SPD-Common Aspects
○ SPD-Mobile Platform
● Required skill level: Explain
● Desirable professional dispositions:
● Task SPD2: Build and optimize a secure web page for evolving business needs using
a variety of appropriate programming languages.
● Competency statement: Evaluate potential security hazards and apply optimization
techniques.
● Competency area: Applications
● Competency unit/activity: Evaluate
● Required knowledge areas and knowledge units:
○ AR-Performance and Energy Efficiency
○ NC-Network Security
○ OS-Protection and Safety
○ SF-System Security
○ SE-Software Design
○ SE-Tools and Environments
○ SPD-Common Aspects
○ SPD-Mobile Platform
○ SEP-Privacy
● Required skill level: Develop
● Desirable professional dispositions:
The following is proposed as the procedure for creating a competency model for a curriculum, to be
carried out in consultation with local stakeholders (academics, industry representatives, policy makers,
etc.):
1. Identify the competency area(s) targeted by the curriculum based on local needs;
2. For each targeted competency area, identify the atomic tasks that must be targeted by the
curriculum using the component x activity x constraint three-dimensional space of the competency
area shown in Figure 1. The targeted atomic tasks will each be a point in the three-dimensional
space as illustrated at the bottom-right in Figure 1.
3. If it is desirable to reduce the number of competency specifications, create compound tasks by
combining two or more related atomic tasks.
4. Use a format of choice to write a competency specification for each atomic or compound task
identified in the previous two steps.
5. The aggregate of the competency specifications for all the identified atomic/compound tasks is the
competency model of the curriculum.
○ Ensure that the competency model draws upon all the topics identified as CS core.
Knowledge Model or Competency Model?
A knowledge model organizes content into knowledge areas, which are silos of related content. Each
knowledge area consists of multiple knowledge units, and each knowledge unit consists of multiple
topics. This epistemological organization of content facilitates the process of designing courses and
curricula: multiple courses may be carved out of a single knowledge area and a course may draw
content from multiple knowledge areas. Therefore, a knowledge model with its initial emphasis on
knowledge areas facilitates the needs of teaching.
A knowledge model with its initial emphasis on content and a competency model with its initial
emphasis on outcomes are complementary views of the same learning continuum. For computer
science, neither model is a substitute for the other. The two models complement each other, and work
better considered together than apart. So, this report uses a Combined Knowledge and Competency (CKC) model [15] that synergistically combines the two and offers the benefits of both.
CKC Model
The Combined Knowledge and Competency (CKC) model is illustrated in Figure 1 [15]. In the figure:
The knowledge model appears on the left and consists of knowledge areas that in turn consist of
knowledge units.
The topics in the knowledge units are categorized as CS core (topics that every computer science
graduate must know), KA core (recommended topics for inclusion in a dedicated coverage of a
knowledge area) and Non-core (electives).
A computer science program may choose to cover some knowledge areas in greater depth/breadth
than other knowledge areas. When coherently chosen, the knowledge areas covered by a computer
science program will constitute the program’s competency area(s).
The competency model appears on the right and consists of competency areas that in turn consist
of competency units.
Competency units are activities that apply to every competency area, such as: design, develop,
document, evaluate, maintain, humanize, improve and theorize. A competency area is the sum of
its competency units/activities. Whereas the number of competency areas targeted by a program
indicates its breadth, the number of competency units targeted by the program in each competency
area indicates its depth.
A competency is the application of knowledge, skills and dispositions to the completion of a task.
Since the task is the only objective component of a competency statement, tasks are separated out of
competency statements and identified at the atomic level for each competency area.
Even though the emphasis on dispositions is greater in a competency model, dispositions are
generic to knowledge areas. So, they are associated with knowledge areas. This makes it easier for
educators to consistently promote them during the accomplishment of tasks associated with the
knowledge areas.
Finally, skill levels connect tasks in the competency model with knowledge and dispositions in the
knowledge model.
Figure 1. Combined Knowledge and Competency (CKC) Model of Computer Science Curricula.
To summarize the CKC model, competency areas are referred to in the knowledge model, knowledge
areas are referred to in the competency model, skill levels provide alignment between the two models
and dispositions are associated with knowledge areas in the knowledge model, but used to facilitate
completion of tasks specified in the competency model. The knowledge component of the CKC model
is presented in section 2. A framework for designing a competency model is presented earlier in section
3.
The following is proposed as the procedure for creating the curriculum of a computer science program
from the CKC model [15]:
1. Identify the competency area(s) targeted by the curriculum based on local needs;
2. Design courses and curricula using the knowledge areas and knowledge units of the CKC model as
described in Section 2;
3. Design a competency model consisting of competency specifications for the targeted competency
area(s) as described in this section;
4. Use the courses and curricula designed in step 2 for instruction and the competency model
designed in step 3 to evaluate the outcomes of the program.
5. In a cycle of continual improvement, repeat steps 1 – 4 to improve courses, competency statements
and outcomes of the program.
Section 4
Curricular Issues
Characteristics of Graduates
We will endeavor to cite other articles where characteristics of CS graduates have been enumerated.
The Process
CS 2013 → Version Beta → SC survey → Open feedback form (61 responses) → Article
Institutional Challenges
Some institutional challenges are:
● Faculty recruitment and retention
● Workload to manage explosive enrollments
● AI generation of code and its implications
● Integrating AI into the core educational requirements
● Fragmentation: Data Science, AI, and other specializations are all becoming degrees in their own right
● Rethinking the core curriculum
● Unprepared high school students: getting students to put in the effort to be successful as CS students
● Elevating the teaching track to professional status
● Certificates, micro-credentials, and associate degrees in computing
● CS for all: packaging it in an accessible way for a broad set of majors
● Broadening participation of under-represented groups in computing
● Online education
The Process
CS 2013 → Version Beta → SC survey → Open feedback form (32 responses) → Article
Generative AI and the Curriculum
Introduction
Generative AI technologies have the potential to greatly disrupt computer science education. While it is
too early to confidently prognosticate how they will change computer science education, it is instructive
to consider some of the ramifications already apparent. In this section, a few of the ramifications of
generative AI technology are explored by knowledge area.
Mathematical and Statistical Foundations (MSF)
Security (SEC)
think more about what is being created rather than how to implement it. Critically, this requires a
substantially deeper investment in design (especially the vocabulary of design) and code
comprehension, while potentially decreasing the need for hands-on programming time. Similar
advances in static analysis and code review are anticipated to have meaningful impact on code
quality and clarity, while ideally reducing the impact of implicit bias by increasing consistency and
quality of comments and diagnostics.
It is clear that students must be taught how to correctly use generative AI technologies for
coursework. The boundary between using generative AI as a resource and using it to plagiarize
must be clarified. The limitations of the technology (e.g., hallucinations) must be adequately
discussed, as must the biases baked into it by virtue of the data used to train it.
Every new technology has redefined the boundary between tasks that can be mechanized and
those that will need human participation. Generative AI is no different. Correctly identifying the
boundary will be the challenge for computer science educators going forward.
Generative AI may be used to facilitate undergraduate research – regardless of their technical
abilities, students can now be asked to design the correct prompts to recreate the software reported
in research publications before proceeding to validate the results reported in the publications.
Students may use generative AI to summarize assigned readings, help explain gaps in their
understanding of course material and fill in gaps in the presentations of the classes they missed.
Pedagogical Considerations
Introduction
What are some current trends in the teaching and learning of computer science? What are the
controversies of the day in the pedagogy of computer science education? In this section, a few of the
top trends, controversies, and challenges are listed for each knowledge area. These issues are
expected to influence the future evolution of computer science curricula.
How do we solve the conundrum that, for the most part, students write code that either reads/writes
a file or is interactive, yet in industry the vast majority of data is obtained programmatically from a
database? Shouldn’t our curricula be structured this way as well? (A minimal sketch of the database
pattern appears below.)
SQL vs. NoSQL databases?
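As a minimal sketch of the industry pattern described above (obtaining data programmatically from a
database rather than from a file or interactive input), the example below uses Python's built-in sqlite3
module; the table and its contents are invented for illustration:

    import sqlite3

    # Build an in-memory database with a hypothetical 'students' table.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE students (name TEXT, gpa REAL)")
    conn.executemany("INSERT INTO students VALUES (?, ?)",
                     [("Ada", 3.9), ("Alan", 3.7), ("Grace", 4.0)])

    # The program obtains its data with a parameterized query,
    # not by reading a file or prompting the user.
    cutoff = 3.8
    for name, gpa in conn.execute(
            "SELECT name, gpa FROM students WHERE gpa >= ? ORDER BY gpa DESC",
            (cutoff,)):
        print(name, gpa)

    conn.close()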
Graphics and Interactive Techniques (GIT)
Text
Summary of recommendations
● Standardize the prerequisites to discrete math. The faculty survey shows that institutional
variation in discrete-math prerequisites is distributed nearly evenly across algebra, precalculus
and calculus, suggesting differing approaches to the mathematical maturity sought. Requiring
precalculus appears to be a reasonable compromise, so that students come in with some
degree of comfort with symbolic math and functions.
● Include applications in math courses. Studies show that students are motivated when they
see applications. We recommend including minor programming assignments or demonstrations
of applications to increase student motivation (see the sketch after this list). While computer
science departments may not be able to insert such applications into courses offered by other
departments, it is possible to include applications of math in the computer science courses that
are co-scheduled with mathematical requirements, and to engage with textbook publishers to
provide such material.
● Apply available resources to enable student success. The subcommittee recommends that
institutions adopt preparatory options to ensure sufficient background without lowering
standards in mathematics. Theory courses can be moved further back in the curriculum to
accommodate first-year preparation, for example. And, where possible, institutions can avail
themselves of online self-paced tutoring systems alongside regular coursework.
● Expand core mathematical requirements to meet the rising demand in new growth areas
of computer science. What is clear, looking forward to the next decade, is that exciting high-
growth areas of computer science require a strong background in linear algebra, probability and
statistics (preferably calculus-based). Accordingly, we recommend including as much of this
material in the standard curriculum as possible.
● Send a clear message to students about mathematics while accommodating their
individual circumstances. Faculty and institutions are often under pressure to help every
student succeed, including the many who struggle with math. While pathways, including
computer science-adjacent degrees or tracks, can be created to steer students past math
requirements towards software-focused careers, faculty should be equally direct in explaining
the importance of sufficient mathematical preparation for graduate school and for the very
topical areas that excite students.
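The sketch promised in the second recommendation above is one possible form such a minor
assignment could take (purely illustrative, not prescribed by this report): a probability exercise
comparing a Monte Carlo estimate of the birthday-collision probability with the closed-form value.

    import random

    def simulated_collision_prob(n_people: int, trials: int = 100_000) -> float:
        # Estimate P(at least two of n_people share a birthday) by simulation.
        hits = 0
        for _ in range(trials):
            birthdays = [random.randrange(365) for _ in range(n_people)]
            if len(set(birthdays)) < n_people:
                hits += 1
        return hits / trials

    def exact_collision_prob(n_people: int) -> float:
        # Complement of all-distinct: 1 - prod_{i=0}^{n-1} (365 - i) / 365.
        p_distinct = 1.0
        for i in range(n_people):
            p_distinct *= (365 - i) / 365
        return 1.0 - p_distinct

    n = 23
    print(f"simulated: {simulated_collision_prob(n):.3f}")
    print(f"exact:     {exact_collision_prob(n):.3f}")  # about 0.507 for n = 23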
Security (SEC)
Text
Is introduction to ethical thinking and awareness of social issues sufficient for our graduates to act
ethically? If not, does that put the burden on instructors to not only lead discussion about the
pressing questions of the day (free speech, filter bubbles, the rise of nationalism, cryptocurrencies,
economic disruption of automation) but also to weigh in on those matters? How should that be
done? Will we just be imparting our own biases upon our students?
How could we weave SEP throughout the curriculum in practice? Is this realistic? How much
coordination would it take? Is it possible in reality to have experts in ethics, philosophy, etc. *and*
CS course X, Y, and Z deliver some of the SEP content for courses X, Y and Z? How much less
optimal is it to have a standalone ethics course? Is there another model in between these two
extremes (neglecting the super extreme of not having any coordinated or targeted SEP content in
our courses)?
How can we effectively impart the core values of DEIA into our students’ education? How is this
best done in a CS context? How can we effectively impart the core values and skills of
professionalism into our students’ education? Are toy projects a suitable context for these? Are
work placements / internships better? Should we put more focus on having more programs /
degrees include these placements / internships?
Should software developers be licensed by the state just as engineers, architects, and medical
practitioners are? This is an older debate, but given the impact of software systems (akin to safe
bridges, buildings, etc.), maybe it is time to revisit it. This speaks more to SEP since it addresses the
question of assigning (financial) responsibility to a licensed expert who signed off on a software
design or actual code.
What would a set of current SEP case studies look like?
Collateral learning of SEP issues?
Considerations by Curriculum
Active learning is an important component of any computer science course – doing helps learning
computer science. Courses that use electronic books (ebooks) are a significant improvement over
traditional lecture-based courses that do not involve any active learning component – they provide
ample opportunities to apply learning through problem-solving activities. In this vein, it is important
to emphasize that ideally, active learning should cover the entire gamut of skill levels – not just
apply, but also evaluate and develop.
Curricular Practices
Introduction
Prior curricular reports enumerated issues in the design and delivery of computer science curricula.
Given the increased importance of these issues, the decision was made to provide, in the CS2023
curricular report, guidelines for computer science educators to address these issues in their teaching
practices. To this end, experts were identified, and peer-reviewed, well-researched, in-depth articles
were solicited from them to be published under the auspices of CS2023. These articles complement the
curricular guidelines in sections 2 and 3: whereas the curricular guidelines list what should be covered
in the curriculum, the articles describe how and why it should be covered, including challenges, best
practices, etc.
The articles have been subjected to peer review when possible. The computer science education
community has been invited to provide feedback and suggestions on the first drafts of most of the
articles. Many of the articles have been or are in the process of being published in conferences and
journals. In this section, abstracts of the articles have been provided. The full articles are accessible at
the website csed.acm.org.
Social Aspects
Given the pervasive nature of computing applications, educators would be remiss not to teach their
students the principles of responsible computing. How they should go about doing so is explored in the
article “Multiple Approaches for Teaching Responsible Computing”. It uses research in the social
sciences and humanities to nudge responsible computing away from mere post-hoc risk management
to an integrated consideration of values throughout the lifecycle of computing products.
In a globalized world, applications of computing transcend national borders. In this context, students in
the Global North must develop awareness of transnational complexities whereas those from the Global
Souths must adapt to foreign standards and practices. This is brought home by the article “Making
ethics at home in Global CS Education: Provoking stories from the Souths.”
The article “Computing for Social Good in Education” highlights how computing education can be
used to improve society and address societal needs while also providing authentic computing
environments in education and appealing as a discipline to women and groups underrepresented in
computing.
Professional Practices
No curricular guidelines are complete by themselves. They must be adapted to local strengths and
needs. In this regard, the article on “Computer science in the liberal arts context” points the way to
adapting CS2023 to the needs of liberal arts colleges that constrain the size of the major in order to
allow their students exposure to a broad range of subjects. In the same vein, in Section 2, curricular
packaging has been suggested for programs of 8, 12 and 16 courses.
Community and technical colleges award academic transfer degrees that enable students to transfer to
four-year colleges. They provide an affordable on-ramp to baccalaureate degrees that are attuned to
the needs of the local workforce. The article “Computer Science Education in Community Colleges”
provides a roadmap for how the CS2023 curricular guidelines can be adapted to achieve these
objectives.
Programmatic Considerations
Several themes such as abstraction, modularity, generalization, and tradeoff cut across the various
knowledge areas of computer science. Recognizing and appreciating these themes is essential for
developing maturity as computer science professionals. The article “Connecting Concepts across
Knowledge Areas” enumerates these themes as a helpful guide for educators.
The article “The Future of Computer Science Educational Materials” provides a fascinating look into
the rapidly changing landscape of educational materials for computer science, including issues of
personalization, cloud-based access, integration of Artificial Intelligence, attending to social aspects of
learning, catering to underrepresented students, and emphasizing mastery-based learning, to name a
few. It provides a peek into the future of computer science education itself.
The article “The Role of Formal Methods in Computer Science Education” makes the case for
incorporating formal methods in computer science education. It buttresses its case with testimonials
from industry.
Multiple Approaches for Teaching Responsible Computing
Stacy A. Doore, Colby College, Waterville, ME, USA
Atri Rudra, University at Buffalo, Buffalo, NY, USA
Michelle Trim, University of Massachusetts Amherst, Amherst, MA, USA
Joycelyn Streator, Mozilla Foundation, USA
Richard Blumenthal, Regis University, Denver, CO, USA
Bobby Schnabel, University of Colorado Boulder, Boulder, CO, USA
Teaching applied ethics in computer science (and computing in general) has shifted from a perspective
of teaching about professional codes of conduct and an emphasis on risk management towards a
broader understanding of the impacts of computing on humanity and the environment and the principles
and practices of responsible computing. This shift has produced a diversity of approaches for
integrating responsible computing instruction into core computer science knowledge areas and for an
expansion of dedicated courses focused on computing ethics. There is an increased recognition that
students need intentional and consistent opportunities throughout their computer science education to
develop the critical thinking, analytical reasoning, and cultural competency skills to understand their
roles and professional duties in the responsible design, implementation, and management of complex
socio-technological systems. Therefore, computing programs are re-evaluating the ways in which
students learn to identify and assess the impact of computing on individuals, communities, and
societies along with other critical professional skills such as effective communication, workplace
conduct, and regulatory responsibilities.
One of the primary shifts in the approach to teaching computing ethics comes from research in the
social sciences and humanities. This position is grounded in the idea that all computing artifacts,
projects, tools, and products are situated within a set of ideas, attitudes, goals, and cultural norms. This
means that all computing endeavors have embedded within them a set of values. Through teaching
students critical analysis methods, we can help them to identify potential biases, flaws, and
unintentional harms in applications or systems if they can examine the underlying assumptions driving
those designs and work with others to correct them. This kind of analysis makes space to bring real
world technologies, stakeholders, and domain experts into the classroom for discussion, avoiding the
pitfall of only engaging in toy problems. To teach responsible computing always requires us to first
recognize that computing happens in a context that is shaped by cultural values, including our own
professional culture and values.
The purpose of this paper is to highlight current scholarship, principles, and practices in the teaching of
responsible computing in undergraduate computer science settings. The paper is organized around
four primary sections: 1) a high-level rationale for the adoption of different pedagogical approaches
based on program context and course learning goals; 2) a brief survey of responsible computing
pedagogical approaches; 3) illustrative examples of how topics within the CS2023 Society, Ethics and
Professionalism (SEP) knowledge area can be implemented and assessed across the broad spectrum of
undergraduate computing courses; and 4) links to examples of current best practices, tools, and
resources for faculty to build responsible computing teaching into their specific instructional settings
and CS2023 knowledge areas.
Making ethics at home in Global CS Education: Provoking stories
from the Souths
There are few studies about how CS programs should account for the ways ethical dilemmas and
approaches to ethics are situated in cultural, philosophical and governance systems, religions and
languages (Hughes et al., 2020). We explore some of the complexities that arise for teaching and
learning about ethics in the Global Souths, or the geographic and conceptual spaces that are negatively
impacted by capitalist globalization and the US-European norms and values exported in computing
products, processes and education.
We consulted 46 participants in First Nations Australia, Bangladesh, Brazil, Chile, Colombia, Ecuador,
Egypt, Ghana, India, Kenya, Lebanon, Malaysia, Mexico, Namibia and Sri Lanka. Most participants are
university educators, but nine are computer industry professionals. We worked in four geographical
teams: Africa and the Middle East, Asia-Oceania, Latin America and South Asia. A group of
coordinators (co-authors 1, 2, 3, 4 and 13) decided the main topics to explore, and then each team
conducted interviews or administered questionnaires suited to their region.
We organise participants’ insights and experience as stories under four main themes. Firstly, ethics
relates to diverse perspectives on privacy and institutional approaches to confidentiality. Secondly,
people enact ethics by complying with regulations that also attain other goals, and difficulties arise in
education when regulations are absent or practices are ambiguous. Thirdly, discrimination occurs
based on people’s gender, technical ability and/or minoritised position. Finally, participants’ insights
suggest a relational rather than transactional approach to ethics.
Ethical guidelines are entangled in socioeconomic circumstances, cultural norms and existing or
non-existent policies and structures. Diverse participants explained how their practices cannot align with
globalised guidelines and made impressive efforts to fill the gaps and maintain integrity. Thus, guidance
should speak to and come “from within” local realities and should focus on leveraging students’ values,
knowledge and experiences. This requires CS ethics education to promote respect for localised ethical
judgements and to recognise that approaches in the Global Souths are situated in transnational politics
and must juggle many factors in managing globalised regulations (Israel, 2017; Png, 2022). To
prepare students for careers in a global industry, CS educators in the Global North should ensure
students are aware of the many transnational complexities when they introduce examples from the
Souths.
CS students in the Global Souths must adapt at an extraordinary pace. Many learn the globalised
professional standards that signal legitimacy in settings that differ markedly from the places where
they were raised, live, or work. At the same time, they must adapt as their nations introduce new rules
and regulations (e.g., data protection laws) and their educators negotiate the gaps created by static,
anticipatory, globalised ethical codes. Thus, we advocate for In-Action Ethics (Frauenberger et al.,
2017). Rather than focus on ethical principles embedded in a particular ontological stance, In-Action
Ethics centres the ways moral positions are embodied in actions. In-Action Ethics can be applied
across CS knowledge areas, within and between diverse settings, and can prompt students to reflect on
their actions along their own trajectories when considering what is the right thing to do.
References
Christopher Frauenberger, Marjo Rauhala & Geraldine Fitzpatrick. 2017. In-action ethics. Interacting
with Computers, 29(2), 220-236.
Janet Hughes, Ethan Plaut, Feng Wang, Elizabeth von Briesen, Cheryl Brown, Gerry Cross, Viraj
Kumar, & Paul Myers. 2020. Global and local agendas of computing ethics education. In Proceedings
of the Conference on Innovation and Technology in Computer Science Education 239-245. ACM.
Mark Israel. 2017. Ethical imperialism? Exporting research ethics to the global south. The Sage
handbook of qualitative research ethics, 89-102.
Marie-Therese Png. 2022. At the Tensions of Souths and North: Critical Roles of Global South
stakeholders in AI Governance. In Proceedings of the ACM Conference on Fairness, Accountability,
and Transparency, 1434-1445.
Computing for Social Good in Education
Heidi J. C. Ellis, Western New England University, Springfield, MA, USA
Gregory W. Hislop, Drexel University, Philadelphia, PA, USA
Mikey Goldweber, Denison University, Granville, OH, USA
Sam Rebelsky, Grinnell College, Grinnell, IA, USA
Janice L. Pearce, Berea College, KY, USA
Patti Ordonez, University of Maryland Baltimore County, MD, USA
Marcelo Pias, Universidade Federal do Rio Grande, Brazil
Neil Gordon, University of Hull, Hull, UK
Computing for Social Good (CSG) encompasses the potential of computing to have a positive impact
on individuals, communities, and society, both locally and globally. Computing for Social Good in
Education (CSG-Ed) addresses the role of CSG as a component of computing education including the
importance of CSG, recommended CSG content, depth of coverage, approaches to teaching, and
benefits of CSG in computing education. Also covered are a summary of prior work in CSG-Ed, related
topics in computing, and suggestions for implementation of CSG in computing curricula.
The discussion begins with an overview of CSG that expands from defining the term to identifying key
topics that are within scope. The focus of computing for social good is the potential of computing to
improve society and to address common societal needs such as education, health care, and economic
development. This discussion of social good naturally brings to mind the potential for computing to
cause harm as well as good. This connection raises a set of closely related topics including
professional ethics, and various harms of computing such as algorithmic bias, privacy loss, and
ecological impact of computer hardware creation, operation, and disposal.
The importance of CSG in computing education has been recognized for decades, but as with many
important topics, CSG is in tight competition for a share of the available curriculum time. On one hand,
CSG is recognized as essential and therefore has a place in curricular recommendations, codes of
ethics, and accreditation standards. On the other hand, CSG may not be seen as providing knowledge
students need immediately to begin a computing career. As such, the amount of time devoted directly
to CSG is likely to be rather limited in most computing curricula. This tension informs the discussion of
approaches to teaching CSG, where there is consideration of ways to incorporate CSG as part of
teaching core computing technical topics.
The discussion also includes a summary of benefits of CSG-Ed including research results in this area.
CSG has been used to enhance computing education by providing examples and case studies that
embody authentic computing environments. This is especially useful for study of software development
and software engineering with regard to both technical skills such as design under constraints and
dealing with complexity and also professional skills such as problem solving and communication. CSG
has also been shown to impact student motivation and interest. Of particular interest is evidence that
incorporating CSG has a strong positive appeal to women and some other underrepresented groups of
computing students.
Computing continues to expand worldwide, touching and changing more and more aspects of
everyday life in both simple and profound ways. Graduates of computing degree programs must have an
understanding of computing for social good as part of their understanding of computing. This
discussion of CSG-Ed summarizes what we know about delivering that understanding in the rapidly
changing context of computing education.
Computer Science Curriculum Guidelines: A New Liberal Arts
Perspective
Jakob Barnard; University of Jamestown; Jamestown, ND, USA
Valerie Barr; Bard College; Annandale-on-Hudson, NY, USA
Grant Braught; Dickinson College; Carlisle, PA, USA
Janet Davis; Whitman College; Walla Walla, WA, USA
Amanda Holland-Minkley; Washington & Jefferson College; Washington, PA, USA
David Reed; Creighton University; Omaha, NE, USA
Karl Schmitt; Trinity Christian College; Palos Heights, IL, USA
Andrea Tartaro; Furman University; Greenville, SC, USA
James Teresco; Siena College; Loudonville, NY, USA
ACM/IEEE curriculum guidelines for computer science, such as CS2013 or the forthcoming CS2023,
provide well-researched and detailed guidance regarding the content and skills that make up an
undergraduate computer science (CS) program. Liberal arts CS programs often struggle to apply these
guidelines within their institutional and departmental contexts. Historically, this has been addressed
through the development of model CS curricula tailored for the liberal arts context. We take a different
position: that no single model curriculum can apply across the wide range of liberal arts institutions.
Instead, we argue that liberal arts CS educators need best practices for using guidelines such as
CS2023 to inform curriculum design. These practices must acknowledge the opportunities and priorities
of a liberal arts philosophy as well as institutional and program missions, priorities, and identities.
The history, context, and data about liberal arts CS curriculum design support the position that the
liberal arts computing community is best supported by a process for working with curricular guidelines
rather than a curriculum model or set of exemplars. Previous work with ACM/IEEE curriculum
guidelines over the decades has trended towards acknowledging the variety of forms liberal arts CS
curricula may take and away from presenting a unified “liberal arts” model. A review of liberal arts CS
programs demonstrates how institutional context, including institutional mission and structural factors,
shape their curricula. Survey data indicates that liberal arts programs have distinct identities or
missions, and this directly impacts curriculum and course design decisions. Programs prioritize flexible
pathways, coupled with careful limits on required courses and the lengths of prerequisite chains. This
can drive innovative course design where content from Knowledge Areas is
blended rather than compartmentalized into distinct courses. The CS curriculum is viewed as part of the
larger institutional curriculum and the audience for CS courses is broader than just students in the
major, at both the introductory level and beyond.
To support the unique needs of CS liberal arts programs, we propose a process that guides programs
to work with CS2023 through the lens of institutional and program missions and identities, goals,
priorities and situational factors. The Process Workbook we have developed comprises six major steps:
2. develop curricular design principles driven by identity and structural factors, with attention to
diversity, equity, and inclusion;
3. identify aspirational learning outcomes in response to principles and identity;
5. evaluate the current program, with attention to current strengths, unmet goals, and opportunities
for improvement;
An initial version of the Process Workbook, based on our research and feedback from workshops and
pilot usage within individual departments, is available as a supplement to this article. The authors will
continue this iterative design process and release additional updates as we gather more feedback.
Future work includes development of a repository of examples of how programs have made use of the
Workbook to review and redesign their curricula in the light of CS2023.
Computer Science Education in Community Colleges
Community colleges offer specialized programs that help students focus on specific educational
pathways. Among the programs available, computing-related offerings are prominent, including
Computer Science degrees, particularly the Associate in Arts (AA) and Associate in Science (AS)
degrees, known as academic transfer degrees. These transfer degrees are designed to align with the
ACM/IEEE curricular guidelines, primarily focusing on creating two-year programs that
facilitate smooth transferability to four-year colleges.
Furthermore, the computing programs offered by Community Colleges are influenced by the
specific needs and aspirations of the regional workforce and industry. Advisory boards and
committees play a significant role in shaping these programs by providing recommendations
based on the demands of the job market. While the ACM Committee for Computing Education in
Community Colleges (CCECC) and similar entities help address inquiries related to these
transfer degrees, there is a desire to capture the challenges, requirements, and
recommendations from the Community College perspective in developing general curricular
guidelines.
This work presents the context and perspective of a community college education during the
design and development of curricular guidelines, exemplified by the ACM/IEEE/AAAI CS2023
project. It emphasizes the importance of understanding the unique challenges faced by
Community Colleges and their specific needs while formulating curricular guidelines.
Additionally, the paper envisions considerations for future years regarding curricular
development and administrative efforts, in light of the evolving educational landscape and
computing programs offered by Community Colleges and foster better alignment with the
needs of students and the job market.
[1] Christian Servin, Elizabeth K. Hawthorne, Lori Postner, Cara Tang, and Cindy Tucker.
2023. Community Colleges Perspectives: From Challenges to Considerations in Curricula
Development. In Proceedings of the 54th ACM Technical Symposium on Computer Science
Education V. 2 (SIGCSE 2023). Association for Computing Machinery, New York, NY, USA,
1244. https://doi.org/10.1145/3545947.3573335
[2] Elizabeth Hawthorne, Cara Tang, Cindy Tucker, and Christian Servin. 2017. Computer
Science Curricular Guidelines for Associate-Degree Transfer Programs (Abstract Only). In
Proceedings of the 2017 ACM SIGCSE Technical Symposium on Computer Science
Education (SIGCSE '17). Association for Computing Machinery, New York, NY, USA,
725. https://doi.org/10.1145/3017680.3022348
Connecting Concepts across Knowledge Areas
We identify fundamental concepts that we believe students in computing should be familiar with. These
concepts cut across various computational topics and can be introduced to students in a variety of
settings. This work seeks to map these concepts to a set of courses that roughly align with the
ACM/IEEE CS2023 Knowledge-Areas. It also provides suggestions to teach these concepts across
course sequences.
As an example, the concept of a machine state can be conveyed in many ways across many courses.
In digital logic a state may be flip-flop outputs, whereas in programming a set of variables may indicate
state. Similar ideas can be explored in algorithms, systems and the theory of computation. Here the key
idea conveyed to students is that the state is that portion of the past that the machine needs to take the
next step.
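To make the parallel concrete, the following small sketch (ours, not the article's) shows the same idea in
two of the settings mentioned: a mod-4 counter whose state in digital logic would be two flip-flop
outputs, and whose state in a program is just two variables.

    # State as "the portion of the past needed to take the next step":
    # a mod-4 counter needs only its current count -- two bits of state,
    # which digital logic would hold in two flip-flop outputs.

    class Mod4Counter:
        def __init__(self) -> None:
            self.high = 0  # analogous to flip-flop output Q1
            self.low = 0   # analogous to flip-flop output Q0

        def step(self) -> int:
            # Advance one clock tick; the next state depends only on the current state.
            value = (self.high * 2 + self.low + 1) % 4
            self.high, self.low = divmod(value, 2)
            return value

    c = Mod4Counter()
    print([c.step() for _ in range(6)])  # [1, 2, 3, 0, 1, 2]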
On the whole, a goal of this work is to facilitate the placement of “concept-dots” across the curriculum,
with “dots” placed in upstream courses being “connected” in advanced downstream courses to make
key concepts clear. We expect such an effort to provide students a deeper understanding of the
concepts, and a broader perspective on their applicability.
The Future of Computer Science Educational Materials
Peter Brusilovsky, University of Pittsburgh, PA, USA
Barbara Ericson, University of Michigan, USA
Cay Horstmann, PFH Göttingen, Germany
Craig Zilles, University of Illinois at Urbana-Champaign, IL, USA
Christian Servin, El Paso Community College, TX, USA
Frank Vahid, University of California Riverside, CA, USA
CS education relies on diverse educational materials like textbooks, presentation slides, labs,
and test banks, which have evolved significantly over the past two decades. New additions,
such as videos, animations, online homework systems, and auto-graded programming
exercises, aim to enhance student success and elevate the instructor's role. This article
explores the future of educational materials in CS education, focusing on effective approaches.
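As a toy illustration of the auto-graded programming exercises mentioned above (a sketch under
simplifying assumptions; real platforms sandbox and instrument submissions far more carefully), an
exercise can be scored by running a student's function against instructor-defined test cases:

    from typing import Callable

    def autograde(student_fn: Callable[..., object],
                  tests: list[tuple[tuple, object]]) -> str:
        # Run student_fn on each (args, expected) pair and report a score.
        passed = 0
        for args, expected in tests:
            try:
                if student_fn(*args) == expected:
                    passed += 1
            except Exception:
                pass  # a crashing submission simply fails that test
        return f"{passed}/{len(tests)} tests passed"

    # Hypothetical exercise: implement absolute value without calling abs().
    def student_abs(x: int) -> int:
        return x if x >= 0 else -x

    print(autograde(student_abs, [((3,), 3), ((-5,), 5), ((0,), 0)]))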
A prominent trend is the growing interactivity of educational materials, providing immediate
feedback to students throughout their learning journey. Integrating artificial intelligence allows
these materials to adapt to individual learners and offer valuable assistance. Many educational
resources are transitioning to cloud-based platforms, enabling continuous data collection and
analysis for improvement.
Another essential aspect is the emphasis on supporting the social aspects of learning,
promoting peer collaboration and coaching. Open education resources (OER) and products
from educational technology companies are expanding, leading to a demand for customization,
content creation, and seamless sharing of high-quality materials. Automation is increasingly
streamlining class administration, and the importance of tool interoperability is growing.
Learning management systems (LMS) are continually improving to accommodate the changing
landscape of educational materials. Moreover, there's a rising focus on developing materials
that cater to traditionally underrepresented students, fostering inclusivity and diversity in CS
education.
[1] Peter Brusilovsky, Barbara J. Ericson, Cay S. Horstmann, Christian Servin, Frank Vahid,
and Craig Zilles. 2023. Significant Trends in CS Educational Material: Current and Future. In
Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 2
(SIGCSE 2023). Association for Computing Machinery, New York, NY, USA,
1253. https://doi.org/10.1145/3545947.3573353
The Role of Formal Methods in Computer Science Education
Maurice ter Beek, Manfred Broy, Brijesh Dongol, Emil Sekerinski, et al.
Formal Methods provide a wide range of techniques and tools for specifying, developing, analysing, and
verifying software and hardware systems. In the paper, we make four key points: (1) every Computer
Science graduate needs to have an education in Formal Methods; (2) Formal Methods can support
teamwork, code review, software testing, and more; (3) Formal Methods are applicable in numerous
domains (not only in safety-critical applications); and (4) the current offering of Formal Methods in
Computer Science education is inadequate.
Computer Science, namely the science of solving problems with software and software-intensive
systems, provides the knowledge and skills to understand and capture precisely what a situation requires,
and then develop a formal solution in a programming language. The most fundamental skill of a computer
scientist, that of abstraction, is best addressed by Formal Methods. They provide the rigor for reasoning
about goals, such as validation and verification, thus guaranteeing adequacy, accuracy, and correctness
of implementations.
Formal Methods thinking, i.e., the ideas from Formal Methods applied in informal, lightweight,
practical, and accessible ways, should be part of the recommended curriculum for every Computer
Science student. Even students who train only in that “thinking” will become much better programmers.
In addition, there are students who, exposed to those ideas, will be ideally positioned to study more: why
the techniques work; how they can be automated; and how new ones can be developed. They could
subsequently follow an optional path, including topics such as semantics, logics, and proof-automation
techniques.
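As one small example of such lightweight Formal Methods thinking (our illustration, not taken from the
article), a precondition, a loop invariant and a postcondition can be stated and machine-checked with
ordinary assertions:

    def integer_sqrt(n: int) -> int:
        # Largest r with r*r <= n, developed around an explicit invariant.
        assert n >= 0                     # precondition
        r = 0
        while (r + 1) * (r + 1) <= n:
            assert r * r <= n             # loop invariant, checked each iteration
            r += 1
        assert r * r <= n < (r + 1) * (r + 1)  # postcondition
        return r

    print(integer_sqrt(10))  # 3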
Formal Methods were conceived for teaching programming to novices more effectively than by
informal reasoning and testing. Formal Methods explain algorithmic problem solving, design patterns,
model-driven engineering, software architecture, software product lines, requirements engineering, and
security. Formalisms can concisely and precisely express underlying fundamental design principles and
equip programmers with a tool to handle related problems.
Formal Methods are becoming widely applied in industry, from eliciting requirements and early
design to deployment, configuration, and runtime monitoring. Successful applications of Formal
Methods in industry range from well-known stories in safety-critical domains, such as railways and
other transportation, to areas such as lithography manufacturing and cloud security in e-commerce.
Testimonies come from representatives who, either directly or indirectly, use or have used Formal
Methods in their industrial project endeavours. Importantly, they are spread geographically, including
Europe, Asia, and North and South America.
ACM-CS2023 is the ideal time and place to adjust the way we teach Computer Science. There
are mature tools and proofs of concept available and the possibility of designing coherent teaching paths.
Importantly, this can be done without displacing the other “engineering” aspects of Computer Science
already widely accepted as essential. Support for teachers is available.
Acknowledgments
General
Reviewers
Thomas Clemen, Hamburg University of Applied Sciences, Hamburg, Germany
Jon Crowcroft, University of Cambridge, Cambridge, UK
Melissa Dark, Dark Enterprises, Inc., Lafayette, IN, USA
Arindam Das, Eastern Washington University, Cheney, WA, USA
Karen C. Davis, Miami University, Oxford, OH, USA
Henry Duwe, Iowa State University, Ames, IA, USA
Roger D. Eastman, University of Maryland, College Park, MD, USA
Yasmine Elglaly, Western Washington University, Bellingham, WA, USA
Trilce Estrada, University of New Mexico, Albuquerque, NM, USA
David Flanagan, Textbook Author
Akshay Gadre, University of Washington, Seattle, WA, USA
Ed Gehringer, North Carolina State University, Raleigh, NC, USA
Sheikh Ghafoor, Tennessee Tech University, Cookeville, TN, USA
Tirthankar Ghosh, University of New Haven, West Haven, CT, USA
Michael Goldwasser, Saint Louis University, St. Louis, MO, USA
Martin Goodfellow, University of Strathclyde, Glasgow, UK
Vikram Goyal, IIIT, Delhi, India
Xinfei Guo, Shanghai Jiao Tong University, Shanghai, China
Anshul Gupta, IBM Research, Yorktown Heights, NY, USA
Sally Hamouda, Virginia Tech, Blacksburg, VA, USA
Matthew Hertz, University at Buffalo, Buffalo, NY, USA
Michael Hilton, Carnegie Mellon University, Pittsburgh, PA, USA
Bijendra Nath Jain, IIIT, Delhi, India
Kenneth Johnson, Auckland University of Technology, Auckland, New Zealand
Krishna Kant, Temple University, Philadelphia, PA, USA
Hakan Kantas, Halkbank, Istanbul, Turkiye
Amey Karkare, Indian Institute of Technology, Kanpur, India
Kamalakar Karlapalem, International Institute of Information Technology, Hyderabad, India
Theodore Kim, Yale University, New Haven, CT, USA
Michael S. Kirkpatrick, James Madison University, Harrisonburg, VA, USA
Tobias Kohn, Vienna University of Technology, Vienna, Austria
Eleandro Maschio Krynski, Universidade Tecnológica Federal do Paraná, Guarapuava, Paraná,
Brazil
Ludek Kucera, Charles University, Prague, Czechia
Fernando Kuipers, Delft University of Technology, Delft, The Netherlands
Matthew Fowles Kulukundis, Google, Inc., New York, NY, USA
Zachary Kurmas, Grand Valley State University, Allendale, MI, USA
Rosa Lanzilotti, Università di Bari, Bari, Italy
Alexey Lastovetsky, University College Dublin, Dublin, Ireland
Gary T. Leavens, University of Central Florida, Orlando, FL, USA
Kent D. Lee, Luther College, Decorah, IA, USA
Bonnie Mackellar, St. John’s University, Queens, NY, USA
Alessio Malizia, Università di Pisa, Pisa, Italy
Sathiamoorthy Manoharan, University of Auckland, Auckland, New Zealand
Maristella Matera, Politecnico di Milano, Milano, Italy
Stephanos Matsumoto, Olin College of Engineering, Needham, MA, USA
Paul McKenney, Facebook, Inc.
Michael A. Murphy, Coastal Carolina University, Conway, SC, USA
Raghava Mutharaju, IIIT, Delhi, India
V. Lakshmi Narasimhan, Georgia Southern University, Statesboro, GA, USA
Peter Pacheco, University of San Francisco, San Francisco, CA, USA
Andrew Petersen, University of Toronto, Mississauga, Canada
Cynthia A Phillips, Sandia National Lab, Albuquerque, NM, USA
Benjamin C. Pierce, University of Pennsylvania, Philadelphia, PA, USA
Sushil K. Prasad, University of Texas, San Antonio, TX, USA
Rafael Prikladnicki, Pontificia Universidade Catolica do Rio Grande do Sul, Porto Alegre, Brazil
Keith Quille, Technological University Dublin, Dublin, Ireland
Catherine Ricardo, Iona University, New Rochelle, NY, USA
Luigi De Russis, Politecnico di Torino, Torino, Italy
Beatriz Sousa Santos, University of Aveiro, Aveiro, Portugal
Michael Shindler, University of California, Irvine, CA, USA
Ben Shneiderman, University of Maryland, College Park, MD, USA
Anna Spagnolli, Università di Padova, Padova, Italy
Davide Spano, Università di Cagliari, Cagliari, Italy
Anthony Steed, University College London, London, UK
Michael Stein, Metro State University, Saint Paul, MN, USA
Alan Sussman, University of Maryland, College Park, MD, USA
Andrea Tartaro, Furman University, Greenville, SC, USA
Tim Teitelbaum, Cornell University, Ithaca, NY, USA
Joseph Temple, Coastal Carolina University, Conway, SC, USA
Ramachandran Vaidyanathan, Louisiana State University, Baton Rouge, LA, USA
Salim Virji, Google Inc., New York, NY, USA
Giuliana Vitiello, Università di Salerno, Salerno, Italy
Philip Wadler, The University of Edinburgh, Edinburgh, UK
Charles Weems, University of Massachusetts, Amherst, MA, USA
Xiaofeng Wang, Free University of Bozen-Bolzano, Bolzano, Italy
Miguel Young de la Sota, Google Inc., USA
Massimo Zancanaro, Università di Trento, Trento, Italy
Ming Zhang, Peking University, Beijing, China
References
1. Atchison, W. F., Conte, S. D., Hamblen, J. W., Hull, T. E., Keenan, T. A., Kehl, W. B., McCluskey,
E. J., Navarro, S. O., Rheinboldt, W. C., Schweppe, E. J., Viavant, W., and Young, D. “Curriculum
68: Recommendations for academic programs in computer science.” Communications of the ACM,
11, 3 (1968): 151–197.
2. Austing, R. H., Barnes, B. H., Bonnette, D. T., Engel, G. L., and Stokes, G. “Curriculum ’78:
Recommendations for the undergraduate program in computer science.” Communications of the
ACM, 22, 3 (1979): 147–166.
3. ACM/IEEE-CS Joint Curriculum Task Force. “Computing Curricula 1991.” (New York, USA: ACM
Press and IEEE Computer Society Press, 1991).
4. ACM/IEEE-CS Joint Curriculum Task Force. “Computing Curricula 2001 Computer Science.” (New
York, USA: ACM Press and IEEE Computer Society Press, 2001).
5. ACM/IEEE-CS Interim Review Task Force. “Computer Science Curriculum 2008: An interim revision
of CS 2001.” (New York, USA: ACM Press and IEEE Computer Society Press, 2008).
6. ACM/IEEE-CS Joint Task Force on Computing Curricula. “Computer Science Curricula 2013.”
(New York, USA: ACM Press and IEEE Computer Society Press, 2013).
7. Sabin, M., Alrumaih, H., Impagliazzo, J., Lunt, B., Zhang, M., Byers, B., Newhouse, W., Paterson,
W., Tang, C., van der Veer, G. and Viola, B. Information Technology Curricula 2017: Curriculum
Guidelines for Baccalaureate Degree Programs in Information Technology. Association for
Computing Machinery, New York, NY, USA, (2017).
8. Clear, A., Parrish, A., Impagliazzo, J., Wang, P., Ciancarini, P., Cuadros-Vargas, E., Frezza, S.,
Gal-Ezer, J., Pears, A., Takada, S., Topi, H., van der Veer, G., Vichare, A., Waguespack, L. and
Zhang, M. Computing Curricula 2020 (CC2020): Paradigms for Future Computing Curricula.
Technical Report. Association for Computing Machinery / IEEE Computer Society, New York, NY,
USA, (2020).
9. Leidig, P. and Salmela, H. A Competency Model for Undergraduate Programs in Information
Systems (IS2020). Technical Report. Association for Computing Machinery, New York, NY, USA,
(2021).
10. Danyluk, A. and Leidig, P. Computing Competencies for Undergraduate Data Science Curricula
(DS2021). Technical Report. Association for Computing Machinery, New York, NY, USA, (2021).
11. https://iiitd.ac.in/sites/default/files/docs/aicte/AICTE-CSE-Curriculum-Recommendations-
July2022.pdf, last accessed July 2023.
12. Prasad, S. K., Estrada, T., Ghafoor, S., Gupta, A., Kant, K., Stunkel, C., Sussman, A.,
Vaidyanathan, R., Weems, C., Agrawal, K., Barnas, M., Brown, D. W., Bryant, R., Bunde, D. P.,
Busch, C., Deb, D., Freudenthal, E., Jaja, J., Parashar, M., Phillips, C., Robey, B., Rosenberg, A.,
Saule, E., Shen, C. 2020. NSF/IEEE-TCPP Curriculum Initiative on Parallel and Distributed
Computing - Core Topics for Undergraduates, Version II-beta,
Online: http://tcpp.cs.gsu.edu/curriculum/, 53 pages.
13. https://ccecc.acm.org/files/publications/Cyber2yr2020.pdf, last accessed July 2023.
14. https://www.computer.org/volunteering/boards-and-committees/professional-educational-
activities/software-engineering-competency-model, last accessed July 2023.
15. Amruth N. Kumar, Brett A. Becker, Marcelo Pias, Michael Oudshoorn, Pankaj Jalote, Christian
Servin, Sherif G. Aly, Richard L. Blumenthal, Susan L. Epstein, and Monica D. Anderson. 2023. A
Combined Knowledge and Competency (CKC) Model for Computer Science Curricula. ACM
Inroads 14, 3 (September 2023), 22–29. https://doi.org/10.1145/3605215
16. Clear, A., Clear, T., Vichare, A., Charles, T., Frezza, S., Gutica, M., Lunt, B., Maiorana, F., Pears,
A., Pitt, F., Riedesel, C. and Szynkiewicz, J. Designing Computer Science Competency Statements:
A Process and Curriculum Model for the 21st Century. In Proceedings of the Working Group
Reports on Innovation and Technology in Computer Science Education (ITiCSE-WGR '20).
Association for Computing Machinery, New York, NY, USA, (2020), 211–246.
17. Frezza, S., Daniels, M., Pears, A., Cajander, A., Kann, V., Kapoor, A., McDermott, R., Peters, A.,
Sabin, M. and Wallace, C. Modelling Competencies for Computing Education beyond 2020: A
Research Based Approach to Defining Competencies in the Computing Disciplines. In Proceedings
Companion of the 23rd Annual ACM Conference on Innovation and Technology in Computer
Science Education (Larnaca, Cyprus) (ITiCSE 2018 Companion). Association for Computing
Machinery, New York, NY, USA, (2018), 148–174.
18. Anderson, Lorin W. and Krathwohl, David R., eds. (2001). A taxonomy for learning, teaching, and
assessing: A revision of Bloom's taxonomy of educational objectives. New York: Longman. ISBN
978-0-8013-1903-7.
19. Adeleye Bamkole, Markus Geissler, Koudjo Koumadi, Christian Servin, Cara Tang, and Cindy S.
Tucker. "Bloom’s for Computing: Enhancing Bloom's Revised Taxonomy with Verbs for Computing
Disciplines". The Association for Computing Machinery. (January 2023).
https://ccecc.acm.org/files/publications/Blooms-for-Computing-20230119.pdf