Artificial Intelligence (AI)
Preamble
Artificial intelligence (AI) studies problems that are difficult or impractical to solve with traditional
algorithmic approaches. These problems are often reminiscent of those considered to require human
intelligence, and the resulting AI solution strategies typically generalize over classes of problems. AI
techniques are now pervasive in computing, supporting everyday applications such as email, social
media, photography, financial markets, and intelligent virtual assistants (e.g., Siri, Alexa). These
techniques are also used in the design and analysis of autonomous agents that perceive their
environment and interact rationally with it, such as self-driving vehicles and other robots.
Traditionally, AI has included a mix of symbolic and subsymbolic approaches. The solutions it provides
rely on a broad set of general and specialized knowledge representation schemes, problem solving
mechanisms, and optimization techniques. These approaches deal with perception (e.g., speech
recognition, natural language understanding, computer vision), problem solving (e.g., search, planning,
optimization), generation (e.g., narrative, conversation, images, models, recommendations), acting
(e.g., robotics, task-automation, control), and the architectures needed to support them (e.g., single
agents, multi-agent systems). Machine learning may be used within each of these aspects, and can
even be employed end-to-end across all of them. The study of Artificial Intelligence prepares students
to determine when an AI approach is appropriate for a given problem, identify appropriate
representations and reasoning mechanisms, implement them, and evaluate them with respect to both
performance and their broader societal impact.
Over the past decade, the term “artificial intelligence” has become commonplace within businesses,
news articles, and everyday conversation, driven largely by a series of high-impact machine learning
applications. These advances were made possible by the widespread availability of large datasets,
increased computational power, and algorithmic improvements. In particular, there has been a shift
from engineered representations to representations learned automatically through optimization over
large datasets. The resulting advances have put such terms as “neural networks” and “deep learning”
into everyday vernacular. Businesses now advertise AI-based solutions as value-additions to their
services, so that “artificial intelligence” is now both a technical term and a marketing buzzword. Other
disciplines, such as biology, art, architecture, and finance, increasingly use AI techniques to solve
problems within their disciplines.
For the first time in our history, the broader population has access to sophisticated AI-driven tools,
including tools to generate essays or poems from a prompt, artwork from a description, and fake
photographs or videos depicting real people. AI technology is now in widespread use in stock trading,
curating our news and social media feeds, automated evaluation of job applicants, detection of medical
conditions, and influencing prison sentencing through recidivism prediction. Consequently, AI
technology can have significant societal impacts, and its ethical implications must be understood and
considered when it is developed and applied.
Changes since CS2013
To reflect this recent growth and societal impact, the knowledge area has been revised from CS2013 in
the following ways:
● The name has changed from “Intelligent Systems” to “Artificial Intelligence,” to reflect the most
common terminology used for these topics within the field and its more widespread use outside
the field.
● An increased emphasis on neural networks and representation learning reflects the recent
advances in the field. Given its key role throughout AI, search is still emphasized, but there is a
slight reduction in emphasis on symbolic methods in favor of understanding subsymbolic methods and
learned representations. It is important, however, to retain knowledge-based and symbolic
approaches within the AI curriculum because these methods offer unique capabilities, are used
in practice, ensure a broad education, and because more recent neurosymbolic approaches
integrate both learned and symbolic representations.
● There is an increased emphasis on practical applications of AI, including a variety of areas (e.g.,
medicine, sustainability, social media, etc.). This includes explicit discussion of tools that employ
deep generative models (e.g., ChatGPT, DALL-E, Midjourney) and are now in widespread use,
covering how they work at a high level, their uses, and their shortcomings/pitfalls.
● The curriculum reflects the importance of understanding and assessing the broader societal
impacts and implications of AI methods and applications, including issues in AI ethics, fairness,
trust, and explainability.
● The AI knowledge area includes connections to data science through 1) cross-connections with
the Data Management and other knowledge areas and 2) a sample Data Science model course.
● There are explicit goals to develop basic AI literacy and critical thinking in every computer
science student, given the breadth of interconnections between AI and other knowledge areas
in practice.
Core Hours
Knowledge Units CS Core KA Core
Fundamental Issues 2 1
Search 2 + 3 (AL) 6
Machine Learning 4 6
Planning
Robotics
Total 12 (16) 18
Note: The CS Core includes 3 hours that are shared with and counted under Algorithm Foundations
(Uninformed search) and 1 hour that is shared with and counted under Mathematical Foundations
(Probability). The AI KA contributes 12 hours in total toward the complete CS Core; 16 hours if you
include those hours counted under other KAs.
Knowledge Units
4. Problem characteristics
a. Fully versus partially observable
b. Single versus multi-agent
c. Deterministic versus stochastic
d. Static versus dynamic
e. Discrete versus continuous
5. Nature of agents
a. Autonomous, semi-autonomous, mixed-initiative autonomy
b. Reflexive, goal-based, and utility-based
c. Decision making under uncertainty and with incomplete information
d. The importance of perception and environmental interactions
e. Learning-based agents
f. Embodied agents
i. sensors, dynamics, effectors
6. Overview of AI Applications, growth, and impact (economic, societal, ethics)
KA Core:
7. Practice identifying problem characteristics in example environments
8. Additional depth on nature of agents with examples
9. Additional depth on AI Applications, growth, and Impact (economic, societal, ethics, security)
Non-core:
10. Philosophical issues
11. History of AI
AI-Search: Search
CS Core:
1. State space representation of a problem
a. Specifying states, goals, and operators
b. Factoring states into representations (hypothesis spaces)
c. Problem solving by graph search
i. e.g., Graphs as a space, and tree traversals as exploration of that space
ii. Dynamic construction of the graph (you’re not given it upfront)
2. Uninformed graph search for problem solving (See also: AL-Foundational)
a. Breadth-first search
b. Depth-first search
i. With iterative deepening
c. Uniform cost search
3. Heuristic graph search for problem solving (See also: AL-Strategies)
a. Heuristic construction and admissibility
b. Hill-climbing
c. Local minima and the search landscape
i. Local vs global solutions
d. Greedy best-first search
e. A* search
4. Space and time complexities of graph search algorithms
KA Core:
5. Bidirectional search
6. Beam search
7. Two-player adversarial games
a. Minimax search
b. Alpha-beta pruning
i. Ply cutoff
8. Implementation of A* search
9. Constraint satisfaction
Non-core:
10. Understanding the search space
a. Constructing search trees
b. Dynamic search spaces
c. Combinatorial explosion of search space
d. Search space topology (ridges, saddle points, local minima, etc.)
11. Local search
12. Tabu search
13. Variations on A* (IDA*, SMA*, RBFS)
14. Two-player adversarial games
a. The horizon effect
b. Opening playbooks / endgame solutions
c. What it means to “solve” a game (e.g., checkers)
15. Implementation of minimax search, beam search
16. Expectimax search (MDP-solving) and chance nodes
17. Stochastic search
a. Simulated annealing
b. Genetic algorithms
c. Monte-Carlo tree search
3. Select and implement an appropriate informed search algorithm for a problem after designing a
helpful heuristic function (e.g., a robot navigating a 2D gridworld); a minimal sketch follows these
outcomes.
4. Evaluate whether a heuristic for a given problem is admissible/can guarantee an optimal solution.
5. Apply minimax search in a two-player adversarial game (e.g., connect four), using heuristic
evaluation at a particular depth to compute the scores to back up. [KA Core]
6. Design and implement a genetic algorithm solution to a problem.
7. Design and implement a simulated annealing schedule to avoid local minima in a problem.
8. Design and implement A*/beam search to solve a problem, and compare it against other search
algorithms in terms of the solution cost, number of nodes expanded, etc.
9. Apply minimax search with alpha-beta pruning to prune search space in a two-player adversarial
game (e.g., connect four).
10. Compare and contrast genetic algorithms with classic search techniques, explaining when it is most
appropriate to use a genetic algorithm to learn a model versus other forms of optimization (e.g.,
gradient descent).
11. Compare and contrast various heuristic searches vis-a-vis applicability to a given problem.
12. Model a logic or Sudoku puzzle as a constraint satisfaction problem, solve it with backtrack search,
and determine how much arc consistency can reduce the search space.
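To ground outcomes 3 and 8, here is a minimal sketch of A* on a 2D gridworld with a
Manhattan-distance heuristic (admissible for 4-connected, unit-cost grids). The grid encoding and
function names are illustrative assumptions, not part of the curriculum:

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* over a 2D gridworld; 0 = free cell, 1 = obstacle.
    Returns (path_cost, nodes_expanded), or (None, nodes_expanded)."""
    def h(cell):  # Manhattan distance: admissible on 4-connected grids
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start)]        # entries are (f = g + h, g, state)
    best_g = {start: 0}
    expanded = 0
    while frontier:
        f, g, cell = heapq.heappop(frontier)
        if cell == goal:
            return g, expanded
        if g > best_g.get(cell, float("inf")):
            continue                         # stale queue entry; skip it
        expanded += 1
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1                   # unit step cost
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return None, expanded

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))          # -> (6, nodes expanded)
```

Tracking nodes expanded alongside solution cost supports the comparisons against other search
algorithms called for in outcome 8.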
KA Core:
4. Random variables and probability distributions
a. Axioms of probability
b. Probabilistic inference
c. Bayes’ Rule (derivation; a worked form follows this list)
d. Bayesian inference (more complex examples)
5. Independence
6. Conditional Independence
7. Markov chains and Markov models
8. Utility and decision making
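Topic 4c calls for deriving Bayes’ Rule; a compact version of that standard derivation, starting from
the definition of conditional probability, is:

```latex
P(A \mid B) = \frac{P(A \cap B)}{P(B)}
\quad\text{and}\quad
P(B \mid A) = \frac{P(A \cap B)}{P(A)}
\;\Longrightarrow\;
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)},
\qquad
P(B) = \sum_i P(B \mid A_i)\,P(A_i)
```

Expanding P(B) by the law of total probability (over a partition A_i) is what makes the rule usable
for the more complex inference examples of topic 4d.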
KA Core:
10. Formulation of simple machine learning as an optimization problem, such as least squares linear
regression or logistic regression (a minimal gradient-descent sketch follows this list)
a. Objective function
b. Gradient descent
c. Regularization to avoid overfitting (mathematical formulation)
11. Ensembles of models
a. Simple weighted majority combination
12. Deep learning
a. Deep feed-forward networks (intuition only, no mathematics)
b. Convolutional neural networks (intuition only, no mathematics)
c. Visualization of learned feature representations from deep nets
d. Other architectures (generative NN, recurrent NN, transformers, etc.)
13. Performance evaluation
a. Other metrics for classification (e.g., error, precision, recall)
b. Performance metrics for regressors
c. Confusion matrix
d. Cross-validation
i. Parameter tuning (grid/random search, via cross-validation)
14. Overview of reinforcement learning methods
15. Two or more applications of machine learning algorithms
a. E.g., medicine and health, economics, vision, natural language, robotics, game play
16. Ethics for Machine Learning
a. Continued focus on real data, real scenarios, and case studies (See also: SEP-Context)
b. Privacy (See also: SEP-Privacy)
c. Fairness (See also: SEP-Privacy)
d. Intellectual property
e. Explainability
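To make topic 10 concrete, here is a minimal from-scratch sketch of least squares linear regression
trained by batch gradient descent with L2 regularization. The synthetic data, step size, and
regularization weight are the author's assumptions for illustration, not a prescribed implementation:

```python
import random

def fit_linear(xs, ys, lr=0.05, lam=0.01, epochs=500):
    """Minimize (1/n) * sum((w*x + b - y)^2) + lam * w^2 by gradient descent."""
    w, b, n = 0.0, 0.0, len(xs)
    for _ in range(epochs):
        # Partial derivatives of the regularized objective w.r.t. w and b.
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n + 2 * lam * w
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(100)]
ys = [3.0 * x + 1.0 + random.gauss(0, 0.1) for x in xs]  # true w = 3, b = 1
print(fit_linear(xs, ys))  # -> roughly (2.9, 1.0); lam shrinks w slightly toward 0
```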
Non-core:
17. General statistical-based learning, parameter estimation (maximum likelihood)
18. Supervised learning
a. Decision trees
b. Nearest-neighbor classification and regression
c. Learning simple neural networks / multi-layer perceptrons
d. Linear regression
e. Logistic regression
f. Support vector machines (SVMs) and kernels
g. Gaussian Processes
19. Overfitting
a. The curse of dimensionality
b. Regularization (mathematical computations, L2 and L1 regularization)
20. Experimental design
a. Data preparation (e.g., standardization, representation, one-hot encoding)
b. Hypothesis space
c. Biases (e.g., algorithmic, search)
d. Partitioning data: stratification, training set, validation set, test set
e. Parameter tuning (grid/random search, via cross-validation)
f. Performance evaluation
i. Cross-validation
ii. Metric: error, precision, recall, confusion matrix
iii. Receiver operating characteristic (ROC) curve and area under ROC curve
21. Bayesian learning (See also: AI-Probability)
a. Naive Bayes and its relationship to linear models
b. Bayesian networks
c. Prior/posterior
d. Generative models
22. Deep learning
a. Deep feed-forward networks
b. Neural tangent kernel and understanding neural network training
c. Convolutional neural networks
d. Autoencoders
e. Recurrent networks
f. Representations and knowledge transfer
g. Adversarial training and generative adversarial networks
h. Attention mechanisms
23. Representations
a. Manually crafted representations
b. Basis expansion
c. Learned representations (e.g., deep neural networks)
24. Unsupervised learning and clustering
a. K-means
b. Gaussian mixture models
c. Expectation maximization (EM)
d. Self-organizing maps
25. Graph analysis (e.g., PageRank)
26. Semi-supervised learning
27. Graphical models (See also: AI-Probability)
28. Ensembles
a. Weighted majority
b. Boosting/bagging
c. Random forest
d. Gated ensemble
29. Learning theory
a. General overview of learning theory / why learning works
b. VC dimension
c. Generalization bounds
30. Reinforcement learning
a. Exploration vs exploitation trade-off
b. Markov decision processes
c. Value and policy iteration
d. Policy gradient methods
e. Deep reinforcement learning
f. Learning from demonstration and inverse RL
31. Explainable / interpretable machine learning
a. Understanding feature importance (e.g., LIME, Shapley values)
b. Interpretable models and representations
32. Recommender systems
33. Hardware for machine learning
a. GPUs / TPUs
34. Application of machine learning algorithms to:
a. Medicine and health
b. Economics
c. Education
d. Vision
e. Natural language
f. Robotics
g. Game play
h. Data mining (Cross-reference DM/Data Analytics)
35. Ethics for Machine Learning
a. Continued focus on real data, real scenarios, and case studies (See also: SEP-Context)
b. In depth exploration of dataset/algorithmic/evaluation bias, data privacy, and fairness (See also:
SEP-Privacy, SEP-Context)
c. Trust / explainability
6. Explain how machine learning works as an optimization/search process.
7. Implement a statistical learning algorithm and the corresponding optimization process to train the
classifier and obtain a prediction on new data.
8. Describe the neural network training process and resulting learned representations.
9. Explain proper ML evaluation procedures, including the differences between training and testing
performance, and what can go wrong with the evaluation process, leading to inaccurate reporting of
ML performance.
10. Compare two machine learning algorithms on a dataset, implementing the data preprocessing and
evaluation methodology (e.g., metrics and handling of train/test splits) from scratch (a minimal
sketch follows these outcomes).
11. Visualize the training progress of a neural network through learning curves in a well-established
toolkit (e.g., TensorBoard) and visualize the learned features of the network.
12. Compare and contrast several learning techniques (e.g., decision trees, logistic regression, naive
Bayes, neural networks, and belief networks), providing examples of when each strategy is
superior.
13. Evaluate the performance of a simple learning system on a real-world dataset.
14. Characterize the state of the art in learning theory, including its achievements and shortcomings.
15. Explain the problem of overfitting, along with techniques for detecting and managing the problem.
16. Explain the triple tradeoff among the size of a hypothesis space, the size of the training set, and
performance accuracy.
17. Given a real-world application of machine learning, describe ethical issues regarding the choices of
data, preprocessing steps, algorithm selection, and visualization/presentation of results.
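Outcome 10 asks for the evaluation methodology to be built from scratch. Below is one hedged
sketch: a train/test split and an accuracy comparison of two deliberately simple classifiers
(majority class vs 1-nearest-neighbor). The toy dataset, split ratio, and metric are the author's
choices for illustration:

```python
import random

def train_test_split(data, test_frac=0.3, seed=0):
    rng = random.Random(seed)
    data = data[:]                      # copy before shuffling
    rng.shuffle(data)
    cut = int(len(data) * (1 - test_frac))
    return data[:cut], data[cut:]

def majority_classifier(train):
    labels = [y for _, y in train]
    top = max(set(labels), key=labels.count)
    return lambda x: top                # always predicts the majority class

def nn_classifier(train):
    return lambda x: min(train, key=lambda p: abs(p[0] - x))[1]

def accuracy(model, test):
    return sum(model(x) == y for x, y in test) / len(test)

# Toy 1-D dataset: label is 1 when the feature exceeds 0.5.
data = [(x / 100, int(x / 100 > 0.5)) for x in range(100)]
train, test = train_test_split(data)
for name, make in [("majority", majority_classifier), ("1-NN", nn_classifier)]:
    print(name, accuracy(make(train), test))
```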
CS Core:
1. At least one application of AI to a specific problem and field, such as medicine, health,
sustainability, social media, economics, education, robotics, etc. (choose at least one for the CS
Core).
a. Formulating and evaluating a specific application as an AI problem
i. How to deal with underspecified or ill-posed problems
b. Data availability/scarcity and cleanliness
i. Basic data cleaning and preprocessing
ii. Data set bias
c. Algorithmic bias
d. Evaluation bias
e. Assessment of societal implications of the application
2. Deployed deep generative models
a. High-level overview of deep image generative models (e.g., as of 2023, DALL-E, Midjourney,
Stable Diffusion, etc.), their uses, and their shortcomings/pitfalls.
b. High-level overview of large language models (e.g., as of 2023, ChatGPT, Bard, etc.), their uses,
and their shortcomings/pitfalls.
3. Overview of societal impact of AI
a. Ethics (See also: SEP-Context)
b. Fairness (See also: SEP-Privacy, SEP-DEIA)
c. Trust / explainability (See also: SEP-Context)
d. Privacy and usage of training data (See also: SEP-Privacy)
e. Human autonomy and oversight/regulations/legal requirements (See also: SEP-Context)
f. Sustainability (See also: SEP-Sustainability)
KA Core:
4. One or more additional applications of AI to a broad set of problems and diverse fields, such as
medicine, health, sustainability, social media, economics, education, robotics, etc. (choose a
different area from that chosen for the CS Core).
a. Formulating and evaluating a specific application as an AI problem
i. How to deal with underspecified or ill-posed problems
b. Data availability/scarcity and cleanliness
i. Basic data cleaning and preprocessing
ii. Data set bias
c. Algorithmic bias
d. Evaluation bias
e. Assessment of societal implications of the application
5. Additional depth on deployed deep generative models
a. Introduction to how deep image generative models (e.g., as of 2023, DALL-E, Midjourney, Stable
Diffusion, etc.) work, including discussion of attention
b. Introduction to how large language models (e.g., as of 2023, ChatGPT, Bard, etc.) work,
including discussion of attention
c. The idea of foundation models, how to use them, and the benefits/issues of training them from
big data
6. Analysis and discussion of the societal impact of AI
a. Ethics (See also: SEP-Context)
b. Fairness (See also: SEP-Privacy, SEP-DEIA)
c. Trust / explainability (See also: SEP-Context)
d. Privacy and usage of training data (See also: SEP-Privacy)
e. Human autonomy and oversight/regulations/legal requirements (See also: SEP-Context)
f. Sustainability (See also: SEP-Sustainability)
3. Describe some of the failure modes of current deep generative models for language or images, and
how this could affect their use in an application.
1. Conditional Independence review
2. Knowledge representations
a. Bayesian Networks
i. Exact inference and its complexity
ii. Markov blankets and d-separation
iii. Randomized sampling (Monte Carlo) methods (e.g. Gibbs sampling)
b. Markov Networks
c. Relational probability models
d. Hidden Markov Models
3. Decision Theory
a. Preferences and utility functions
b. Maximizing expected utility (a small worked sketch follows this list)
c. Game theory
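To ground topic 3b, the sketch below computes the expected utility of candidate actions and picks
the maximizing one. The actions, outcome probabilities, and utilities are invented solely for
illustration:

```python
# Each action maps to a list of (probability, utility) outcome pairs.
actions = {
    "take_umbrella":  [(0.3, 60), (0.7, 80)],   # rain vs no rain
    "leave_umbrella": [(0.3, 0),  (0.7, 100)],
}

def expected_utility(outcomes):
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9  # probabilities sum to 1
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
for a, outcomes in actions.items():
    print(a, expected_utility(outcomes))
print("MEU choice:", best)  # take_umbrella: 74 vs leave_umbrella: 70
```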
AI-Planning: Planning
Non-core:
1. Review of propositional and first-order logic
2. Planning operators and state representations
3. Total order planning
4. Partial-order planning
5. Plan graphs and GraphPlan
6. Hierarchical planning
7. Planning languages and representations
a. PDDL
8. Multi-agent planning
9. MDP-based planning
10. Interconnecting planning, execution, and dynamic replanning
a. Conditional planning
b. Continuous planning
c. Probabilistic planning
Illustrative Learning Outcomes:
1. Construct the state representation, goal, and operators for a given planning problem (a minimal
STRIPS-style sketch follows these outcomes).
2. Encode a planning problem in PDDL and use a planner to solve it.
3. Given a set of operators, initial state, and goal state, draw the partial-order planning graph and
include ordering constraints to resolve all conflicts.
4. Construct the complete planning graph for GraphPlan to solve a given problem.
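Illustrating outcome 1, here is a minimal STRIPS-style encoding of operators as preconditions plus
add/delete lists, solved with breadth-first forward search. This is an informal Python stand-in for
a PDDL encoding; the blocks-world-like facts and operator names are assumptions for illustration:

```python
from collections import deque

# A state is a frozenset of ground facts. Each operator is a triple of
# (preconditions, add list, delete list) -- the classical STRIPS form.
OPS = {
    "pick_up_A": (frozenset({"on_table_A", "hand_empty"}),
                  frozenset({"holding_A"}),
                  frozenset({"on_table_A", "hand_empty"})),
    "stack_A_on_B": (frozenset({"holding_A", "clear_B"}),
                     frozenset({"on_A_B", "hand_empty"}),
                     frozenset({"holding_A", "clear_B"})),
}

def plan(start, goal):
    """Breadth-first forward search over states; returns operator names."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                      # all goal facts hold
            return steps
        for name, (pre, add, delete) in OPS.items():
            if pre <= state:                   # operator is applicable
                nxt = (state - delete) | add
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

start = frozenset({"on_table_A", "clear_B", "hand_empty"})
print(plan(start, frozenset({"on_A_B"})))      # ['pick_up_A', 'stack_A_on_B']
```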
AI-Robotics: Robotics
(See also: SPD-Robot)
Non-core:
1. Overview: problems and progress
a. State-of-the-art robot systems, including their sensors and an overview of their sensor
processing
b. Robot control architectures, e.g., deliberative vs reactive control and Braitenberg vehicles
c. World modeling and world models
d. Inherent uncertainty in sensing and in control
2. Sensors and effectors
a. Sensors: LIDAR, sonar, vision, depth, stereoscopic, event cameras, microphones, haptics, etc.
b. Effectors: wheels, arms, grippers, etc.
3. Coordinate frames, translation, and rotation (2D and 3D; a small 2D sketch follows this list)
4. Configuration space and environmental maps
5. Interpreting uncertain sensor data
6. Localization and mapping
7. Navigation and control
8. Forward and inverse kinematics
9. Motion path planning and trajectory optimization
10. Manipulation and grasping
11. Joint control and dynamics
12. Vision-based control
13. Multiple-robot coordination and collaboration
14. Human-robot interaction (See also: HCI-User, HCI-Accessibility)
a. Shared workspaces
b. Human-robot teaming and physical HRI
c. Social assistive robots
d. Motion/task/goal prediction
e. Collaboration and communication (explicit vs implicit, verbal or symbolic vs non-verbal or visual)
f. Trust
15. Applications and Societal, Economic, and Ethical Issues
a. Societal, economic, right-to-work implications
b. Ethical and privacy implications of robotic applications
c. Liability in autonomous robotics
d. Autonomous weapons and ethics
e. Human oversight and control
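As a concrete anchor for topics 3 and 8, the sketch below computes 2D forward kinematics for a
two-link planar arm using rotation matrices. Link lengths and joint angles are arbitrary
illustrative values:

```python
import math

def rot(theta):
    """2D rotation matrix, as nested lists."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def apply(m, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1], m[1][0] * v[0] + m[1][1] * v[1]]

def forward_kinematics(l1, l2, t1, t2):
    """End-effector position of a 2-link planar arm (angles in radians)."""
    elbow = apply(rot(t1), [l1, 0])            # first link rotated by t1
    wrist = apply(rot(t1 + t2), [l2, 0])       # second link rotated by t1 + t2
    return [elbow[0] + wrist[0], elbow[1] + wrist[1]]

print(forward_kinematics(1.0, 1.0, math.pi / 2, -math.pi / 2))  # ~[1.0, 1.0]
```

Inverse kinematics (topic 8) asks the opposite question: which (t1, t2) reach a given end-effector
position; for two links it can be solved in closed form with the law of cosines.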
3. Program a robot to accomplish simple tasks using deliberative, reactive, and/or hybrid control
architectures.
4. Implement fundamental motion planning algorithms within a robot configuration space.
5. Characterize the uncertainties associated with common robot sensors and actuators; articulate
strategies for mitigating these uncertainties.
6. List the differences among robots' representations of their external environment, including their
strengths and shortcomings.
7. Compare and contrast at least three strategies for robot navigation within known and/or unknown
environments, including their strengths and shortcomings.
8. Describe at least one approach for coordinating the actions and sensing of several robots to
accomplish a single task.
9. Compare and contrast a multi-robot coordination and a human-robot collaboration approach, and
attribute their differences to differences between the problem settings.
10. Analyze the societal, economic, and ethical issues of a real-world robotics application.
7. Implement an algorithm combining features into higher-level percepts, e.g., a contour or polygon
from visual primitives or phoneme hypotheses from an audio signal.
8. Implement a classification algorithm that segments input percepts into output categories and
quantitatively evaluates the resulting classification.
9. Evaluate the performance of the underlying feature extraction relative to at least one alternative
approach (whether implemented or not) in terms of its contribution to the classification task in (8)
above.
10. Describe at least three classification approaches, their prerequisites for applicability, their strengths,
and their shortcomings.
11. Implement and evaluate a deep learning solution to problems in computer vision, such as object or
scene recognition.
Professional Dispositions
● Meticulousness: Implementing AI and machine learning algorithms requires close attention to
detail, so students must be meticulous.
● Persistence: AI techniques often operate in partially observable environments and optimization
processes may have cascading errors from multiple iterations. Getting AI techniques to work
predictably takes trial and error, and repeated effort. These call for persistence on the part of the
student.
● Inventive: Applications of AI involve creative problem formulation and application of AI techniques,
while balancing application requirements and societal and ethical issues.
● Responsible: Applications of AI can have significant impacts on society, affecting both individuals
and large populations. This calls for students to understand the implications of work in AI to society,
and to make responsible choices for when and how to apply AI techniques.
Mathematics Requirements
Required:
● Algebra
● Precalculus
● Discrete Math: (See also: MSF-Discrete)
o sets, relations, functions, graphs
o predicate and first-order logic, logic-based proofs
● Linear Algebra: (See also: MSF-Linear)
o Matrix operations, matrix algebra
o Basis sets
● Probability and Statistics: (See also: MSF-Statistics)
o Basic probability theory, conditional probability, independence
o Bayes theorem and applications of Bayes theorem
o Expected value, basic descriptive statistics, distributions
o Basic summary statistics and significance testing
o All should be applied to real decision making examples with real data, not “textbook”
examples
Desirable:
● Calculus-based probability and statistics
● Calculus: single-variable and partial derivatives
● Other topics in probability and statistics
o Hypothesis testing, data resampling, experimental design techniques
● Optimization
● Linear algebra (all other topics)
Course objective: A student who completes this course should be able to understand, develop, and
apply mechanisms for supervised, unsupervised, and reinforcement learning. They should be able to
select the proper machine learning algorithm for a problem, preprocess the data appropriately, apply
proper evaluation techniques, and explain how to interpret the resulting models, including the model's
shortcomings. They should be able to identify and compensate for biased data sets and other sources
of error, and be able to explain ethical and societal implications of their application of machine learning
to practical problems.
Course objective: A student who completes this course should be able to formulate questions as data
analysis problems, understand and use statistical techniques to achieve that analysis from real data,
apply visualization techniques to convey the results, and analyze the ethical and societal implications of
data science applications. Students should also be able to understand and effectively use data
management techniques for preprocessing, storage, security, and retrieval of data in current systems.
Committee
Members:
● Zachary Dodds, Harvey Mudd College, Claremont, CA, USA
● Susan L. Epstein, Hunter College and The Graduate Center of The City University of New York,
New York, NY, USA
● Laura Hiatt, US Naval Research Laboratory, Washington, DC, USA
● Amruth N. Kumar, Ramapo College of New Jersey, Mahwah, NJ, USA
● Peter Norvig, Google, Mountain View, CA, USA
● Meinolf Sellmann, GE Research, Niskayuna, NY, USA
● Reid Simmons, Carnegie Mellon University, Pittsburgh, PA, USA
Contributors:
● Nate Derbinsky, Northeastern University, Boston, MA, USA
● Eugene Freuder, Insight Centre for Data Analytics, University College Cork, Cork, Ireland
● Ashok Goel, Georgia Institute of Technology, Atlanta, GA, USA
● Claudia Schulz, Thomson Reuters, Zurich, Switzerland
Algorithmic Foundations (AL)
Preamble
Algorithms and data structures are fundamental to computer science, since every theoretical
computation and applied program consists of algorithms that operate on data elements possessing
some underlying structure. Selecting appropriate computational solutions to real-world problems
benefits from understanding the theoretical and practical capabilities and limitations of available
algorithms and paradigms, including their impact on the environment and society. Moreover, this
understanding provides insight into the intrinsic nature of computation, computational problems, and
computational problem-solving as well as possible solution techniques independent of programming
language, programming paradigm, computer hardware, or other implementation aspects.
This knowledge area focuses on the nature of computation including the concepts and skills required to
design and analyze algorithms for solving real-world computational problems. It complements the
implementation of algorithms and data structures found in the Software Development Foundations
(SDF) knowledge area. As algorithms and data structures are essential in all advanced areas of
computer science, this area provides the algorithmic foundations that every computer science graduate
is expected to know. Exposure to the breadth of these foundational AL topics is designed to give
students a basis for studying these topics in more depth, for covering additional computation and
algorithm topics, and for learning advanced algorithms across a variety of CS knowledge areas and
CS-X disciplines.
The increase of four CS Core hours acknowledges the importance of this foundational area in the CS
curriculum and returns it to the 2001 level (less than one course). Despite this increase, there is a
significant overlap in hours with the Software Development Fundamentals (SDF) and Mathematical
Foundations (MSF) areas. The units in this area are also complementary: for example, linear search of
an array covers topics in AL-Foundational while simultaneously illustrating O(n) complexity
(AL-Complexity) and brute force (AL-Strategies).
The KA topics and hours primarily reflect topics studied in a stand-alone computational theory course
and the availability of additional hours when such a course is included in the curriculum.
Core Hours
Algorithmic Strategies 6
Complexity Analysis 6 3
Total 32 32
The 11 CS Core hours in AL-Foundational are in addition to the 9 hours found in SDF and 3 in MSF.
Knowledge Units
11. Search algorithms
a. O(n) complexity (e.g., linear/sequential array/list search)
b. O(log_2 n) complexity (e.g., binary search; a minimal sketch follows this unit's topic list)
c. O(log_b n) complexity (e.g., uninformed depth/breadth-first tree search)
12. Sorting algorithms (e.g., stable, unstable)
a. O(n^2) complexity (e.g., insertion, selection)
b. O(n log n) complexity (e.g., quicksort, merge, timsort)
13. Graph algorithms
a. Shortest path (e.g., Dijkstra’s, Floyd’s)
b. Minimal spanning tree (e.g., Prim’s, Kruskal’s)
KA Core:
14. Sorting algorithms
a. O(n log n) complexity (e.g., heapsort)
b. Pseudo O(n) complexity (e.g., bucket, counting, radix)
15. Graph algorithms
a. Transitive closure (e.g., Warshall’s)
b. Topological sort
16. Matching
a. Efficient string matching (e.g., Boyer-Moore, Knuth-Morris-Pratt)
b. Longest common subsequence matching
c. Regular expression matching
Non-core:
17. Cryptography algorithms (e.g., SHA-256) (See also: SEC-Crypto)
18. Parallel algorithms (See also: PDC-Algorithms, FPL-Parallel)
19. Consensus algorithms (e.g., Blockchain) (See also: SEC-Crypto)
a. Proof of work vs proof of stake (See also: SEP-Sustainability)
20. Quantum computing algorithms (See also: AL-Models, AR-Quantum)
a. Oracle-based (e.g. Deutsch-Jozsa, Bernstein-Vazirani, Simon)
b. Superpolynomial speed-up via QFT (e.g., Shor’s)
c. Polynomial speed-up via amplitude amplification (e.g., Grover’s)
21. Fast-Fourier Transform (FFT) algorithm
22. Differential evolution algorithm
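As a minimal illustration of topic 11b, here is an iterative binary search; the off-by-one pitfalls
in the midpoint and bounds make it a useful complexity example (O(log_2 n) probes on a sorted
array). Names and the sample data are the author's choices:

```python
def binary_search(a, target):
    """Return an index of target in sorted list a, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1        # discard the left half
        else:
            hi = mid - 1        # discard the right half
    return -1

data = [2, 3, 5, 7, 11, 13, 17]
print(binary_search(data, 11))  # 4
print(binary_search(data, 4))   # -1
```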
4. Given requirements for a problem, develop multiple solutions using various data structures and
algorithms. Subsequently, evaluate their suitability, strengths, and weaknesses, selecting the
approach that best satisfies the requirements.
5. Explain how collision avoidance and collision resolution are handled in hash tables.
6. Explain factors beyond computational efficiency that influence the choice of algorithms, such as:
programming time, maintainability, and the use of application-specific patterns in the input data.
7. Explain the heap property and the use of heaps as an implementation of a priority queue.
KA Core:
8. For each of the algorithms and algorithmic approaches in the KA Core topics:
a. Explain a prototypical example of the algorithm,
b. Explain step-by-step how the algorithm operates.
Non-core:
9. An appreciation of quantum computation and its application to certain problems.
KA Core:
4. Paradigms
a. Approximation algorithms
b. Iterative improvement (e.g., Ford-Fulkerson, simplex)
c. Randomized/Stochastic algorithms (e.g., max-cut, balls and bins)
Non-core:
5. Quantum computing
Illustrative Learning Outcomes:
CS Core:
1. For each of the paradigms in this unit:
a. Explain its definitional characteristics,
b. Explain an example that demonstrates the paradigm, including how this example satisfies the
paradigm’s characteristics.
2. For each of the algorithms in the AL-Foundational unit:
a. Explain the paradigm used by the algorithm and how it exemplifies this paradigm.
3. Given an algorithm, explain the paradigm used by the algorithm and how it exemplifies this
paradigm.
4. Given a real-world problem, evaluate appropriate algorithmic paradigms and algorithms from these
paradigms that address the problem, including evaluating the tradeoffs among the paradigms and
algorithms selected.
5. Give examples of iterative and recursive algorithms that solve the same problem, and explain the
benefits and disadvantages of each approach.
6. Evaluate whether a greedy approach leads to an optimal solution.
7. Explain various approaches for addressing computational problems whose algorithmic solutions are
exponential.
AL-Complexity: Complexity
CS Core:
1. Complexity Analysis Framework
a. Best, average, and worst case performance of an algorithm
b. Empirical and relative (Order of Growth) measurements
c. Input size and primitive operations
d. Time and space efficiency
2. Asymptotic complexity analysis (average and worst case bounds)
a. Big-O, Big-Omega, and Big-Theta formal notations
b. Foundational Complexity Classes and Representative Examples/Problems
i. O(1) Constant (e.g., array access)
ii. O(log_2 n) Logarithmic (e.g., binary search)
iii. O(n) Linear (e.g., linear search)
iv. O(n log_2 n) Log Linear (e.g., mergesort)
v. O(n^2) Quadratic (e.g., selection sort)
vi. O(n^c) Polynomial (e.g., O(n^3) Gaussian elimination)
vii. O(2^n) Exponential (e.g., Knapsack, Satisfiability (SAT),
Traveling Sales-Person (TSP), all subsets)
viii. O(n!) Factorial (e.g., Hamiltonian circuit, all permutations)
3. Empirical measurements of performance
4. Tractability and intractability
a. P, NP and NP-Complete Complexity Classes
b. NP-Complete Problems (e.g., SAT, Knapsack, TSP)
c. Reductions
5. Time and space trade-offs in algorithms.
KA Core:
6. Little-o, Little-Omega, and Little-Theta notations
7. Formal recursive analysis
8. Amortized analysis
9. Turing Machine-based models of complexity
a. Time complexity
i. P, NP, NP-C, and EXP classes
ii. Cook-Levin theorem
b. Space Complexity
i. NSpace and PSpace
ii. Savitch’s theorem
KA Core:
14. Use recurrence relations to evaluate the time complexity of recursively defined algorithms.
15. Apply elementary recurrence relations using some form of the Master Theorem (a worked example
follows these outcomes).
16. Apply Big-O notation to give upper bounds on time/space complexity of algorithms.
17. Explain the Cook-Levin Theorem and the NP-Completeness of SAT.
18. Explain the classes P and NP.
19. Prove that a problem is NP-Complete by reducing a classic known NP-C problem to it (e.g., 3SAT
and Clique).
20. Explain the P-space class and its relation to the EXP class.
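For outcome 15, a standard worked example (supplied here for illustration, not taken from this
document): mergesort's recurrence solved with the Master Theorem.

```latex
T(n) = a\,T\!\left(\frac{n}{b}\right) + f(n), \qquad a = 2,\ b = 2,\ f(n) = \Theta(n)
```

Here n^{log_b a} = n^{log_2 2} = n, so f(n) matches Theta(n^{log_b a}) and the balanced case of the
theorem applies:

```latex
T(n) = \Theta\!\left(n^{\log_b a} \log n\right) = \Theta(n \log n)
```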
KA Core:
7. Deterministic and nondeterministic automata
8. Pumping Lemma proofs
a. Proof of Finite State/Regular-Language limitation
b. Pushdown Automata/Context-Free-Language limitation
9. Decidability
a. Arithmetization and diagonalization
10. Reducibility and reductions
11. Time complexity based on Turing Machine
12. Space complexity (e.g., PSPACE, Savitch’s Theorem)
13. Equivalent models of algorithmic computation
a. Turing Machines and Variations (e.g., multi-tape, non-deterministic)
b. Lambda Calculus (See also: FPL-Functional)
c. Mu-Recursive Functions
Non-core:
14. Quantum computation (See also: AR-Quantum)
a. Postulates of quantum mechanics
i. State space
ii. State evolution
iii. State composition
iv. State measurement
b. Column vector representations of qubits
c. Matrix representations of quantum operations
d. Simple quantum gates (e.g., X (NOT), CNOT)
15. Convert among equivalently powerful notations for a language, including among DFAs, NFAs, and
regular expressions, and between PDAs and CFGs.
16. Explain Rice’s theorem and its significance.
17. Explain an example proof of a problem that is uncomputable by reducing a classic known
uncomputable problem to it.
18. Explain the Primitive and General Recursive functions (zero, successor, selection, primitive
recursion, composition, and Mu), their significance, and Turing Machine implementations.
19. Explain how computation is performed in Lambda Calculus (e.g., Alpha conversion and Beta
reduction).
Non-core:
20. For a quantum system give examples that explain the following postulates:
a. State Space: system state represented as a unit vector in Hilbert space,
b. State Evolution: the use of unitary operators to evolve system state,
c. State Composition: the use of tensor product to compose systems states,
d. State Measurement: the probabilistic output of measuring a system state.
21. Explain the operation of a quantum X (NOT) or CNOT gate on a quantum bit, representing the gate
as a matrix and the qubit as a column vector, respectively (a worked example follows).
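A compact worked form of outcome 21, in standard notation supplied here for illustration: the X gate
swaps a qubit's basis amplitudes, and CNOT flips the target qubit when the control qubit is 1.

```latex
X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad
X \begin{pmatrix} \alpha \\ \beta \end{pmatrix}
  = \begin{pmatrix} \beta \\ \alpha \end{pmatrix}, \qquad
\mathrm{CNOT} = \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0
\end{pmatrix}, \qquad
\mathrm{CNOT}\,\lvert 10 \rangle = \lvert 11 \rangle
```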
KA Core:
8. Context-aware computing
4. Explain an example that articulates how differential privacy protects knowledge of an individual’s
data.
5. Explain the environmental impacts of design choices that relate to algorithm design.
6. Explain the tradeoffs involved in proof-of-work and proof-of-stake algorithms.
Professional Dispositions
Mathematics Requirements
Required:
● MSF-Discrete
As depicted in the following figure, the committee envisions two common approaches for addressing
foundational AL topics in CS courses. Both approaches include required introductory Programming
(CS1) and Data Structures (CS2) courses. In the three-course approach, all CS Core topics are covered
with additional unused hours available to cover other topics. Alternatively, in the four-course
approach, the AL-Models knowledge unit CS and KA Core topics are addressed in a course focused on
computational theory, which leaves room to address additional KA topics in the third Algorithms
course. Both approaches assume that Big-O analysis is introduced in the Data Structures (CS2) course
and that graphs are taught in the third Algorithms course. The committee recognizes that there are
many different approaches for packaging AL topics into courses including, for example, introducing
graphs in CS2 Data Structures, backtracking in an AI course, and AL-Models topics in a theory course
that also addresses, for instance, FPL topics. The given example is simply one way to cover the
entire AL CS Core in three introductory courses with additional lecture hours to spare.
Programming 1 (CS1)
● AL-Foundational (2 hours)
○ Arrays and Strings
○ Search Algorithms (e.g., O(n) Linear Search)
● AL-SEP (In SEP hours)
Note: the following AL topics are demonstrated in CS1, but not explicitly taught as such:
● AL-Strategies (less than 1 hour)
○ Brute Force (e.g., linear search)
○ Iteration (e.g., linear search)
● AL-Complexity (less than 1 hour)
○ Foundational Complexity Classes
■ O(1) Constant and O(n) Linear runtime complexities
Course objectives: Students should be able to explain, evaluate, and apply arrays in a variety of
problem-solving contexts including using linear search for elements in an array. They should also be
able to begin to explain the impact algorithmic design and use has on society.
Course objectives: Students should be able to explain, evaluate, and apply the specified data
structures and algorithms in a variety of problem-solving contexts. Additionally, they should be able
to demonstrate the use of different data structures, algorithms, and algorithmic strategies
(paradigms) to solve the same problem. They will also continue to enhance and refine their
understanding of the impact that algorithmic design and use has on society.
Course objectives: Students should be able to explain, evaluate, and apply the specified data
structures and algorithms in a variety of problem-solving contexts. Additionally, they should be able to
formally explain complexity analysis and the importance of tractability including approaches for handling
intractable problems. Finally, they should also be able to summarize formal models of computation,
grammars, and languages, including the definition of a computer as a Turing Machine and the
undecidability of the Halting problem.
■ Transitive closure (e.g., Warshall’s)
■ Topological sort
○ Matching
■ Efficient String Matching (e.g., Boyer-Moore, Knuth-Morris-Pratt)
■ Longest common subsequence matching
■ Regular expression matching
● AL-Complexity (3 hours)
○ Asymptotic Complexity Analysis
○ Foundational Complexity Classes
■ O(2^n) Exponential and O(n!) Factorial
○ Empirical Measurements of Performance
○ Tractability and Intractability
● AL-Strategies (3 hours)
○ Brute Force (e.g., traveling salesperson, knapsack)
○ Decrease-and-Conquer (e.g., topological sort)
○ Divide-and-Conquer (e.g., Strassen’s algorithm)
○ Greedy (e.g., Dijkstra’s, Kruskal’s)
○ Transform-and-Conquer/Reduction (e.g., heapsort, trees (2-3, AVL, Red-Black))
■ Dynamic Programming (e.g., Warshall’s, Floyd’s, Bellman-Ford)
○ Handling Exponential Growth (e.g., heuristic A*, branch-and-bound, backtracking)
Course objectives: Students should be able to explain, evaluate, and apply the specified data
structures and algorithms in a variety of problem-solving contexts. Additionally, they should be able to
formally explain complexity analysis and the importance of tractability including approaches for handling
intractable problems.
Course objectives: Students should be able to explain, evaluate, and apply models of computation,
grammars, and languages. Additionally, they should be able to explain formal proofs that demonstrate
the capability and limitations of various automata. Students should be able to relate the complexity of
Random Access Models of Computation to Turing Machine models. Finally, students should be able to
summarize decidability and reduction proofs.
Committee
Members:
● Cathy Bareiss, Bethel University, Mishawaka, IN, USA
● Tom Blanchet, Sci Tec., Boulder, CO, USA
● Doug Lea, State University of New York at Oswego, Oswego, NY, USA
● Sara Miner More, Johns Hopkins University, Baltimore, MD, USA
● Mia Minnes, University of California San Diego, San Diego, CA, USA
● Atri Rudra, University at Buffalo, Buffalo, NY, USA
● Christian Servin, El Paso Community College, El Paso, TX, USA
Architecture and Organization (AR)
Preamble
Computing professionals spend considerable time writing efficient code to solve a particular problem in
an application domain. As the shift from sequential to parallel processing occurs, a deeper understanding
of the underlying computer architectures is necessary. Architecture can no longer be viewed as a black
box where principles from one architecture can be applied to another. Instead, programmers should look
inside the black box and use particular components to enhance system performance and energy
efficiency.
The Architecture and Organization (AR) knowledge area aims to develop a deeper understanding of the
hardware environments upon which almost all computing is based, and the relevant interfaces provided
to higher software layers. The target hardware comprises low-end embedded system processors up to
high-end enterprise multiprocessors.
The topics in this knowledge area will benefit students by enabling them to appreciate the fundamental
architectural principles of modern computer systems, including the challenge of harnessing parallelism
to sustain performance and energy improvements into the future. This KA will help computer science
students depart from the black box approach and become more aware of the underlying computer system
and the efficiencies specific architectures can achieve.
Core Hours
Digital Logic and Digital Systems 2 + 1 (SF)
Functional Organization 2
Heterogeneous Architectures 2
Quantum Architectures 2
Total 9 16
The hours shared with OS cover overlapping topics and are counted here.
Knowledge Units
Illustrative Learning Outcomes:
KA Core:
1. Discuss the progression of computer technology components from vacuum tubes to VLSI, from
mainframe computer architectures to the organization of warehouse-scale computers.
2. Describe parallelism and data dependencies between and within components in a modern
heterogeneous computer architecture.
3. Explain the relationship between parallelism and power consumption.
4. Construct the design of basic building blocks for a computer: arithmetic-logic unit (gate-level),
registers (gate-level), central processing unit (register transfer-level), and memory (register transfer-
level).
5. Evaluate simple building blocks (e.g., arithmetic-logic unit, registers, movement between registers)
of a simple computer design.
6. Analyze the timing behavior of a pipelined processor, identifying data dependency issues.
1. von Neumann machine architecture
2. Control unit: instruction fetch, decode, and execution (See also: OS-Principles)
3. Introduction to SIMD vs MIMD and the Flynn taxonomy (See also: PDC-Programs, OS-Scheduling,
OS-Process)
4. Shared memory multiprocessors/multicore organization (See also: PDC-Programs, OS-Scheduling)
KA Core:
5. Instruction set architecture (ISA) (e.g., x86, ARM and RISC-V)
a. Fixed vs variable-width instruction sets
b. Instruction formats
c. Data manipulation, control, I/O
d. Addressing modes
e. Machine language programming
f. Assembly language programming
6. Subroutine call and return mechanisms (See also: FPL-Translation, OS-Principles)
7. I/O and interrupts (See also: OS-Principles)
8. Heap, static, stack, and code segments (See also: FPL-Translation, OS-Process)
KA Core:
4. Discuss how instructions are represented at the machine level and in the context of a symbolic
assembler.
5. Map an example of high-level language patterns into assembly/machine language notations.
6. Contrast different instruction formats considering aspects such as addresses per instruction and
variable-length vs fixed-length formats.
7. Analyze a subroutine diagram to comment on how subroutine calls are handled at the assembly
level.
8. Describe basic concepts of interrupts and I/O operations.
9. Write a simple assembly language program for string/array processing and manipulation.
Illustrative Learning Outcomes:
KA Core:
1. Discuss performance and energy efficiency evaluation metrics.
2. Analyze a speculative execution diagram and write about the decisions that can be made.
3. Create a GPU performance-watt benchmarking diagram.
4. Write a multithreaded program that adds (in parallel) elements of two integer vectors.
5. Recommend a set of design choices for alternative computer architectures.
6. Enumerate key concepts associated with dynamic voltage and frequency scaling.
7. Measure energy savings improvement for an 8-bit integer quantization compared to a 32-bit
quantization.
5. Enumerate key differences in architectural design principles between a vector and scalar-based
processing unit.
6. List the advantages and disadvantages of PIM architectures.
3. Single qubit gates for the circuit model of quantum computation: X, Z, H.
4. Two qubit gates and tensor products. Working with matrices.
5. The No-Cloning Theorem. The Quantum Teleportation protocol.
6. Algorithms (See also: AL-Foundational)
a. Simple quantum algorithms: Bernstein-Vazirani, Simon’s algorithm.
b. Implementing Deutsch-Jozsa with Mach-Zehnder Interferometers.
c. Quantum factoring (Shor’s Algorithm)
d. Quantum search (Grover’s Algorithm)
7. Implementation aspects (See also: SPD-Interactive)
a. The physical implementation of qubits
b. Classical control of a Quantum Processing Unit (QPU)
c. Error mitigation and control. NISQ and beyond.
d. Measurement approaches
8. Emerging Applications
a. Post-quantum encryption
b. The Quantum Internet
c. Adiabatic quantum computation (AQC) and quantum annealing
Professional Dispositions
Mathematics Requirements
Course objectives: Students should understand the fundamentals of modern computer architectures,
including the challenges associated with memory caches, memory management and pipelining.
Prerequisites:
● MSF-Discrete
● AR-Security (4 hours)
● AR-Quantum (4 hours)
Course objectives: Students should understand how computer architectures evolved into today’s
heterogeneous systems and to what extent choices made in the past can influence the design of future
high-performance computing systems.
Prerequisites:
● MSF-Discrete
Course objectives: Students should understand the advanced architectural aspects of modern
computer systems, including heterogeneous architectures and the required hardware and software
interfaces to improve the performance and energy footprint of applications.
Prerequisites:
● MSF-Discrete, MSF-Statistics
Committee
Chair: Marcelo Pias, Federal University of Rio Grande (FURG), Rio Grande-RS, Brazil
Members:
● Brett A. Becker, University College Dublin, Dublin, Ireland
● Mohamed Zahran, New York University, New York, NY, USA
● Monica D. Anderson, University of Alabama, Tuscaloosa, AL, USA
● Qiao Xiang, Xiamen University, Xiamen, China
● Adrian German, Indiana University, Bloomington, IN, USA
Data Management (DM)
Preamble
Since the mid-1970s, the study of Data Management (DM) has meant an almost exclusive study of
relational database systems. Depending on institutional context, students have studied, in varying
proportions:
- Data modeling and database design: for example, E-R Data model, relational model, normalization
theory
- Query construction: e.g., relational algebra, SQL
- Query processing: e.g., indices (B+tree, hash), algorithms (e.g., external sorting, select, project,
join), query optimization (transformations, index selection)
- DBMS internals: e.g., concurrency/locking, transaction management, buffer management
Today's graduates are expected to possess DBMS user (as opposed to implementor) skills. These
primarily include data modeling and query construction: the ability to take an unorganized collection
of data, organize it using a DBMS, and access/update the collection via queries.
Additionally, students need to study:
- The role data plays in an organization. This includes:
o The Data Life Cycle: Creation-Processing-Review/Reporting-Retention/Retrieval-Destruction.
o The social/legal aspects of data collection: e.g., scale, data privacy, database privacy
(compliance) by design, de-identification, ownership, reliability, database security, and intended
and unintended applications.
- Emerging and advanced technologies that are augmenting/replacing traditional relational systems,
particularly those used to support (big) data analytics, including NoSQL (e.g., JSON, XML, key-
value store databases), cloud databases, MapReduce, and dataframes.
- We recognize the existing and emerging roles for those involved with data management, which
include:
● Product feature engineers: those who use both SQL and NoSQL operational databases.
● Analytical engineers/data engineers: those who write analytical SQL, Python, and Scala code to
build data assets for business groups.
● Business analysts: those who build/manage data most frequently with Excel spreadsheets.
● Data infrastructure engineers: those who implement a data management system in a variety of
data applications (e.g., OLTP).
● “Everyone:” those who produce or consume data need to understand the associated social,
ethical, and professional issues.
One role that transcends all the above categories is that of data custodian. Previously, data was seen
as a resource to be managed (Information Systems Management) just like other enterprise resources.
Today, data is seen in a larger context. Data about customers can now be seen as belonging to (or in
some national contexts, as owned by) those customers. There is now an accepted understanding that
the safe and ethical storage, and use, of institutional data is part of being a responsible data custodian.
Furthermore, we acknowledge the tension between a curricular focus on professional preparation
versus the study of a knowledge area as a scientific endeavor. This is particularly true with Data
Management. For example, proving (or at least knowing) the completeness of Armstrong’s Axioms is
fundamental in functional dependency theory. However, most computer science graduates will never
utilize this concept during their professional careers. The same can be said for many other topics in the
Data Management canon. Conversely, if our graduates can only normalize data into Boyce-Codd
normal form (using an automated tool) and write SQL queries, without understanding the role that
indices play in efficient query execution, we have done them and society a disservice.
To this end, the number of CS Core hours is small relative to the KA Core hours. This
approach is designed to allow institutions with differing contexts to customize their curricula
appropriately. An institution that focuses on OLTP implementation, for example, would prioritize efficient
storage and data access, while an institution that focuses on product features would prioritize
programmatic access to extant databases.
However an institution manages this tension, we wish to give voice to one of the ironies of computer
science curricula. Students typically spend much of their educational life reading (and writing) data
from a file or interactively, while outside the academy most data comes from databases accessed
programmatically. Perhaps in the not-too-distant future students will learn programmatic
database access early on and then continue this practice as they progress through their curriculum.
Finally, we understand that while the Data Management KA may be orthogonal to the SEC (Security)
and SEP (Society, Ethics, and the Profession) KAs, it is also ground zero for these (and other)
knowledge areas. When designing persistent data stores, the question of what should be stored must
be examined from both legal and ethical perspectives. Are there privacy concerns? And just as
importantly, how well protected is the data?
Core Hours
Data Modeling 2 3
Relational Databases 1 3
Query Construction 2 4
Query Processing 4
DBMS Internals 4
NoSQL Systems 2
Data Analytics 3
Total 10 26
The CS Core hours in Data Security & Privacy are shared with SEC, cover overlapping topics, and are
counted here.
Knowledge Units
1. Purpose and advantages of database systems
2. Components of database systems
3. Design of core DBMS functions (e.g., query mechanisms, transaction management, buffer
management, access methods)
4. Database architecture, data independence, and data abstraction
5. Transaction management
6. Normalization
7. Approaches for managing large volumes of data (e.g., NoSQL database systems, use of
MapReduce) (See also: PDC-Algorithms)
8. How to support CRUD-only applications
9. Distributed databases/cloud-based systems
10. Structured, semi-structured, and unstructured data
11. Use of a declarative query language
KA Core:
12. Systems supporting structured and/or stream content
KA Core:
3. Conceptual models (e.g., entity-relationship, UML diagrams)
4. Semi-structured data models (expressed using DTD, XML, or JSON Schema, for example)
Non-core:
5. Spreadsheet models
6. Object-oriented models (See also: FPL-OOP)
a. GraphQL
7. New features in SQL
8. Specialized Data Modeling topics
a. Time series data (aggregation and join)
b. Graph data (link traversal)
c. Techniques for avoiding inefficient raw data access (e.g., “avg daily price”): materialized views
and special data structures (e.g., Hyperloglog, bitmap)
d. Geo-Spatial data (e.g., GIS databases) (See also: SPD-Interactive)
KA Core:
3. Describe the components of the E-R (or some other non-relational) data model.
4. Model a given environment using a conceptual data model.
5. Model a given environment using the document-based or key-value store-based data model.
KA Core:
3. Mapping conceptual schema to a relational schema
4. Physical database design: file and storage structures (See also: OS-Files)
5. Introduction to Functional dependency theory
6. Normalization Theory
a. Decomposition of a schema; lossless-join and dependency-preservation properties of a
decomposition
b. Normal forms (BCNF)
c. Denormalization (for efficiency)
Non-core:
7. Functional dependency theory
a. Closure of a set of attributes
b. Canonical Cover
8. Normalization theory
a. Multi-valued dependency (4NF)
b. Join dependency (PJNF, 5NF)
c. Representation theory
KA Core:
4. Compose a relational schema from a conceptual schema that contains 1:1, 1:n, and n:m
relationships.
5. Map appropriate file structures to relations and indices.
6. Describe how functional dependency theory generalizes the notion of a key (a sketch follows this
list).
7. Defend a given decomposition as lossless and/or dependency-preserving.
8. Determine which normal form a given decomposition yields.
9. Comment on reasons for denormalizing a relation.
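To make outcomes 6 and 8 concrete, the following minimal Python sketch (relation and attribute names
are hypothetical) computes the closure of a set of attributes under a set of functional dependencies,
the standard test for whether an attribute set is a key:

def closure(attrs, fds):
    # fds is a list of (lhs, rhs) pairs of attribute sets.
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # If lhs is already derivable, everything in rhs is too.
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# R(A, B, C, D) with A -> B and B -> C:
fds = [({"A"}, {"B"}), ({"B"}, {"C"})]
print(closure({"A"}, fds))       # {'A', 'B', 'C'}: A alone is not a key
print(closure({"A", "D"}, fds))  # all four attributes: AD is a candidate key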
KA Core:
2. Relational Algebra
3. SQL
a. Data definition including integrity and other constraints specification
b. Update sublanguage
Non-core:
4. Relational Calculus
5. QBE and 4th-generation environments
6. Different ways to invoke non-procedural queries in conventional languages
7. Introduction to other major query languages (e.g., XPATH, SPARQL)
8. Stored procedures
KA Core:
4. Define, in SQL, a relation schema, including all integrity constraints and delete/update triggers.
5. Compose an SQL query to update a tuple in a relation.
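As one illustration of outcomes 4 and 5, the following sketch uses Python's built-in sqlite3 module
(table and column names are hypothetical, and triggers are omitted) to define a schema with integrity
constraints and then update a tuple:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces foreign keys only on request

# Schema definition with key, NOT NULL, CHECK, and referential constraints.
conn.executescript("""
CREATE TABLE department (
    dept_id INTEGER PRIMARY KEY,
    name    TEXT NOT NULL UNIQUE
);
CREATE TABLE employee (
    emp_id  INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    salary  REAL CHECK (salary >= 0),
    dept_id INTEGER REFERENCES department(dept_id) ON DELETE CASCADE
);
INSERT INTO department VALUES (1, 'Research');
INSERT INTO employee VALUES (100, 'Ada', 95000, 1);
""")

# Update a single tuple; the parameterized query avoids SQL injection.
conn.execute("UPDATE employee SET salary = ? WHERE emp_id = ?", (99000, 100))
print(conn.execute("SELECT name, salary FROM employee").fetchall())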
8. For a given scenario decide on which indices to support for the efficient execution of a set of
queries.
9. Describe how DBMSs leverage parallelism to speed up query processing by dividing the work
across multiple processors or nodes.
Non-core:
5. Concurrency Control:
a. Optimistic concurrency control
b. Timestamp concurrency control
6. Recovery Manager
a. Write-ahead logging
b. ARIES recovery system (Analysis, REDO, UNDO)
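The write-ahead discipline itself is simple: a log record describing a change must reach stable storage
before the change is applied. A toy Python sketch of that rule (the file name and record format are
invented for illustration):

import json, os

LOG = "wal.log"
db = {"x": 0}  # an in-memory "page"; a real DBMS writes fixed-size disk pages

def wal_update(key, new_value):
    # Log first: append and force the record to stable storage ...
    record = {"key": key, "old": db.get(key), "new": new_value}
    with open(LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
        f.flush()
        os.fsync(f.fileno())
    # ... and only then apply the change, so recovery can REDO or UNDO it.
    db[key] = new_value

wal_update("x", 42)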
Non-core:
3. Storage systems (e.g., Key-Value systems, Data Lakes)
4. Distribution Models (Sharding and Replication) (See also: PDC-Communication)
5. Graph Databases
6. Consistency Models (Update and Read, Quorum consistency, CAP theorem) (See also: PDC-
Communication)
7. Processing model (e.g., Map-Reduce, multi-stage map-reduce, incremental map-reduce) (See also:
PDC-Communication) (a sketch follows this list)
8. Case Studies: Cloud storage system (e.g., S3); Graph databases; “When not to use NoSQL” (See
also: SPD-Web)
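A minimal in-process sketch of the map-reduce pattern from topic 7 (the two phases only; real systems
add partitioning, shuffling, and fault tolerance across nodes):

from collections import defaultdict

def map_phase(doc):
    # Map: emit (key, value) pairs, here one (word, 1) pair per word.
    return [(word, 1) for word in doc.split()]

def reduce_phase(pairs):
    # Shuffle: group values by key; Reduce: aggregate each group.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big systems", "big data"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
print(reduce_phase(pairs))  # {'big': 3, 'data': 2, 'systems': 1}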
KA Core:
5. Need for, and different approaches to, securing data at rest, in transit, and during processing (See
also: SEC-Foundations, SEC-Crypto)
6. Database auditing and its role in digital forensics (See also: SEC-Forensics)
7. Data inferencing and preventing attacks (See also: SEC-Crypto)
8. Laws and regulations governing data security and data privacy (See also: SEP-Security, SEP-
Privacy, SEC-Foundations, SEC-Governance)
Non-core:
9. Typical risk factors and prevention measures for ensuring data integrity (See also: SEC-
Governance)
10. Ransomware and prevention of data loss and destruction (See also: SEC-Coding, SEC-Forensics)
3. Describe legal and ethical considerations of end-to-end data security and privacy.
KA Core:
4. Develop a database auditing system given risk considerations.
5. Apply several data exploration approaches to understanding unfamiliar datasets.
c. Data replication and weak consistency models (See also: PDC-Coordination)
KA Core:
6. Reliability of data (See also: SEP-Security)
7. Provenance, data lineage, and metadata management (See also: SEP-Professional-Ethics)
8. Data security (See also: DM-Security, SEP-Security)
KA Core:
5. Describe the meaning of data provenance and lineage.
6. Identify how a database might contribute to data security as well as how it may introduce
insecurities.
Professional Dispositions
● Meticulous: Those who either access or store data collections must be meticulous in fulfilling
data ownership responsibilities.
● Responsible: In conjunction with the professional management of (personal) data, it is equally
important that data be managed responsibly. Protection from unauthorized access as well as
prevention of irresponsible, though legal, use of data is paramount. Furthermore, data
custodians need to protect data not only from outside attack, but from crashes and other
foreseeable dangers.
● Collaborative: Data managers and data users must behave in a collaborative fashion to ensure
that the correct data is accessed, and is used only in an appropriate manner.
● Responsive: Data should be stored and accessed only in response to an institutional
need or request.
Mathematics Requirements
Required:
● Discrete Mathematics
○ Set theory (union, intersection, difference, cross-product) (See also: MSF-Discrete)
Desired:
● Probability and Statistics for those studying DM-Analytics. (See also: MSF-Probability, MSF-
Statistics)
For those implementing a single course on Database Systems, there are a variety of options. As
described in [27], there are four primary perspectives from which to approach databases:
● Database design/modeling
● Database use
● Database administration
● Database development, which includes implementation algorithms
Course design proceeds by focusing on topics from each perspective in varying degrees according to
one’s institutional context. For example, in [27], one of the courses described can be characterized as
design/modeling (20%), use (20%), development/internals (30%), and administration/tuning/advanced
topics (30%). The topics might include:
● DM-SEP (3 hours)
● DM-Data (1 hour)
● DM-Core (3 hours)
● DM-Modeling (5 hours)
● DM-Relational (4 hours)
● DM-Querying (6 hours)
● DM-Processing (5 hours)
● DM-Internals (5 hours)
● DM-NoSQL (4 hours)
● DM-Security (3 hours)
● DM-Distributed (2 hours)
Perhaps the more interesting question is how to cover the CS Core concepts in the absence of a
dedicated database course. The key to accomplishing this may be to normalize database access.
Starting with the introductory course, students could access a database, rather than rely on file I/O or
interactive data entry, to acquire the data needed for introductory-level programming. As students
progress through their curriculum, additional CS Core topics can be introduced. For example,
introductory students would be given the code to access the database along with the SQL query. By the
intermediate level, they could be writing their own queries. Finally, in a Software Engineering or
capstone course, they would practice database design. One advantage of this databases-across-the-
curriculum approach is that it allows database-related SEP topics to be spread across the curriculum
as well.
In a similar vein, one might have a whole course on the Role of Data from either a Security (SEC)
perspective or an Ethics (SEP) perspective.
Committee
Members:
● Sherif Aly, The American University in Cairo, Cairo, Egypt
● Sara More, Johns Hopkins University, Baltimore, MD, USA
● Mohamed Mokbel, University of Minnesota, Minneapolis, MN, USA
● Rajendra K. Raj, Rochester Institute of Technology, Rochester, NY, USA
● Avi Silberschatz, Yale University, New Haven, CT, USA
● Min Wei, Microsoft, Seattle, WA, USA
● Qiao Xiang, Xiamen University, Xiamen, China
Foundations of Programming Languages (FPL)
Preamble
The foundations of programming languages are rooted in discrete mathematics, logic and formal
languages, and provide a basis for the understanding of complex modern programming languages.
Although programming languages vary according to the language paradigm and the problem domain
and evolve in response to both societal needs and technological advancement, they share an
underlying abstract model of computation and program development. This remains true even as
processor hardware and their interface with programming tools become increasingly intertwined and
progressively more complex. An understanding of the common abstractions and programming
paradigms enables faster learning of programming languages.
The Foundations of Programming Languages knowledge area is concerned with articulating the
underlying concepts and principles of programming languages, the formal specification of a
programming language and the behavior of a program, explaining how programming languages are
implemented, comparing the strengths and weaknesses of various programming paradigms, and
describing how programming languages interface with entities such as operating systems and
hardware. The concepts covered here are applicable to several languages and an understanding of
these principles assists a learner to move readily from one language to another, as well as select a
programming paradigm and language that best suits the problem at hand.
Programming languages are the medium through which programmers precisely describe concepts,
formulate algorithms, and reason about solutions. Over the course of a career, a computer scientist will
learn and work with many different languages, separately or together. Software developers must
understand different programming models, programming features and constructs, and their underlying
implementations to make informed design choices among languages that support multiple
complementary approaches. It
would be useful to know how programming language features are defined, composed, and
implemented to improve execution efficiency and long-term maintenance of developed software. Also
useful is a basic knowledge of language translation, program analysis, run-time behavior, memory
management and interplay of concurrent processes communicating with each other through message-
passing, shared memory, and synchronization. Finally, some developers and researchers will need to
design new languages, an exercise which requires greater familiarity with basic principles.
relevance of topics over the past decade. The inclusion of new topics was driven by their current
prominence in the programming language landscape, or the anticipated impact of emerging areas on
the profession in general. Specifically, the changes are:
● Object-Oriented Programming -4 CS Core hours
● Functional Programming -2 CS Core hours
● Event-Driven and Reactive Programming +1 CS Core hour
● Parallel and Distributed Computing +3 CS Core hours
● Type Systems -1 CS Core hour
● Program Representation -1 CS Core hour
In addition, some knowledge units from CS2013 were renamed to reflect their content more accurately:
● Static Analysis was renamed Program Analysis and Analyzers
● Concurrency and Parallelism was renamed Parallel and Distributed Computing
● Program Representation was renamed Program Abstraction and Representation
● Runtime Systems was renamed Runtime Behavior and Systems
● Basic Type Systems and Type Systems were merged into a single topic and named Type
Systems
Six new knowledge units have been added to reflect their continuing and growing importance as we
look toward the 2030s:
● Shell Scripting +2 CS Core hours
● Systems Execution and Memory Model +3 CS Core hours
● Formal Development Methodologies
● Design Principles of Programming Languages
● Fundamentals of Programming Languages and Society, Ethics, and the Profession
Notes:
● Several topics within this knowledge area either build on or overlap with content covered in
other knowledge areas such as the Software Development Fundamentals Knowledge Area in a
curriculum’s introductory courses. Curricula will differ on which topics are integrated in this
fashion and which are delayed until later courses on software development and programming
languages.
● Different programming paradigms correspond to different problem domains. Most languages
have evolved to integrate more than one programming paradigm such as imperative with object-
oriented, functional programming with object-oriented, logic programming with object-oriented,
and event and reactive modeling with object-oriented programming. Hence, the emphasis is not
on just one programming paradigm but on a balance of all major programming paradigms.
● While the number of CS Core and KA Core hours is identified for each major programming
paradigm (object-oriented, functional, logic), the distribution of hours across the paradigms may
differ depending on the curriculum and programming languages students have been exposed to
leading up to coverage of this knowledge area. This document assumes that students have
exposure to an object-oriented programming language leading into this knowledge area.
● Imperative programming is not listed as a separate paradigm to be examined. Instead, it is
treated as a subset of the object-oriented paradigm.
● With multicore computing, cloud computing, and computer networking becoming commonly
available in the market, it has become critical to understand the integration of “distribution,
concurrency, parallelism” along with other programming paradigms as a core area. This
paradigm is integrated with almost all other major programming paradigms.
● With ubiquitous computing and real-time temporal computing applications increasing in daily
human life within domains such as health, transportation, smart homes, it has become important
to cover the software development aspects of event-driven and reactive programming as well as
parallel and distributed computing. Some of the topics covered will require and overlap with
concepts in knowledge areas such as Architecture and Organization, Operating Systems, and
Systems Fundamentals.
● Some topics from the Parallel and Distributed Computing knowledge unit are likely to be
integrated within the curriculum with topics from the Parallel and Distributed Computing (PDC)
knowledge area.
● There is an increasing interest in formal methods to prove program correctness and other
properties. To support this, additional coverage of topics related to formal methods is included,
but all these topics are identified as non-core.
● When introducing these topics, it is also important that an instructor provide context for this
material, including why we are interested in programming languages and how they provide a
human-readable version of instructions for a computer to execute.
Core Hours
Knowledge Unit CS Core KA Core
Formal Development Methodologies
Design Principles of Programming Languages
Society, Ethics, and the Profession Included in SEP hours
Total 21 19
Compared to CS2013, which had a total of 24 CS Core hours (Tier-1 hours plus 80% of Tier-2 hours)
and 4 KA Core hours (20% of Tier-2 hours), the current recommendation has a total of 24 CS Core
hours (of which 3 are shared with other knowledge areas and counted toward the CS Core hour total in
those knowledge areas) and 20 KA Core hours (of which 1 is shared with another knowledge area and
counted there).
Knowledge Units
KA Core:
11. Collection classes, iterators, and other common library components.
12. Metaprogramming and reflection.
3. Build a simple class hierarchy utilizing subclassing that allows code to be reused for distinct
subclasses.
4. Predict and validate control flow in a program using dynamic dispatch.
5. Compare and contrast how computational solutions to a problem differ in procedural, functional,
and object-oriented approaches.
6. Compare and contrast mechanisms to define and protect data elements within procedural,
functional, and object-oriented approaches.
7. Compare and contrast the benefits and costs/impact of using inheritance (subclasses) and
composition (in particular how to base composition on higher order functions).
8. Explain the relationship between object-oriented inheritance (code-sharing and overriding) and
subtyping (the idea of a subtype being usable in a context that expects the supertype).
9. Use object-oriented encapsulation mechanisms such as interfaces and private members.
10. Define and use iterators and other operations on aggregates, including operations that take
functions as arguments, in multiple programming languages, selecting the most natural idioms for
each language. (See also: FPL-Functional)
KA Core:
11. Use collection classes and iterators effectively to solve a problem.
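A small Python sketch of outcomes 10 and 11: a user-defined aggregate with its own iterator, used
with library operations that take functions as arguments (the class is invented for illustration):

class Range2D:
    # A tiny aggregate whose iterator yields grid coordinates.
    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols

    def __iter__(self):  # makes Range2D usable in for-loops, map, filter, etc.
        for r in range(self.rows):
            for c in range(self.cols):
                yield (r, c)

grid = Range2D(2, 3)
# Higher-order operations over the aggregate: functions as arguments.
print(list(filter(lambda rc: rc[0] == rc[1], grid)))   # [(0, 0), (1, 1)]
print(sum(map(lambda rc: rc[0] + rc[1], grid)))        # 9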
KA Core:
5. Metaprogramming and reflection.
6. Function closures (functions using variables in the enclosing lexical environment):
a. Basic meaning and definition - creating closures at run-time by capturing the environment.
b. Canonical idioms: call-backs, arguments to iterators, reusable code via function arguments.
c. Using a closure to encapsulate data in its environment.
d. Lazy versus eager evaluation.
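A short Python sketch of topic 6 (function and variable names are illustrative): a closure created at
run time captures and encapsulates data from its enclosing environment, and can be passed as a
call-back or function argument:

def make_counter(start=0):
    count = start              # captured by the closure below
    def increment(step=1):
        nonlocal count         # refers to the enclosing lexical environment
        count += step
        return count
    return increment           # the captured environment outlives make_counter

c1, c2 = make_counter(), make_counter(10)
print(c1(), c1(), c2())        # 1 2 11: each closure encapsulates its own state

# Canonical idiom: a closure passed as an argument to a higher-order operation.
print(list(map(make_counter(100), [1, 1, 1])))  # [101, 102, 103]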
Non-core:
7. Graph reduction machine and call-by-need.
8. Implementing lazy evaluation.
9. Integration with logic programming paradigm using concepts such as equational logic, narrowing,
residuation and semantic unification. (See also: FPL-Logic)
10. Integration with other programming paradigms such as imperative and object-oriented.
KA Core:
5. Explain a simple example of a lambda expression being implemented using a virtual machine, such
as an SECD machine, showing storage and reclamation of the environment.
6. Correctly interpret variables and lexical scope in a program using function closures.
7. Use functional encapsulation mechanisms such as closures and modular interfaces.
8. Compare and contrast stateful vs stateless execution.
9. Define and use iterators and other operations on aggregates, including operations that take
functions as arguments, in multiple programming languages, selecting the most natural idioms for
each language. (See also: FPL-OOP)
Non-core:
10. Illustrate graph reduction of a λ-expression using a shared subexpression.
11. Illustrate the execution of a simple nested λ-expression using an abstract machine, such as an ABC
machine.
12. Illustrate narrowing, residuation and semantic unification using simple illustrative examples.
13. Illustrate the concurrency constructs using simple programming examples of known concepts such
as a buffer being read and written concurrently or sequentially. (See also: FPL-OOP)
Non-core:
9. Memory overhead of variable copying in handling iterative programs.
10. Programming constructs to store partial computation and pruning search trees.
11. Mixing functional programming and logic programming using concepts such as equational logic,
narrowing, residuation and semantic unification. (See also: FPL-Functional)
12. Higher-order, constraint, and inductive logic programming. (See also: AI-LRR)
13. Integration with other programming paradigms such as object-oriented programming.
14. Advanced programming constructs such as difference-lists, user-defined data structures, setof,
etc.
Non-core:
6. Illustrate the computation of simple programs such as Fibonacci, show the overhead of
recomputation, and then show how to reduce it.
KA Core:
5. Using a reactive framework:
a. Defining event handlers/listeners.
b. Parameterization of event senders and event arguments.
c. Externally generated events and program-generated events.
6. Separation of model, view, and controller.
7. Event-driven and reactive programs as state-transition systems.
KA Core:
3. Define and use a reactive framework.
4. Describe an interactive system in terms of a model, a view, and a controller.
a. Order-based properties:
i. Commutativity.
ii. Independence.
b. Consistency-based properties:
i. Atomicity.
ii. Consensus.
4. Execution control: (See also: PDC-Coordination, SF-Foundations)
a. Async/await.
b. Promises.
c. Threads.
5. Communication and coordination: (See also: OS-Process, PDC-Communication, PDC-
Coordination)
a. Mutexes.
b. Message-passing.
c. Shared memory.
d. Cobegin-coend.
e. Monitors.
f. Channels.
g. Threads.
h. Guards.
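A compact Python sketch of two of the coordination mechanisms above: message-passing through a
channel-like queue, and a mutex protecting shared memory (the workload is illustrative):

import threading, queue

channel = queue.Queue()          # message-passing: a channel between threads
counter = 0
lock = threading.Lock()          # mutex guarding the shared counter

def worker():
    global counter
    while True:
        msg = channel.get()      # blocks until a message arrives
        if msg is None:          # sentinel: no more work
            return
        with lock:               # critical section on shared memory
            counter += msg

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for value in range(1, 101):
    channel.put(value)           # send messages down the channel
for _ in threads:
    channel.put(None)            # one sentinel per worker
for t in threads:
    t.join()
print(counter)                   # 5050, regardless of interleaving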
KA Core:
6. Futures.
7. Language support for data parallelism such as forall, loop unrolling, map/reduce.
8. Effect of memory-consistency models on language semantics and correct code generation.
9. Representational State Transfer Application Programming Interfaces (REST APIs).
10. Technologies and approaches: cloud computing, high performance computing, quantum computing,
ubiquitous computing
11. Overheads of message-passing
12. Granularity of program for efficient exploitation of concurrency.
13. Concurrency and other programming paradigms (e.g., functional).
KA Core:
7. Explain how REST APIs integrate applications and automate processes.
8. Explain benefits, constraints and challenges related to distributed and parallel computing.
KA Core:
7. Type equivalence: structural vs name equivalence.
8. Complementary benefits of static and dynamic typing:
a. Errors early vs errors late/avoided.
b. Enforce invariants during code development and code maintenance vs postpone typing
decisions while prototyping and conveniently allow flexible coding patterns such as
heterogeneous collections.
c. Typing rules:
i. Rules for function, product, and sum types.
d. Avoid misuse of code vs allow more code reuse.
e. Detect incomplete programs vs allow incomplete programs to run.
f. Relationship to static analysis.
g. Decidability.
Non-core:
9. Compositional type constructors, such as product types (for aggregates), sum types (for unions),
function types, quantified types, and recursive types.
10. Type checking.
11. Subtyping: (See also: FPL-OOP)
a. Subtype polymorphism; implicit upcasts in typed languages.
b. Notion of behavioral replacement: subtypes acting like supertypes.
c. Relationship between subtyping and inheritance.
12. Type safety as preservation plus progress.
13. Type inference.
14. Static overloading.
15. Propositions as types (implication as a function, conjunction as a product, disjunction as a sum).
(See also: FPL-Formalism)
16. Dependent types (universal quantification as dependent function, existential quantification as
dependent product). (See also: FPL-Formalism)
KA Core:
7. Explain how typing rules define the set of operations that are legal for a type.
8. List the type rules governing the use of a particular compound type.
9. Explain why undecidability requires type systems to conservatively approximate program behavior.
10. Define and use program pieces (such as functions, classes, methods) that use generic types,
including for collections (a sketch follows this list).
11. Discuss the differences among generics, subtyping, and overloading.
12. Explain multiple benefits and limitations of static typing in writing, maintaining, and debugging
software.
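As a sketch of outcome 10, Python's typing module can express a generic function and a generic
collection; a static checker such as mypy, rather than the runtime, enforces the constraints (all names
are illustrative):

from typing import Generic, TypeVar

T = TypeVar("T")

def first(items: list[T]) -> T:
    # Generic function: the element type flows from argument to result.
    return items[0]

class Stack(Generic[T]):
    # Generic collection: a Stack[int] accepts and returns only ints.
    def __init__(self) -> None:
        self._items: list[T] = []

    def push(self, item: T) -> None:
        self._items.append(item)

    def pop(self) -> T:
        return self._items.pop()

s: Stack[int] = Stack()
s.push(1)
n: int = s.pop()             # a checker infers int here
x: str = first(["a", "b"])   # and str here
# s.push("oops")             # rejected statically, though it would run dynamically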
Non-core:
13. Define a type system precisely and compositionally.
14. For various foundational type constructors, identify the values they describe and the invariants they
enforce.
15. Precisely describe the invariants preserved by a sound type system.
16. Prove type safety for a simple language in terms of preservation and progress theorems.
17. Implement a unification-based type-inference algorithm for a simple language.
18. Explain how static overloading and associated resolution algorithms influence the dynamic behavior
of programs.
KA Core:
9. Run-time representation of core language constructs such as objects (method tables) and first-class
functions (closures).
10. Secure compiler development. (See also: SEC-Foundations, SEC-Coding)
KA Core:
7. Discuss the benefits and limitations of garbage collection, including the notion of reachability.
2. Programs that take (other) programs as input such as interpreters, compilers, type-checkers,
documentation generators.
3. Components of a language:
a. Definitions of alphabets, delimiters, sentences, syntax and semantics.
b. Syntax vs semantics.
4. Program as a set of unambiguous meaningful sentences.
5. Basic programming abstractions: constants, variables, declarations (including nested declarations),
command, expression, assignment, selection, definite and indefinite iteration, iterators, function,
procedure, modules, exception handling. (See also: SDF-Fundamentals)
6. Mutable vs immutable variables: advantages and disadvantages of reusing existing memory
location vs advantages of copying and keeping old values; storing partial computation vs
recomputation.
7. Types of variables: static, local, nonlocal, global; need and issues with nonlocal and global
variables.
8. Scope rules: static vs dynamic; visibility of variables; side-effects.
9. Side-effects induced by nonlocal variables, global variables and aliased variables.
Non-core:
10. L-values and R-values: mapping mutable variable-name to L-values; mapping immutable variable-
names to R-values.
11. Environment vs store and their properties.
12. Data and control abstraction.
13. Mechanisms for information exchange between program units such as procedures, functions and
modules: nonlocal variables, global variables, parameter-passing, import-export between modules.
14. Data structures to represent code for execution, translation, or transmission.
15. Low level instruction representation such as virtual machine instructions, assembly language, and
binary representation. (See also: AR-Representation, AR-Assembly)
16. Lambda calculus, variable binding, and variable renaming. (See also: AL-Models, FPL-Formalism)
17. Types of semantics: operational, axiomatic, denotational, behavioral; define and use abstract
syntax trees; contrast with concrete syntax.
FPL-Analysis: Program Analysis and Analyzers
Non-core:
4. Relevant program representations, such as basic blocks, control-flow graphs, def-use chains, and
static single assignment.
5. Undecidability and consequences for program analysis.
6. Flow-insensitive analysis, such as type-checking and scalable pointer and alias analysis.
7. Flow-sensitive analysis, such as forward and backward dataflow analyses.
8. Path-sensitive analysis, such as software model checking and software verification.
9. Tools and frameworks for implementing analyzers.
10. Role of static analysis in program optimization and data dependency analysis during exploitation of
concurrency. (See also: FPL-Code)
11. Role of program analysis in (partial) verification and bug-finding. (See also: FPL-Code)
12. Parallelization:
a. Analysis for auto-parallelization.
b. Analysis for detecting concurrency bugs.
4. Discuss why separate compilation limits optimization because of unknown effects of calls.
5. Discuss opportunities for optimization introduced by naive translation and approaches for achieving
optimization, such as instruction selection, instruction scheduling, register allocation, and peephole
optimization.
Non-core:
1. Encapsulation mechanisms.
2. Lazy evaluation and infinite streams.
3. Compare and contrast lazy evaluation vs eager evaluation.
4. Unification vs assertion vs expression evaluation.
5. Control abstractions: exception handling, continuations, monads.
6. Object-oriented abstractions: multiple inheritance, mixins, traits, multimethods.
7. Metaprogramming: macros, generative programming, model-based development.
8. String manipulation via pattern-matching (regular expressions).
9. Dynamic code evaluation ("eval").
10. Language support for checking assertions, invariants, and pre/post-conditions.
11. Domain specific languages, such as database languages, data science languages, embedded
computing languages, synchronous languages, hardware interface languages.
12. Massive parallel high performance computing models and languages.
9. Understanding of situations where formal methods can be effectively applied and how to structure
development to maximize their value.
4. Etymology of terms such as “class”, “master”, “slave” in programming languages.
5. Increasing accessibility by supporting multiple languages within applications (UTF).
Professional Dispositions
1. Professional: Students must demonstrate and apply the highest standards when using
programming languages and formal methods to build safe systems that are fit for their purpose.
2. Meticulous: Attention to detail is essential when using programming languages and applying
formal methods.
3. Inventive: Programming and the development of formal proofs are inherently creative processes;
students must demonstrate innovative approaches to problem solving. Students are accountable for
their choices regarding the way a problem is solved.
4. Proactive: Programmers are responsible for anticipating all forms of user input and system
behavior and for designing solutions that address each one.
5. Persistent: Students must demonstrate perseverance since the correct approach is not always
self-evident and a process of refinement may be necessary to reach the solution.
Mathematics Requirements
Required:
● Discrete Mathematics – Boolean algebra, proof techniques, digital logic, sets and set
operations, mapping, functions and relations, states and invariants, graphs and relations, trees,
counting, recurrence relations, finite state machine, regular grammar. (See also: MSF-Discrete)
● Logic – propositional logic (negations, conjunctions, disjunctions, conditionals, biconditionals),
first-order logic, logical reasoning (induction, deduction, abduction). (See also: MSF-Discrete)
● Mathematics – Matrices, probability, statistics. (See also: MSF-Probability, MSF-Statistics)
The second course is an advanced course focused on the implementation of a programming language,
the formal description of a programming language and a formal description of the behavior of a
program.
While these two courses have been the predominant way to cover this knowledge area over the past
decade, it is by no means the only way that this content can be covered. Institutions can, for example,
choose to cover only the CS Core content (24 hours) as part of one course or spread it over multiple
courses (e.g., Software Engineering). Natural combinations are easily identifiable since they are the areas in
which the Foundations of Programming Languages knowledge area overlaps with other knowledge
areas. Such overlaps have been identified throughout this knowledge area.
Prerequisites:
● Discrete Mathematics – Boolean algebra, proof techniques, digital logic, sets and set
operations, mapping, functions and relations, states and invariants, graphs and relations, trees,
counting, recurrence relations, finite state machine, regular grammar. (See also: MSF-Discrete).
Prerequisites:
● Discrete mathematics – Boolean algebra, proof techniques, digital logic, sets and set
operations, mapping, functions and relations, states and invariants, graphs and relations, trees,
counting, recurrence relations, finite state machine, regular grammar. (See also: MSF-Discrete).
● Logic – propositional logic (negations, conjunctions, disjunctions, conditionals, biconditionals),
first-order logic, logical reasoning (induction, deduction, abduction). (See also: MSF-Discrete).
● Introductory programming course. (See also: SDF-Fundamentals).
● Proficiency in programming concepts such as the following (See also: SDF-Fundamentals):
● Type declarations such as basic data types, records, indexed data elements such as arrays
and vectors, and class/subclass declarations, types of variables.
● Scope rules of variables.
● Selection and iteration concepts, function and procedure calls, methods, object creation.
● Data structure concepts such as: (See also: SDF-DataStructures):
● Abstract data types, sequence and string, stack, queues, trees, dictionaries. (See also:
SDF-Data-Structures)
● Pointer-based data structures such as linked lists, trees and shared memory locations. (See
also: SDF-Data-Structures, AL-Foundational)
● Hashing and hash tables. (See also: SDF-Data-Structures, AL-Foundational)
● System fundamentals and computer architecture concepts such as (See also: SF-Foundations):
● Digital circuit design, clocks, and buses. (See also: OS-Principles)
● Registers, cache, RAM, and secondary memory. (See also: OS-Memory)
● CPU and GPU. (See also: AR-Heterogeneity)
● Basic knowledge of operating system concepts such as:
● Interrupts, threads and interrupt-based/thread-based programming. (See also: OS-
Concurrency)
● Scheduling, including prioritization. (See also: OS-Scheduling)
● Memory fragmentation. (See also: OS-Memory)
● Latency.
Committee
Chair: Michael Oudshoorn, High Point University, High Point, NC, USA
Members:
● Annette Bieniusa, TU Kaiserslautern, Kaiserslautern, Germany
● Brijesh Dongol, University of Surrey, Guildford, UK
● Michelle Kuttel, University of Cape Town, Cape Town, South Africa
● Doug Lea, State University of New York at Oswego, Oswego, NY, USA
● James Noble, Victoria University of Wellington, Wellington, New Zealand
● Mark Marron, Microsoft Research, Seattle, WA, USA and University of Kentucky, Lexington, KY,
USA
● Peter-Michael Osera, Grinnell College, Grinnell, IA, USA
● Michelle Mills Strout, University of Arizona, Tucson, AZ, USA
Contributors:
● Alan Dearle, University of St. Andrews, St. Andrews, Scotland
Graphics and Interactive Techniques (GIT)
Preamble
Computer graphics is the term used to describe the computer generation and manipulation of images
and can be viewed as the science of enabling visual communication through computation. Its
application domains include animation, Computer Generated Imagery (CGI) and Visual Effects (VFX);
engineering; machine learning; medical imaging; scientific, information, and knowledge visualization;
simulators; special effects; user interfaces; and video games. Traditionally, graphics at the
undergraduate level focused on rendering, linear algebra, physics, the graphics pipeline, interaction,
and phenomenological approaches. Today’s graphics courses increasingly include data science,
physical computing, animation, and haptics. Thus, the knowledge area (KA) has expanded beyond core
image-based computer graphics. At the advanced level, undergraduate institutions are more likely to
offer one or several courses specializing in a specific graphics knowledge unit (KU) or topic: e.g.,
gaming, animation, visualization, tangible or physical computing, and immersive courses such as
Augmented Reality (AR)/Virtual Reality (VR)/eXtended Reality (XR). There is considerable connection
with other computer science knowledge areas (KAs): Algorithmic Foundations, Architecture and
Organization, Artificial Intelligence; Human-Computer Interaction; Parallel and Distributed Computing;
Specialized Platform Development; Software Engineering, and Society, Ethics, and the Profession.
In order for students to become adept at the use and generation of computer graphics and interactive
techniques, many issues must be addressed, such as human perception and cognition, data and image
file formats, display specifications and protocols, hardware interfaces, and application program
interfaces (APIs). Unlike other knowledge areas, KUs within Graphics and Interactive Techniques may
be included in a variety of elective courses. Alternatively, graphics topics may be introduced in an
applied project in courses primarily covering human computer interaction, embedded systems, web
development, introductory programming courses, etc. Undergraduate computer science students who
study the KUs specified below through a balance of theory and applied instruction will be able to
understand, evaluate, and/or implement the related graphics and interactive techniques as users and
developers. Because technology changes rapidly, the Graphics and Interactive Techniques
subcommittee attempted to avoid being overly prescriptive. Any examples of APIs, programs, and
languages should be considered as appropriate examples in 2023. In effect, this is a snapshot in time.
Graphics as a KA has expanded and become pervasive since the CS2013 report. AR/VR/XR, artificial
intelligence, computer vision, data science, machine learning, and interfaces driven by embedded
sensors in everything from cars to coffee makers use graphics and interactive techniques. The now
ubiquitous smartphone has made the majority of the world’s population regular users and creators of
graphics, digital images, and the interactive techniques to manipulate them. Animations, games,
visualizations, and immersive applications that ran on desktops in 2013, now can run on mobile
devices. The amount of stored digital data has grown exponentially since 2013, and both data and
visualizations are now published by myriad sources including news media and scientific organizations.
Revenue from mobile video games now exceeds that of music and movies combined [39]. CGI and
VFX are employed in almost all films, animations, TV productions, advertising, and business graphics.
The number of people who create graphics has skyrocketed, as has the number of applications and
generative tools used to produce graphics.
It is critical that students and faculty confront the ethical issues, questions, and conundrums that have
arisen and will continue to arise in and because of applications in computer graphics. Today’s
headlines unfortunately already provide examples of inequity and/or wrong-doing in autonomous
navigation, deepfakes, computational photography, generative images, and facial recognition.
crown splash” are related, but different. Depending on the simulation goals, covered topics may
vary as shown.
● Particle systems
○ Integration methods (Forward Euler, Midpoint, Leapfrog)
● Rigid Body Dynamics
○ Particle systems
○ Collision Detection
○ Triangle/point
○ Edge/edge
● Cloth
○ Particle systems
○ Mass/spring networks
○ Collision Detection
● Particle-Based Water
○ Integration methods
○ Smoother Particle Hydrodynamics (SPH) Kernels
○ Signed Distance Function-Based Collisions
● Grid-Based Smoke and Fire
○ Semi-Lagrangian Advection
○ Pressure Projection
● Grid and Particle-Based Water
○ Particle-Based Water
○ Grid-Based Smoke and Fire
● GIT-Immersion: Immersion. Immersion includes Augmented Reality (AR), Virtual Reality (VR),
and Mixed Reality (MR).
● GIT-Interaction: Interaction. Interactive computer graphics is a requisite part of real-time
applications ranging from utilitarian ones like word processors to virtual and/or augmented reality
applications.
● GIT-Image: Image Processing. Image Processing consists of the analysis and processing of
images for multiple purposes, most frequently to improve image quality and to manipulate
imagery. It is a cornerstone of computer vision.
● GIT-Physical: Tangible/Physical Computing. Tangible/Physical Computing refers to
microcontroller-based interactive systems that detect and respond to sensor input.
● GIT-SEP: Society, Ethics and the Profession.
Changes since CS2013
In an effort to align CS2013’s Graphics and Visualization areas with the ACM Special Interest Group on
Computer Graphics and Interactive Techniques (SIGGRAPH) and to reflect the natural expansion of the field to
include haptic and physical computing in addition to images, we have renamed it Graphics and
Interactive Techniques (GIT). To capture the expanded footprint of the KA, the following KUs have
been added to the original list consisting of Fundamental Concepts, Visualization, Basic Rendering
(renamed Rendering), Geometric Modeling, Advanced Rendering (renamed Shading), and Computer
Animation:
● Immersion (MR, AR, VR)
● Interaction
● Image Processing
● Tangible/Physical Computing
● Simulation
Core Hours
Knowledge Unit CS Core KA Core
Fundamental Concepts 4 3
Visualization 6
Geometric Modeling 6
Computer Animation 6
Simulation 6
Interaction 4
Image Processing 6
Tangible/Physical Computing 6
Total 4 70
Knowledge Units
KA Core:
10. Applied interactive graphics (e.g., Processing, Python)
11. Display characteristics (protocols and ports)
CS Core:
1. Identify common uses of digital presentation to humans (e.g., computer graphics, sound).
2. Describe how analog signals can be reasonably represented by discrete samples, for example, how
images can be represented by pixels.
3. Compute the memory requirement for storing a color image given its resolution.
4. Create a graphic depicting how the limits of human perception affect choices about the digital
representation of analog signals.
5. Indicate when and why one should use each of the following common file formats: JPG, PNG, MP3,
MP4, and GIF.
6. Describe color models and their use in graphics display devices.
7. Compute the memory requirements for a multi-second movie (lasting n seconds) displayed at a
specific framerate (f frames per second) at a specified resolution (r pixels per frame) (a sketch
follows this list).
8. Compare and contrast digital video to analog video.
9. Describe the basic process of producing continuous motion from a sequence of discrete frames
(sometimes called “flicker fusion”).
10. Describe a possible visual misrepresentation that could result from digitally sampling an analog
world.
11. Compute memory space requirements based on resolution and color coding.
12. Compute time requirements based on refresh rates and rasterization techniques.
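Outcomes 3, 7, and 11 amount to straightforward arithmetic; a short Python sketch with assumed
parameter values:

# Uncompressed storage estimates (parameter values are assumed for illustration).
width, height = 1920, 1080          # resolution in pixels
bytes_per_pixel = 3                 # 24-bit RGB color

image_bytes = width * height * bytes_per_pixel
print(f"one frame: {image_bytes / 2**20:.1f} MiB")        # about 5.9 MiB

f, n = 30, 10                       # f frames per second for n seconds
movie_bytes = image_bytes * f * n
print(f"{n}s of video: {movie_bytes / 2**30:.2f} GiB")    # about 1.74 GiB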
KA Core:
13. Design a user interface and an alternative for persons with color perception deficiency.
14. Construct a simple graphical user interface using a graphics library.
GIT-Visualization: Visualization
KA Core:
1. Scientific Data Visualization and Information Visualization
2. Visualization techniques
a. Statistical visualization (e.g., scatterplots, bar graphs, histograms, line graphs, pie charts, trees
and graphs)
b. Text visualization
c. Geospatial visualization
d. 2D/3D scalar fields
e. Vector fields
f. Direct volume rendering
3. Visualization pipeline (a sketch follows this list)
a. Structuring data
b. Mapping data to visual representations (e.g., scales, grammar of graphics)
c. View transformations (e.g., pan, zoom, filter, select)
4. Common data formats (e.g., HDF, netCDF, geotiff, GeoJSON, shape files, raw binary, JSON, CSV,
plain text)
5. High-dimensional data handling techniques
a. Statistical (e.g., averaging, clustering, filtering)
b. Perceptual (e.g., multi-dimensional vis, parallel coordinates, trellis plots)
6. Perceptual and cognitive foundations that drive visual abstractions.
a. Human optical system
b. Color theory
c. Gestalt theories
7. Design and evaluation of visualizations
a. Purpose (e.g., analysis, communication, aesthetics)
b. Accessibility
c. Appropriateness of encodings
d. Misleading visualizations
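A minimal sketch of the visualization pipeline (topic 3) using matplotlib, one of many possible
libraries; the data values are invented for illustration:

import matplotlib.pyplot as plt

# Structured data: hypothetical city areas and populations.
areas = [468, 1214, 606, 953]          # km^2
populations = [0.9, 3.8, 2.7, 1.6]     # millions
labels = ["A", "B", "C", "D"]

# Map data to visual encodings: position for area and population, size for population.
fig, ax = plt.subplots()
ax.scatter(areas, populations, s=[p * 100 for p in populations])
for x, y, name in zip(areas, populations, labels):
    ax.annotate(name, (x, y))

# The view: labeled axes and units guard against misleading presentation.
ax.set_xlabel("Area (km^2)")
ax.set_ylabel("Population (millions)")
ax.set_title("A labeled scatterplot")
plt.show()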
4. Implement a non-trivial shading algorithm (e.g., toon shading, cascaded shadow maps) under a
rasterization API.
5. State how a particular artistic technique might be implemented in a renderer.
6. Describe how one might recognize the shading techniques used to create a particular image.
7. Write a program that implements any of the specified graphics techniques using a primitive graphics
system at the individual pixel level.
8. Write a ray tracer for scenes using a simple (e.g., Phong’s) Bidirectional Reflection Distribution
Function (BRDF) plus reflection and refraction.
GIT-Simulation: Simulation
KA Core:
1. Collision detection and response
a. Signed Distance Fields
b. Sphere/sphere
c. Triangle/point
d. Edge/edge
2. Procedural animation using noise
3. Particle systems
a. Integration methods (e.g., forward Euler, midpoint, leapfrog; a sketch follows this list)
b. Mass/spring networks
c. Position-based dynamics
d. Rules (e.g., boids, crowds)
e. Rigid bodies
4. Grid-based fluids
a. Semi-Lagrangian advection
b. Pressure projection
5. Heightfields
a. Terrain: transport, erosion
b. Water: ripple, shallow water.
6. Rule-based systems (e.g., L-systems, space-colonizing systems, Game of Life)
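A minimal Python sketch of forward Euler integration from topic 3a, for a particle system under
gravity (all constants are illustrative); midpoint and leapfrog differ mainly in where velocity is
sampled:

GRAVITY = (0.0, -9.8)   # m/s^2
DT = 1.0 / 60.0         # timestep: one frame at 60 Hz

particles = [
    {"pos": [0.0, 10.0], "vel": [2.0, 0.0]},
    {"pos": [1.0, 12.0], "vel": [-1.0, 0.5]},
]

def step(particles, dt):
    # Forward Euler: x += v*dt, then v += a*dt (explicit, first-order accurate).
    for p in particles:
        for i in range(2):
            p["pos"][i] += p["vel"][i] * dt
            p["vel"][i] += GRAVITY[i] * dt

for _ in range(60):              # simulate one second
    step(particles, DT)
print(particles[0]["pos"])       # roughly [2.0, 5.2]: fell about 4.8 m, drifted 2 m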
GIT-Immersion: Immersion
KA Core: (See also: SPD-Game, SPD-Mobile, HCI-Design)
1. Immersion levels (i.e., Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR))
2. Definitions of and distinctions between immersion and presence
3. 360 Video
4. Stereoscopic display
a. Head-mounted displays
b. Stereo glasses
5. Viewer tracking
a. Inside-out and outside-in
b. Head/body/hand tracking
6. Time-critical rendering to achieve optimal Motion To Photon (MTP) latency
a. Multiple Levels of Detail (LOD)
b. Image-based VR
c. Branching movies
7. Distributed VR, collaboration over computer network
8. Presence and factors that impact level of immersion
9. 3D interaction
10. Applications in medicine, simulation, training, and visualization
11. Safety in immersive applications
a. Motion sickness
b. VR obscures the real world, which increases the potential for falls and physical accidents
GIT-Interaction: Interaction
KA Core:
1. Event Driven Programming (See also: FPL-Event-Driven)
a. Mouse or touch events
b. Keyboard events
c. Voice input
d. Sensors
e. Message passing communication
f. Network events
2. Graphical User Interface (Single Channel)
a. Window
b. Icons
c. Menus
d. Pointing Devices
3. Accessibility (See also: SEP-DEIA)
Non-core:
4. Gestural Interfaces (See also: SPD-Game)
a. Touch screen gestures
b. Hand and body gestures
5. Haptic Interfaces
a. External actuators
b. Gloves
c. Exoskeletons
6. Multimodal Interfaces
7. Head-worn Interfaces
a. Brain-computer interfaces, e.g., ElectroEncephaloGraphy (EEG) electrodes and Multi-Electrode
Arrays (MEAs)
b. Headsets with embedded eye tracking
c. AR glasses
8. Natural Language Interfaces (See also: AI-NLP)
Non-core:
4. Assess the consistency or lack of consistency in cross-platform touch screen gestures.
5. Design and create an application that provides haptic feedback.
6. Write a program that is controlled by gestures.
Professional Dispositions
● Self-directed: Graphics hardware and software evolves rapidly. Students need to understand the
importance of being a life-long learner.
● Collaborative: Graphics developers typically work in diverse teams composed of people with
disparate subject matter expertise. Students should understand the value of being a good team
member, and their teamwork skills should be cultivated and evaluated with constructive feedback.
● Effective communicator: Communication is critical. Students' technical communication, whether
verbal, written, or in code, should be practiced and evaluated.
● Creative: Creative problem solving lies at the core of computer graphics.
Mathematics Requirements
Required:
1. Coordinate geometry
2. Trigonometry
3. MSF-Linear*
a. Points (coordinate systems & homogeneous coordinates), vectors, and matrices
b. Vector operations: addition, scaling, dot and cross products
c. Matrix operations: addition, multiplication, determinants
d. Affine transformations
4. MSF-Calculus*
a. Continuity
*Note: if students enroll in a graphics class without linear algebra or calculus, graphics faculty can teach
what is needed; indeed, many graphics textbooks cover the requisite mathematics in an appendix.
Desirable:
1. MSF-Linear
a. Eigenvectors and eigendecomposition
b. Gaussian elimination and lower-upper (LU) factorization
c. Singular value decomposition
2. MSF-Calculus
a. Quaternions
b. Differentiation
c. Vector calculus
d. Tensors
e. Differential geometry
3. MSF-Probability
4. MSF-Statistics
5. MSF-Discrete
a. Numerical methods for simulation
User-Centered Development to include the following:
● GIT-Fundamentals (4 hours)
● GIT-Rendering (6 hours)
● GIT-Interaction (3 hours)
● HCI-User (8 hours)
● HCI-Accessibility (3 hours)
● HCI-SEP (4 hours)
● SE-Construction (4 hours)
● SPD-Web, SPD-Game, SPD-Mobile (8 hours)
Students should be able to develop applications that are usable and useful for people.
Graphical user interface (GUI) designs will be implemented and analyzed using rapid
prototyping.
● HCI-User (3 hours)
● HCI-Design (3 hours)
● SEP-Privacy, SEP-DEIA, and SEP-Professional-Ethics (3 hours)
Prerequisites:
● AL-Foundational
● AL-Strategies
● SDF-Algorithms
● SDF-Data-Structures
● SDF-Practices
● MSF-Probability
● MSF-Statistics
Course objectives: Students should understand how to select a dataset; ensure the data are accurate
and appropriate; design, develop and test a usable visualization program that depicts the data; and be
able to read and evaluate existing visualizations.
Course objectives: Students should understand and be able to create short animations employing the
principles of animation.
Committee
Chair: Susan Reiser, University of North Carolina Asheville, Asheville, NC, USA
Members:
● Erik Brunvand, University of Utah, Salt Lake City, UT, USA
● Kel Elkins, NASA/GSFC Scientific Visualization Studio, Greenbelt, MD, USA
● Jeff Lait, SideFX, Toronto, Canada
● Amruth Kumar, Ramapo College, Mahwah, NJ, USA
● Paul Mihail, Valdosta State University, Valdosta, GA, USA
● Tabitha Peck, Davidson College, Davidson, NC, USA
● Ken Schmidt, NOAA NCEI, Asheville, NC, USA
● Dave Shreiner, UnityTechnologies & Sonoma State University, San Francisco, CA, USA
Contributors:
● Ginger Alford, Southern Methodist University, University Park, TX, USA
● Christopher Andrews, Middlebury College, Middlebury, VT, USA
● A. J. Christensen, NASA/GSFC Scientific Visualization Studio – SSAI, Champaign, IL, USA
● Roger Eastman, University of Maryland, College Park, MD, USA
● Ted Kim, Yale University, New Haven, CT, USA
● Barbara Mones, University of Washington, Seattle, WA, USA
● Greg Shirah, NASA/GSFC Scientific Visualization Studio, Greenbelt, MD, USA
● Beatriz Sousa Santos, University of Aveiro, Portugal
● Anthony Steed, University College, London, UK
Human-Computer Interaction (HCI)
Preamble
Computational systems not only enable users to solve problems, but also foster social connectedness
and support a broad variety of human endeavors. Thus, these systems should work well with their
users and solve problems in ways that respect individual dignity, social justice, and human values and
creativity. Human-computer interaction (HCI) addresses those issues from an interdisciplinary
perspective that includes computer science, psychology, business strategy, and design principles.
Each user is different and, from the perspective of HCI, the design of every system that interacts with
people should anticipate and respect that diversity. This includes not only accessibility, but also cultural
and societal norms, neural diversity, modality, and the responses the system elicits in its users. An
effective computational system should evoke trust while it treats its users fairly, respects their privacy,
provides security, and abides by ethical principles.
These goals require design-centric engineering that begins with intention and with the understanding
that design is an iterative process, one that requires repeated evaluation of its usability and its impact
on its users. Moreover, technology evokes user responses, not only by its output, but also by the
modalities with which it senses and communicates. This knowledge area heightens the awareness of
these issues and should influence every computer scientist.
Core Hours
Knowledge Unit CS Core KA Core
Understanding the User 2 5
Accountability and Responsibility in Design 2 2
Accessibility and Inclusive Design 2 2
Evaluating the Design 1 2
System Design 1 5
Society, Ethics, and the Profession Included in SEP hours
Total Hours 8 16
The hours shared with SEP include overlapping topics and are counted there.
Knowledge Units
HCI-User: Understanding the User: Individual goals and interactions with others
CS Core:
1. User-centered design and evaluation methods. (See also: SEP-Context, SEP-Ethical-Analysis,
SEP-Professional-Ethics)
a. “You are not the users”
b. User needs-finding
c. Formative studies
d. Interviews
e. Surveys
f. Usability tests
KA Core:
2. User-centered design methodology. (See also: SE-Tools)
a. Personas/persona spectrum
b. User stories/storytelling and techniques for gathering stories
c. Empathy maps
d. Needs assessment (techniques for uncovering needs and gathering requirements, e.g.,
interviews, surveys, ethnographic and contextual inquiry) (See also: SE-Requirements)
e. Journey maps
f. Evaluating the design (See also: HCI-Evaluation)
g. Interfacing with stakeholders, as a team
h. Risks associated with physical, distributed, hybrid and virtual teams
3. Physical and cognitive characteristics of the user
a. Physical capabilities that inform interaction design (e.g., color perception, ergonomics)
b. Cognitive models that inform interaction design (e.g., attention, perception and recognition,
movement, memory)
c. Topics in social/behavioral psychology (e.g., cognitive biases, change blindness)
4. Designing for diverse user populations. (See also: SEP-DEIA, HCI-Accessibility)
a. How differences (e.g., in race, ability, age, gender, culture, experience, and education)
impact user experiences and needs
b. Internationalization
c. Designing for users from other cultures
d. Cross-cultural design
e. Challenges to effective design evaluation (e.g., sampling, generalization; disability and
disabled experiences)
f. Universal design
5. Collaboration and communication (See also: AI-SEP, SE-Teamwork, SEP-Communication, SPD-
Game)
a. Understanding the user in a multi-user context
b. Synchronous group communication (e.g., chat rooms, conferencing, online games)
c. Asynchronous group communication (e.g., email, forums, social networks)
d. Social media, social computing, and social network analysis
e. Online collaboration
f. Social coordination and online communities
g. Avatars, characters, and virtual worlds
Non-core:
6. Multi-user systems
KA Core:
2. Compare and contrast the needs of users with those of designers.
3. Identify the representative users of a design and discuss who else could be impacted by it.
4. Describe empathy and evaluation as elements of the design process.
5. Carry out and document an analysis of users and their needs.
6. Construct a user story from a needs assessment.
7. Redesign an existing solution to a population whose needs differ from those of the initial target
population.
8. Contrast the different needs-finding methods for a given design problem.
9. Reflect on whether your design would benefit from low-tech or no-tech components.
Non-core:
10. Recognize the implications of designing for a multi-user system/context.
b. Inclusivity (See also: SEP-DEIA)
c. Safety, security and privacy (See also: SEP-Security, SEC-Foundations)
d. Harm and disparate impact (See also: SEP-DEIA)
2. Ethics in design methods and solutions (See also: SEP-Ethical-Analysis, SEP-Context, SEP-
Intellectual Property)
a. The role of artificial intelligence (See also: AI-SEP)
b. Responsibilities for considering stakeholder impact and human factors (See also: SEP-
Professional-Ethics)
c. Role of design to meet user needs
3. Requirements in design (See also: SEP-Professional-Ethics)
a. Ownership responsibility
b. Legal frameworks, compliance requirements
c. Consideration beyond immediate user needs, including via iterative reconstruction of
problem analysis and “digital well-being” features
KA Core:
4. Value-sensitive design (See also: SEP-Ethical-Analysis, SEP-Context, SEP-Communication)
a. Identify direct and indirect stakeholders
b. Determine and include diverse stakeholder values and value systems.
5. Persuasion through design (See also: SEP-Communication)
a. Assess the persuasive content of a design
b. Employ persuasion as a design goal
c. Distinguish persuasive interfaces from manipulative interfaces
KA Core:
2. Identify the potential human factor elements in a design.
3. Identify and understand direct and indirect stakeholders.
4. Develop scenarios that consider the entire lifespan of a design, beyond the immediately planned
uses that anticipate direct and indirect stakeholders.
5. Identify and critique the potential factors in a design that impact direct and indirect stakeholders and
broader society (e.g., transparency, sustainability of the system, trust, artificial intelligence).
6. Assess the persuasive content of a design and its intent relative to user interests.
7. Critique the outcomes of a design given its intent.
8. Understand the impact of design decisions.
1. Background (See also: SEP-DEIA, SEP-Security)
a. Societal and legal support for and obligations to people with disabilities
b. Accessible design benefits everyone
2. Techniques
a. Accessibility standards (e.g., Web Content Accessibility Guidelines) (See also: SPD-Web)
3. Technologies (See also: SE-Tools)
a. Features and products that enable accessibility and support inclusive development by
designers and engineers
4. IDFs (Inclusive Design Frameworks) (See also: SEP-DEIA)
a. Recognizing differences
5. Universal design
KA Core:
6. Background
a. Demographics and populations (permanent, temporary and situational disability)
b. International perspectives on disability (See also: SEP-DEIA)
c. Attitudes towards people with disabilities (See also: SEP-DEIA)
7. Techniques
a. UX (user experience) design and research
b. Software engineering practices that enable inclusion and accessibility. (See also: SEP-DEIA)
8. Technologies
a. Examples of accessibility-enabling features, such as conformance to screen readers
9. Inclusive Design Frameworks
a. Creating inclusive processes such as participatory design
b. Designing for larger impact
Non-core:
10. Background (See also: SEP-DEIA)
a. Unlearning and questioning
b. Disability studies
11. Technologies: the Return On Investment (ROI) of inclusion
12. Inclusive Design Frameworks: user-sensitive inclusive design (See also: SEP-DEIA)
13. Critical approaches to HCI (e.g., inclusivity) (See also: SEP-DEIA)
KA Core:
5. Apply inclusive frameworks to design, such as universal design and usability and ability-based
design, and demonstrate accessible design of visual, voice-based, and touch-based UIs.
6. Demonstrate understanding of laws and regulations applicable to accessible design.
7. Demonstrate understanding of what is appropriate and inappropriate, and a high level of skill,
during interactions with individuals from diverse populations.
8. Analyze web pages and mobile apps for current standards of accessibility.
Non-core:
9. Biases towards disability, race, and gender have historically, either intentionally or unintentionally,
informed technology design.
a. Find examples.
b. Consider how lessons from those examples might inform design.
10. Conceptualize user experience research to identify user needs and generate design insights.
KA Core:
2. Methods for evaluation with users (See also: SE-Validation)
a. Qualitative methods (qualitative coding and thematic analysis)
b. Quantitative methods (statistical tests)
c. Mixed methods (e.g., observation, think-aloud, interview, survey, experiment)
d. Presentation requirements (e.g., reports, personas)
e. User-centered testing
f. Heuristic evaluation
g. Challenges and shortcomings to effective evaluation (e.g., sampling, generalization)
3. Study planning
a. How to set study goals
b. Hypothesis design
c. Approvals from Institutional Review Boards and ethics committees (See also: SEP-Ethical-Analysis, SEP-Security, SEP-Privacy)
d. How to pre-register a study
e. Within-subjects vs between-subjects design
4. Implications and impacts of design with respect to the environment, material, society, security,
privacy, ethics, and broader impacts. (See also: SEC-Foundations)
a. The environment
b. Material
c. Society
d. Security
e. Privacy
f. Ethics
g. Broader impacts
Non-core:
5. Techniques and tools for quantitative analysis
a. Statistical packages
b. Visualization tools
c. Statistical tests (e.g., ANOVA, t-tests, post-hoc analysis, parametric vs non-parametric tests)
d. Data exploration and visual analytics; how to calculate effect size.
6. Data management
a. Data storage and data sharing (open science)
b. Sensitivity and identifiability.
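As one concrete illustration of the quantitative techniques above, the following minimal Python sketch (illustrative only; it assumes NumPy and SciPy are available, and the task-time numbers are synthetic) runs an independent-samples t-test and computes Cohen's d as an effect size for a between-subjects comparison of two interface variants.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Synthetic task-completion times (seconds) for two interface variants
    # in a between-subjects study.
    a = rng.normal(30, 5, size=40)   # variant A
    b = rng.normal(27, 5, size=40)   # variant B

    t, p = stats.ttest_ind(a, b)     # independent-samples t-test
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    cohens_d = (a.mean() - b.mean()) / pooled_sd   # effect size
    print(f"t = {t:.2f}, p = {p:.4f}, d = {cohens_d:.2f}")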
KA Core:
2. Select appropriate formative or summative evaluation methods at different points throughout the
development of a design
3. Discuss the benefits of using both qualitative and quantitative methods for evaluation
4. Evaluate the implications and broader impacts of a given design
5. Plan a usability evaluation for a given user interface, and justify its study goals, hypothesis design,
and study design
6. Conduct a usability evaluation of a given user interface and draw defensible conclusions given the
study design
Non-core:
7. Select and run appropriate statistical tests on provided study data to test for significance in the
results
8. Pre-register a study design, with planned statistical tests
KA Core:
4. Design patterns and guidelines
a. Software architecture patterns
b. Cross-platform design
c. Synchronization considerations
5. Design processes (See also: SEP-Communication)
a. Participatory design
b. Co-design
c. Double-diamond
d. Convergence and divergence
6. Interaction techniques (See also: GIT-Interaction)
a. Input and output vectors (e.g., gesture, pose, touch, voice, force)
b. Graphical user interfaces
c. Controllers
d. Haptics
e. Hardware design
f. Error handling
7. Visual UI design (See also: GIT-Visualization)
a. Color
b. Layout
c. Gestalt principles
Non-core:
8. Immersive environments (See also: GIT-Immersion)
a. Virtual reality
b. Augmented reality, mixed reality
c. XR (extended reality, which encompasses them)
d. Spatial audio
9. 3D printing and fabrication
10. Asynchronous interaction models
11. Creativity support tools
12. Voice UI designs
CS Core:
1. Propose system designs tailored to a specified, appropriate mode of interaction.
2. Follow an iterative design and development process that incorporates
a. Understanding the user
b. Developing an increment
c. Evaluating the increment
d. Feeding those results into a subsequent iteration
3. Explain the impact of changing constraints and design tradeoffs (e.g., hardware, user, security) on
system design.
KA Core:
4. Evaluate architectural design approaches in the context of project goals.
5. Identify synchronization challenges as part of the user experience in distributed environments.
6. Evaluate and compare the privacy implications behind different input techniques for a given
scenario
7. Explain the rationale behind a UI design based on visual design principles
Non-core:
8. Evaluate the privacy implications within a VR/AR/MR scenario
KA Core:
6. Participatory and inclusive design processes
7. Evaluating the design: implications and impacts of the design with respect to the environment,
material, society, security, privacy, ethics, and broader impacts (See also: SEC-Foundations, SEP-Privacy)
Non-core:
8. VR/AR/MR scenarios
KA Core:
2. Critique a recent example of a non-inclusive design choice and its societal implications, and
propose potential design improvements.
3. Evaluating the design: Identify the implications and broader impacts of a given design.
Non-core:
4. Evaluate the privacy implications within a VR/AR/MR scenario
Professional Dispositions
Mathematics Requirements
Required:
● Basic statistics (MSF-Statistics) to support the evaluation and interpretation of results, including
central tendency, variability, frequency distribution
● HCI-User: Understanding the User (3 hours)
● SEP-Privacy, SEP-Ethical-Analysis (4 hours)
Prerequisites:
● CS2
● MSF-Linear
Course objectives: Students should understand how to select a dataset; ensure the data are accurate
and appropriate; and design, develop and test a visualization program that depicts the data and is
usable.
Committee
Chair: Susan L. Epstein, Hunter College and The Graduate Center of The City University of New York,
NY, USA
Members:
● Sherif Aly, The American University of Cairo, Cairo, Egypt
● Jeremiah Blanchard, University of Florida, Gainesville, FL, USA
● Zoya Bylinskii, Adobe Research, Cambridge, MA, USA
● Paul Gestwicki, Ball State University, Muncie, IN, USA
● Susan Reiser, University of North Carolina at Asheville, Asheville, NC, USA
● Amanda M. Holland-Minkley, Washington and Jefferson College, Washington, PA, USA
● Ajit Narayanan, Google, Chennai, India
● Nathalie Riche, Microsoft, Redmond, WA, USA
● Kristen Shinohara, Rochester Institute of Technology, Rochester, NY, USA
● Olivier St-Cyr, University of Toronto, Toronto, Canada
Mathematical and Statistical Foundations (MSF)
Preamble
A strong mathematical foundation remains a bedrock of computer science education and infuses the
practice of computing whether in developing algorithms, designing systems, modeling real-world
phenomena, or computing with data. This Mathematical and Statistical Foundations (MSF) knowledge
area – the successor to the ACM CS2013 [6] curriculum's "Discrete Structures" area – seeks to identify
the mathematical and statistical material that undergirds modern computer science. The change of
name reflects a realization that the broader name better describes the combination of topics from the
2013 report and the topics required by recently growing areas of computer science, such as artificial
intelligence, machine learning, data science, and quantum computing, many of which have continuous
mathematics as their foundation.
Core Hours
degree must require.” Instead, we outline two sets of core requirements, a CS Core set suited to hours-
limited majors and a more expansive set of CS Core plus KA Core to align with technically focused
programs. The principle here is that, in light of the additional foundational mathematics needed for AI,
data science and quantum computing, programs ought to consider as much as possible from the more
expansive CS+KA version unless there are sound institutional reasons for alternative requirements.
Note: the hours in a row (example: linear algebra) add up to 40 (= 5 + 35), reflecting a standard course;
shorter combined courses may be created, for example, by including probability in discrete
mathematics (29 hours of discrete mathematics, 11 hours of probability).
Knowledge Units CS Core KA Core
Discrete Mathematics 29 11
Probability 11 29
Statistics 10 30
Linear Algebra 5 35
Calculus 0 40
Total 55 145
KA Core: The KA Core hours can be read as the remaining hours available to flesh out each topic into
a standard 40-hour course. Note that the calculus hours roughly correspond to the typical Calculus-I
course now standard across the world. Based on our survey, most programs already require Calculus-I.
However, we have left out Calculus-II (an additional 40 hours) and leave it to programs to decide
whether Calculus-II should be added to program requirements. Programs could choose to require a
more rigorous calculus-based probability or statistics sequence, or non-calculus-based versions.
Similarly, linear algebra can be taught as an applied course without a calculus prerequisite or as a more
advanced course.
Knowledge Units
MSF-Discrete: Discrete Mathematics
CS Core:
1. Sets, relations, functions, cardinality
2. Recursive mathematical definitions
3. Proof techniques (induction, proof by contradiction)
4. Permutations, combinations, counting, pigeonhole principle
5. Modular arithmetic
6. Logic: truth tables, connectives (operators), inference rules, formulas, normal forms, simple
predicate logic
7. Graphs: basic definitions
8. Order notation
a. Apply counting arguments, including sum and product rules, inclusion-exclusion principle and
arithmetic/geometric progressions.
b. Apply the pigeonhole principle in the context of a formal proof.
c. Compute permutations and combinations of a set, and interpret the meaning in the context of
the particular application.
d. Map real-world applications to appropriate counting formalisms, such as determining the
number of ways to arrange people around a table, subject to constraints on the seating
arrangement, or the number of ways to determine certain hands in cards (e.g., a full house).
5. Modular arithmetic
a. Perform computations involving modular arithmetic.
b. Explain the notion of greatest common divisor, and apply Euclid's algorithm to compute it (see the sketch after this list).
6. Logic
a. Convert logical statements from informal language to propositional and predicate logic
expressions.
b. Apply formal methods of symbolic propositional and predicate logic, such as calculating validity
of formulae, computing normal forms, or negating a logical statement.
c. Use the rules of inference to construct proofs in propositional and predicate logic.
d. Describe how symbolic logic can be used to model real-life situations or applications, including
those arising in computing contexts such as software analysis (e.g., program correctness),
database queries, and algorithms.
e. Apply formal logic proofs and/or informal, but rigorous, logical reasoning to real problems, such
as predicting the behavior of software or solving problems such as puzzles.
f. Describe the strengths and limitations of propositional and predicate logic.
g. Explain what it means for a proof in propositional (or predicate) logic to be valid.
7. Graphs
a. Illustrate by example the basic terminology of graph theory, and some of the properties and
special cases of types of graphs, including trees.
b. Demonstrate different traversal methods for trees and graphs, including pre-, post-, and in-order
traversal of trees, along with breadth-first and depth-first search for graphs.
c. Model a variety of real-world problems in computer science using appropriate forms of graphs
and trees, such as representing a network topology, the organization of a hierarchical file
system, or a social network.
d. Show how concepts from graphs and trees appear in data structures, algorithms, proof
techniques (structural induction), and counting.
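A minimal Python sketch of outcomes 5a and 5b (illustrative only; the numbers are arbitrary):

    def gcd(a: int, b: int) -> int:
        # Euclid's algorithm: gcd(a, b) = gcd(b, a mod b); gcd(a, 0) = a.
        while b != 0:
            a, b = b, a % b
        return a

    # Modular arithmetic: (x + y) mod m == ((x mod m) + (y mod m)) mod m.
    x, y, m = 1234, 5678, 97
    assert (x + y) % m == ((x % m) + (y % m)) % m
    print(gcd(252, 198))  # 18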
KA Core:
The recommended topics are the same for the CS Core and the KA Core; with far more hours available,
the KA Core can cover these topics in depth and might include more computing-related applications.
MSF-Probability: Probability
CS Core:
1. Basic notions: sample spaces, events, probability, conditional probability, Bayes’ rule
2. Discrete random variables and distributions
3. Continuous random variables and distributions
4. Expectation, variance, law of large numbers, central limit theorem
5. Conditional distributions and expectation
6. Applications to computing, the difference between probability and statistics (as subjects)
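The following short Python sketch (illustrative only; it uses only the standard library, and the scenario and numbers are invented) can make topics 1 and 4 concrete: a Monte Carlo simulation whose empirical mean approaches the expectation, and a direct application of Bayes' rule to a toy diagnostic test.

    import random

    random.seed(42)

    # Law of large numbers: the empirical mean of fair die rolls
    # approaches the expectation E[X] = 3.5 as n grows.
    for n in (100, 10_000, 1_000_000):
        rolls = [random.randint(1, 6) for _ in range(n)]
        print(n, sum(rolls) / n)

    # Bayes' rule for a toy diagnostic test:
    # P(disease | +) = P(+ | disease) P(disease) / P(+).
    p_d, p_pos_d, p_pos_not_d = 0.01, 0.95, 0.05
    p_pos = p_pos_d * p_d + p_pos_not_d * (1 - p_d)
    print(p_pos_d * p_d / p_pos)  # about 0.16: most positives are false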
KA Core:
The recommended topics are the same for the CS Core and the KA Core; with far more hours available,
the KA Core can cover these topics in depth and might include more computing-related applications.
MSF-Statistics: Statistics
CS Core:
1. Basic definitions and concepts: populations, samples, measures of central tendency, variance
2. Univariate data: point estimation, confidence intervals
KA Core:
3. Multivariate data: estimation, correlation, regression
4. Data transformation: dimension reduction, smoothing
5. Statistical models and algorithms
6. Hypothesis testing
Illustrative Learning Outcomes
KA Core:
3. Sampling, bias, adequacy of samples, Bayesian vs frequentist interpretations
4. Multivariate data: estimation, correlation, regression
a. Formulate the multivariate maximum likelihood estimation problem as a least-squares problem
b. Interpret the geometric properties of maximum likelihood estimates
c. Derive and calculate the maximum likelihood solution for linear regression
d. Derive and calculate the maximum a posteriori estimate for linear regression
e. Implement both maximum likelihood and maximum a posteriori estimates in the context of a
polynomial regression problem
f. Formulate and understand the concept of data correlation (e.g., in 2D)
5. Data transformation: dimension reduction, smoothing
a. Formulate and derive Principal Component Analysis (PCA) as a least-squares problem
b. Geometrically interpret PCA (when solved as a least-squares problem)
c. Describe when PCA works well (one can relate back to correlated data)
d. Geometrically interpret the linear regression solution (maximum likelihood)
6. Statistical models and algorithms
a. Apply PCA to dimensionality reduction problems
b. Describe the trade-off between compression and reconstruction power
c. Apply linear regression to curve-fitting problems
d. Explain the concept of overfitting
e. Discuss and apply cross-validation in the context of overfitting and model selection (e.g., degree
of polynomials in a regression context)
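A minimal Python sketch of outcomes 4a-4e (illustrative only; it assumes NumPy is available, and the data are synthetic): under Gaussian noise, maximum likelihood estimation for polynomial regression is ordinary least squares, and the maximum a posteriori estimate under a Gaussian prior is ridge regression.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic noisy samples of y = 2x + 1.
    x = np.linspace(0, 1, 30)
    y = 2 * x + 1 + 0.1 * rng.standard_normal(x.size)

    # Design matrix with columns 1, x, x^2, x^3. Under Gaussian noise, the
    # maximum likelihood estimate is ordinary least squares:
    # w = argmin ||Xw - y||^2.
    X = np.vander(x, 4, increasing=True)
    w_mle = np.linalg.lstsq(X, y, rcond=None)[0]

    # A Gaussian prior on w turns the MAP estimate into ridge regression:
    # w = (X^T X + lam I)^(-1) X^T y.
    lam = 0.1
    w_map = np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ y)
    print(w_mle, w_map)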
MSF-Linear: Linear Algebra
CS Core:
1. Vectors: definitions, vector operations, geometric interpretation, angles. Matrices: definition, matrix
operations, meaning of Ax = b.
KA Core:
2. Matrices, matrix-vector equation, geometric interpretation, geometric transformations with matrices
3. Solving equations, row-reduction
4. Linear independence, span, basis
5. Orthogonality, projection, least-squares, orthogonal bases
6. Linear combinations of polynomials, Bezier curves
7. Eigenvectors and eigenvalues
8. Applications to computer science: Principal Components Analysis (PCA), Singular Value
Decomposition (SVD), page-rank, graphics
Illustrative Learning Outcomes
KA Core:
2. Matrices, matrix-vector equation, geometric interpretation, geometric transformations with matrices
a. Perform common matrix operations, such as addition, scalar multiplication, multiplication, and
transposition
b. Relate a matrix to a homogeneous system of linear equations
c. Recognize when two matrices can be multiplied
d. Relate various matrix transformations to geometric illustrations
3. Solving equations, row-reduction
a. Formulate, solve, apply, and interpret properties of linear systems
b. Perform row operations on a matrix
c. Relate an augmented matrix to a system of linear equations
d. Solve linear systems of equations using the language of matrices
e. Translate word problems into linear equations
f. Perform Gaussian elimination
4. Linear independence, span, basis
a. Define subspace of a vector space
b. List examples of subspaces of a vector space
c. Recognize and use basic properties of subspaces and vector spaces
d. Determine whether or not particular subsets of a vector space are subspaces
e. Discuss the existence of a basis of an abstract vector space
f. Describe coordinates of a vector relative to a given basis
g. Determine a basis and the dimension of a finite-dimensional space
h. Discuss spanning sets for vectors in R^n
i. Discuss linear independence for vectors in R^n
j. Define the dimension of a vector space
5. Orthogonality, projection, least-squares, orthogonal bases
a. Explain the Gram-Schmidt orthogonalization process
b. Define orthogonal projections
c. Define orthogonal complements
d. Compute the orthogonal projection of a vector onto a subspace, given a basis for the subspace
e. Explain how orthogonal projections relate to least square approximations
6. Linear combinations of polynomials, Bezier curves
a. Identify polynomials as generalized vectors
b. Explain linear combinations of basic polynomials
c. Describe orthogonality for polynomials
d. Distinguish between basic polynomials and Bernstein polynomials
e. Apply Bernstein polynomials to Bezier curves
7. Eigenvectors and eigenvalues
a. Find the eigenvalues and eigenvectors of a matrix
b. Define eigenvalues and eigenvectors geometrically
c. Use characteristic polynomials to compute eigenvalues and eigenvectors
d. Use eigenspaces of matrices, when possible, to diagonalize a matrix
e. Perform diagonalization of matrices
f. Explain the significance of eigenvectors and eigenvalues
g. Find the characteristic polynomial of a matrix
h. Use eigenvectors to represent a linear transformation with respect to a particularly nice basis
8. Applications to computer science: PCA, SVD, page-rank, graphics
a. Explain the geometric properties of PCA
b. Relate PCA to dimensionality reduction
c. Relate PCA to solving least-squares problems
d. Relate PCA to solving eigenvector problems
e. Apply PCA to reducing the dimensionality of a high-dimensional dataset (e.g., images)
f. Explain the page-rank algorithm and understand how it relates to eigenvector problems
g. Explain the geometric differences between SVD and PCA
h. Apply SVD to a concrete example (e.g., movie rankings)
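For outcome 8f, the following minimal Python sketch (illustrative only; it assumes NumPy is available, and the three-page web is invented) computes PageRank by power iteration, which converges to the dominant eigenvector of the Google matrix.

    import numpy as np

    # Column-stochastic link matrix of a three-page web: page 0 links to
    # pages 1 and 2, page 1 links to page 2, and page 2 links to page 0.
    A = np.array([[0.0, 0.0, 1.0],
                  [0.5, 0.0, 0.0],
                  [0.5, 1.0, 0.0]])

    d = 0.85                   # damping factor
    n = A.shape[0]
    G = d * A + (1 - d) / n    # "Google matrix"; still column-stochastic

    r = np.full(n, 1 / n)      # start from the uniform distribution
    for _ in range(100):       # power iteration converges to the dominant
        r = G @ r              # eigenvector, whose eigenvalue is 1
    print(r, r.sum())          # the ranks sum to 1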
MSF-Calculus
KA Core:
1. Sequences, series, limits
2. Single-variable derivatives: definition, computation rules (chain rule etc.), derivatives of important
functions, applications
3. Single-variable integration: definition, computation rules, integrals of important functions,
fundamental theorem of calculus, definite vs indefinite, applications (including in probability)
4. Parametric and polar representations
5. Taylor series
6. Multivariate calculus: partial derivatives, gradient, chain rule, vector-valued functions.
7. Optimization: convexity, global vs local minima, gradient descent, constrained optimization and
Lagrange multipliers.
8. Ordinary Differential Equations (ODEs): definition, Euler method, applications to simulation, Monte
Carlo integration
9. CS applications: gradient descent for machine learning, forward and inverse kinematics,
applications of calculus to probability
Note: the calculus topics listed above are aligned with computer science goals rather than with
traditional calculus courses. For example, multivariate calculus is often a course by itself but computer
science undergraduates only need parts of it for machine learning.
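A minimal Python sketch of gradient descent (topic 7) on a simple convex function (illustrative only; the function, starting point, and learning rate are arbitrary):

    # Gradient descent on f(x, y) = (x - 3)^2 + 2(y + 1)^2, a convex
    # function whose unique global minimum is at (3, -1).
    def grad(x, y):
        return 2 * (x - 3), 4 * (y + 1)

    x, y, eta = 0.0, 0.0, 0.1          # starting point and learning rate
    for _ in range(200):
        gx, gy = grad(x, y)
        x, y = x - eta * gx, y - eta * gy
    print(x, y)                        # approaches (3, -1)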
a. Apply the Euler method to integration
b. Apply the Euler method to a single-variable differential equation
c. Apply the Euler method to multiple variables in an ODE
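A minimal Python sketch of outcomes a and b above (illustrative only; the equation dy/dt = -2y and the step size are arbitrary):

    import math

    # Euler's method for dy/dt = -2y with y(0) = 1 over t in [0, 1];
    # the exact solution is y(t) = e^(-2t).
    y, h = 1.0, 0.01
    for _ in range(100):           # 100 steps of size 0.01
        y += h * (-2.0 * y)        # y_{n+1} = y_n + h * f(t_n, y_n)
    print(y, math.exp(-2.0))       # the error shrinks as h decreases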
Professional Dispositions
Mathematics Requirements
The most important topics expected from students entering a computing program typically correspond
to pre-calculus courses in high school.
Required:
● Algebra and numeracy:
o Numeracy: numbers, operations, types of numbers, fluency with arithmetic, exponent
notation, rough orders of magnitude, fractions and decimals.
o Algebra: rules of exponents, solving linear or quadratic equations with one or two variables,
factoring, algebraic manipulation of expressions with multiple variables.
● Precalculus:
o Coordinate geometry: distances between points, areas of common shapes
o Functions: function notation, drawing and interpreting graphs of functions
o Exponentials and logarithms: a general familiarity with the functions and their graphs
o Trigonometry: familiarity with basic trigonometric functions and the unit circle
Every department faces constraints in delivering content, which precludes merely requiring a long list of
courses covering every single desired topic. These constraints include content-area ownership, faculty
size, student preparation, and limits on the number of departmental courses a curriculum can require.
We list below some options for offering mathematical foundations, combinations of which might best fit
any particular institution:
● Traditional course offerings. With this approach, a computer science department can require
students to take courses provided by mathematics departments in any of the five broad
mathematical areas listed above.
● A “Continuous Structures” analog of Discrete Structures. Many computer science departments
now offer courses that prepare students mathematically for AI and machine learning. Such courses
can combine just enough calculus, optimization, linear algebra, and probability, while others may split
linear algebra into its own course. These courses have the advantage of motivating students with
computing applications and of using programming as pedagogy for mathematical concepts.
● Integration into application courses. An application course, such as machine learning, can be
spread across two courses, with the course sequence including the needed mathematical
preparation taught just-in-time, or a single machine learning course can balance preparatory
material with new topics. This may have the advantage of mitigating turf issues and helping
students see applications immediately after encountering mathematics.
● Specific course adaptations. For nearly a century, physics and engineering needs have driven
the structure of calculus, linear algebra, and probability. Computer science departments can
collaborate with their colleagues in mathematics departments to restructure mathematics-offered
sections in these areas so that they are driven by computer science applications. For example, calculus
could be reorganized into two courses that fit the needs of computing programs, leaving a third calculus
course for engineering and physics students.
Committee
Chair: Rahul Simha, The George Washington University, Washington DC, USA
Members:
● Richard Blumenthal, Regis University, Denver, CO, USA
● Marc Deisenroth, University College London, London, UK
● Mikey Goldweber, Denison University, Granville, OH, USA
● David Liben-Nowell, Carleton College, Northfield, MN, USA
● Jodi Tims, Northeastern University, Boston, MA, USA
Networking and Communication (NC)
Preamble
Networking and communication play a central role in interconnected computer systems that are
transforming the daily lives of billions of people. The public Internet provides connectivity for networked
applications that serve ever-increasing numbers of individuals and organizations around the world.
Complementing the public sector, major proprietary networks leverage their global footprints to support
cost-effective distributed computing, storage, and content delivery. Advances in satellite networks expand
connectivity to rural areas. Device-to-device communication underlies the emerging Internet of Things.
This knowledge area deals with key concepts in networking and communication, as well as their
representative instantiations in the Internet and other computer networks. Besides the basic principles of
switching and layering, the area at its core provides knowledge on naming, addressing, reliability, error
control, flow control, congestion control, domain hierarchy, routing, forwarding, modulation, encoding,
framing, and access control. The area also covers knowledge units in network security and mobility, such
as security threats, countermeasures, device-to-device communication, and multi-hop wireless
networking. In addition to the fundamental principles, the area includes their specific realization of the
Internet as well as hands-on skills in the implementation of networking and communication concepts.
Finally, the area comprises emerging topics such as network virtualization and quantum networking.
As the main learning outcome, learners develop a thorough understanding of the role and operation of
networking and communication in networked computer systems. They learn how network structure and
communication protocols affect the behavior of distributed applications. The area teaches not only
key principles but also their specific instantiations in the Internet, and it equips the student with hands-on
implementation skills. While computer-system, networking, and communication technologies are
advancing at a fast pace, the gained fundamental knowledge enables the student to readily apply the
concepts in new technological settings.
Core Hours
Reliability Support 5.75 + 0.25 (SF)
Single-Hop Communication 3
Mobility Support 4
Emerging Topics 4
Total 7 24
Knowledge Units
NC-Fundamentals: Fundamentals
CS Core:
1. Importance of networking in contemporary computing, and associated challenges. (See also: SEP-
Context, SEP-Privacy)
2. Organization of the Internet (e.g., users, Internet Service Providers, autonomous systems, content
providers, content delivery networks).
3. Switching techniques (e.g., circuit and packet).
4. Layers and their roles (application, transport, network, datalink, and physical).
5. Layering principles (e.g., encapsulation and hourglass model). (See also: SF-Foundations)
6. Network elements (e.g., routers, switches, hubs, access points, and hosts).
7. Basic queueing concepts (e.g., relationship with latency, congestion, and service levels).
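Instructors may wish to ground these topics in a small program. The following minimal Python sketch (illustrative only; it uses only the standard library, and the loopback scenario is invented) runs a TCP echo server and client in one process: the application layer exchanges bytes while the transport and network layers provide reliable delivery.

    import socket
    import threading

    def echo_once(server: socket.socket) -> None:
        conn, _ = server.accept()      # transport layer delivers a connection
        with conn:
            conn.sendall(conn.recv(1024))  # echo the application-layer bytes

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))      # port 0: the OS picks a free port
    server.listen(1)
    threading.Thread(target=echo_once, args=(server,), daemon=True).start()

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(server.getsockname())
    client.sendall(b"hello")
    print(client.recv(1024))           # b'hello'
    client.close()
    server.close()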
NC-Mobility: Mobility
KA Core:
1. Principles of cellular communication (e.g., 4G, 5G).
2. Principles of Wireless LANs (mainly 802.11).
3. Device-to-device communication (e.g., IoT communication).
4. Multi-hop wireless networks (e.g., ad hoc, opportunistic, and delay-tolerant networks).
KA Core:
1. Describe some aspects of cellular communication, such as registration.
2. Describe how 802.11 supports mobile users.
3. Describe practical uses of device-to-device and multi-hop communication.
4. Describe one type of mobile network, such as an ad hoc network.
Professional Dispositions
● Meticulous: Students must be particular about the specifics of understanding and creating
networking protocols.
● Collaborative: Students must work together to develop multiple components that interact together,
and respond to failures and threats.
● Proactive: Students must be able to predict failures, threats, and how to deal with them while
avoiding reactive modes of operation only.
● Professional: Students must comply with the needs of the community and their expectations from
a networked environment, and the demands of regulatory bodies.
● Responsive: Students must respond swiftly to changes in network configurations and user
requirements.
● Adaptive: Students need to reconfigure systems under varying modes of operation.
Mathematics Requirements
Required:
● MSF-Probability.
● MSF-Statistics.
● MSF-Discrete.
● MSF-Linear.
● Simple queueing theory concepts.
Course Packaging Suggestions
Coverage of the concepts of networking, including but not limited to the types of applications that use
the network, reliability, routing and forwarding, single-hop communication, security, and other emerging
topics.
Note: both courses cover the same KUs, but with a different allocation of hours to each KU.
Course objectives: By the end of this course, learners should understand many of the fundamental
concepts associated with networking; learn about many types of networked applications and develop at
least one; understand basic routing, forwarding, and single-hop communication; and deal with some
issues pertaining to mobility, security, and emerging areas, all with embedded social, ethical, and
professional considerations.
Introductory Course:
● NC-Fundamentals (8 hours)
● NC-Applications (12 hours)
● NC-Reliability (6 hours)
● NC-Routing (4 hours)
● NC-SingleHop (3 hours)
● NC-Mobility (3 hours)
● NC-Security (3 hours)
● SEP-Context (1 hour)
● NC-Emerging (2 hours)
Course objectives: By the end of this course, learners will have refreshed their knowledge of some of
the fundamental issues of networking, networked applications, reliability, and routing and forwarding,
and will have studied in greater detail single-hop communication, mobility, security, and emerging
topics in the area, all with embedded social, ethical, and professional considerations.
Advanced Course:
● NC-Fundamentals (3 hours)
● NC-Applications (4 hours)
● NC-Reliability (7 hours)
● NC-Routing (6 hours)
● NC-SingleHop (5 hours)
● NC-Mobility (5 hours)
● NC-Security (5 hours)
● SEP-Privacy, SEP-Security, SEP-Sustainability (2 hours)
● NC-Emerging (5 hours)
Committee
Chair: Sherif G. Aly, The American University in Cairo, Cairo, Egypt
Members:
● Khaled Harras, Carnegie Mellon University, Pittsburgh, PA, USA
● Moustafa Youssef, The American University in Cairo, Cairo, Egypt
● Sergey Gorinsky, IMDEA Networks Institute, Madrid, Spain
● Qiao Xiang, Xiamen University, Xiamen, China
Contributors:
● Alex (Xi) Chen: Huawei, Montreal, Canada
Operating Systems (OS)
Preamble
The operating system is a collection of services needed to safely interface the hardware with
applications. Core topics focus on the mechanisms and policies needed to virtualize computation,
memory, and Input/Output (I/O). Overarching themes that are reused at many levels in computer
systems are well illustrated in operating systems (e.g., polling vs interrupts, caching, flexibility vs costs,
scheduling approaches to processes, and page replacement). The Operating Systems knowledge area
contains the key underlying concepts for other knowledge areas – trust boundaries, concurrency,
persistence, and safe extensibility.
Core Hours
Knowledge Units CS Core KA Core
Concurrency 2 1
Scheduling 2
Process Model 2
Virtualization 1
Fault Tolerance 1
Total 8 14 (+2 counted in AR)
Knowledge Units
OS-Purpose: Role and Purpose of Operating Systems
CS Core:
1. Operating systems mediate between general purpose hardware and application-specific software.
2. Universal operating system functions (e.g., process, user and device interfaces, persistence of
data).
3. Extended and/or specialized operating system functions (e.g., embedded systems, server types
such as file, web, multimedia, boot loaders and boot security).
4. Design issues (e.g., efficiency, robustness, flexibility, portability, security, compatibility, power,
safety, tradeoffs between error checking and performance, flexibility and performance, and security
and performance). (See also: SEC-Engineering)
5. Influences of security, networking, multimedia, parallel and distributed computing.
6. Overarching concern of security/protection: Neglecting to consider security at every layer creates
an opportunity to inappropriately access resources.
Example concepts:
a. Unauthorized access to files on an unencrypted drive can be achieved by moving the media to
another computer,
b. Operating systems enforced security can be defeated by infiltrating the boot layer before the
operating system is loaded,
c. Process isolation can be subverted by inadequate authorization checking at API boundaries,
d. Vulnerabilities in system firmware can provide attack vectors that bypass the operating system
entirely,
e. Improper isolation of virtual machine memory, computing, and hardware can expose the host
system to attacks from guest systems, and
f. The operating system may need to mitigate exploitation of hardware and firmware
vulnerabilities, leading to potential performance reductions (e.g., Spectre and Meltdown
mitigations).
7. Exposure of operating systems functions in shells and systems programming. (See also: FPL-
Scripting)
OS-Principles: Principles of Operating Systems
CS Core:
1. Operating system software design and approaches (e.g., monolithic, layered, modular, micro-
kernel, unikernel).
2. Abstractions, processes, and resources.
3. Concept of system calls and links to application program interfaces (e.g., Win32, Java, Posix). (See
also: AR-Assembly)
4. The evolution of the link between hardware architecture and the operating system functions.
5. Protection of resources means protecting some machine instructions/functions. (See also: AR-
Assembly)
Example concepts:
a. Applications cannot arbitrarily access memory locations or file storage device addresses, and
b. Protection of coprocessors and network devices.
6. Leveraging interrupts from hardware level: service routines and implementations. (See also: AR-
Assembly)
Example concepts:
a. Timer interrupts for implementing time slices, and
b. I/O interrupts for putting blocking threads to sleep without polling.
7. Concept of user/system state and protection, transition to kernel mode using system calls. (See
also: AR-Assembly)
8. Mechanism for invoking system calls, the corresponding mode and context switch and return from
interrupt. (See also: AR-Assembly)
9. Performance costs of context switches and associated cache flushes when performing process
switches in Spectre-mitigated environments.
Illustrative Learning Outcomes:
CS Core:
1. Understand how the application of software design approaches (e.g., layered, modular) to operating
system design and implementation affects the robustness and maintainability of an operating
system.
2. Categorize system calls by purpose.
3. Understand dynamics of invoking a system call (passing parameters, mode change, etc.).
4. Evaluate whether a function can be implemented in the application layer or can only be
accomplished by system calls.
5. Apply OS techniques for isolation, protection, and throughput across OS functions (e.g., starvation
similarities in process scheduling, disk request scheduling, and semaphores) and beyond.
6. Understand how the separation into kernel and user mode affects safety and performance.
7. Understand the advantages and disadvantages of using interrupt processing in enabling
multiprogramming.
8. Analyze for potential vectors of attack via the operating systems and the security features designed
to guard against them.
OS-Concurrency: Concurrency
CS Core:
1. Thread abstraction relative to concurrency.
2. Race conditions, critical regions (role of interrupts if needed). (See also: PDC-Programs)
3. Deadlocks and starvation. (See also: PDC-Coordination)
4. Multiprocessor issues (spin-locks, reentrancy).
5. Multiprocess concurrency vs multithreading.
KA Core:
6. Thread creation, states, structures. (See also: SF-Foundations)
7. Thread APIs.
8. Deadlocks and starvation (necessary conditions/mitigations). (See also: PDC-Coordination)
9. Implementing thread safe code (semaphores, mutex locks, condition variables). (See also: AR-
Performance-Energy, SF-Evaluation, PDC-Evaluation)
10. Race conditions in shared memory. (See also: PDC-Coordination)
Non-Core:
11. Managing atomic access to OS objects (e.g., big kernel lock vs many small locks vs lockless data
structures like lists).
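A minimal Python sketch of topics 2 and 9 (illustrative only; whether the unsynchronized run actually loses updates depends on the interpreter and platform): four threads increment a shared counter with and without a mutex lock.

    import threading

    counter = 0
    lock = threading.Lock()

    def increment(n: int, use_lock: bool) -> None:
        global counter
        for _ in range(n):
            if use_lock:
                with lock:        # the read-modify-write is now atomic
                    counter += 1
            else:
                counter += 1      # unsynchronized read-modify-write: a race

    def run(use_lock: bool) -> int:
        global counter
        counter = 0
        threads = [threading.Thread(target=increment, args=(100_000, use_lock))
                   for _ in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return counter

    print(run(use_lock=False))  # may print less than 400000 on some platforms
    print(run(use_lock=True))   # always prints 400000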
KA Core:
5. Policy/mechanism separation. (See also: SEC-Governance)
6. Security methods and devices. (See also: SEC-Foundations)
Example concepts:
a. Rings of protection (history from Multics to virtualized x86), and
b. x86_64 rings -1 and -2 (hypervisor and ME/PSP).
7. Protection, access control, and authentication. (See also: SEC-Foundations, SEC-Crypto)
KA Core:
4. Summarize the features and limitations of an operating system that impact protection and security.
OS-Scheduling: Scheduling
KA Core:
1. Preemptive and non-preemptive scheduling.
2. Schedulers and policies (e.g., first come, first serve, shortest job first, priority, round robin,
multilevel). (See also: SF-Resource)
3. Concepts of Symmetric Multi-Processor (SMP) scheduling and cache coherence. (See also: AR-
Memory)
4. Timers (e.g., building many timers out of finite hardware timers). (See also: AR-Assembly)
5. Fairness and starvation.
Non-Core:
6. Subtopics of operating systems such as energy-aware scheduling and real-time scheduling. (See
also: AR-Performance-Energy, SPD-Embedded, SPD-Mobile)
7. Cooperative scheduling, such as Linux futexes and userland scheduling.
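A minimal Python sketch of round-robin scheduling from topic 2 (illustrative only; the process names, burst times, and quantum are invented):

    from collections import deque

    # Round-robin scheduling: each process runs for at most one quantum,
    # then re-enters the back of the ready queue if work remains.
    def round_robin(burst_times: dict, quantum: int) -> list:
        ready = deque(burst_times.items())
        timeline = []
        while ready:
            name, remaining = ready.popleft()
            timeline.append(name)
            if remaining > quantum:
                ready.append((name, remaining - quantum))
        return timeline

    print(round_robin({"A": 5, "B": 2, "C": 3}, quantum=2))
    # ['A', 'B', 'C', 'A', 'C', 'A']: A needs three quanta, C two, B one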
Non-Core:
7. Explain the ways that the logic embodied in scheduling algorithms is applicable to other operating
system mechanisms (such as applying first-come-first-serve or priority to disk I/O, network
scheduling, or project scheduling) and to problems beyond computing.
5. Apply the appropriate interprocess communication mechanism for a specific purpose in a
programmed software artifact.
Non-Core:
8. Virtual memory: leveraging virtual memory hardware for OS services and efficiency.
Non-Core:
6. Explain how hardware is utilized for efficient virtualization.
3. Historical and contextual: persistent storage device management (e.g., magnetic disk, solid-state
drive (SSD)). (See also: SEP-History)
Non-Core:
4. Device interface abstractions, hardware abstraction layer.
5. Device driver purpose, abstraction, implementation and testing challenges.
6. High-level fault tolerance in device communication.
Non-Core:
6. Describe the complexity and best practices for the creation of device drivers
OS-Advanced-Files: Advanced File systems
KA Core:
1. File systems: partitioning, mount/unmount, virtual file systems.
2. In-depth implementation techniques.
3. Memory-mapped files. (See also: AR-IO)
4. Special-purpose file systems.
5. Naming, searching, access, backups.
6. Journaling and log-structured file systems. (See also: SF-Reliability)
Non-Core:
6. Explain purpose and complexity of distributed file systems.
7. List examples of distributed file systems protocols.
8. Explain mechanisms in file systems to improve fault tolerance.
OS-Virtualization: Virtualization
KA Core:
1. Using virtualization and isolation to achieve protection and predictable performance. (See also: SF-
Performance)
2. Advanced paging and virtual memory. (See also: SF-Performance)
3. Virtual file systems and virtual devices.
4. Containers and their comparison to virtual machines.
5. Thrashing; requirements for virtualizable architectures (e.g., the Popek and Goldberg requirements
for recursively virtualizable systems).
Non-core:
6. Types of virtualization (including hardware/software, OS, server, service, network). (See also: SF-
Performance)
7. Portable virtualization; emulation vs isolation. (See also: SF-Performance)
8. Cost of virtualization. (See also: SF-Performance)
9. Virtual machines and container escapes, dangers from a security perspective. (See also: SEC-
Engineering)
10. Hypervisors: hardware virtual machine extensions, hosts with kernel support (e.g., QEMU with KVM).
Non-Core:
4. Explain hypervisors and the need for them in conjunction with different types of hypervisors.
Non-Core:
4. Memory/disk management requirements in a real-time environment.
5. Failures, risks, and recovery.
6. Special concerns in real-time systems (safety).
Non-Core:
4. Explain specific real time operating systems features and mechanisms.
Non-Core:
3. Spatial and temporal redundancy. (See also: SF-Reliability)
4. Methods used to implement fault tolerance. (See also: SF-Reliability)
5. Error identification and correction mechanisms, checksums of volatile memory in RAM. (See also:
AR-Memory)
6. File system consistency check and recovery.
7. Journaling and log-structured file systems. (See also: SF-Reliability)
8. Use-cases for fault-tolerance (databases, safety-critical). (See also: SF-Reliability)
9. Examples of OS mechanisms for detection, recovery, restart to implement fault tolerance, use of
these techniques for the OS’s own services. (See also: SF-Reliability)
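A minimal Python sketch of topic 5 (illustrative only; real file systems use far more robust codes, such as CRCs, but the detection idea is the same): an additive checksum that detects a simulated single-bit error.

    # An additive checksum over a block of bytes; real file systems use far
    # more robust codes (e.g., CRCs), but the detection idea is the same.
    def checksum(block: bytes) -> int:
        return sum(block) % 256

    data = bytearray(b"persistent data")
    stored = checksum(data)

    data[3] ^= 0x01                  # simulate a single-bit error on the medium
    print(checksum(data) == stored)  # False: the corruption is detected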
Non-Core:
5. Describe operating systems fault tolerance issues and mechanisms in detail.
Professional Dispositions
● Proactive: Students must anticipate the security and performance implications of how operating
systems components are used.
● Meticulous: Students must carefully analyze the implications of operating system mechanisms on
any project.
Mathematics Requirements
Required:
● MSF-Discrete
Course objectives: Students should understand the impact and implications of operating system
resource management in terms of performance and security. They should understand and implement
inter-process communication mechanisms safely. They should be able to differentiate between the use
and evaluation of open source and/or proprietary operating systems. They should understand
virtualization as a feature of safe modern operating system implementation.
Committee
● Qiao Xiang, Xiamen University, Xiamen, China
● Mikey Goldweber, Denison University, Granville, OH, USA
● Marcelo Pias, Federal University of Rio Grande (FURG), Rio Grande, RS, Brazil
● Avi Silberschatz, Yale University, New Haven, CT, USA
● Renzo Davoli, University of Bologna, Bologna, Italy
Parallel and Distributed Computing (PDC)
Preamble
Parallel and distributed programming arranges, coordinates, and controls multiple computations
occurring at the same time across different places. The ubiquity of parallelism and distribution is an
inevitable consequence of increasing numbers of gates in processors, processors in computers, and
computers everywhere; these may be used to improve performance compared to sequential programs,
while also coping with the intrinsic interconnectedness of the world and the possibility that some
components or connections fail or behave maliciously. Parallel and distributed programming removes
the restrictions of sequential programming that require computational steps to occur in a serial order in
a single place, revealing further distinctions, techniques, and analyses applying at each layer of
computing systems.
In most conventional usage, “parallel” programming focuses on establishing and coordinating multiple
activities that may occur at the same time, “distributed” programming focuses on establishing and
coordinating activities that may occur in different places, and “concurrent” programming focuses on
interactions of ongoing activities with each other and the environment. However, all three terms may
apply in most contexts. Parallelism generally implies some form of distribution because multiple
activities occurring without sequential ordering constraints happen in multiple physical places (unless
they rely on context-switching or quantum effects). Conversely, actions in different places need not
bear any particular sequential ordering with respect to each other in the absence of communication
constraints.
Parallel, distributed and concurrent programming techniques form the core of High Performance
Computing (HPC), distributed systems, and increasingly, nearly every computing application. The PDC
knowledge area has evolved from a diverse set of advanced topics into a central body of knowledge
and practice, permeating almost every other aspect of computing. Growth of the field has occurred
irregularly across different subfields of computing, sometimes with different goals, terminology, and
practices, masking the considerable overlap of basic ideas and skills that are the main focus of this
knowledge area. Nearly every problem with a sequential solution also admits parallel and/or distributed
solutions; additional problems and solutions arise only in the context of concurrency. Nearly every
application domain of parallel and distributed computing is a well-developed area of study and/or
engineering too large to enumerate.
Overview
This knowledge area is divided into five knowledge units, each with CS Core and KA Core topics that
extend but do not overlap CS Core coverage that appears in other knowledge areas. The five
knowledge units cover: the nature of parallel and distributed Programs and their execution;
Communication (via channels, memory, or shared data stores); Coordination among parallel
activities to achieve common outcomes; Evaluation with respect to specifications; and Algorithms
across multiple application domains.
CS Core topics span approaches to parallel and distributed computing, but restrict coverage to those
that apply to nearly all of them. Learning outcomes include developing small programs (in a choice of
several styles) with multiple activities and analyzing basic properties. The topics and hours do not
include coverage of particular languages, tools, frameworks, systems, and platforms needed as a basis
for implementing and evaluating concepts and skills. The topics also avoid reliance on specifics that
may vary widely (for example, GPU programming vs cloud container deployment scripts). Prerequisites
for CS Core coverage include:
● SDF-Fundamentals: programs, executions, specifications, implementations, variables, arrays,
sequential control flow, procedural abstraction and invocation, Input/Output.
● SF-Overview: Layered systems, state machines, reliability.
● AR-Assembly, AR-Memory: von Neumann architecture, memory hierarchy.
● MSF-Discrete: Discrete structures including directed graphs.
KA Core topics in each unit are of the form “one or more of the following”: à la carte topics extending the
associated core topics. Any selection of KA Core topics meeting the KA Core hour requirement
constitutes fulfillment of the KA Core. This structure permits variation in coverage depending on the
focus of any given course (see below for examples). Depth of coverage of any KA Core subtopic is
expected to vary according to course goals. For example, shared-memory coordination is a central
topic in multicore programming, but much less so in most heterogeneous systems, and conversely for
bulk data transfer. Similarly, fault tolerance is central to the design of distributed information systems,
but much less so in most data-parallel applications.
Core Hours
Knowledge Units CS Core KA Core
Programs 2 2
Communication 2 6
Coordination 2 6
Evaluation 1 3
Algorithms 2 9
Total 9 26
Knowledge Units
PDC-Programs: Programs
CS Core:
1. Parallelism
a. Declarative parallelism: Determining which actions may, or must not, be performed in
parallel, at the level of instructions, functions, closures, composite actions, sessions, tasks,
and services is the main idea underlying PDC algorithms; failing to do so is the main source
of errors. (See also: PDC-Algorithms)
b. Defining order: for example, using happens-before relations or series/parallel directed
acyclic graphs representing programs.
c. Independence: determining when ordering does not matter, in terms of commutativity,
dependencies, preconditions.
d. Ensuring ordering among otherwise parallel actions when necessary, including locking and
safe publication, and imposing communication (sending a message happens-before receiving
it); conversely, relaxing ordering when unnecessary.
2. Distribution
a. Defining places, as devices executing actions, including hardware components, remote
hosts, may also include external, uncontrolled devices, hosts, and users. (See also: AR-IO)
b. One device may time-slice or otherwise emulate multiple parallel actions on fewer
processors via scheduling and virtualization. (See also: OS-Scheduling)
c. Naming or identifying places (e.g., device IDs) and actions as parties (e.g., thread IDs).
d. Activities across places may communicate across media. (See also: PDC-Communication)
3. Starting activities
a. Options that enable actions to be performed (eventually) at places range from hardwiring to
configuration scripts; also establishing communication and resource management; these are
expressed differently across languages and contexts, usually relying on automated
provisioning and management by platforms. (See also: SF-Resources)
b. Procedural: Enabling multiple actions to start at a given program point; for example, starting
new threads, possibly scoping or otherwise organizing them in hierarchical groups.
c. Reactive: Enabling upon an event by installing an event handler; this offers less control of
when actions begin or end, and may apply even on uniprocessors.
d. Dependent: Enabling upon completion of others; for example, sequencing sets of parallel
actions (See also: PDC-Coordination)
e. Granularity: Execution cost of action bodies should outweigh the overhead of arranging
them.
4. Execution Properties
a. Nondeterministic execution of unordered actions.
b. Consistency: Ensuring agreement among parties about values and predicates when
necessary to avoid races, maintain safety and atomicity, or arrive at consensus.
c. Fault tolerance: Handling failures in parties or communication, including (Byzantine)
misbehavior due to untrusted parties and protocols, when necessary to maintain progress or
availability. (See also: SF-Reliability)
d. Tradeoffs are one focus of evaluation. (See also: PDC-Evaluation)
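A minimal Python sketch of procedural starting of activities (topic 3b) and the execution properties in topic 4 (illustrative only; the activity names are invented): the two prints are unordered and may interleave differently across runs, while join() imposes a happens-before edge.

    import threading

    # Two unordered activities: their prints may interleave differently
    # across runs (nondeterministic execution of unordered actions).
    def work(name: str) -> None:
        print(name, "running")

    t1 = threading.Thread(target=work, args=("A",))
    t2 = threading.Thread(target=work, args=("B",))
    t1.start()
    t2.start()

    # join() imposes ordering: all actions in t1 and t2 happen-before
    # the final print.
    t1.join()
    t2.join()
    print("both done")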
KA Core:
5. One or more of the following mappings and mechanisms across layered systems:
a. CPU data- and instruction-level- parallelism. (See also: AR-Organization)
b. SIMD and heterogeneous data parallelism. (See also: AR-Heterogeneity)
c. Multicore scheduled concurrency, tasks, actors. (See also: OS-Scheduling)
d. Clusters, clouds; elastic provisioning. (See also: SPD-Common)
e. Networked distributed systems. (See also: NC-Applications)
f. Emerging technologies such as quantum computing and molecular computing.
PDC-Communication: Communication
CS Core:
1. Media
a. Varieties: channels (message passing or I/O), shared memory, heterogeneous, data stores
b. Reliance on the availability and nature of underlying hardware, connectivity, and protocols;
language support, emulation. (See also: AR-IO)
2. Channels
a. Explicit (usually named) party-to-party communication media
b. APIs: Sockets, architectural, language-based, and toolkit constructs, such as Message
Passing Interface (MPI), and layered constructs such as Remote Procedure Call (RPC).
(See also: NC-Fundamentals)
c. I/O channel APIs
3. Memory
a. Shared memory architectures in which parties directly communicate only with memory at
given addresses, with extensions to heterogeneous memory supporting multiple memory
stores with explicit data transfer across them; for example, GPU local and shared memory,
Direct Memory Access (DMA).
b. Memory hierarchies: Multiple layers of sharing domains, scopes and caches; locality:
latency, false-sharing.
c. Consistency properties: Bitwise atomicity limits, coherence, local ordering.
4. Data Stores
a. Cooperatively maintained data structures implementing maps and related ADTs
b. Varieties: Owned, shared, sharded, replicated, immutable, versioned
KA Core:
5. One or more of the following properties and extensions:
a. Topologies: Unicast, Multicast, Mailboxes, Switches; Routing via hardware and software
interconnection networks.
b. Media concurrency properties: Ordering, consistency, idempotency, overlapping
communication with computation.
c. Media performance: Latency, bandwidth (throughput) contention (congestion),
responsiveness (liveness), reliability (error and drop rates), protocol-based progress (acks,
timeouts, mediation).
d. Media security properties: integrity, privacy, authentication, authorization. (See also: SEC-
Secure Coding)
e. Data formats: Marshaling, validation, encryption, compression.
f. Channel policies: Endpoints, sessions, buffering, saturation response (waiting vs dropping),
rate control.
g. Multiplexing and demultiplexing many relatively slow I/O devices or parties; completion-
based and scheduler-based techniques; async-await, select and polling APIs.
h. Formalization and analysis of channel communication; for example, CSP.
i. Applications of queuing theory to model and predict performance.
j. Memory models: sequential and release/acquire consistency.
k. Memory management; including reclamation of shared data; reference counts and
alternatives.
l. Bulk data placement and transfer; reducing message traffic and improving locality;
overlapping data transfer and computation; impact of data layout such as array-of-structs vs
struct-of-arrays.
m. Emulating shared memory: distributed shared memory, Remote Direct Memory Access
(RDMA).
n. Data store consistency: Atomicity, linearizability, transactionality, coherence, causal
ordering, conflict resolution, eventual consistency, blockchains.
o. Faults, partitioning, and partial failures; voting; protocols such as Paxos and Raft.
p. Design tradeoffs among consistency, availability, partition (fault) tolerance; impossibility of
meeting all at once.
q. Security and trust: Byzantine failures, proof of work and alternatives.
Illustrative Learning Outcomes
CS Core:
1. Explain the similarities and differences among: (1) party A sends a message on channel X with
contents 1, received by party B; (2) A sets shared variable X to 1, read by B; (3) A sets "X=1" in a
distributed shared map accessed by B.
KA Core:
2. Write a program that distributes different segments of a data set to multiple workers, and collects
results (for the simplest example, summing segments of an array; see the sketch after this list).
3. Write a parallel program that requests data from multiple sites, and summarizes them using some
form of reduction.
4. Compare the performance of buffered versus unbuffered versions of a producer-consumer
program.
5. Determine whether a given communication scheme provides sufficient security properties for a
given usage.
6. Give an example of an ordering of accesses among concurrent activities (e.g., program with a data
race) that is not sequentially consistent.
7. Give an example of a scenario in which blocking message sends can deadlock.
8. Describe at least one design technique for avoiding liveness failures in programs using multiple
locks.
9. Write a program that illustrates memory-access or message reordering.
10. Describe the relative merits of optimistic versus conservative concurrency control under different
rates of contention among updates.
11. Give an example of a scenario in which an attempted optimistic update may never complete.
12. Modify a concurrent system to use a more scalable, reliable, or available data store.
13. Using an existing platform supporting replicated data stores, write a program that maintains a key-
value mapping even when one or more hosts fail.
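A minimal Python sketch of outcome 2 (illustrative only; segment_sum and parallel_sum are invented names): segments of an array are distributed to worker processes, and the partial sums are collected and combined.

    from concurrent.futures import ProcessPoolExecutor

    def segment_sum(segment: list) -> int:
        return sum(segment)

    def parallel_sum(data: list, workers: int = 4) -> int:
        step = -(-len(data) // workers)   # ceiling division
        segments = [data[i:i + step] for i in range(0, len(data), step)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return sum(pool.map(segment_sum, segments))  # collect partials

    if __name__ == "__main__":            # required when using process pools
        print(parallel_sum(list(range(1_000_001))))      # 500000500000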
PDC-Coordination: Coordination
CS Core:
1. Dependencies
a. Initiation or progress of one activity may be dependent on other activities, so as to avoid
race conditions, ensure termination, or meet other requirements.
b. Ensuring progress by avoiding dependency cycles, using monotonic conditions, removing
inessential dependencies.
2. Control constructs and design patterns
a. Completion-based: Barriers, joins, including termination control.
b. Data-enabled: Queues, producer-consumer designs.
c. Condition-based: Polling, retrying, backoffs, helping, suspension, signaling, timeouts.
d. Reactive: enabling and triggering continuations.
3. Atomicity
a. Atomic instructions, enforced local access orderings.
b. Locks and mutual exclusion; lock granularity.
c. Using locks in a given language; maintaining liveness without introducing races.
d. Deadlock avoidance: Ordering, coarsening, randomized retries; backoffs, encapsulation via
lock managers.
e. Common errors: Failing to lock or unlock when necessary, holding locks while invoking
unknown operations.
f. Avoiding locks: replication, read-only, ownership, and non-blocking constructions.
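A minimal Python sketch of data-enabled coordination (topic 2b) using a bounded buffer (illustrative only; the buffer size and item count are arbitrary); the blocking put and get also illustrate the rate control discussed under PDC-Communication.

    import queue
    import threading

    SENTINEL = None
    buf = queue.Queue(maxsize=4)    # bounded buffer

    def producer() -> None:
        for i in range(10):
            buf.put(i)              # blocks while the buffer is full
        buf.put(SENTINEL)           # data-enabled termination signal

    def consumer() -> None:
        while True:
            item = buf.get()        # blocks while the buffer is empty
            if item is SENTINEL:
                break
            print("consumed", item)

    p = threading.Thread(target=producer)
    c = threading.Thread(target=consumer)
    p.start()
    c.start()
    p.join()
    c.join()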
KA Core:
4. One or more of the following properties and extensions:
a. Progress properties including lock-free, wait-free, fairness, priority scheduling; interactions
with consistency, reliability.
b. Performance with respect to contention, granularity, convoying, scaling.
c. Non-blocking data structures and algorithms.
d. Ownership and resource control.
e. Lock variants and alternatives: sequence locks, read-write locks; Read-Copy-Update (RCU),
reentrancy; tickets; controlling spinning versus blocking.
f. Transaction-based control: Optimistic and conservative.
g. Distributed locking: reliability.
h. Alternatives to barriers: Clocks; counters, virtual clocks; dataflow and continuations; futures
and RPC; consensus-based, gathering results with reducers and collectors.
i. Speculation, selection, cancellation; observability and security consequences.
j. Resource control using semaphores and condition variables.
k. Control flow: Scheduling computations, series-parallel loops with (possibly elected) leaders,
pipelines and streams, nested parallelism.
l. Exceptions and failures: Handlers, detection, timeouts, fault tolerance, voting.
KA Core:
3. Write a function that efficiently counts events such as sensor inputs or networking packet
receptions.
4. Write a filter/map/reduce program in multiple styles.
5. Write a program in which the termination of one set of parallel actions is followed by another.
6. Write a program that speculatively searches for a solution using multiple activities, terminating
the others when one is found (see the sketch after this list).
7. Write a program in which a numerical exception (such as divide by zero) in one activity causes
termination of others.
8. Write a program for multiple parties to agree upon the current time of day; discuss its limitations
compared to protocols such as the Network Time Protocol (NTP).
9. Write a service that creates a thread (or other procedural form of activation) to return a
requested web page to each new client.
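Outcome 6 might be approached as in this minimal Python sketch; the range partitioning and target value are invented for illustration. The first activity to succeed sets a shared event, and the other activities observe it and terminate early:

    import concurrent.futures
    import threading

    def search(lo, hi, target, found):
        """Scan a subrange; give up early if another worker already won."""
        for n in range(lo, hi):
            if found.is_set():
                return None          # another activity succeeded; terminate
            if n == target:          # stand-in for an expensive test
                found.set()          # signal the other workers to stop
                return n
        return None

    found = threading.Event()
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(search, i * 25_000, (i + 1) * 25_000, 60_001, found)
                   for i in range(4)]
        results = [f.result() for f in futures]
    print([r for r in results if r is not None])  # [60001]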
PDC-Evaluation: Evaluation
CS Core:
1. Safety and liveness requirements in terms of temporal logic constructs to express “always” and
“eventually”. (See also: FPL-Parallel)
2. Identifying, testing for, and repairing violations, including common forms of errors such as failure to
ensure necessary ordering (race errors), atomicity (including check-then-act errors), and
termination (livelock).
3. Performance requirements metrics for throughput, responsiveness, latency, availability, energy
consumption, scalability, resource usage, communication costs, waiting and rate control, fairness;
service level agreements. (See also: SF-Performance)
4. Performance impact of design and implementation choices, including granularity, overhead,
consensus costs, and energy consumption. (See also: SEP-Sustainability)
5. Estimating scalability limitations, for example using Amdahl's Law or the Universal Scalability Law
(see the worked example below). (See also: SF-Evaluation)
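Topic 5 can be made concrete with a small worked example. For a fixed problem size with serial fraction s, Amdahl's Law bounds speedup on p processors by 1 / (s + (1 - s) / p). A minimal Python sketch (the 5% serial fraction is arbitrary):

    def amdahl_speedup(serial_fraction, processors):
        """Upper bound on speedup for a fixed problem size (Amdahl's Law)."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

    # With 5% inherently serial work, speedup can never reach 20x:
    for p in (2, 8, 64, 1024):
        print(p, round(amdahl_speedup(0.05, p), 2))
    # 2 1.9, 8 5.93, 64 15.42, 1024 19.64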
KA Core:
6. One or more of the following methods and tools:
a. Extensions to formal sequential requirements such as linearizability.
b. Protocol, session, and transactional specifications.
c. Use of tools such as the Unified Modeling Language (UML), Temporal Logic of Actions (TLA), and
program logics.
d. Security analysis: safety and liveness in the presence of hostile or buggy behaviors by other
parties; required properties of communication mechanisms (for example lack of cross-layer
leakage), input screening, rate limiting. (See also: SEC-Foundations)
e. Static analysis applied to correctness, throughput, latency, resources, energy. (See also
SEP-Sustainability)
f. Directed Acyclic Graph (DAG) model analysis of algorithmic efficiency (work, span, critical
paths)
g. Testing and debugging; tools such as race detectors, fuzzers, lock dependency checkers,
unit/stress/torture tests, visualizations, continuous integration, continuous deployment, and
test generators.
h. Measuring and comparing throughput, overhead, waiting, contention, communication, data
movement, locality, resource usage, behavior in the presence of excessive numbers of
events, clients, or threads. (See also SF-Evaluation)
i. Application domain specific analyses and evaluation techniques.
Illustrative Learning Outcomes
CS Core:
3. Specify a set of invariants that must hold at each bulk-parallel step of a computation.
4. Write a test program that can reveal a data race error; for example, missing an update when two
activities both try to increment a variable (see the sketch after this list).
5. In a given context, explain the extent to which introducing parallelism in an otherwise sequential
program would be expected to improve throughput and/or reduce latency, and how it may impact
energy efficiency.
6. Show how scaling and efficiency change for sample problems without and with the assumption of
problem size changing with the number of processors; further explain whether and how scalability
would change under relaxations of sequential dependencies.
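Outcome 4 is the classic lost-update demonstration. In the minimal Python sketch below, "counter += 1" is a read-modify-write sequence rather than an atomic step, so concurrent increments can overwrite each other; depending on the interpreter and scheduler, larger counts or repeated runs may be needed to observe a loss:

    import threading

    counter = 0

    def increment(times):
        global counter
        for _ in range(times):
            counter += 1   # read-modify-write: not atomic, updates can be lost

    threads = [threading.Thread(target=increment, args=(1_000_000,)) for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(counter)  # often less than 2,000,000 when increments interleave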
KA Core:
7. Specify and measure behavior when a service is requested by unexpectedly many clients.
8. Identify and repair a performance problem due to sequential bottlenecks.
9. Empirically compare throughput of two implementations of a common design (perhaps using an
existing test harness framework).
10. Identify and repair a performance problem due to communication or data latency.
11. Identify and repair a performance problem due to resource management overhead.
12. Identify and repair a reliability or availability problem.
PDC-Algorithms: Algorithms
CS Core:
1. Expressing and implementing algorithms in given languages and frameworks, to initiate activities
(for example threads), use shared memory constructs, and channel, socket, and/or remote
procedure call APIs. (See also: FPL-Parallel).
a. Data parallel examples including map/reduce (see the sketch following the table below).
b. Using channel, socket, and/or RPC APIs in a given language, with program control for
sending (usually procedural) vs receiving (usually reactive or RPC-based).
c. Using locks, barriers, and/or synchronizers to maintain liveness without introducing races.
2. Survey of common application domains across multicore, reactive, data parallel, cluster, cloud,
open distributed systems and frameworks (with reference to the following table):
Data parallel    GPU, SIMD, accelerators, hybrid    Heterogeneous memory    Linear algebra, graphics, data analysis    Throughput, energy
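Topic 1.a (and learning outcome 4 under PDC-Coordination) can be written in several styles. A minimal Python sketch using only standard-library modules (the sum-of-even-squares task is invented for illustration) contrasts a sequential filter/map/reduce pipeline with a data-parallel map over worker processes:

    from functools import reduce
    from multiprocessing import Pool

    def square(x):
        return x * x

    if __name__ == "__main__":
        data = range(1, 10_001)

        # Sequential filter/map/reduce pipeline:
        total = reduce(lambda acc, v: acc + v,
                       map(square, filter(lambda x: x % 2 == 0, data)))

        # Data-parallel map across worker processes, then a sequential reduction:
        with Pool(4) as pool:
            evens = [x for x in data if x % 2 == 0]
            total_parallel = sum(pool.map(square, evens))

        print(total == total_parallel)  # True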
KA Core:
3. One or more of the following algorithmic domains. (See also: AL-Strategies):
a. Linear algebra: Vector and matrix operations, numerical precision/stability, applications in
data analytics and machine learning
b. Data processing: sorting, searching and retrieval, concurrent data structures
c. Graphs, search, and combinatorics: Marking, edge-parallelization, bounding, speculation,
network-based analytics
d. Modeling and simulation: differential equations; randomization, N-body problems, genetic
algorithms
e. Computational logic: satisfiability (SAT), concurrent logic programming
f. Graphics and computational geometry: Transforms, rendering, ray-tracing
g. Resource management: Allocating, placing, recycling and scheduling processors, memory,
channels, and hosts; exclusive vs shared resources; static, dynamic and elastic algorithms;
Real-time constraints; Batching, prioritization, partitioning; decentralization via work-stealing
and related techniques;
h. Services: Implementing web APIs, electronic currency, transaction systems, multiplayer
games.
Illustrative Learning Outcomes
KA Core:
7. Design, implement, analyze, and evaluate a component or application for X operating in a given
context, where X is in one of the listed domains; for example, a genetic algorithm for factory floor
design.
8. Critique the design and implementation of an existing component or application, or one developed
by classmates.
9. Compare the performance and energy efficiency of multiple implementations of a similar design; for
example, multicore versus clustered versus GPU.
Professional Dispositions
● Meticulous: Students’ attention to detail is essential when applying constructs with non-obvious
correctness conditions.
● Persistent: Students must be prepared to try alternative approaches when solutions are not self-
evident.
Mathematics Requirements
Required:
● MSF-Discrete – Logic, discrete structures including directed graphs.
Desired:
● MSF-Linear
● MSF-Calculus – Differential equations
The CS Core requirements need not be provided by a single course. They may be included across
courses primarily devoted to software development, programming languages, systems, data
management, networking, computer architecture, and/or algorithms.
Alternatively, the CS Core provides a basis for courses focusing on parallel and/or distributed
computing. At one extreme, it is possible to offer a single broadly constructed course covering all PDC
KA Core topics to varying depths. At the other extreme, it is possible to infuse PDC KA Core coverage
across the curriculum with courses that cover parallel and distributed approaches alongside sequential
ones for nearly every topic in computing. More conventional choices include courses that focus on one
or a few categories (such as multicore or cluster), and algorithmic domains (such as linear algebra, or
resource management). Such courses may go into further depth than listed in one or more KUs, and
include additional software development experience, but include only CS-Core-level coverage of other
topics.
As an example, a course mainly focusing on multicores could extend CS Core topics as follows:
1. Programs: KA Core on threads, tasks, instruction-level parallelism
2. Communication: KA Core on multicore architectures, memory, concurrent data stores
3. Coordination: KA Core on blocking and non-blocking synchronization, speculation, cancellation,
futures, and divide-and-conquer data parallelism
4. Evaluation: KA Core on performance analysis
5. Algorithms: project-based KA Core coverage of data processing and resource management.
More extensive examples and guidance for courses focusing on HPC are provided by the NSF/IEEE-
TCPP Curriculum Initiative on Parallel and Distributed Computing [12].
Committee
Chair: Doug Lea, State University of New York at Oswego, Oswego, NY, USA
Members:
● Sherif Aly, American University of Cairo, Cairo, Egypt
● Michael Oudshoorn, High Point University, High Point, NC, USA
● Qiao Xiang, Xiamen University, Xiamen, China
● Dan Grossman, University of Washington, Seattle, WA, USA
● Sebastian Burckhardt, Microsoft Research, Redmond WA, USA
● Vivek Sarkar, Georgia Tech, Atlanta, GA, USA
● Maurice Herlihy, Brown University, Providence, RI, USA
● Sheikh Ghafoor, Tennessee Tech, Cookeville, TN, USA
● Chip Weems, University of Massachusetts, Amherst, MA, USA
Contributors:
● Paul McKenney, Meta, Beaverton, OR, USA
● Peter Buhr, University of Waterloo, Waterloo, Ontario, Canada
Software Development Fundamentals (SDF)
Preamble
Fluency in the process of software development is fundamental to the study of computer science. In
order to use computers to solve problems most effectively, students must be competent at reading and
writing programs. Beyond programming skills, however, they must be able to select and use
appropriate data structures and algorithms, and use modern development and testing tools.
The SDF knowledge area brings together fundamental concepts and skills related to software
development, focusing on concepts and skills that should be taught early in a computer science
program, typically in the first year. This includes fundamental programming concepts and their effective
use in writing programs, use of fundamental data structures which may be provided by the
programming language, basics of programming practices for writing good quality programs, reading
and understanding programs, and some understanding of the impact of algorithms on the performance
of the programs. The 43 hours of material in this knowledge area may be augmented with core material
from other knowledge areas as a student progresses to mid- and upper-level courses.
This knowledge area assumes a contemporary programming language with built-in support for common
data types including associative data types like dictionaries/maps as the vehicle for introducing
students to programming (e.g. Python, Java). However, this is not to discourage the use of older or
lower-level languages for SDF — the knowledge units below can be suitably adapted for the actual
language used.
The emergence of generative AI and Large Language Models (LLMs), which can generate programs
for many programming tasks, will undoubtedly affect the programming profession and consequently the
teaching of many CS topics. However, to be able to effectively use generative AI in programming tasks,
a programmer must have a good understanding of programs, and hence must still learn the foundations
of programming and develop basic programming skills - which is the aim of SDF. Consequently, we feel
that the desired outcomes for SDF should remain the same, though different instructors may now give
more emphasis to program understanding, documenting, specifications, analysis, and testing. (This is
similar to teaching students multiplication tables, addition, etc., even though calculators can do all
this).
Overview
This Knowledge Area has five Knowledge Units. These are:
1. SDF-Fundamentals: Fundamental Programming Concepts and Practices: This knowledge unit
aims to develop core concepts and skills relating to fundamental programming
language constructs as well as modularity constructs. It also aims to familiarize students with
the concept of common libraries and frameworks, including those to facilitate API-based access
to resources.
2. SDF-Data-Structures: Fundamental Data Structures: This knowledge unit aims to develop core
concepts relating to Data Structures and associated operations. Students should understand the
important data structures available in the programming language or as libraries, and how to use
them effectively, including choosing appropriate data structures while designing solutions for a
given problem.
3. SDF-Algorithms: Algorithms: This knowledge unit aims to develop the foundations of
algorithms and their analysis. The KU should also empower students in selecting suitable
algorithms for building modest-complexity applications.
4. SDF-Practices: Software Development Practices: This knowledge unit develops the core
concepts relating to modern software development practices. It aims to develop student
understanding and basic competencies in program testing, enhancing the readability of
programs, and using modern methods and tools including some general-purpose IDE.
5. SDF-SEP: Society, Ethics, and the Profession: This knowledge unit aims to develop an initial
understanding of some of the ethical issues related to programming, professional values
programmers need to have, and the responsibility to society that programmers have. This
knowledge unit is a part of the SEP Knowledge Area.
Core Hours
Knowledge Units                CS Core
Algorithms                     3 + 3 (AL)
Total                          43
Knowledge Units
SDF-Fundamentals: Fundamental Programming Concepts and Practices
CS Core:
1. Basic concepts such as variables, primitive data types, expressions and their evaluation.
2. How imperative programs work: state and state transitions on execution of statements, flow of
control.
3. Basic constructs such as assignment statements, conditional and iterative statements, basic
I/O.
4. Key modularity constructs such as functions (and methods and classes, if supported in the
language) and related concepts like parameter passing, scope, abstraction, data encapsulation.
(See also: FPL-OOP)
5. Input and output using files and APIs.
6. Structured data types available in the chosen programming language like sequences (e.g.,
arrays, lists), associative containers (e.g., dictionaries, maps), others (e.g., sets, tuples) and
when and how to use them. (See also: AL-Foundational)
7. Libraries and frameworks provided by the language (when/where applicable).
8. Recursion.
9. Dealing with runtime errors in programs (e.g., exception handling).
10. Basic concepts of programming errors, testing, and debugging. (See also: SE-Construction)
(See also: SEC-Coding)
11. Documenting/commenting code at the program and module level. (See also: SE-Construction)
12. Develop a security mindset. (See also: SEC-Foundations)
Illustrative Learning Outcomes
CS Core:
In these learning outcomes, the term "develop" means "design, write, test and debug".
1. Develop programs that use the fundamental programming constructs: assignment and
expressions, basic I/O, conditional and iterative statements.
2. Develop programs using functions with parameter passing.
3. Develop programs that effectively use the different structured data types provided in the
language like arrays/lists, dictionaries, and sets.
4. Develop programs that use file I/O to provide data persistence across multiple executions.
5. Develop programs that use language-provided libraries and frameworks (where applicable).
6. Develop programs that use APIs to access or update data (e.g., from the web).
7. Develop programs that create simple classes and instantiate objects of those classes (if
supported by the language).
8. Explain the concept of recursion, and identify when and how to use it effectively.
9. Develop recursive functions (see the sketch after this list).
10. Develop programs that can handle runtime errors.
11. Read a given program and explain what it does.
12. Write comments for a program or a module specifying what it does.
13. Trace the flow of control during the execution of a program.
14. Use appropriate terminology to identify elements of a program (e.g., identifier, operator,
operand).
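Outcomes 8-10 can be combined in one small program. A minimal Python sketch using the standard factorial example: the recursion has a base case and reduces toward it, and the runtime error is raised and handled rather than crashing the program:

    def factorial(n):
        """Recursive definition: a base case plus reduction toward it."""
        if n < 0:
            raise ValueError("factorial is undefined for negative numbers")
        if n <= 1:                      # base case
            return 1
        return n * factorial(n - 1)     # recursive case on a smaller input

    try:
        print(factorial(5))    # 120
        print(factorial(-3))   # raises ValueError
    except ValueError as err:  # handle the runtime error gracefully
        print("error:", err)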
SDF-Data-Structures: Fundamental Data Structures
CS Core: (See also: AL-Foundational)
1. Standard abstract data types such as lists, stacks, queues, sets, and maps/dictionaries, and
operations on them.
2. Selecting and using appropriate data structures.
3. Performance implications of choice of data structure(s).
4. Strings and string processing.
Illustrative Learning Outcomes
CS Core:
1. Write programs that use each of the key abstract data types provided in the language (e.g.,
arrays, tuples/records/structs, lists, stacks, queues, and associative data types like sets,
dictionaries/maps).
2. Select the appropriate data structure for a given problem.
3. Explain how the performance of a program may change when using different data structures or
operations (see the sketch after this list).
4. Write programs that work with text by using string processing capabilities provided by the
language.
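Outcome 3 can be demonstrated empirically. A minimal Python sketch (the sizes and probe value are arbitrary) contrasts membership tests on a list, which scan linearly, with the same tests on a set, which hash:

    import timeit

    items_list = list(range(100_000))
    items_set = set(items_list)
    probe = 99_999  # worst case for the list: the last element

    t_list = timeit.timeit(lambda: probe in items_list, number=1_000)
    t_set = timeit.timeit(lambda: probe in items_set, number=1_000)
    print(f"list: {t_list:.4f}s  set: {t_set:.6f}s")  # the set is typically far faster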
SDF-Algorithms: Algorithms
CS Core: (See also: AL-Foundational, AL-Complexity)
1. Concept of algorithm and notion of algorithm efficiency.
2. Some common algorithms (e.g., sorting, searching, tree traversal, graph traversal).
3. Impact of algorithms on time-space efficiency of programs.
Illustrative Learning Outcomes
CS Core:
1. Explain the role of algorithms for writing programs.
2. Demonstrate how a problem may be solved by different algorithms, each with different
properties (see the sketch after this list).
3. Explain some common algorithms (e.g., sorting, searching, tree traversal, graph traversal).
4. Explain the impact on space/time performance of some algorithms.
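Outcome 2 can be made concrete by solving one problem with two algorithms of different complexity. A minimal Python sketch; binary search assumes its input is already sorted:

    def linear_search(seq, target):          # O(n): inspects items one by one
        for i, value in enumerate(seq):
            if value == target:
                return i
        return -1

    def binary_search(sorted_seq, target):   # O(log n): halves the range each step
        lo, hi = 0, len(sorted_seq) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_seq[mid] == target:
                return mid
            if sorted_seq[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    data = list(range(0, 1000, 2))           # sorted even numbers
    print(linear_search(data, 500), binary_search(data, 500))  # 250 250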
SDF-Practices: Software Development Practices
Illustrative Learning Outcomes
CS Core:
1. Develop tests for modules, and apply a variety of strategies to design test cases (see the sketch
after this list).
2. Explain some limitations of testing programs.
3. Build, execute and debug programs using a modern IDE and associated tools such as visual
debuggers.
4. Apply basic programming style guidelines to aid readability of programs such as comments,
indentation, proper naming of variables, etc.
5. Write specifications of a module as a module comment describing its functionality.
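Outcome 1 might look like the following minimal sketch using Python's built-in unittest module; the median function under test is invented for illustration, and the cases cover typical and boundary inputs:

    import unittest

    def median(values):
        """Return the median of a non-empty list of numbers."""
        if not values:
            raise ValueError("median of empty list")
        ordered = sorted(values)
        mid = len(ordered) // 2
        if len(ordered) % 2:
            return ordered[mid]
        return (ordered[mid - 1] + ordered[mid]) / 2

    class MedianTests(unittest.TestCase):
        def test_odd_length(self):
            self.assertEqual(median([3, 1, 2]), 2)

        def test_even_length(self):
            self.assertEqual(median([4, 1, 3, 2]), 2.5)

        def test_empty_input_rejected(self):    # boundary/error case
            with self.assertRaises(ValueError):
                median([])

    if __name__ == "__main__":
        unittest.main()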
Professional Dispositions
● Self-Directed: Students must seek out solutions to issues on their own (e.g., using technical
forums, FAQs, discussions). Resolving issues is an important part of becoming proficient in
programming.
● Experimental: Students must experiment with language features to understand them and to
quickly prototype solutions. This helps in learning about programming language features.
● Technical curiosity: Students must develop interest in understanding how programs are
executed, how programs and data are stored in memory, etc. This will help build better mental
models of the underlying execution system on which programs run.
● Adaptable: Students must be willing to learn and use different tools and technologies that
facilitate software development. Tools are commonly used while programming and new tools
often emerge - using tools effectively and learning the use of new tools will help.
● Persistent: Students must continue efforts until, for example, a bug is identified, a program is
made robust and handles all situations, etc. This will help as programming requires effort and
ability to persevere till a program works satisfactorily.
● Meticulous: Students must pay attention to detail and use orderly processes while
programming. The underlying machine is unforgiving and there is no room for even small errors
in the programs as they can cause major failures.
Mathematics Requirements
As SDF focuses on the first year and is foundational, it assumes only basic mathematical knowledge
that students acquire in school, in particular Sets, Relations, Functions, and Logic. (See also: MSF-
Discrete)
The SDF KA will generally be covered in introductory courses, often called CS1 and CS2. How much of
the SDF KA can be covered in CS1 and how much is to be left for CS2 is likely to depend on the choice
of programming language for CS1. For languages like Python or Java, CS1 can cover all the
SDF-Fundamentals and SDF-Practices knowledge units, and some of the SDF-Data-Structures
knowledge unit. It is desirable that these be further strengthened in CS2. The topics under
SDF-Algorithms and some topics under SDF-Data-Structures can be covered in CS2. In case CS1
uses a language with fewer built-in data structures, then much of SDF-Data-Structures and some
aspects of SDF-Fundamentals may also
need to be covered in CS2. With the former approach, the introductory course in programming can
include the following:
1. SDF-Fundamentals (20 hours)
2. SDF-Data-Structures (12 hours)
3. SDF-Algorithms (6 hours)
4. SDF-Practices (5 hours)
5. SDF-SEP
Prerequisites: High school mathematics in particular Sets, Relations, Functions, and Logic. (See also:
MSF-Discrete)
Course objectives: Students should be able to:
● Design, code, test, and debug a modest sized program that effectively uses functional
abstraction.
● Select and use the appropriate language provided data structure for a given problem (like:
arrays, tuples/records/structs, lists, stacks, queues, and associative data types like sets,
dictionaries/maps.)
● Design, code, test, and debug a modest-sized object-oriented program using classes and
objects.
● Design, code, test, and debug a modest-sized program that uses language provided libraries
and frameworks (including accessing data from the web through APIs).
● Read and explain given code including tracing the flow of control during execution.
● Write specifications of a program or a module in natural language explaining what it does.
● Build, execute and debug programs using a modern IDE and associated tools such as visual
debuggers.
● Explain the key concepts relating to programming like parameter passing, recursion, runtime
exceptions and exception handling.
Committee
Software Engineering (SE)
Preamble
As far back as the early 1970s, British computer scientist Brian Randell allegedly said, “Software
engineering is the multi-person construction of multi-version programs.” This is an essential insight:
while programming is the skill that governs our ability to write a program, software engineering is
distinct in two dimensions: time and people.
First, a software engineering project is a team endeavor; being a solitary programming expert is
insufficient. Skilled software engineers must demonstrate expertise in communication and collaboration.
Programming may be an individual activity, but software engineering is a collaborative one, deeply tied
to issues of professionalism, teamwork, and communication.
Second, a software engineering project is usually “multi-version.” It has an expected lifespan; it needs
to function properly for months, years, or decades. Features may be added or removed to meet product
requirements. The engineering team itself will likely change. The technological context will shift, as our
computing platforms evolve, programming languages change, dependencies upgrade, etc. This
exposure to matters of time and change is novel when compared to a programming project: it isn’t
enough to build a thing that works, instead it must work and stay working. Many of the most challenging
topics in tech share “time will lead to change” as a root cause: backward compatibility, version skew,
dependency management, schema changes, protocol evolution.
Software engineering presents a particularly difficult challenge for learning in an academic setting.
Given that the major differences between programming and software engineering are time and
teamwork, it is hard to generate lessons that require successful teamwork and that faithfully present the
challenges of time. Additionally, some topics in software engineering will be more authentic and more
relevant if and when our learners experience collaborative and long-term software engineering projects
in vivo rather than in the classroom. Regardless of whether that happens as an internship, involvement
in an open source project, or full-time engineering role, a month of full-time hands-on experience has
more available hours than the average software engineering course.
Thus, a software engineering curriculum must focus on concepts needed by a majority of new-grad
hires, and that either are novel for those who are trained primarily as programmers, or that are abstract
concepts that may not get explicitly stated/shared on the job. Such topics include, but are not limited to:
● Testing
● Teamwork, collaboration
● Communication
● Design
● Maintenance and evolution
● Software engineering tools
Some such material is reasonably suited to a standard lecture or lecture+lab course. Discussing
theoretical underpinnings of version control systems, or branching strategies in such systems, can be
an effective way to familiarize students with those ideas. Similarly, a theoretical discussion can highlight
the difference between static and dynamic analysis tools, or may motivate discussion of diamond
dependency problems in dependency networks.
On the other hand, many of the fundamental topics of software engineering are best experienced in a
hands-on fashion. Historically, project-oriented courses have been a common vehicle for such learning.
We believe that such experience is valuable but also bears some interesting risks: students may form
erroneous notions about the difficulty/complexity of collaboration if their only exposure is a single
project with teams formed of other novice software engineers. It falls to instructors to decide on the right
balance between theoretical material and hands-on projects - neither is a perfect vehicle for this
challenging material. We strongly encourage instructors of project courses to aim for iteration and fast
feedback - a few simple tasks repeated, as in an Agile-structured project, are better than singular high-
friction introductions to many types of tasks. Projects with real-world industry partners and clients are
also particularly encouraged. If long-running project courses are not an option, anything that can
expose learners to the collaborative and long-term aspects of software engineering is valuable: adding
features to an existing codebase, collaborating on distinct parts of a larger whole, pairing up to write an
encoder and decoder, etc.
All evidence suggests that the role of software in our society will continue to grow for the foreseeable
future. Additionally, the era of “two programmers in a garage” seems to have drawn to a close. Most
important software these days is a team effort, building on existing code and leveraging existing
functionality. The study of software engineering skills is a deeply important counterpoint to the everyday
experience of computing students - we must impress on them the reality that few software projects are
managed by writing from scratch as a solo endeavor. Communication, teamwork, planning, testing, and
tooling are far more important as our students move on from the classroom and make their mark on the
wider world.
Although most CS graduates will go on to an industry position that requires this material, the CS Core
topics presented here are of value regardless of whether graduates go on to industry or academia.
Overview
1. SE-Teamwork: Because of the nature of learning programming, most students in introductory
SE have little or no exposure to the collaborative nature of SE. Practice (for instance in project
work) may help, but lecture and discussion time spent on the value of clear, effective, and
efficient communication and collaboration is essential for Software Engineering.
2. SE-Tools: Industry reliance on SE tools has exploded in the past generation, with version
control becoming ubiquitous, testing frameworks growing in popularity, increased reliance on
static and dynamic analysis in practice, and near-ubiquitous use of continuous integration
systems. Increasingly powerful IDEs provide code searching and indexing capabilities, as well
as small scale refactoring tools and integration with other SE tools. An understanding of the
nature of these tools is broadly valuable - especially version control systems.
3. SE-Requirements: Knowing how to build something is of little help if we do not know what to
build. Product Requirements (aka Requirements Engineering, Product Design, Product
Requirements solicitation, Product Requirements Documents, etc.) introduces students to the
processes surrounding the specification of the broad requirements governing development of a
new product or feature.
4. SE-Design: While Product Requirements focuses on the user-facing functionality of a software
system, Software Design focuses on the engineer-facing design of internal software
components. This encompasses large design concerns such as software architecture, as well
as small-scale design choices like API design.
5. SE-Construction: Software Construction focuses on practices that influence the direct
production of software: use of tests, test driven development, coding style. More advanced
topics extend into secure coding, dependency injection, work prioritization, etc.
6. SE-Validation: Software Verification and Validation focuses on how to improve the value of
testing - understand the role of testing, failure modes, and differences between good tests and
poor ones.
7. SE-Refactoring: Refactoring and Code Evolution focuses on refactoring and maintenance
strategies, incorporating code health, use of tools, and backwards compatibility considerations.
8. SE-Reliability: Software Reliability aims to improve understanding of and attention to error
cases, failure modes, redundancy, and reasoning about fault tolerance.
9. SE-Formal: Formal Methods provides mathematically rigorous mechanisms to apply to
software, from specification to verification. (Prerequisites: Substantial dependence on core
material from the Discrete Structures area, particularly knowledge units DS/Basic Logic and
DS/Proof Techniques.)
Core Hours
Knowledge Units                CS Core          KA Core
Teamwork                       2 + 3 (SEP)      2
Software Reliability                            2
Formal Methods
Total                          6                21
Note: We have specifically highlighted Teamwork and Product Requirements as two Knowledge Units where SEP
lessons are most directly obvious and applicable. Issues like impact on society, interaction with others, and social
power disparities are pervasive in Software Engineering and should be woven into as many practical lessons as
possible.
Knowledge Units
SE-Teamwork: Teamwork
CS Core:
1. Effective communication, including oral and written, as well as formal (email, docs, comments,
presentations) and informal (team chat, meetings) (See also: SEP-Communication)
2. Common causes of team conflict, and approaches for conflict resolution
3. Cooperative programming
a. Pair programming or Swarming
b. Code review
c. Collaboration through version control
4. Roles and responsibilities in a software team (See also: SEP-Professional-Ethics)
a. Advantages of teamwork
b. Risks and complexity of such collaboration
5. Team processes
a. Responsibilities for tasks, effort estimation, meeting structure, work schedule
6. Importance of team diversity and inclusivity (See also: SEP-Communication)
KA Core:
7. Interfacing with stakeholders, as a team
a. Management & other non-technical teams
b. Customers
c. Users
8. Risks associated with physical, distributed, hybrid and virtual teams
a. Including communication, perception, structure, points of failure, mitigation and recovery,
etc.
Illustrative Learning Outcomes
KA Core:
9. Reference the importance of, and strategies to, as a team, interface with stakeholders outside the
team on both technical and non-technical levels.
10. Enumerate the risks associated with physical, distributed, hybrid and virtual teams and possible
points of failure and how to mitigate against and recover/learn from failures.
SE-Tools: Software Engineering Tools
Illustrative Learning Outcomes
KA Core:
4. Describe how available static and dynamic test tools can be integrated into the software
development environment.
5. Understand the use of CI systems as a ground-truth for the state of the team’s shared code (build
and test success).
6. Describe the issues that are important in selecting a set of tools for the development of a particular
software system, including tools for requirements tracking, design modeling, implementation, build
automation, and testing.
7. Demonstrate the capability to use software tools in support of the development of a software
product of medium size.
SE-Requirements: Product Requirements
Non-Core:
7. Prototyping
a. A tool for both eliciting and validating/confirming requirements
8. Product evolution
a. When requirements change, how to understand what effect that has and what changes need to
be made
9. Effort estimation
a. Learning techniques for better estimating the effort required to complete a task
b. Practicing estimation and comparing to how long tasks actually take
c. Effort estimation is quite difficult, so students are likely to be way off in many cases, but seeing
the process play out with their own work is valuable
SE-Design: Software Design
c. Identifying component boundaries and dependencies
3. Programming in the large vs programming in the small (See also: SF-Reliability)
4. Code smells and other indications of code quality, distinct from correctness. (See also:
SEC-Engineering)
KA Core:
5. API design principles
a. Consistency
i. Consistent APIs are easier to learn and less error-prone
ii. Consistency is both internal (between different portions of the API) and external (following
common API patterns)
b. Composability
c. Documenting contracts (see the sketch after this list)
i. API operations should describe their effect on the system, but not generally their
implementation
ii. Preconditions, postconditions, and invariants
d. Expandability
e. Error reporting
i. Errors should be clear, predictable, and actionable
ii. Input that does not match the contract should produce an error
iii. Errors that can be reliably managed without reporting should be managed
6. Identifying and codifying data invariants and time invariants
7. Structural and behavioral models of software designs
8. Data design (See also: DM-Modeling)
a. Data structures
b. Storage systems
9. Requirement traceability
a. Understanding which requirements are satisfied by a design
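The contract-documentation and error-reporting principles above (5.c and 5.e) might look like the following minimal Python sketch; the seat-reservation API is invented for illustration. The docstring states the contract, and inputs that violate it produce clear, actionable errors:

    class ReservationError(Exception):
        """Raised when a reservation request violates the API contract."""

    def reserve_seats(available, requested):
        """Reserve seats from a pool.

        Precondition:  0 < requested <= available.
        Postcondition: the returned pool size equals available - requested.
        Invariant:     the pool size is never negative.
        """
        if requested <= 0:
            raise ReservationError("requested must be positive")   # actionable
        if requested > available:
            raise ReservationError("not enough seats available")   # contract violation
        return available - requested

    print(reserve_seats(10, 3))  # 7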
Non-Core:
10. Design modeling, for instance with class diagrams, entity relationship diagrams, or sequence
diagrams
11. Measurement and analysis of design quality
12. Principles of secure design and coding (See also: SEC-Engineering)
a. Principle of least privilege
b. Principle of fail-safe defaults
c. Principle of psychological acceptability
13. Evaluating design tradeoffs (e.g., efficiency vs reliability, security vs usability)
Illustrative Learning Outcomes
CS Core:
3. Adapt a flawed system design to better follow principles such as separation of concerns or
information hiding.
4. Identify the dependencies among a set of software components in an architectural design.
KA Core:
5. Design an API for a single component of a large software system, including identifying and
documenting each operation’s invariants, contract, and error conditions.
6. Evaluate an API description in terms of consistency, composability, and expandability.
7. Expand an existing design to include a new piece of functionality.
8. Design a set of data structures to implement a provided API surface.
9. Identify which requirements are satisfied by a provided software design.
Non-Core:
10. Translate a natural language software design into class diagrams.
11. Adapt a flawed system design to better follow the principles of least privilege and fail-safe defaults.
12. Contrast two software designs across different qualities, such as efficiency or usability.
SE-Construction: Software Construction
a. Defensive coding practices
b. Secure coding practices and principles
c. Using exception handling mechanisms to make programs more robust, fault-tolerant
5. Debugging (See also: SDF-Practices)
6. Logging (see the sketch after this list)
7. Use of libraries and frameworks developed by others (See also: SDF-Practices)
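Topics 4-6 combine naturally in a single fragment. A minimal Python sketch of defensive input validation with logging; the order-quantity task and its bounds are invented for illustration:

    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("orders")

    def parse_quantity(raw):
        """Defensively convert untrusted text to a bounded positive integer."""
        try:
            quantity = int(raw)
        except (TypeError, ValueError):
            log.warning("rejected non-numeric quantity: %r", raw)
            return None
        if not 1 <= quantity <= 1000:       # fail-safe bound on accepted input
            log.warning("rejected out-of-range quantity: %d", quantity)
            return None
        return quantity

    print(parse_quantity("42"), parse_quantity("-5"), parse_quantity("lots"))
    # 42 None None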
Non-Core:
8. Larger-scale testing (see the sketch after this list)
a. Test doubles (stubs, mocks, fakes)
b. Dependency injection
9. Work sequencing, including dependency identification, milestones, and risk retirement
a. Dependency identification: Identifying the dependencies between different tasks
b. Milestones: A collection of tasks that serve as a marker of progress when completed. Ideally,
the milestone encompasses a useful unit of functionality.
c. Risk retirement: Identifying what elements of a project are risky and prioritizing completing tasks
that address those risks
10. Potential security problems in programs (See also: SEC-Coding)
a. Buffer and other types of overflows
b. Race conditions
c. Improper initialization, including choice of privileges
d. Input validation
11. Documentation (autogenerated)
12. Development context: “green field” vs existing code base
a. Change impact analysis
b. Change actualization
13. Release management
14. DevOps practices
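Topic 8 (test doubles and dependency injection) can be sketched in a few lines of Python; the checkout and gateway classes are invented for illustration. Because the dependency is passed in rather than constructed internally, a test can substitute a fake that records calls:

    class Checkout:
        def __init__(self, gateway):
            self.gateway = gateway      # dependency injected, not constructed here

        def purchase(self, amount):
            return self.gateway.charge(amount)

    class FakeGateway:
        """Test double: records calls instead of contacting a real service."""
        def __init__(self):
            self.charged = []

        def charge(self, amount):
            self.charged.append(amount)
            return True

    fake = FakeGateway()
    assert Checkout(fake).purchase(25) is True
    assert fake.charged == [25]         # the fake lets the test observe behavior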
Illustrative Learning Outcomes
KA Core:
3. Describe techniques, coding idioms and mechanisms for implementing designs to achieve desired
properties such as reliability, efficiency, and robustness.
4. Write robust code using exception handling mechanisms.
5. Describe secure coding and defensive coding practices.
6. Select and use a defined coding standard in a small software project.
7. Compare and contrast integration strategies including top-down, bottom-up, and sandwich
integration.
8. Describe the process of analyzing and implementing changes to code base developed for a specific
project.
9. Describe the process of analyzing and implementing changes to a large existing code base.
Non-Core:
10. Rewrite a simple program to remove common vulnerabilities, such as buffer overflows, integer
overflows and race conditions.
11. Write a software component that performs some non-trivial task and is resilient to input and run-
time errors.
SE-Validation: Software Verification and Validation
KA Core:
6. Test planning and generation
a. Test case generation, from formal models, specifications, etc.
b. Test coverage
i. Test matrices
ii. Code coverage (how much of the code is tested)
iii. Environment coverage (how many hardware architectures, OSes, browsers, etc. are tested)
c. Test data and inputs
7. Test development
a. Test-driven development
b. Object oriented testing, mocking, and dependency injection
c. Black-box and white-box testing techniques
d. Test tooling, including code coverage, static analysis, and fuzzing
8. Verification and validation in the development cycle
a. Code reviews
b. Test automation, including automation of tooling
c. Pre-commit and post-commit testing
d. Trade-offs between test coverage and throughput/latency of testing
e. Defect tracking and prioritization
i. Reproducibility of reported defects
9. Domain specific verification and validation challenges
a. Performance testing and benchmarking
b. Asynchrony, parallelism, and concurrency
c. Safety-critical
d. Numeric
Non-Core:
10. Verification and validation tooling and automation
a. Static analysis
b. Code coverage
c. Fuzzing
d. Dynamic analysis and fault containment (sanitizers, etc.)
e. Fault logging and fault tracking
11. Test planning and generation
a. Fault estimation and testing termination including defect seeding
b. Use of random and pseudo random numbers in testing
12. Performance testing and benchmarking
a. Throughput and latency
b. Degradation under load (stress testing, FIFO vs LIFO handling of requests)
c. Speedup and scaling
i. Amdahl's law
ii. Gustafson's law
iii. Strong and weak scaling
d. Identifying and measuring figures of merit
e. Common performance bottlenecks
i. Compute-bound
ii. Memory-bandwidth bound
iii. Latency-bound
f. Statistical methods and best practices for benchmarking (see the sketch after this list)
i. Estimation of uncertainty
ii. Confidence intervals
g. Analysis and presentation (graphs, etc.)
h. Timing techniques
13. Testing asynchronous, parallel, and concurrent systems
14. Verification and validation of non-code artifacts (documentation, training materials)
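The statistical practices in topic 12.f can be sketched compactly. A minimal Python sketch that repeats a measurement and reports a rough 95% confidence interval under a normal approximation; the workload is arbitrary:

    import statistics
    import timeit

    def workload():
        return sum(i * i for i in range(10_000))

    # Repeat the measurement to estimate uncertainty, not just a single time.
    samples = timeit.repeat(workload, number=100, repeat=10)
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    half_width = 1.96 * stdev / (len(samples) ** 0.5)  # normal approximation
    print(f"{mean:.4f}s +/- {half_width:.4f}s per 100 runs")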
Illustrative Learning Outcomes
CS Core:
2. Distinguish between program validation and verification.
3. Describe different objectives of testing.
4. Compare and contrast the different types and levels of testing (regression, unit, integration,
systems, and acceptance).
KA Core:
5. Describe techniques for creating a test plan and generating test cases.
6. Create a test plan for a medium-size code segment which includes a test matrix and generation of
test data and inputs.
7. Implement a test plan for a medium-size code segment.
8. Identify the fundamental principles of test-driven development methods and explain the role of
automated testing in these methods.
9. Discuss issues involving the testing of object-oriented software.
10. Describe mocking and dependency injection and their application.
11. Undertake, as part of a team activity, a code review of a medium-size code segment.
12. Describe the role that tools can play in the validation of software.
13. Automate testing in a small software project.
14. Explain the roles, pros, and cons of pre-commit and post-commit testing.
15. Discuss the tradeoffs between test coverage and test throughput/latency and how this can impact
verification.
16. Use a defect tracking tool to manage software defects in a small software project.
17. Discuss the limitations of testing in certain domains.
Non-Core:
18. Describe and compare different tools for verification and validation.
19. Automate the use of different tools in a small software project.
20. Explain how and when random numbers should be used in testing.
21. Describe approaches for fault estimation.
22. Estimate the number of faults in a small software application based on fault density and fault
seeding.
23. Describe throughput and latency and provide examples of each.
24. Explain speedup and the different forms of scaling and how they are computed.
25. Describe common performance bottlenecks.
26. Describe statistical methods and best practices for benchmarking software.
27. Explain techniques for and challenges with measuring time when constructing a benchmark.
28. Identify the figures of merit, construct and run a benchmark, and statistically analyze and visualize
the results for a small software project.
29. Describe techniques and issues with testing asynchronous, concurrent, and parallel software.
30. Create a test plan for a medium-size code segment which contains asynchronous, concurrent,
and/or parallel code, including a test matrix and generation of test data and inputs.
31. Describe techniques for the verification and validation of non-code artifacts.
SE-Refactoring: Refactoring and Code Evolution
Illustrative Learning Outcomes
KA Core:
1. Identify both explicit and implicit behavior of an interface, and identify potential risks from Hyrum's
Law.
2. Consider inputs from static analysis tools and/or Software Design principles to identify code in need
of refactoring.
3. Identify changes that can be broadly considered "backward compatible," potentially with explicit
statements about what usage is or is not supported.
4. Refactor the implementation of an interface to improve design, clarity, etc. with minimal/zero impact
on existing users.
5. Evaluate whether a proposed change is sufficiently safe given the versioning methodology in use
for a given project.
Non-Core:
6. Plan a complex multi-step refactoring to change default behavior of an API safely.
SE-Reliability: Software Reliability
1. Concept of reliability as probability of failure or mean time between failures, and faults as the cause
of failures (see the worked example after this list)
2. Identifying reliability requirements for different kinds of software
3. Software failures caused by defects/bugs, and so for high reliability the goal is to have minimum
defects - by injecting fewer defects (better training, education, planning), and by removing most of
the injected defects (testing, code review, etc.)
4. Software reliability, system reliability and failure behavior
5. Defect injection and removal cycle, and different approaches for defect removal
6. Compare the “error budget” approach to reliability with the “error-free” approach, and identify
domains where each is relevant
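Topic 1 admits a tiny worked example. If reliability is summarized by mean time between failures (MTBF) and mean time to repair (MTTR), steady-state availability is MTBF / (MTBF + MTTR); the figures below are invented for illustration:

    def availability(mtbf_hours, mttr_hours):
        """Steady-state availability from mean time between failures/repair."""
        return mtbf_hours / (mtbf_hours + mttr_hours)

    # A service failing on average every 500 hours, taking 2 hours to repair:
    print(f"{availability(500, 2):.4%}")  # 99.6016%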
Non-Core:
7. Software reliability models
8. Software fault tolerance techniques and models
a. Contextual differences in fault tolerance (e.g., crashing a flight critical system is strongly
avoided, crashing a data processing system before corrupt data is written to storage is highly
valuable)
9. Software reliability engineering practices - including reviews, testing, practical model checking
10. Identification of dependent and independent failure domains, and their impact on system reliability
11. Measurement-based analysis of software reliability - telemetry, monitoring and alerting,
dashboards, release qualification metrics, etc.
Illustrative Learning Outcomes
Non-Core:
4. Demonstrate the ability to apply multiple methods to develop reliability estimates for a software
system.
5. Identify methods that will lead to the realization of a software architecture that achieves a specified
level of reliability.
6. Identify ways to apply redundancy to achieve fault tolerance.
7. Identify single-point-of-failure (SPF) dependencies in a system design.
Professional Dispositions
Mathematics Requirements
Desirable:
● Introductory statistics (performance comparisons, evaluating experiments, interpreting survey
results, etc.). (See also CS-Core requirements for MSF-Statistics)
Course objectives: Students should be able to perform good quality code review for colleagues
(especially focusing on professional communication and teamwork needs), read and write unit tests,
use basic software tools (IDEs, version control, static analysis tools) and perform basic activities
expected of a new hire on a software team.
Committee
Contributors:
● Hyrum Wright, Google, Pittsburgh, PA, USA
● Olivier Giroux, Apple, Cupertino, CA, USA
● Gennadiy Civil, Google, New York City, NY, USA
Security (SEC)
Preamble
Computing supports nearly every facet of modern critical infrastructure: transportation, communication,
healthcare, education, energy generation and distribution, to name a few. With rampant attacks on and
breaches of this infrastructure, computer science graduates have an important role in designing,
implementing, and operating software systems that are robust, safe, and secure.
The Security (SEC) knowledge area (KA) focuses on instilling a security mindset into the overall
ethos of computer science graduates so that security is embedded in all of their work products.
Computer science students need to learn about system vulnerabilities and understand threats against
computer systems. The title Security was chosen deliberately as a one-word umbrella term for
this knowledge area, which also includes concepts to support privacy, cryptography, secure systems,
secure data, and secure code.
The SEC KA relies on shared concepts pervasive in all the other areas of CS2023. It identifies seven
crosscutting concepts of cybersecurity: confidentiality, integrity, availability, risk assessment, systems
thinking, adversarial thinking, and human-centered thinking. The seventh concept, human-centered
thinking, is additional to the six crosscutting concepts originally defined in the Cybersecurity Curricula
2017 (CSEC2017) [26]. This addition reinforces to students that humans are also a link in the overall
chain of security, a theme that is also covered in KAs, such as HCI. Principles of protecting systems
(also in the DM, OS, SDF, SE and SF knowledge areas) include security-by-design, privacy-by-design,
defense-in-depth, and zero-trust.
Another concept is the notion of assurance, which is an attestation that security mechanisms need to
comply with the security policies that have been defined for data, processes, and systems. Assurance
is tied in with the concepts of verification and validation in the SE knowledge area. Considerations of
data privacy and security are shared with the DM (technical aspects) and SEP KAs.
The SEC KA thus sits atop several of the other CS2023 KAs, while including additional concepts not
present in those KAs. The specific dependence on other KAs is stated below, starting with the Core
Hours table. CS2023 treats security as a crucial component of the skillset of any CS graduate, and the
hours needed for security preparation come from all of the other 16 CS2023 KAs.
The Security KA is an updated name for CS2013’s Information Assurance and Security (IAS) KA. Since
2013, Information Assurance and Security has been rebranded as Cybersecurity, which has become a
new computing discipline, with its own curricular guidelines (CSEC 2017) developed by a Joint Task
Force of the ACM, IEEE Computer Society, AIS and IFIP in 2017.
Moreover, since 2013, other curricular recommendations for cybersecurity beyond CS2013 and CSEC
2017 have been made. In the US, the National Security Agency recognizes institutions as Centers of
Academic Excellence (CAE) in Cyber Defense and/or Cyber Operations if their cybersecurity programs
meet the respective CAE curriculum requirements. Additionally, the National Initiative for Cybersecurity
Education (NICE) of the US National Institute for Standards and Technologies (NIST) has developed
and revised the Workforce Framework for Cybersecurity (NICE Workforce Framework), which identifies
competencies (knowledge and skills) needed to perform tasks relevant to cybersecurity work roles. The
European Cybersecurity Skills Framework (ECSF) includes a standard ontology to describe
cybersecurity tasks and roles, as well as addressing the cybersecurity personnel shortage in EU
member countries. Similarities and differences of these cybersecurity guidelines, viewed from the CS
perspective, also informed the SEC knowledge area.
Building on CS2013’s recognition of the pervasiveness of security in computer science, the CS2023
SEC knowledge area focuses on ensuring that students develop a security mindset so that they are
prepared for the continual changes occurring in computing. One useful addition is the knowledge unit
for security analysis, design, and engineering to support the concepts of security-by-design and
privacy-by-design.
The importance of computer science in ensuring the protection of future computing systems and
societal critical infrastructure will continue to grow. Consequently, it is imperative that faculty teaching
computer science incorporate the latest advances in security and privacy approaches to keep their
curriculum current.
CS2023’s SEC knowledge area focuses on those aspects of security, privacy, and related concepts
important for computer science students. In comparison, CSEC 2017 characterizes similarities and
differences in the cybersecurity book of knowledge using the disciplinary lenses of computer science,
computer engineering, software engineering, information systems, information technology, and other
disciplines. In short, the major goal of the SEC knowledge area is to ensure computer science
graduates are able to design and develop more secure code, ensure data security and privacy, and can
apply a security mindset to their daily activities.
Protecting what happens within the perimeter of a networked computer system is a core competency of
computer science graduates. Although the computer science and cybersecurity knowledge units have
overlaps, the demands upon cybersecurity graduates typically are to protect the perimeter. CSEC 2017
defines cybersecurity as a highly interdisciplinary field of study that covers eight areas (data, software,
component, connection, system, human, organizational, and societal security) and prepares its
students for both technical and managerial work roles in cybersecurity.
The first five CSEC 2017 areas are technical and have overlaps with the CS2023 SEC KA, but the
intent of coverage is substantively different as CS students bring to bear the core competencies
described in all the 17 CS2023 knowledge areas. For instance, consider the SEC KA’s Secure Coding
knowledge unit. The CS student will need to view this knowledge unit from a computer science lens, as
an extension of the material covered in the SDF, SE and PDC KAs, while the Cybersecurity student will
need to view software security in the overall context of diverse cybersecurity goals. These viewpoints
are not totally distinct and have overlaps, but the lenses used to examine and present the content are
different. There are similar commonalities and differences among CS2023 SEC knowledge units and
corresponding CSEC 2017 knowledge units.
[Figure omitted: Data Security – Cybersecurity versus CS2023 SEC]
Core Hours
Knowledge Units                                  CS Core                              KA Core
Foundational Security                            1 + 7 (DM, FPL, PDC, SDF, SE, OS)    7
Society, Ethics, and the Profession              1 + 4 (SEP)                          2
Secure Coding                                    2 + 6 (FPL, SDF, SE)                 5
Cryptography                                     1 + 8 (MSF)                          4
Security Analysis, Design, and Engineering       1 + 4 (MSF, SE)                      8
Digital Forensics                                0                                    6
Security Governance                              0                                    3
Total hours                                      6                                    35
Note that the Foundational Security knowledge unit here adds only 1 CS Core hour, as it relies on CS
Core hours from SDF, SE, and OS. Similarly, the Society, Ethics, and the Profession knowledge unit,
the Cryptography knowledge unit, and the Security Analysis, Design, and Engineering knowledge unit
each add 1 CS Core hour, while the Secure Coding knowledge unit adds 2 CS Core hours. In total, the
SEC knowledge area adds only 6 CS Core hours to the curriculum.
The SEC knowledge area also includes several KA Core hours from the other knowledge areas, as
noted in the many "see also" references in each knowledge unit below. If one looks at the shared hours
in the core and reduces duplicative hours, the result is that approximately 28 hours of CS Core hours
from the other knowledge areas are needed, either to provide the basis for the SEC knowledge area or
to complement its content that is shown below. Of these, MSF-Discrete, MSF-Probability, and MSF-
Statistics are likely to be relied upon extensively in all the SEC knowledge units, as are SDF-
Fundamentals, SDF-Algorithms, and SDF-Practices. The others are mentioned within each of the SEC
knowledge units described below. As the same content of the different knowledge units might form the
basis, the CS Core hours from the other knowledge areas overlap and the total of 28 hours eliminates
obvious duplication.
Knowledge Units
SEC-Foundations: Foundational Security
CS Core:
1. Developing a security mindset incorporating crosscutting concepts: confidentiality, integrity,
availability, risk assessment, systems thinking, adversarial thinking, human-centered thinking
2. Basic concepts of authentication and authorization/access control
3. Vulnerabilities, threats, attack surfaces, and attack vectors (See also: OS-Protection)
4. Denial of Service (DoS) and Distributed Denial of Service (DDoS) (See also: OS-Protection)
5. Principles and practices of protection, e.g., least privilege, open design, fail-safe defaults, defense
in depth, and zero trust; and how they can be implemented (See also: OS-Principles, OS-
Protection, SE-Construction, SEP-Security)
6. Optimization considerations between security, privacy, performance, and other design goals (See
also: SDF-Practices, SE-Validation, HCI-Design)
7. Impact of AI on security and privacy: using AI to bolster defenses as well as address increased
adversarial capabilities due to AI (See also: AI-SEP, HCI-Design, HCI-SEP)
KA Core:
8. Access control models (e.g., discretionary, mandatory, role-based, and attribute-based)
9. Security controls
10. Concepts of trust and trustworthiness
11. Applications of a security mindset: web, cloud, and mobile devices (See also: SF-System Design,
SPD-Common)
12. Protecting embedded and cyber-physical systems (See also: SPD-Embedded)
13. Principles of usable security and human-centered computing (See also: HCI-Design, SEP-Security)
14. Security and trust in AI/machine learning systems, e.g., fit for purpose, ethical operating
boundaries, authoritative knowledge sources, verified training data, repeatable system evaluation
tests, system attestation, independent validation/certification; unintended consequences and
adverse effects (See also: AI-Introduction, AI-ML, AI-SEP, SEP-Security)
15. Security risks in building and operating AI/machine learning systems, e.g., algorithm bias,
knowledge corpus bias, training corpus bias, copyright violation (See also: AI-Introduction, AI-ML,
AI-SEP)
16. Hardware considerations in security, e.g., principles of secure hardware, secure processor
architectures, cryptographic acceleration, compartmentalization, software-hardware interaction (See
also: AR-Assembly, AR-Representation, OS-Purpose)
SEC-SEP: Society, Ethics, and the Profession
CS Core:
1. Principles and practices of privacy (See also: SEP-Security)
2. Societal impacts on breakdowns in security and privacy (See also: SEP-Context, SEP-Privacy,
SEP-Security)
3. Applicability of laws and regulations on security and privacy (See also: SEP-Security)
4. Professional ethical considerations when designing secure systems and maintaining privacy; ethical
hacking (See also: SEP-Professional-Ethics, SEP-Privacy, SEP-Security)
KA-Core:
5. Security by design (See also: SF-Security, SF-Design)
6. Privacy by design and privacy engineering (See also: SEP-Privacy, SEP-Security)
7. Security and privacy implications of malicious AI/machine learning actors, e.g., identifying deep
fakes (See also: AI-Introduction, AI-ML, SEP-Privacy, SEP-Security)
8. Societal impacts of Internet of Things (IoT) devices and other emerging technologies on security
and privacy (See also: SEP-Privacy, SEP-Security)
SEC-Coding: Secure Coding
14. Malware: varieties, creation, reverse engineering, and defense against them (See also: FPL-
Systems, FPL-Translation)
15. Assurance: testing (including fuzzing and penetration testing), verification, and validation (See also:
OS-Protection, SDF-Fundamentals, SE-Construction, SE-Validation)
16. Static and dynamic analyses (See also: FPL-Analysis, MSF-Protection, PDC-Evaluation, SE-
Validation)
17. Secure compilers and secure code generation (See also: FPL-Run-Time, FPL-Translation)
SEC-Crypto: Cryptography
CS Core:
1. Differences between algorithmic, applied, and mathematical views of cryptography.
2. Mathematical preliminaries: modular arithmetic, Euclidean algorithm, probabilistic independence,
linear algebra basics, number theory, finite fields, complexity, asymptotic analysis (See also: MSF-
Discrete, MSF-Linear)
3. Basic cryptography: symmetric key and public key cryptography (See also: AL-Foundational, MSF-
Discrete)
4. Basic cryptographic building blocks, including symmetric encryption, asymmetric encryption, hashing, and message authentication; a minimal sketch follows this list (See also: MSF-Discrete)
5. Classical cryptosystems, such as shift, substitution, transposition ciphers, code books, and
machines (See also: MSF-Discrete)
6. Kerckhoffs’s principle and use of vetted libraries (See also: SE-Construction)
7. Usage of cryptography in real-world applications, e.g., electronic cash, secure channels between
clients and servers, secure electronic mail, entity authentication, device pairing, steganography, and
voting systems (See also: NC-Security, GIT-Image)
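As a sketch of building block 4 above, the following Python fragment (standard library only) contrasts a plain cryptographic hash with a keyed message authentication code. The message and the freshly generated key are illustrative; real systems would obtain keys through a vetted key-management mechanism.

import hashlib, hmac, secrets

message = b"transfer 100 to account 42"

# Hash: an integrity fingerprint, but anyone can recompute it for a forged message.
digest = hashlib.sha256(message).hexdigest()

# MAC: integrity plus authenticity, requiring a shared secret key.
key = secrets.token_bytes(32)  # placeholder for a properly managed key
tag = hmac.new(key, message, hashlib.sha256).digest()

# The verifier recomputes the tag with the same key and compares in constant time.
assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())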
KA Core:
8. Additional mathematics: primality, factoring and elliptic curve cryptography (See also: MSF-
Discrete)
9. Private-key cryptosystems: substitution-permutation networks, linear cryptanalysis, differential
cryptanalysis, DES, and AES (See also: MSF-Discrete, NC-Security)
10. Public-key cryptosystems: Diffie-Hellman and RSA; a toy sketch follows this list (See also: MSF-Discrete)
11. Data integrity and authentication: hashing, and digital signatures (See also: MSF-Discrete, DM-
Security)
12. Cryptographic protocols: challenge-response authentication, zero-knowledge protocols,
commitment, oblivious transfer, secure two- or multi-party computation, hash functions, secret
sharing, and applications (See also: MSF-Discrete)
13. Attacker capabilities: chosen-message attack (for signatures), birthday attacks, side channel
attacks, and fault injection attacks (See also: NC-Security)
14. Quantum cryptography; Post Quantum/Quantum resistant cryptography (See also: AL-
Foundational, MSF-Discrete)
15. Blockchain and cryptocurrencies (See also: MSF-Discrete, PDC-Communication)
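As a sketch of topic 10, a toy Diffie-Hellman exchange can be written with Python's built-in modular exponentiation. The group parameters and private exponents below are deliberately tiny and insecure; consistent with topic 6, real systems use vetted parameters and libraries.

# Toy Diffie-Hellman key exchange over a deliberately small, insecure group.
p, g = 23, 5        # public: prime modulus and generator
a, b = 6, 15        # private exponents (normally large and random)

A = pow(g, a, p)    # Alice publishes g^a mod p
B = pow(g, b, p)    # Bob publishes g^b mod p

# Each side combines its own secret with the other's public value.
assert pow(B, a, p) == pow(A, b, p) == 2  # both derive the same shared secret

An eavesdropper sees p, g, A, and B, but must solve a discrete logarithm to recover the shared secret, which is infeasible at realistic parameter sizes.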
3. Forensics in different situations: operating systems, file systems, application forensics, web
forensics, network forensics, mobile device forensics, use of database auditing (See also: NC-
Security)
4. Attacks on forensics and preventing such attacks
5. Incident handling processes
6. Rules of evidence – general concepts and differences between jurisdictions (See also: SEP-
Security)
7. Legal issues: digital evidence protection and management, chains of custody, reporting, serving as
an expert witness (See also: SEP-Security)
Professional Dispositions
● Meticulous: students need to pay careful attention to details to ensure the protection of real-world
software systems.
● Self-directed: students must be ready to deal with the many novel and easily unforeseeable ways
in which adversaries might launch attacks.
● Collaborative: students must be ready to collaborate with others, as collective knowledge and skills
will be needed to prevent attacks, protect systems and data during attacks, and plan for the future
after the immediate attack has been mitigated.
● Responsible: students need to show responsibility when designing, developing, deploying, and
maintaining secure systems, as their enterprise and society are constantly at risk.
● Accountable: students need to know that, as future professionals, they will be held accountable
if a system or data breach were to occur, which should strengthen their resolve to prevent such
breaches from occurring in the first place.
Mathematics Requirements
Required:
● MSF-Discrete
● MSF-Probability
● MSF-Statistics
Desired:
● MSF-Linear
Course Packaging Suggestions
The first suggestion for course packaging is to infuse the CS Core hours of the SEC KA into appropriate places in other coursework that covers related security topics. As the CS Core of the SEC KA comprises only 6 hours, coursework covering one or more of the following knowledge units could accommodate them.
● AI-SEP
● AL-SEP
● AR-Assembly
● AR-Memory
● DM-Security
● FPL-Translation
● FPL-Run-Time
● FPL-Analysis
● FPL-Types
● HCI-Design
● HCI-Accountability
● HCI-SEP
● NC-Security
● OS-Protection
● PDC-Communication
● PDC-Coordination
● PDC-Evaluation
● SDF-Fundamentals
● SDF-Practices
● SE-Validation
● SEP-Privacy
● SEP-Security
● SF-Design
● SF-Security
● SPD-Common
● SPD-Mobile
● SPD-Web
The second approach for course packaging is to create an additional full course focused on security
that packages the following, building on the topics already covered in other knowledge areas.
● SEC-Foundations (6 hours)
● SEC-SEP (4 hours)
● SEC-Coding (7 hours)
● SEC-Crypto (5 hours)
● SEC-Engineering (4 hours)
● SEC-Forensics (2 hours)
● SEC-Governance (1 hour)
● AI-SEP (1 hour)
● AR-Assembly (1 hour)
● AR-Memory (1 hour)
● DM-Security (3 hours)
● FPL-Translation (1 hour)
● FPL-Run-Time (1 hour)
● FPL-Analysis (1 hour)
● FPL-Types (2 hours)
● HCI-Design (1 hour)
● HCI-Accountability (1 hour)
● HCI-SEP (1 hour)
● NC-Security (2 hours)
● OS-Protection (1 hour)
● PDC-Communication (1 hour)
● PDC-Coordination (1 hour)
● PDC-Evaluation (1 hour)
● SDF-Fundamentals (1 hour)
● SDF-Practices (1 hour)
● SE-Validation (2 hours)
● SEP-Privacy (1 hour)
● SEP-Security (2 hours)
● SF-Design (2 hours)
● SF-Security (2 hours)
● SPD-Common (2 hours)
● SPD-Mobile (2 hours)
● SPD-Web (2 hours)
The coverage exceeds 45 lecture hours, so in a typical course instructors would need to decide which topics to emphasize and which to omit, without losing the perspective that the course should help students develop a security mindset.
Prerequisites: Depends on the selected topics, but appropriate coursework covering MSF, SDF and SE
knowledge units is needed.
Course objectives: Students should develop a security mindset and be ready to apply this mindset to
securing data, software, systems, and applications.
The third suggestion is a dedicated Security Engineering course that packages the following:
● SEC-Foundations (6 hours)
● SEC-SEP (4 hours)
● SEC-Coding (6 hours)
● SEC-Crypto (2 hours)
● SEC-Engineering (10 hours)
● SEC-Forensics (2 hours)
● SEC-Governance (1 hour)
● DM-Security (2 hours)
● NC-Security (3 hours)
● OS-Protection (2 hours)
● PDC-Evaluation (2 hours)
● SDF-Fundamentals (1 hour)
● SDF-Practices (1 hour)
● SE-Validation (2 hours)
● SEP-Privacy (1 hour)
● SEP-Security (1 hour)
● SF-Design (2 hours)
● SF-Security (2 hours)
● SPD-Mobile (2 hours)
● SPD-Web (2 hours)
The coverage for all topics exceeds 45 lecture hours, so instructors would need to decide which topics to emphasize and which to omit, without losing the perspective that the course should help students develop the security engineer’s mindset. Laboratory time related to data and network security, web platforms, secure coding, and validation would be a valuable aspect of this course.
Prerequisites: Depends on the selected topics; either the first or second packaging suggested above would be recommended, based on degree program needs.
Course objectives: Computer science students should develop the mindset of a security engineer and
be ready to apply this mindset to problems in designing and evaluating the security of a range of
computing systems and information services.
Committee
Members:
● Vijay Anand, University of Missouri – St. Louis, St. Louis, MO, USA
● Diana Burley, American University, Washington, DC, USA
● Sherif Hazem, Central Bank of Egypt, Cairo, Egypt
● Michele Maasberg, United States Naval Academy, Annapolis, MD, USA
● Bruce McMillin, Missouri University of Science and Technology, Rolla, MO, USA
● Sumita Mishra, Rochester Institute of Technology, Rochester, NY, USA
● Nicolas Sklavos, University of Patras, Patras, Greece
● Blair Taylor, Towson University, Towson, MD, USA
● Jim Whitmore, Dickinson College, Carlisle, PA, USA
Contributors:
● Markus Geissler, Cosumnes River College, Sacramento, CA, USA
● Michael Huang, Rider University, Lawrenceville, NJ, USA
● Tim Preuss, Minnesota State Community and Technical College, Moorhead, MN, USA
● Daniel Zappala, Brigham Young University, Provo, UT, USA
Society, Ethics, and the Profession (SEP)
Preamble
The ACM Code of Ethics and Professional Conduct states: “Computing professionals' actions change
the world. To act responsibly, they should reflect upon the wider impacts of their work, consistently
supporting the public good.” The IEEE Code of Ethics starts by recognizing “... the importance of our
technologies in affecting the quality of life throughout the world.” The AAAI Code of Professional Ethics
and Conduct begins with “Computing professionals, and in particular, AI professionals’ actions change
the world. To act responsibly, they should reflect upon the wider impacts of their work, consistently
supporting the public good.”
While technical issues dominate the computing curriculum, they do not constitute a complete
educational program in the broader context. It is more evident today than ever that students must also
be exposed to the larger societal context of computing to develop an understanding of the critical and
relevant social, ethical, legal, and professional issues and responsibilities at hand. This need to
incorporate the study of these non-technical issues into the ACM curriculum was formally recognized in
1991, as articulated in the following excerpt from CS1991 [3]:
Undergraduates also need to understand the basic cultural, social, legal, and ethical issues
inherent in the discipline of computing. They should understand where the discipline has been,
where it is, and where it is heading. They should also understand their individual roles in this
process, as well as appreciate the philosophical questions, technical problems, and aesthetic
values that play an important part in the development of the discipline.
Students also need to develop the ability to ask serious questions about the social impact of
computing and to evaluate proposed answers to those questions. Future practitioners must be
able to anticipate the impact of introducing a given product into a given environment. Will that
product enhance or degrade the quality of life? What will the impact be upon individuals,
groups, and institutions?
Finally, students need to be aware of the basic legal rights of software and hardware vendors
and users, and they also need to appreciate the ethical values that are the basis for those
rights. Future practitioners must understand the responsibility that they will bear, and the
possible consequences of failure. They must understand their own limitations as well as the
limitations of their tools. All practitioners must make a long-term commitment to remaining
current in their chosen specialties and in the discipline of computing as a whole.
Nonetheless, in recent years myriad high-profile issues affecting society at large have occurred leading
to the conclusion that computer science professionals are not as prepared as they should be.
As technological advances (more specifically, how these advances are used by humans) continue to
significantly impact the way we live and work, the critical importance of social and ethical issues and
professional practice continues to increase in magnitude and consequence. The ways humans use
computing products and platforms, while hopefully providing opportunities, also introduce ever more
challenging problems. A recent example is the emergence of generative AI, including large language
models that generate code. A 2020 Communications of the ACM article [29] stated: “... because
computing as a discipline is becoming progressively more entangled within the human and social
lifeworld, computing as an academic discipline must move away from engineering-inspired curricular
models and integrate the analytic lenses supplied by social science theories and methodologies.”
In parallel to, and as part of, the heightened awareness of the social consequences computing has on
the world, computing communities have become much more aware - and active - in areas of diversity,
equity, inclusion and accessibility. These feature in statements and initiatives at ACM [33], IEEE [34],
and AAAI [35] and in their codes of conduct [3-5]. All students deserve an inclusive, diverse, equitable
and accessible learning environment. Computing students also have a unique duty to ensure that when
put to practice, their skills, knowledge, and competencies are applied in ways that work for, and not
against, the principles of diversity, equity, inclusion and accessibility. These principles are inherently a
part of computing, and a new knowledge unit “Diversity, Equity, Inclusion and Accessibility” (SEP-DEIA)
has been added to this knowledge area.
Computer science educators may opt to deliver the material in this knowledge area within the contexts
of traditional technical and theoretical courses, in dedicated courses, and as part of capstone, project,
and professional practice courses. The material in this knowledge area is best covered through a
combination of all the above. It is too commonly held that many topics in this knowledge area do not readily lend themselves to being covered in other, more traditional computer science courses. However, many of these topics naturally arise in traditional courses or can be included with minimal effort. The benefits of exposing students to SEP topics within the context of those traditional courses are invaluable. Nonetheless, institutional challenges will present barriers; for instance, some of these traditional courses may not be offered at a given institution and, in such cases, it is difficult to cover these topics appropriately without a dedicated SEP course. If social, ethical, and professional considerations are
covered only in a dedicated course and not in the context of others, it could reinforce the false notion
that technical processes are void of these important aspects, or that they are more isolated than they
are in reality. Because of the broad relevance of these knowledge units, it is important that as many
traditional courses as possible include aspects such as case studies, that analyze ethical, legal, social,
and professional considerations in the context of the technical subject matter of those courses. Courses
in areas such as software engineering, databases, computer graphics, computer networks, information
assurance & security, and introduction to computing, all provide obvious context for analysis of such
issues. However, an ethics-related module could be developed for almost any program. It would be
explicitly against the spirit of these recommendations to have only a dedicated course within a specific
computer science curriculum without great practical reason. Further, these topics should be covered in
courses starting from year 1. Presenting them as advanced topics in later courses only creates an
artificial perception that SEP topics are only important at a certain level or complexity. While it is true
that the importance and consequence of SEP topics increases with level and complexity, introductory
topics are not devoid of SEP topics. Further, many SEP topics are best presented early to lay a
foundation for more intricate topics later in the curriculum.
Running through all the topics in this knowledge area is the need to speak to the computing
practitioner’s responsibility to proactively address issues through both ethical and technical actions.
Today it is important not only for the topics in this knowledge area, but for students’ knowledge in
general, that the ethical issues discussed in any course should be directly related to - and arise
naturally from - the subject matter of that course. Examples include a discussion in a database course
of the SEP aspects of data aggregation or data mining; or a discussion in a software engineering
course of the potential conflicts between obligations to the customer and users as well as all others
affected by their work. Computing faculty who are unfamiliar with the content and/or pedagogy of
applied ethics are urged to take advantage of the considerable resources from ACM, IEEE-CS, AAAI,
SIGCAS (ACM Special Interest Group on Computers and Society), and other organizations.
Additionally, it is the educator’s responsibility to impress upon students that this area is just as important as - and in some ways more important than - technical areas. The societal, ethical, and professional
knowledge gained in studying topics in this knowledge area will be used throughout one’s career and
are transferable between projects, jobs, and often even industries, particularly as one’s career
progresses into project leadership and management.
The ACM Code of Ethics and Professional Conduct [30], the IEEE Code of Ethics [31], and the AAAI Code of Professional Ethics and Conduct [32] provide guidance that serves as the basis for the conduct of all computing professionals in their work. The ACM Code emphasizes that ethical reasoning is not an algorithm to be followed, and that computing professionals are expected to treat the public good as the primary consideration when assessing how their work impacts the world. It falls to computing educators to highlight the domain-
specific role of these topics for our students, but computer science programs should certainly be willing
to lean on complementary courses from the humanities and social sciences.
Most computing educators are not also moral philosophers. Yet CS2023, along with past CS curricular recommendations, indicates the need for ethical analysis. CS2023 and prior curricular recommendations are quite clear on the required mathematical foundations that students are expected to gain, which are
often delivered by mathematics departments. Yet, the same is not true of moral philosophy. No one
would expect a student to be able to provide a proof by induction until after having successfully
completed a course in discrete mathematics. Yet, the parallel with respect to ethical analyses is
somehow absent. We seemingly do (often) expect our students to perform ethical analysis without
having the appropriate prerequisite knowledge from philosophy. Further, the application of ethical
analysis also underlies every other knowledge unit in this knowledge area. We acknowledge that the
knowledge unit Methods for Ethical Analysis (SEP-Ethical-Analysis) is the only one in this knowledge
area that does not readily lend itself to being taught in the context of other CS2023 knowledge areas.
Suggestions in terms of addressing this appear in the Course Packaging Suggestions.
The lack of prerequisite training in social, ethical, and professional topics has facilitated graduates
operating with a certain ethical egoism (e.g., ‘Here is what I believe/think/feel is right’). Regardless of
how well intentioned, one might conclude that this is what brought us to a point in history where there
are frequent occurrences of unintended consequences of technology, serious data breaches, and
software failures causing economic, emotional and physical harms. Certainly, computing graduates
who have learned how to apply the various ethical frameworks or lenses proposed through the ages
would only serve to improve this situation. In retrospect, to ignore the lessons from moral philosophy,
which have been debated and refined for millennia - on what it means to act justly, or work for the
common good - appears as hubris.
A computer science student must not graduate without understanding how society and ethics influence
the computing profession. Nor should it be possible to complete a computer science degree without
learning how computing professionals influence society, the ethical considerations involved in shaping
that impact, and the student-turned-graduate’s role in these relationships, as both computing
professionals and members of society.
Core Hours
Knowledge Unit CS Core KA Core
Social Context 3 2
Methods for Ethical Analysis 2 1
Professional Ethics 2 2
Intellectual Property 1 1
Privacy and Civil Liberties 2 1
Communication 2 1
Sustainability 1 1
History 1 1
Economies of Computing 0 1
Security Policies, Laws and Computer Crimes 2 1
Diversity, Equity, Inclusion and Accessibility 2 2
Total 18 14
Knowledge Units
SEP-Context: Social Context
CS Core:
2. Impact of computing applications (e.g., social media, artificial intelligence applications) on individual
well-being, and safety of all kinds (e.g., physical, emotional, economic)
3. Consequences of involving computing technologies, particularly artificial intelligence, biometric
technologies, and algorithmic decision-making systems, in civic life (e.g., facial recognition
technology, biometric tags, resource distribution algorithms, policing software) and how human
agency and oversight are crucial
4. How deficits in diversity and accessibility in computing affect society and what steps can be taken to
improve equity in computing
KA Core:
5. Growth and control of the internet, data, computing, and artificial intelligence
6. Differences in access to digital technology resources (often referred to as the digital divide) and the resulting ramifications for gender, class, ethnicity, geography, and/or developing countries, including consideration of responsibility to those who might be less wealthy, under threat, or who would struggle to have their voices heard.
7. Accessibility issues, including legal requirements such as Web Content Accessibility Guidelines
(www.w3.org/TR/WCAG21)
8. Context-aware computing
Illustrative Learning Outcomes:
KA Core:
6. Describe the internet’s role in facilitating communication between citizens, governments, and each
other.
7. Analyze the effects of reliance on computing in the implementation of democracy (e.g., delivery of
social services, electronic voting).
8. Describe the impact of a lack of appropriate representation of people from historically minoritized
populations in the computing profession (e.g., industry culture, product diversity).
9. Discuss the implications of context awareness in ubiquitous computing systems.
10. Express how access to the internet and computing technologies affects different societies.
11. Identify why/how internet access can be viewed as a human right.
SEP-Ethical-Analysis: Methods for Ethical Analysis
Ethical theories and principles are the foundations of ethical analysis because they are the viewpoints
which can provide guidance along the pathway to a decision. Each theory emphasizes different
assumptions and methods for determining the ethicality of a given action. It is important for students to
recognize that decisions in different contexts may require different ethical theories (including
combinations) to arrive at ethically acceptable outcomes, and what constitutes ‘acceptable’ depends on
a variety of factors such as cultural context. Applying methods for ethical analysis requires both an
understanding of the underlying principles and assumptions guiding a given tool and an awareness of
the social context for that decision. Traditional ethical frameworks (e.g. [36]) as provided by western
philosophy can be useful, but they are not all-inclusive. Effort must be taken to include decolonial,
indigenous, and historically marginalized ethical perspectives whenever possible. No theory will be
universally applicable to all contexts, nor is any single ethical framework the ‘best’. Engagement across
various ethical schools of thought is important for students to develop the critical thinking needed in
judiciously applying methods for ethical analysis of a given situation.
CS Core:
1. Avoiding fallacies and misrepresentation in argumentation
2. Ethical theories and decision-making (philosophical and social frameworks, e.g. [3])
3. Recognition of the role culture plays in our understanding, adoption, design, and use of computing
technology
4. Why ethics is important in computing, and how ethics is similar to, and different from, laws and
social norms
KA Core:
5. Professional checklists
6. Evaluation rubrics
7. Stakeholder analysis
8. Standpoint theory
9. Introduction to ethical frameworks (e.g., consequentialism such as utilitarianism, non-
consequentialism such as duty, rights, or justice, agent-centered such as virtue or feminism,
contractarianism, ethics of care) and their use for analyzing an ethical dilemma
Illustrative Learning Outcomes:
KA Core:
7. Distinguish all stakeholder positions in relation to their cultural context in a given situation.
8. Analyze the potential for introducing or perpetuating ethical debt (deferred consideration of ethical
impacts or implications) in technical decisions.
9. Discuss the advantages and disadvantages of traditional ethical frameworks.
10. Analyze ethical dilemmas related to the creation and use of technology from multiple perspectives using ethical frameworks.
SEP-Professional-Ethics: Professional Ethics
KA Core:
8. The role of the computing professional and professional societies in public policy
9. Maintaining awareness of consequences
10. Ethical dissent and whistleblowing
11. The relationship between regional culture and ethical dilemmas
12. Dealing with harassment and discrimination
13. Forms of professional credentialing
14. Ergonomics and healthy computing environments
15. Time-to-market and cost considerations versus quality professional standards
Illustrative Learning Outcomes:
KA Core:
8. Describe ways in which professionals and professional organizations may contribute to public
policy.
9. Describe the consequences of inappropriate professional behavior.
10. Be familiar with whistleblowing and have access to knowledge to guide one through an incident.
11. Identify examples of how regional culture interplays with ethical dilemmas.
12. Describe forms of harassment and discrimination and avenues of assistance.
13. Assess various forms of professional credentialing.
14. State the relationship between ergonomics in computing environments and people’s health.
15. Describe issues associated with industries’ push to focus on time-to-market versus enforcing
quality professional standards.
SEP-IP: Intellectual Property
CS Core:
5. Plagiarism and authorship
KA Core:
6. Philosophical foundations of intellectual property
7. Forms of intellectual property (e.g., copyrights, patents, trade secrets, trademarks) and the rights
they protect
8. Limitations on copyright protections, including fair use and the first sale doctrine
9. Intellectual property laws and treaties that impact the enforcement of copyrights
10. Software piracy and technical methods for enforcing intellectual property rights, such as digital
rights management and closed source software as a trade secret
11. Moral and legal foundations of the open source movement
12. Systems that use others’ data (e.g., large language models)
Illustrative Learning Outcomes:
KA Core:
9. Discuss the philosophical bases of intellectual property in an appropriate context (e.g., country,
etc.).
10. Distinguish the conflicting issues involved in securing software patents.
11. Contrast the protections and obligations of copyright, patent, trade secret, and trademarks.
12. Describe the rationale for the legal protection of intellectual property in the appropriate context
(e.g., country, etc.).
13. Analyze the use of copyrighted work under the concepts of fair use and the first sale doctrine.
14. Identify the goals of the open source movement and its impact on fields beyond computing, such
as the right-to-repair movement.
15. Summarize the global nature of software piracy.
16. Criticize the use of technical measures of digital rights management (e.g., encryption,
watermarking, copy restrictions, and region lockouts) from multiple stakeholder perspectives.
17. Discuss the nature of anti-circumvention laws in the context of copyright protection.
SEP-Privacy: Privacy and Civil Liberties
Electronic information sharing highlights the need to balance privacy protections with information
access. The ease of digital access to, in addition to copying and distribution of, many types of data
makes privacy rights and civil liberties more complex, especially given cultural and legal differences in
these areas. Complicating matters further, privacy also has interpersonal, organizational,
professional/business, and governance components. In addition, the interconnected nature of online
communities raises challenges for managing expectations and protections for freedom of expression in
various cultures and nations. Technology companies that provide platforms for user-generated content
are under increasing pressure to perform governance tasks, potentially facing liability for their
decisions.
CS Core:
1. Privacy implications of widespread data collection including but not limited to transactional
databases, data warehouses, surveillance systems, cloud computing, and artificial intelligence
2. Conceptions of anonymity, pseudonymity, and identity
3. Technology-based solutions for privacy protection (e.g., end-to-end encryption and differential privacy); a minimal sketch follows the topic list
4. Civil liberties, privacy rights, and cultural differences
KA Core:
5. Philosophical and legal conceptions of the nature of privacy including the right to privacy
6. Legal foundations of privacy protection in relevant jurisdictions (e.g., GDPR in the EU)
7. Privacy legislation in areas of practice (e.g., HIPAA in the US, AI Act in the EU)
8. Basic principles of human-subjects research and principles beyond what the law requires (e.g., the Belmont Report, the UN Universal Declaration of Human Rights) and how these relate to technology
9. Freedom of expression and its limitations
10. User-generated content, content moderation, and liability
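The following minimal Python sketch illustrates one of the technology-based protections in topic 3: the Laplace mechanism from differential privacy, which answers a count query with calibrated noise so that no individual record is revealed. The data, epsilon value, and function names are illustrative assumptions.

import random

def laplace_noise(scale):
    """Laplace(0, scale) noise as the difference of two i.i.d. exponential draws."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(records, epsilon):
    """Differentially private count. A counting query has sensitivity 1 (one
    person changes the count by at most 1), so Laplace(1/epsilon) noise suffices."""
    return sum(records) + laplace_noise(1.0 / epsilon)

records = [random.random() < 0.3 for _ in range(1000)]  # hypothetical yes/no attribute
print(dp_count(records, epsilon=0.5))  # approximately correct, individually private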
Illustrative Learning Outcomes:
KA Core:
6. Discuss the philosophical basis for the legal protection of personal privacy in an appropriate
context (e.g., country, etc.).
7. Critique the intent, potential value, and implementation of various forms of privacy legislation and
principles beyond what the law requires.
8. Identify strategies to enable appropriate freedom of expression.
SEP-Communication: Communication
Computing is an inherently collaborative and social discipline making communication an essential
aspect of the profession. Much but not all of this communication occurs in a professional setting where
communication styles, expectations, and norms differ from other contexts where similar technology
might be used. Both professional and informal communication conveys information to various
audiences who may have very different goals and needs for that information. Good communication is
also necessary for transparency and trustworthiness. It is important to note that computing professionals are not just communicators but also listeners who must be able to hear and thoughtfully make use of feedback received from various stakeholders. Effective communication skills
are not something one ‘just knows’ - they are developed and can be learned. Communication skills are
best taught in context throughout the undergraduate curriculum.
CS Core:
1. Oral, written, and electronic team and group communication
2. Technical communication materials (e.g., source code and documentation, tutorials, reference materials, API documentation)
3. Communicating with different stakeholders such as customers, leadership, or the general public
4. Team collaboration (including tools) and conflict resolution
5. Accessibility and inclusivity requirements for addressing professional audiences
6. Cultural competence in communication including considering the impact of difference in natural
language
KA Core:
7. Trade-offs in competing factors that affect communication channels and choices
8. Communicating to solve problems or make recommendations in the workplace, such as raising
ethical concerns or addressing accessibility issues
5. Identify and describe qualities of effective communication (e.g., virtual, face-to-face, intragroup,
shared documents).
6. Understand how to communicate effectively and appropriately as a member of a team including
conflict resolution techniques.
7. Discuss ways to influence performance and results in diverse and cross-cultural teams.
KA Core:
8. Assess personal strengths and weaknesses to work remotely as part of a team drawing from
diverse backgrounds and experiences.
9. Choose an appropriate way to communicate delicate ethical concerns.
SEP-Sustainability: Sustainability
Sustainability is defined by the United Nations as “development that meets the needs of the present
without compromising the ability of future generations to meet their own needs” [37]. Alternatively, it is
the “balance between the environment, equity and economy” [38]. As computing extends into more and more aspects of human existence, estimates already attribute double-digit percentages of global electricity usage to computing activities, a share that, left unchecked, will likely grow. Further, electronics contribute to the demand for rare earth elements, mineral extraction, and countless e-waste concerns. Students should gain a background that recognizes these global and
environmental costs and their potential long-term effects on the environment and local communities.
CS Core:
1. Environmental, social, and cultural impacts of implementation decisions (e.g., sustainability goals,
algorithmic bias/outcomes, economic viability, and resource consumption)
2. Local/regional/global social and environmental impacts of computing systems and their use (e.g.,
carbon footprints, resource usage, e-waste) in hardware (e.g., e-waste, data centers, rare element
and resource utilization, recycling) and software (e.g., cloud-based services, blockchain, AI model
training and use), not neglecting the impact of everyday use such as hardware (cheap hardware
replaced frequently) and software (web-browsing, email, and other services with hidden/remote
computational demands).
3. Guidelines for sustainable design standards
KA Core:
4. Systemic effects of complex computing technologies and phenomena (e.g., generative AI, data
centers, social media, offshoring, remote work)
5. Pervasive computing: Information processing that has been integrated into everyday objects and
activities, such as smart energy systems, social networking, and feedback systems to promote
sustainable behavior, transportation, environmental monitoring, citizen science and activism
6. How the sustainability of software systems is interdependent with social systems, including the
knowledge and skills of its users, organizational processes and policies, and its societal context
(e.g., market forces, government policies)
Illustrative Learning Outcomes:
CS Core:
1. Identify ways to be a sustainable practitioner in a specific area or with a specific project.
2. Assess the environmental impacts of a given project’s deployment (e.g., energy consumption, contribution to e-waste, impact of manufacturing); a worked estimate appears at the end of this knowledge unit.
3. Describe global social and environmental impacts of computer use and disposal.
4. List the sustainability effects of modern practices and activities (e.g., remote work, e-commerce, cryptocurrencies, AI models, data centers).
KA Core:
5. Describe the environmental impacts of design choices within the field of computing that relate to
algorithm design, operating system design, networking design, database design, etc.
6. Analyze the social and environmental impacts of new system designs.
7. Design guidelines for sustainable IT design or deployment in areas such as smart energy systems,
social networking, transportation, agriculture, supply-chain systems, environmental monitoring, and
citizen activism.
8. Assess computing applications in respect to environmental issues (e.g., energy, pollution, resource
usage, recycling and reuse, food management and production).
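Outcome 2 above can be practiced with back-of-the-envelope arithmetic, as in the Python sketch below, which estimates the annual operating footprint of a single always-on server. The wattage and grid-intensity figures are illustrative assumptions, not authoritative data, and embodied (manufacturing) emissions and end-of-life e-waste would add to the estimate.

# Rough annual carbon estimate for one always-on server (illustrative figures).
avg_power_w = 200                           # assumed average draw in watts
energy_kwh = avg_power_w * 24 * 365 / 1000  # 1752 kWh per year
grid_intensity = 0.4                        # assumed kg CO2e per kWh; varies by region
print(energy_kwh * grid_intensity)          # roughly 700 kg CO2e per year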
SEP-History: History
5. Age III (PC era): PCs, modern computer hardware and software, Moore’s Law
6. Age IV (Internet): Networking, internet architecture, browsers and their evolution, standards, born-
on-the-internet companies, and services (e.g., Google, Amazon, Microsoft, etc.), distributed
computing
7. Age V (Mobile & Cloud): Mobile computing and smartphones, cloud computing and models thereof
(e.g., SaaS), remote servers, security and privacy, social media
8. Age VI (AI): Decision making systems, recommender systems, generative AI and other machine
learning driven tools and technologies
4. Automation, AI, and their effects on job markets, developers, and users
5. Ethical concerns surrounding the attention economy and other economies of computing (e.g.
informed consent, data collection, use of verbose legalese in user agreements)
KA Core:
7. Benefits and challenges of existing and proposed computer crime laws
8. Security policies and the challenges of change and compliance
9. Responsibility for security throughout the computing life cycle
10. International and local laws and how they intersect
3. Describe the motivation and ramifications of cyber terrorism, data theft, hacktivism, ransomware,
and other attacks.
4. Examine the ethical and legal issues surrounding the misuse of access and various breaches of
security.
5. Discuss the professional's role in security and the trade-offs and challenges involved.
KA Core:
6. Investigate measures that can be taken by both individuals and organizations including
governments to prevent or mitigate the undesirable effects of computer crimes.
7. Design a company-wide security policy, which includes procedures for managing passwords and
employee monitoring.
8. Understand how legislation from one region may affect activities in another (e.g., how the EU GDPR applies globally when EU persons are involved).
SEP-DEIA: Diversity, Equity, Inclusion and Accessibility
CS Core:
1. How identity impacts and is impacted by computing technologies and environments (academic and
professional)
2. The benefits of diverse development teams and the impacts of teams that are not diverse
3. Inclusive language and charged terminology, and why their use matters
4. Inclusive behaviors and why they matter
5. Designing and developing technology with accessibility in mind
6. How computing professionals can influence and impact diversity, equity, inclusion and accessibility,
including but not only through the software they create
KA Core:
7. Experts and their practices that reflect the identities of the classroom and the world through practical DEIA principles
8. Historic marginalization due to systemic social mechanisms, technological supremacy and global
infrastructure challenges to diversity, equity, inclusion and accessibility
9. Cross-cultural differences in, and needs for, diversity, equity, inclusion and accessibility
Illustrative Learning Outcomes:
KA Core:
9. Analyze the work of experts who reflect the identities of the classroom and the world.
10. Assess the impact of power and privilege in the computing profession as it relates to culture,
industry, products, and society.
11. Develop examples of systemic changes that could positively address diversity, equity, inclusion, and accessibility in a familiar context (e.g., an introductory computing course) and in an unfamiliar context, and explain when these might differ or coincide.
12. Compare the demographics of your institution to the overall community demographics. If they
differ, identify factors that contribute to inequitable access, engagement, and achievement among
marginalized groups. If they do not, assess why not.
Professional Dispositions
● Critically Self-reflective: Students should be able to inspect their own actions, thoughts, biases,
privileges, and motives to discover places where professional activity is not up to current standards.
They must strive to understand both conscious and unconscious biases and continuously work to
counteract them.
● Responsive: Students must quickly and accurately respond to changes in the field and adapt in a
professional manner, such as shifting from in-person office work to remote work at home. These
shifts require rethinking one’s entire approach to what is considered “professional”.
● Proactive: Students must be able to identify areas of importance (e.g., in accessibility and inclusion)
and understand how to address them for a more professional working environment.
● Culturally Competent: Students must prioritize cultural competence—the ability to work with people
from cultures different from one’s own—by using inclusive language, watching for and counteracting
conscious and unconscious biases, and encouraging honest and open communication.
● Advocative: Students must think, speak, and act in ways that foster and promote diversity, equity,
inclusion and accessibility in all ways including but not limited to teamwork, communication, and
product development (hardware and software).
● Responsible: Students must act responsibly in all areas of their work towards all users and stakeholders, including society at large, as well as colleagues and the profession in general.
Course Packaging Suggestions
In computing, societal and ethical considerations arise in all other knowledge areas and therefore should arise in the context of other computing courses, not just siloed in an “SEP course”. These topics should be covered in courses starting from the first year; the only likely exception is SEP-Ethical-Analysis: Methods for Ethical Analysis, although even it could be delivered as part of a first-year course, a seminar, or an online independent study.
Presenting SEP topics as advanced topics only covered in later courses could create the incorrect
perception that SEP topics are only important at a certain level or complexity. While it is true that the
importance and consequence of SEP topics increases with level and complexity, introductory topics are
not devoid of SEP topics. Further, many SEP topics are best presented early to lay a foundation for
more intricate topics later in the curriculum.
Instructor choice for some of these topics is complex. When SEP topics arise in other courses, they are often naturally taught by the instructor of that course, although at times bringing in expert educators from other disciplines (e.g., law, ethics) could be advantageous. Stand-alone courses in
SEP - should they be needed - are likely best delivered by an interdisciplinary team. However, this brings additional complexity. Regardless, who teaches SEP topics and/or courses warrants careful consideration.
At a minimum the SEP CS Core learning outcomes are best covered in the context of courses
covering other knowledge areas - ideally the SEP KA Core hours are also, with the likely exception
of SEP-Ethical-Analysis. This knowledge unit (KU) underlies every other KU in the SEP knowledge
area (KA). However, this KU is the only one in the SEP KA that does not readily lend itself to being
taught in the context of other KAs. Delivering these topics warrants even more careful consideration as
to how/where they will be covered, and who will teach them. In conjunction with covering SEP topics as
they occur naturally in other KAs, dedicated SEP courses can add value. However, a sole stand-alone course in a program where SEP topics are not covered in other courses should be a last resort.
At some institutions, an in-depth dedicated course at the mid- or advanced-level may be offered
covering all recommended topics in both the CS Core and KA Core KUs in close coordination with
learning outcomes best covered in the context of courses covering other KAs. Such a course
could include:
● SEP-Context (5 hours)
● SEP-Ethical-Analysis (3 hours)
● SEP-Professional-Ethics (4 hours)
● SEP-IP (2 hours)
● SEP-Privacy (3 hours)
● SEP-Communication (3 hours)
● SEP-Sustainability (2 hours)
● SEP-History (2 hours)
● SEP-Economies (1 hour)
● SEP-Security (3 hours)
● SEP-DEIA (4 hours)
Skill Statement
A student who completes this course should be able to contribute to systemic change by applying
societal and ethical knowledge using relevant underpinnings and frameworks to their work in the
computing profession in a culturally competent manner including contributing to positive developments
in inclusion, equity, diversity, and accessibility in computing.
At some institutions, a dedicated minimal course may be offered covering the CS Core knowledge
units in close coordination with learning outcomes best covered in the context of courses
covering other knowledge areas. Such a course could include:
● SEP-Context (3 hours)
● SEP-Ethical-Analysis (2 hours)
● SEP-Professional-Ethics (2 hours)
● SEP-IP (1 hour)
● SEP-Privacy (2 hours)
● SEP-Communication (2 hours)
● SEP-Sustainability (1 hour)
● SEP-History (1 hour)
● SEP-Security (2 hours)
● SEP-DEIA (2 hours)
Skill Statement
A student who completes this course should be able to apply societal and ethical knowledge to their
work in the computing profession while fostering and contributing to inclusion, equity, diversity, and
accessibility in computing.
Exemplary Materials
● Emanuelle Burton, Judy Goldsmith, Nicholas Mattei, Cory Siler, and Sara-Jo Swiatek. 2023.
Teaching Computer Science Ethics Using Science Fiction. In Proceedings of the 54th ACM
Technical Symposium on Computer Science Education V. 2 (SIGCSE 2023). Association for
Computing Machinery, New York, NY, USA, 1184. https://doi.org/10.1145/3545947.3569618
● Randy Connolly. 2020. Why computing belongs within the social sciences. Commun. ACM 63, 8
(August 2020), 54–59. https://doi.org/10.1145/3383444
● Casey Fiesler. Tech Ethics Curricula: A Collection of Syllabi Used to Teach Ethics in Technology Across Many Universities. https://cfiesler.medium.com/tech-ethics-curricula-a-collection-of-syllabi-3eedfb76be18
● Casey Fiesler. Tech Ethics Readings: A Spreadsheet of Readings Used to Teach Ethics in Technology.
● Stanford Embedded EthiCS, Embedding Ethics in Computer Science.
https://embeddedethics.stanford.edu/
● Jeremy Weinstein, Rob Reich, and Mehran Sahami. System Error: Where Big Tech Went Wrong and How We Can Reboot. Hodder Paperbacks, 2023.
● R. Baecker. Computers in Society: Modern Perspectives. Oxford University Press, 2019.
● Embedded EthiCS @ Harvard: bringing ethical reasoning into the computer science curriculum.
https://embeddedethics.seas.harvard.edu/about
Committee
Members:
● Chris Stephenson, Google, Portland, OR, USA
● MaryAnne Egan, Siena College, Loudonville, NY, USA
● Catherine Mooney, University College Dublin, Dublin, Ireland
● Fay Cobb Payton, North Carolina State University, Raleigh, NC, USA
● Keith Quille, Technological University of Dublin, Dublin, Ireland
● Mehran Sahami, Stanford University, Stanford, CA, USA
● Mark Scanlon, University College Dublin, Dublin, Ireland
● Karren Shorofsky, University of San Francisco School of Law, San Francisco, CA, USA
● Andreas Stefik, University of Nevada, Las Vegas, Las Vegas, NV, USA
● Ellen Walker, Hiram College, Cleveland, OH, USA
Systems Fundamentals (SF)
Preamble
A computer system is a set of hardware and software infrastructures upon which applications are constructed. Computer systems have become a pillar of people’s daily lives. As such, it is essential for students to learn how computer systems work, to grasp the skills needed to use and design such systems, and to understand the fundamental rationale and principles behind them. This knowledge equips students with the competence necessary for a career related to computer science.
In the curriculum of computer science, the study of computer systems typically spans multiple
knowledge areas, including, but not limited to, operating systems, parallel and distributed systems,
communications networks, computer architecture and organization, and software engineering. The
Systems Fundamentals knowledge area, as suggested by its name, focuses on the fundamental
concepts and design principles in computer systems that are shared by these courses within their
respective cores. The goal of this knowledge area is to present an integrative view of these
fundamental concepts and design principles in a unified albeit simplified fashion, providing a common
foundation for the different specialized mechanisms and policies appropriate to the particular domain
area. Specifically, the fundamental concepts in this knowledge area include an overview of computer
systems, basic concepts such as state and state transition, resource allocation and scheduling, and so
on. Moreover, this knowledge area introduces basic design principles to improve the reliability,
availability, efficiency, and security of computer systems.
10. Deprecated the Parallelism knowledge unit and moved parts of its topic to the Basic Concepts
knowledge unit;
11. Renamed the Reliability through Redundancy knowledge unit to System Reliability;
12. Added the Society, Ethics, and the Profession knowledge unit.
Core Hours
Knowledge Unit CS Core KA Core
Basic Concepts 4 0
Resource Management 1 1
System Performance 2 2
Performance Evaluation 2 2
System Reliability 2 1
System Security 2 1
System Design 2 1
Total 18 8
Knowledge Units
CS Core:
1. Describe the basic building blocks of computers and their role in the historical development of
computer architecture.
2. Design a simple logic circuit using the fundamental building blocks of logic design to solve a simple
problem (e.g., adder).
3. Describe how computing systems are constructed of layers upon layers, based on separation of
concerns, with well-defined interfaces, hiding details of low layers from the higher layers.
4. Describe that hardware, OS, VM, and application are additional layers of interpretation/processing.
5. Describe the mechanisms of how errors are detected, signaled back, and handled through the
layers.
6. Construct a simple program (e.g., a TCP client/server) using methods of layering, error detection and recovery, and reflection of error status across layers (see the sketch following these outcomes).
7. Identify bugs in a layered program by using tools for program tracing, single stepping, and
debugging.
8. Understand the concept of strong vs. weak scaling, i.e., how performance is affected by the scale of the problem vs. the scale of resources to solve the problem. This can be motivated by simple, real-
world examples.
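As a sketch of outcome 6, the following self-contained Python program layers a trivial echo application over TCP on the loopback interface; transport-layer failures surface to the application as exceptions, illustrating error status reflected across layers. The message and port choices are arbitrary.

import socket, threading

def echo_server(listener):
    """Server-side application layer: echo one message, then close."""
    conn, _ = listener.accept()
    with conn:
        conn.sendall(conn.recv(1024))  # echo what the transport layer delivered

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

# Client side: lower-layer errors (refused connection, timeout) are reflected
# upward as OSError, which the application layer can catch and handle.
try:
    with socket.create_connection(("127.0.0.1", port), timeout=2) as client:
        client.sendall(b"hello, layers")
        assert client.recv(1024) == b"hello, layers"
except OSError as exc:
    print("transport-layer failure reported to the application:", exc)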
KA Core:
3. Advantages and disadvantages of common scheduling algorithms. (See also: OS-Scheduling)
Illustrative Learning Outcomes:
KA Core:
4. Describe the pros and cons of common scheduling algorithms.
KA Core:
5. The formula for average memory access time (a worked example follows this list). (See also: AR-Memory)
6. Rationale of virtualization and isolation: protection and predictable performance. (See also: OS-
Virtualization)
7. Levels of indirection, illustrated by virtual memory for managing physical memory resources. (See
also: OS-Virtualization)
8. Methods for implementing virtual memory and virtual machines. (See also: OS-Virtualization)
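For reference, the formula behind topic 5 is: average memory access time (AMAT) = hit time + miss rate x miss penalty. A short worked example in Python, with illustrative cache figures:

hit_time_ns, miss_rate, miss_penalty_ns = 1.0, 0.02, 100.0  # illustrative figures
amat = hit_time_ns + miss_rate * miss_penalty_ns            # 1 + 0.02 * 100 = 3.0 ns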
Illustrative Learning Outcomes:
KA Core:
4. Explain why it is important to isolate and protect the execution of individual programs and
environments that share common underlying resources.
5. Describe how the concept of indirection can create the illusion of a dedicated machine and its
resources even when physically shared among multiple programs and environments.
6. Evaluate the performance of two application instances running on separate virtual machines and
determine the effect of performance isolation.
KA Core:
7. Analytical tools to guide quantitative evaluation
8. Understanding layered systems, workloads, and platforms, their implications for performance, and
the challenges they represent for evaluation
9. Microbenchmark pitfalls
Illustrative Learning Outcomes:
CS Core:
1. Explain how the components of system architecture contribute to improving its performance.
2. Explain the circumstances in which a given system performance metric is useful.
3. Explain the usage and inadequacies of benchmarks as a measure of system performance.
4. Describe Amdahl’s law and discuss its limitations (see the sketch following these outcomes).
5. Apply limit studies or simple calculations to produce order-of-magnitude estimates for a given
performance metric in a given context.
6. Apply software tools to profile and measure program performance.
KA Core:
7. Design and conduct a performance-oriented experiment on a common system (e.g., an OS or Spark).
8. Design a performance experiment on a layered system to determine the effect of a system
parameter on system performance.
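Amdahl’s law, referenced in outcome 4, states that if a fraction p of the work can be sped up by a factor s, the overall speedup is 1 / ((1 - p) + p / s). A short Python sketch with illustrative figures:

def amdahl_speedup(p, s):
    """Overall speedup when fraction p of the work is sped up by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

print(amdahl_speedup(0.95, 8))    # about 5.9x on 8 processors
print(amdahl_speedup(0.95, 1e9))  # approaches 1 / 0.05 = 20x: the serial fraction
                                  # caps speedup, a key limitation of parallelism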
KA Core:
4. Other approaches to reliability (e.g., journaling). (See also: OS-Faults, NC-Reliability, SE-Reliability)
Illustrative Learning Outcomes:
KA Core:
5. Compare different error detection and correction methods for their data overhead, implementation
complexity, and relative execution time for encoding, detecting, and correcting errors.
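As a minimal instance of the comparison in the outcome above, a single even-parity bit detects any one-bit error at a cost of one bit of overhead, but can neither locate the error nor detect an even number of bit flips. The Python sketch below demonstrates both properties.

data = [1, 0, 1, 1, 0, 1, 0, 0]
check = sum(data) % 2              # even-parity bit: one bit of data overhead

received = data.copy()
received[3] ^= 1                   # single-bit error in transit
assert sum(received) % 2 != check  # detected, but not locatable or correctable

received[5] ^= 1                   # a second flip restores the parity...
assert sum(received) % 2 == check  # ...so the double error goes undetected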
SF-Security: System Security
CS Core:
1. Common system security issues (e.g., viruses, denial-of-service attacks and eavesdropping). (See
also: OS-Protection, NC-Security, SEC-Foundations, SEC-Engineering)
2. Countermeasures (See also: OS-Principles, OS-Protection, NC-Security)
a. Cryptography (See also: SEC-Crypto)
b. Security architecture (See also: SEC-Engineering)
KA Core:
3. Representative countermeasure systems
a. Intrusion detection systems, firewalls. (See also: NC-Security)
b. Antivirus systems
Illustrative Learning Outcomes:
KA Core:
3. Describe representative countermeasure systems.
SF-Design: System Design
KA Core:
2. Designs of representative systems (e.g., Apache web server, Spark and Linux).
Illustrative Learning Outcomes:
KA Core:
3. Describe the design of some representative systems.
SF-SEP: Society, Ethics, and the Profession
Illustrative Learning Outcomes:
KA Core:
1. Describe the intellectual property rights of computer systems.
2. List representative software licenses and compare their differences.
3. List representative computer crimes.
Professional Dispositions
● Meticulous: Students must pay attention to details of different perspectives when learning about
and evaluating systems.
● Adaptive: Students must be flexible and adaptive when designing systems. Different systems have
different requirements, constraints and working scenarios. As such, they require different designs.
Students must be able to make appropriate design decisions correspondingly.
Mathematics Requirements
Required:
● Discrete Mathematics (See also: MSF-Discrete):
o Sets and relations
o Basic graph theory
o Basic logic
● Linear Algebra (See also: MSF-Linear):
o Basic matrix operations
● Probability and Statistics (See also: MSF-Probability, MSF-Statistics)
o Random variable
o Bayes theorem
o Expectation and variance
o Cumulative distribution function and probability density function
Desirable:
● Basic queueing theory
● Basic stochastic process
● SF-Reliability (4 hours)
● SF-Security (5 hours)
● SF-SEP (1 hour)
● SF-Design (6 hours)
Prerequisites:
● Sets and relations, basic graph theory and basic logic from Discrete Mathematics (See also:
MSF-Discrete)
● Basic matrix operations from Linear Algebra (See also: MSF-Linear)
● Random variable, Bayes theorem, expectation and variance, cumulative distribution function
and probability density function from Probability and Statistics (See also: MSF-Probability, MSF-
Statistics)
Course objectives: Students should be able to (1) understand the fundamental concepts in computer
systems; (2) understand the key design principles, in terms of performance, reliability and security,
when designing computer systems; (3) deploy and evaluate representative complex systems (e.g.,
MySQL and Spark) based on their documentation, and (4) design and implement simple computer
systems (e.g., an interactive program, a simple web server, and a simple data storage system).
Course objectives: Students should be able to (1) gain a deeper understanding of the key design principles of computer system design, (2) map such key principles to the designs of classic systems (e.g., Linux, SQL and TCP/IP network stack) as well as those of more recent systems (e.g., Hadoop, Spark and distributed storage systems), and (3) design and implement more complex computer
systems (e.g., a file system and a high-performance web server).
Committee
Members:
● Doug Lea, State University of New York at Oswego, Oswego, NY, USA
● Monica D. Anderson, University of Alabama, Tuscaloosa, AL, USA
● Matthias Hauswirth, University of Lugano, Lugano, Switzerland
● Ennan Zhai, Alibaba Group, Hangzhou, China
● Yutong Liu, Shanghai JiaoTong University, Shanghai, China
Contributors:
● Michael S. Kirkpatrick, James Madison University, Harrisonburg, VA, USA
● Linghe Kong, Shanghai JiaoTong University, Shanghai, China
Specialized Platform Development (SPD)
Preamble
The Specialized Platform Development (SPD) Knowledge Area (KA) concerns the creation of software targeting non-traditional hardware platforms. Developing for each specialized platform, for example, robots, mobile systems, web-based systems, and embedded systems, typically involves unique considerations.
Societal and industry needs have created a high demand for developers on specialized platforms, such
as mobile applications, web platforms, robotic platforms, and embedded systems. Some unique
professional abilities relevant to this KA include:
● Creating applications that provide a consistent user experience across various devices, screen
sizes, and operating systems.
● Developing application programming interfaces (APIs) to support the functionality of each
specialized platform.
● Managing challenges related to resource constraints such as computation, memory, storage,
and networking and communication.
● Applying cross-cutting concerns such as optimization, security, better development practices,
and others.
Knowledge Area Name Revision: The knowledge area name has been changed to reflect the
specialized development platforms which serve as the target for software development.
Expansion of Computer Science Core Hours: Reflecting the increased deployment of specialized
hardware platforms, the number of CS Core hours has been increased.
Revised and Introduced Knowledge Units: Reflecting modern computing systems, knowledge units
in Robotics, Embedded Systems, and Society, Ethics, and the Profession (SEP) have been introduced.
Other changes include: (1) renaming the Introduction knowledge unit to Common Aspects/Shared
Concerns, and (2) renaming Industrial Platforms to Robot Platforms.
Core Hours
Web Platforms: 5 + (1 shared with HCI) hours
Total: 4 CS Core hours, 68 KA Core hours
Note: The CS Core hours total includes 1 hour shared with SE.
Knowledge Units
1. List the constraints of mobile programming.
2. List the characteristics of scripting languages.
3. Describe the three-tier model of web programming.
4. Describe how the state is maintained in web programming.
Non-core:
6. Analyzing requirements for web applications.
7. Computing services (See also: DM-NoSQL)
a. Cloud Hosting.
b. Scalability (e.g., Autoscaling, Clusters).
c. Cost estimation for services.
8. Data management: (See also: DM-Core)
a. Data residency: where the data is located and what paths can be taken to access it.
b. Data integrity: guaranteeing data is accessible and that data is deleted when required.
9. Architecture
a. Monoliths vs Microservices.
b. Micro-frontends.
c. Event-Driven vs RESTful architectures: advantages and disadvantages.
d. Serverless, cloud computing on demand.
10. Storage solutions: (See also: DM-Relational, DM-NoSQL)
a. Relational Databases.
b. NoSQL databases.
SPD-Mobile: Mobile Platforms
KA Core:
1. Development with:
a. Mobile programming languages.
b. Mobile programming environments.
2. Mobile platform constraints:
a. User interface design. (See also: HCI-User)
b. Security.
3. Access:
a. Accessing data through APIs. (See also: DM-Querying)
b. Designing API endpoints for mobile apps: pitfalls and design considerations.
c. Network and the web interfaces. (See also: NC-Fundamentals, DM-Modeling)
Non-core:
4. Development:
a. Native versus cross-platform development.
b. Software design/architecture patterns for mobile applications. (See also: SE-Design)
5. Mobile platform constraints:
a. Responsive user interface design. (See also: HCI-Accessibility)
b. Heterogeneity and mobility of devices.
c. Differences in user experiences (e.g., between mobile and web-based applications).
d. Power and performance tradeoff.
6. Mobile computing affordances:
a. Location-aware applications.
b. Sensor-driven computing (e.g., gyroscope, accelerometer, health data from a watch).
c. Telephony and instant messaging.
d. Augmented reality. (See also: GIT-Immersion)
7. Specification and testing. (See also: SDF-Practices, SE-Validation)
8. Asynchronous computing: (See also: PDC-Algorithms)
a. Difference from traditional synchronous programming.
b. Handling success via callbacks.
c. Handling errors asynchronously.
d. Testing asynchronous code and typical problems in testing.
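To make topics 8b-8d concrete, here is a minimal Python sketch (all names are hypothetical) contrasting callback-based success/error handling with the async/await style:

    import asyncio

    # Callback style: the caller supplies separate success and error handlers.
    def fetch_cb(url, on_success, on_error):
        if not url.startswith("https://"):
            on_error(ValueError("insecure URL"))   # error path via callback
            return
        on_success(f"payload from {url}")          # success path via callback

    # async/await style: success is the return value, errors are exceptions.
    async def fetch(url):
        await asyncio.sleep(0.01)                  # simulated network latency
        if not url.startswith("https://"):
            raise ValueError("insecure URL")
        return f"payload from {url}"

    fetch_cb("https://example.org", print, lambda e: print("error:", e))
    print(asyncio.run(fetch("https://example.org")))

Testing asynchronous code (topic 8d) typically wraps such coroutines in an event loop per test, which is one source of the typical problems the topic names.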
SPD-Robot: Robot Platforms
KA Core:
1. Types of robotic platforms and devices. (See also: AI-Robotics)
2. Sensors, embedded computation, and effectors (actuators). (See also: GIT-Physical)
3. Robot-specific languages and libraries. (See also: AI-Robotics)
4. Robotic software architecture (e.g., using the Robot Operating System (ROS)).
5. Robotic platform constraints and design considerations. (See also: AI-Robotics)
6. Interconnections with physical or simulated systems. (See also: GIT-Physical, GIT-Simulation)
7. Robotic Algorithms (See also: AI-Robotics, GIT-Animation)
a. Forward kinematics.
b. Inverse kinematics.
c. Dynamics.
d. Navigation and path planning.
e. Grasping and manipulation.
8. Safety and interaction considerations. (See also: SEP-Professional-Ethics, SEP-Context)
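As one illustration of topic 7a, the forward kinematics of a planar two-link arm follows directly from the joint angles; this Python sketch is purely instructional and not tied to any particular robot platform (the link lengths l1 and l2 are made-up defaults):

    from math import cos, sin

    def forward_kinematics(theta1, theta2, l1=1.0, l2=0.8):
        """End-effector (x, y) of a planar 2-link arm, angles in radians."""
        x = l1 * cos(theta1) + l2 * cos(theta1 + theta2)
        y = l1 * sin(theta1) + l2 * sin(theta1 + theta2)
        return x, y

    print(forward_kinematics(0.3, 0.5))  # pose for joint angles 0.3 and 0.5 rad

Inverse kinematics (topic 7b) runs the other way, solving for the joint angles that reach a desired (x, y), and generally has zero, one, or two solutions for this arm.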
SPD-Embedded: Embedded Platforms
f. Energy efficiency.
g. Loosely timed coding and synchronization.
h. Software adapters.
4. Embedded programming.
5. Hard real-time systems vs soft real-time systems: (See also: OS-Real-time)
a. Timeliness.
b. Time synchronization/scheduling.
c. Prioritization.
d. Latency.
e. Compute jitter.
6. Real-time resource management.
7. Memory management:
a. Mapping programming construct (variable) to a memory location. (See also: AR-Memory)
b. Shared memory. (See also: OS-Memory)
c. Manual memory management.
d. Garbage collection. (See also: FPL-Translation)
8. Safety considerations and safety analysis. (See also: SEP-Context, SEP-Professional-Ethics)
9. Sensors and actuators.
10. Analysis and verification.
11. Application design.
SPD-Game: Game Platforms
a. Vocabulary (e.g., game definitions; mechanics-dynamics-aesthetics model; industry
terminology; experience design; models of experience and emotion).
b. Design Thinking and User-Centered Experience Design (e.g., methods of designing games;
iteration, incrementing, and the double-diamond; phases of pre- and post-production; quality
assurance, including alpha and beta testing; stakeholder and customer involvement;
community management). (See also: SE-Design)
c. Genres (e.g., adventure; walking simulator; first-person shooter; real-time strategy;
multiplayer online battle arena (MOBA); role-playing game (RPG)).
d. Audiences and Player Taxonomies (e.g., people who play games; diversity and broadening
participation; pleasures, player types, and preferences; Bartle, Yee). (See also: HCI-User)
e. Proliferation of digital game technologies to domains beyond entertainment (e.g., Education
and Training; Serious Games; Virtual Production; eSports; Gamification; Immersive
Experience Design; Creative Industry Practice; Artistic Practice; Procedural Rhetoric). (See
also: AI-SEP)
SPD-SEP/Mobile
Non-core:
1. Privacy and data protection.
2. Accessibility in mobile design.
3. Security and cybersecurity.
4. Social impacts of mobile technology.
5. Ethical use of AI and algorithms.
SPD-SEP/Web
Non-core:
1. Privacy concerns in web applications.
2. Designing for inclusivity and accessibility.
3. Ethical use of AI in web applications.
4. Sustainable app development and server hosting.
5. Avoiding spam or intrusive notifications.
6. Addressing cyberbullying and harassment.
7. Promoting positive online communities.
8. Monetization and advertising.
9. Ethical use of gamification.
SPD-SEP/Game
Non-core:
1. Intellectual Property Rights in Creative Industries:
a. Intellectual Property Ownership: copyright, trademark; design right, patent, trade secret, civil
versus criminal law; international agreements; procedural content generation and the
implications of generative artificial intelligence.
b. Licensing: Usage and fair usage exceptions; open-source license agreements; proprietary
and bespoke licensing; enforcement.
2. Fair Access to Play:
a. Game Interface Usability: user requirements, affordances, ergonomic design, user research,
experience measurement, and heuristic evaluation methods for games.
b. Game Interface Accessibility: forms of impairment and disability; means to facilitate game
access; universal design; legislated requirements for game platforms; compliance
evaluation; challenging game mechanics and access.
3. Game-Related Health and Safety:
a. Injuries in Play: ways of mitigating common upper body injuries, such as repetitive strain
injury; exercise psychology and physiotherapy in eSports.
b. Risk Assessment for Events and Manufacturing: control of substances hazardous to health
(COSHH); fire safety; electrical and electronics safety; risk assessment for games and game
events; risk assessment for manufacturing.
c. Mental Health: motivation to play; gamification and gameful design; game psychology—
internet gaming disorder.
4. Platform Hardware Supply Chain and Sustainability:
a. Platform Lifecycle: platform composition—materials, assembly; mineral excavation and
processing; power usage; recycling; planned obsolescence.
b. Modern Slavery: supply chains; forced labor and civil rights; working conditions; detection
and remission; certification bodies and charitable endeavors.
5. Representation in the Media and Industry:
a. Inclusion: identity and identification; Inclusion of a broad range of characters for diverse
audiences; media representation and its effects; media literacy; content analysis;
stereotyping; sexualization.
b. Equality: histories and controversies, such as gamergate, quality of life in the industry,
professional discourse and conduct in business contexts, pathways to game development
careers, social mobility, the experience of developers from different backgrounds and
identities, gender, and technology.
SPD-SEP/Robotics
Non-core:
1. Fairness, transparency, and accountability in robotic algorithms.
2. Mitigating biases in robot decision-making.
3. Public safety in shared spaces with robots.
4. Compliance with data protection laws.
5. Patient consent and trust in medical robots.
SPD-SEP/Interactive
Non-core:
1. Ethical guidelines when using AI models to assist in journalism and content creation.
2. Accountability for AI-generated outputs.
3. Behavior among prompt programmers and AI developers.
4. Trust with the public when using AI models.
Professional Dispositions
● Self-Directed: Students should be able to learn new platforms and languages with a growth-
oriented mindset, thrive in dynamic environments, and continually enhance their skills.
● Inventive: Students should demonstrate excellence in designing software architecture within
unconventional constraints, emphasizing adaptability and creative problem-solving for innovative
solutions.
● Adaptable: Students should adapt to diverse challenges, showing resilience, open-mindedness,
and a proactive approach to changing requirements and constraints.
Mathematics Requirements
Required:
● MSF-Discrete
Desirable:
● MSF-Calculus
● MSF-Linear
● MSF-Statistics
● FPL-Scripting (2 hours)
Course objectives: Students should be able to grasp common aspects of platform development,
acquire foundational knowledge of web development, and attain proficiency in web techniques. They
will apply comprehensive mobile development skills, explore challenges in robotics platforms, and
develop expertise in embedded systems development, along with skills in game development and in
creating interactive platforms. Students will analyze the societal, ethical, and professional
implications of platform development, fostering a well-rounded understanding of the field within a
concise curriculum.
Course objectives: Students should be able to design, develop, and deploy cross-platform mobile
applications using languages and frameworks such as Java, Kotlin, Swift, or React Native. Proficiency in implementing user
experience best practices, exploring cross-platform development tools, and utilizing platform-specific
APIs for seamless integration is emphasized. The course covers security vulnerability identification,
testing methodologies, and distribution/versioning of mobile applications. Students gain insights into
user behavior and application performance through analytics tools. Additionally, they learn version
control, release management, and ethical considerations relevant to mobile development, providing a
well-rounded skill set for successful and responsible mobile application development across diverse
platforms.
Course objectives: Students should be able to gain expertise in designing, developing, and deploying
modern web applications. The curriculum covers key concepts, ensuring proficiency in HTML, CSS,
and JavaScript for responsive and visually appealing pages. Students explore and implement frontend
frameworks (e.g., React, Angular) for efficient development, understand server-side languages (e.g.,
Node.js, Python) for dynamic applications, and design effective architectures prioritizing scalability and
security. They learn version control (e.g., Git), integrate APIs for enhanced functionality, implement
responsive design, optimize for performance, and ensure security through best practices. Testing,
debugging, accessibility, deployment, and staying current with industry trends are also emphasized.
Course objectives: Students should be able to master designing, developing, and deploying
interactive games. The curriculum covers fundamental game design principles, proficiency in languages
like C++, C#, or Python, and utilization of popular engines such as Unity or Unreal. Students gain 3D
modeling and animation skills, implement physics and simulations for realism, and create AI algorithms
for intelligent non-player characters. They design multiplatform games, optimize UI/UX for engagement,
apply game-specific testing and debugging techniques, integrate audio effectively, and explore industry
monetization models. The course emphasizes ethical considerations, ensuring students analyze and
address content, diversity, and inclusivity in game development.
Committee
Chair: Christian Servin, El Paso Community College, El Paso, TX, USA
Members:
● Sherif G. Aly, The American University in Cairo, Cairo, Egypt
● Yoonsik Cheon, The University of Texas at El Paso, El Paso, TX, USA
● Eric Eaton, University of Pennsylvania, Philadelphia, PA, USA
● Claudia L. Guevara, Jochen Schweizer mydays Holding GmbH, Munich, Germany
● Larry Heimann, Carnegie Mellon University, Pittsburgh, PA, USA
● Amruth N. Kumar, Ramapo College of New Jersey, Mahwah, NJ, USA
● R. Tyler Pirtle, Google, USA
● Michael James Scott, Falmouth University, Falmouth, Cornwall, UK
Contributors:
● Sean R. Piotrowski, Rider University, Lawrenceville, NJ, USA
● Mark O’Neil, Blackboard Inc., Newport, NH, USA
● John DiGennaro, Qwickly, Cleveland, OH, USA
● Rory K. Summerley, London South Bank University, London, England, UK
Core Topics Table
In the following 17 tables, CS and KA core topics have been listed, one table per knowledge area. For
each topic, desired skill levels have been identified and used to estimate the time needed for the
instruction of CS Core and KA Core topics. The skill levels should be treated as recommended, not
prescriptive. The time needed to cover CS Core and KA Core topics is expressed in terms of
instructional hours. Instructional hours are hours spent in the classroom imparting knowledge
regardless of the pedagogy used. Students are expected to spend additional time after class practicing
related skills and exercising professional dispositions.
For convenience, the tables have been listed under three competency areas: Software, Systems and
Applications. The tables on Society, Ethics and the Profession (SEP) and Mathematical and Statistical
Foundations (MSF) have been listed last as crosscutting topics that apply to all the competency areas.
The core topics in Software Development Fundamentals (SDF) and Algorithmic Foundations (AL)
typically constitute the introductory course sequence in computer science and have been listed first.
AL Algorithmic Foundations 5 32 32
SE Software Engineering 9 6 21
Total 102 72
statements, basic I/O
4. Key modularity constructs such as functions
and related concepts like parameter passing,
scope, abstraction, data encapsulation, etc.
5. Input and output using files and APIs
6. Structured data types available in the chosen
programming language like sequences,
associative containers, others and when and
how to use them
7. Libraries and frameworks provided by the
language (when/where applicable)
8. Recursion
AL: Algorithmic Foundations
AL-Foundational 12b. Sorting in O(n log n) (e.g., Quick, Merge, Tim) Apply CS 1
AL-Complexity 2b iv. Foundational complexity classes: Log Linear Evaluate
AL-Strategies 1c. Divide-and-Conquer Explain
AL-Foundational 7. Hash Tables / Maps Explain CS 1
7a. Collision resolution and complexity Explain
AL-Foundational 1. Abstract Data Types and Operations Apply
AL-Complexity 2b i. Foundational complexity classes: Constant Explain
AL-Strategies 1f. Time vs Space tradeoff Explain 1
AL-Complexity 1. Complexity Analysis Framework Explain CS 1
2. Asymptotic Complexity Analysis Explain
2a. Big O, Big Omega, and Big Theta
2b. Foundational complexity classes demonstrated by AL-Foundational algorithms (with complexity): Constant, Logarithmic, Linear, Log Linear, Quadratic, and Cubic Evaluate 1
AL-Complexity 4. Tractability and Intractability Explain 2
4a. P, NP, and NP-Complete complexity classes
4b. NP-Complete Problems (e.g., SAT, Knapsack, TSP)
4c. Reductions
AL-Strategies 1a. Paradigms: Exhaustive brute force Explain 1
1e iv. Dynamic Programming
AL-Complexity 2b vii. Foundational Complexity Classes: Exponential Explain 1
2b viii. Factorial complexity classes: Factorial O(n!) (e.g., All Permutations, Hamiltonian Circuit)
FPL: Foundations of Programming Languages
c. Use of recursion vs loops vs pipelining
(map/reduce).
3. Processing structured data (e.g., trees) via functions
with cases for each data variant:
a. Functions defined over compound data in terms
of functions applied to the constituent pieces.
b. Persistent data structures.
4. Using higher-order functions (taking, returning, and
storing functions).
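The following Python sketch (illustrative only) combines topics 3 and 4: a recursive function over tree-structured data with one case per data variant, and the same traversal re-expressed with a higher-order fold that takes functions as arguments:

    # Sum the leaves of a binary tree: one case per variant
    # (a leaf is an int, an internal node is a pair of subtrees).
    def tree_sum(t):
        if isinstance(t, int):
            return t
        left, right = t
        return tree_sum(left) + tree_sum(right)

    # The same traversal as a higher-order function: callers pass in
    # what to do at leaves and how to combine results at nodes.
    def tree_fold(t, leaf, node):
        if isinstance(t, int):
            return leaf(t)
        left, right = t
        return node(tree_fold(left, leaf, node), tree_fold(right, leaf, node))

    t = ((1, 2), (3, (4, 5)))
    print(tree_sum(t))                                    # 15
    print(tree_fold(t, lambda x: x, lambda a, b: a + b))  # 15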
events.
2. Components of reactive programming: event-source,
event signals, listeners and dispatchers, event
objects, adapters, event-handlers.
3. Stateless and state-transition models of event-based
programming.
4. Canonical uses such as GUIs, mobile devices,
robots, servers.
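A minimal dispatcher makes topic 2 concrete. In the Python sketch below (class and method names are hypothetical), listeners register handlers for named events and the dispatcher forwards each event object to every registered handler:

    class Dispatcher:
        def __init__(self):
            self.listeners = {}

        def subscribe(self, event_name, handler):
            self.listeners.setdefault(event_name, []).append(handler)

        def dispatch(self, event_name, event):
            for handler in self.listeners.get(event_name, []):
                handler(event)   # each handler receives the event object

    d = Dispatcher()
    d.subscribe("click", lambda e: print("clicked at", e))
    d.dispatch("click", (10, 20))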
c. Shared memory.
d. Cobegin-coend.
e. Monitors.
f. Channels.
g. Threads.
h. Guards.
6. Futures. Explain KA 2
7. Language support for data parallelism such as forall,
loop unrolling, map/reduce.
8. Effect of memory-consistency models on language
semantics and correct code generation.
9. Representational State Transfer Application
Programming Interfaces (REST APIs).
10. Technologies and approaches: cloud computing,
high performance computing, quantum computing,
ubiquitous computing
11. Overheads of message-passing
12. Granularity of program for efficient exploitation of
concurrency.
13. Concurrency and other programming paradigms
(e.g., functional).
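Topics 6 and 7 can be demonstrated with Python's standard concurrent.futures module; the sketch below submits one task as a future and maps a function over a data range in parallel:

    from concurrent.futures import ThreadPoolExecutor

    def work(n):
        return n * n

    with ThreadPoolExecutor(max_workers=4) as pool:
        fut = pool.submit(work, 7)                # a future: a placeholder for a result
        squares = list(pool.map(work, range(8)))  # data parallelism over a range
        print(fut.result(), sum(squares))         # result() blocks until ready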
d. Generic parameters and typing.
e. Use of generic libraries such as collections.
f. Comparison with ad hoc polymorphism
(overloading) and subtype polymorphism.
g. Prescriptive vs descriptive polymorphism.
h. Implementation models of polymorphic types.
i. Subtyping.
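Topic d (generic parameters and typing) looks like this with Python's typing module; the Stack class is a hypothetical illustration whose element type is left as a parameter:

    from typing import Generic, List, TypeVar

    T = TypeVar("T")   # the generic type parameter

    class Stack(Generic[T]):
        """A stack whose element type is a parameter, checked by type tools."""
        def __init__(self) -> None:
            self.items: List[T] = []
        def push(self, item: T) -> None:
            self.items.append(item)
        def pop(self) -> T:
            return self.items.pop()

    s: Stack[int] = Stack()   # instantiate the generic at type int
    s.push(1)
    s.push(2)
    print(s.pop() + s.pop())  # 3; s.push("x") would be flagged by a type checker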
d. Translating function/procedure calls and return
from calls, including different parameter-passing
mechanisms using an abstract machine.
6. Memory management:
a. Low level allocation and accessing of high-level
data structures such as basic data types, n-
dimensional array, vector, record, and objects.
b. Return from procedure as automatic
deallocation mechanism for local data elements
in the stack.
c. Manual memory management: allocating, de-
allocating, and reusing heap memory.
d. Automated memory management: garbage
collection as an automated technique using the
notion of reachability.
7. Green computing.
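As a sketch of topic 6d, the mark phase of a tracing collector computes the set of objects reachable from the roots; everything outside that set is garbage. The object graph below is a toy stand-in for a real heap:

    # Objects are ids; edges map each object to the objects it references.
    def reachable(roots, edges):
        marked, stack = set(), list(roots)
        while stack:
            obj = stack.pop()
            if obj not in marked:
                marked.add(obj)
                stack.extend(edges.get(obj, []))   # follow outgoing references
        return marked

    edges = {"a": ["b"], "b": ["c"], "d": ["e"]}   # d and e are unreachable
    live = reachable(roots={"a"}, edges=edges)
    print(sorted(live))        # ['a', 'b', 'c']
    print({"d", "e"} - live)   # garbage a collector may reclaim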
syntax and semantics.
b. Syntax vs semantics.
4. Program as a set of non-ambiguous meaningful
sentences.
5. Basic programming abstractions: constants,
variables, declarations (including nested
declarations), command, expression, assignment,
selection, definite and indefinite iteration, iterators,
function, procedure, modules, exception handling.
6. Mutable vs immutable variables: advantages and
disadvantages of reusing existing memory location
vs advantages of copying and keeping old values;
storing partial computation vs recomputation.
7. Types of variables: static, local, nonlocal, global;
need and issues with nonlocal and global variables.
8. Scope rules: static vs dynamic; visibility of variables;
side-effects.
9. Side-effects induced by nonlocal variables, global
variables and aliased variables.
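Topics 7 through 9 can be illustrated in a few lines of Python; the sketch contrasts a function with a global-variable side effect against a pure function of its arguments:

    total = 0            # global variable, nonlocal to the functions below

    def add_impure(x):
        global total     # side effect: mutates shared state
        total += x
        return total

    def add_pure(acc, x):
        return acc + x   # no side effects: depends only on its arguments

    print(add_impure(5), add_impure(5))    # 5 10: same call, different results
    print(add_pure(0, 5), add_pure(0, 5))  # 5 5: referentially transparent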
4. Software process automation Explain KA 3
5. Design and communication tools (docs, diagrams,
common forms of design diagrams like UML)
6. Tool integration concepts and mechanisms
7. Use of modern IDE facilities - debugging,
refactoring, searching/indexing, etc.
3. Testing objectives
4. Test kinds
5. Stylistic differences between tests and production
code
The core topics in Architecture and Organization (AR) and Operating Systems (OS) are typically
covered early in the curriculum and have been listed first. Data Management (DM) and Security (SEC)
topics listed in this section can be applied to all three competency areas.
OS Operating Systems 14 8 14
PDC Parallel and Distributed Computing 5 9 26
SF Systems Fundamentals 8 18 8
DM Data Management 12 10 26
SEC Security 6 6 35
Total 67 149
AR-Assembly 1. von Neumann machine architecture Explain CS 1
2. Control unit: instruction fetch, decode, and
execution
3. Introduction to SIMD vs MIMD and the Flynn
taxonomy
4. Shared memory multiprocessors/multicore
organization
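Topic 2, the fetch-decode-execute cycle, can be sketched as a loop over a toy instruction set; the two-instruction machine below is purely illustrative:

    # ("LOAD", v) puts v in the accumulator; ("ADD", v) adds v to it.
    def run(program):
        acc, pc = 0, 0                 # accumulator and program counter
        while pc < len(program):
            op, arg = program[pc]      # fetch and decode
            if op == "LOAD":           # execute
                acc = arg
            elif op == "ADD":
                acc += arg
            pc += 1                    # advance to the next instruction
        return acc

    print(run([("LOAD", 2), ("ADD", 3), ("ADD", 5)]))  # 10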
AR-IO 3. I/O devices (e.g., mouse, keyboard, display, camera, sensors, actuators)
4. External storage, physical organization, and drives
5. Bus fundamentals
a. Bus protocols
b. Arbitration
c. Direct-memory access (DMA)
b. In-networking computing
c. Embedded systems for emerging
applications
d. Neuromorphic computing
e. Edge computing devices
4. Packaging and integration solutions such as 3DIC
and Chiplets
5. Machine learning in architecture design
a. AI algorithms for workload analysis
b. Optimization of architecture configurations
for performance and power efficiency
4. Two qubit gates and tensor products. Working with
matrices.
5. The No-Cloning Theorem. The Quantum
Teleportation protocol.
6. Algorithms
a. Simple quantum algorithms: Bernstein-
Vazirani, Simon’s algorithm.
b. Implementing Deutsch-Jozsa with Mach-
Zehnder interferometers.
c. Quantum factoring (Shor’s Algorithm)
d. Quantum search (Grover’s Algorithm)
7. Implementation aspects
a. The physical implementation of qubits
b. Classical control of a Quantum Processing
Unit (QPU)
c. Error mitigation and control. NISQ and
beyond.
8. Emerging Applications
a. Post-quantum encryption
b. The Quantum Internet
c. Adiabatic quantum computation (AQC) and
quantum annealing
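Topic 4 (two-qubit gates and tensor products) can be explored numerically with matrices; this numpy sketch builds a Bell state from a Hadamard gate and a CNOT:

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # single-qubit Hadamard
    zero = np.array([1, 0])                        # the |0> state

    plus = H @ zero                      # H|0> = (|0> + |1>)/sqrt(2)
    two_qubit = np.kron(plus, zero)      # tensor product: |+> with |0>
    print(np.round(two_qubit, 3))        # equal amplitudes on |00> and |10>

    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])
    print(np.round(CNOT @ two_qubit, 3)) # Bell state (|00> + |11>)/sqrt(2)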
3. Concept of system calls and links to application
program interfaces
4. The evolution of the link between hardware
architecture and the operating system functions
5. Protection of resources means protecting some
machine instructions/functions
6. Leveraging interrupts from hardware level: service
routines and implementations
7. Concept of user/system state and protection,
transition to kernel mode using system calls
8. Mechanism for invoking system calls, the
corresponding mode and context switch, and return
from interrupt
9. Performance costs of context switches and
associated cache flushes when performing process
switches in Spectre-mitigated environments
3. Concepts of Symmetric Multi-Processor (SMP)
multiprocessor scheduling and cache coherence
4. Timers (e.g., building many timers out of finite
hardware timers)
5. Fairness and starvation
5. Basic file allocation methods including linked,
allocation table, etc.
6. File system structures comprising file allocation,
including various directory structures and methods
for uniquely identifying files (name, identifier, or
metadata storage location)
7. Allocation/deallocation/storage techniques
(algorithms and data structures) and their impact on
performance and flexibility (i.e., internal and
external fragmentation and compaction)
8. Free space management such as using bit tables
vs linking
9. Implementation of directories to segment and track
file location
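As one sketch of topic 8, a bit-table free-space manager records each block's allocation state in a bitmap and scans for contiguous free runs; the class below is a hypothetical illustration, not any particular file system's algorithm:

    class BitmapAllocator:
        """Bit i is 1 if block i is allocated, 0 if free."""
        def __init__(self, nblocks):
            self.bits = [0] * nblocks

        def allocate(self, count):
            run = 0
            for i, bit in enumerate(self.bits):
                run = run + 1 if bit == 0 else 0
                if run == count:                      # found a free run
                    start = i - count + 1
                    self.bits[start:i + 1] = [1] * count
                    return start
            raise MemoryError("no contiguous free run of that size")

        def free(self, start, count):
            self.bits[start:start + count] = [0] * count

    fs = BitmapAllocator(8)
    print(fs.allocate(3))   # 0
    fs.free(0, 2)
    print(fs.allocate(2))   # 0 again: the freed run is reused
    print(fs.bits)          # [1, 1, 1, 0, 0, 0, 0, 0]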
KU Topic Skill Level Core Hours
2. Distributed application paradigms Evaluate
a. Client/server
b. Peer-to-peer
c. Cloud
d. Edge
e. Fog
d. Cubic
e. QUIC
4. Ethernet Explain
5. Switching Apply
e. Man-in-the-middle
f. Message integrity attacks
g. Routing attacks
h. Traffic analysis
PDC-Programs
a. Declarative parallelism: Determining which
actions may, or must not, be performed in
parallel, at the level of instructions, functions,
closures, composite actions, sessions, tasks,
and services is the main idea underlying PDC
algorithms; failing to do so is the main source
of errors.
b. Defining order: for example, using happens-
before relations or series/parallel directed
acyclic graphs representing programs.
c. Independence: determining when ordering
doesn’t matter, in terms of commutativity,
dependencies, preconditions.
d. Ensuring ordering among otherwise parallel
actions when necessary, including locking, safe
publication, and imposed communication
(sending a message happens before receiving
it); conversely, relaxing ordering when
unnecessary.
2. Distribution
a. Defining places as devices executing actions,
including hardware components and remote
hosts; places may also include external,
uncontrolled devices, hosts, and users.
b. A device may time-slice or otherwise
emulate multiple parallel actions on fewer
processors through scheduling and virtualization.
c. Naming or identifying places (e.g., device IDs)
and actions as parties (e.g., thread IDs).
d. Activities across places may communicate
across media.
3. Starting activities
a. Options that enable actions to be performed
(eventually) at places range from hardwiring to
configuration scripts; also establishing
communication and resource management;
these are expressed differently across
languages and contexts, usually relying on
automated provisioning and management by
platforms.
b. Procedural: Enabling multiple actions to start at
a given program point; for example, starting
new threads, possibly scoping or otherwise
organizing them in hierarchical groups.
c. Reactive: Enabling upon an event by installing
an event handler, with less control of when
actions begin or end.
d. Dependent: Enabling upon completion of
others; for example, sequencing sets of parallel
actions.
e. Granularity: The execution cost of action bodies
should outweigh the overhead of arranging their execution.
4. Execution Properties
a. Nondeterministic execution of unordered
actions.
b. Consistency: Ensuring agreement among
parties about values and predicates when
necessary to avoid races, maintain safety and
atomicity, or arrive at consensus.
c. Fault tolerance: Handling failures in parties or
communication, including (Byzantine)
misbehavior due to untrusted parties and
protocols, when necessary to maintain
progress or availability.
d. Tradeoffs are one focus of evaluation.
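Topics 1d and 4b become concrete in a classic data race: an unsynchronized read-modify-write on a shared counter. The Python sketch below restores a correct ordering with a lock:

    import threading

    counter = 0
    lock = threading.Lock()

    def increment(n):
        global counter
        for _ in range(n):
            with lock:        # one increment at a time: enforces an ordering
                counter += 1  # read-modify-write; racy without the lock

    threads = [threading.Thread(target=increment, args=(100_000,))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)            # always 400000 with the lock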
b. APIs: sockets, architectural and language-
based constructs, and layered constructs such
as RPC (remote procedure call).
c. IO channel APIs.
3. Memory
a. Shared memory architectures in which parties
directly communicate only with memory at
given addresses, with extensions to
heterogeneous memory supporting multiple
memory stores with explicit data transfer
across them; for example, GPU local and
shared memory, Direct Memory Access (DMA).
b. Memory hierarchies: Multiple layers of sharing
domains, scopes and caches; locality: latency,
false-sharing.
c. Consistency properties: Bitwise atomicity limits,
coherence, local ordering.
4. Data Stores
a. Cooperatively maintained data structures
implementing maps and related ADTs.
b. Varieties: Owned, shared, sharded, replicated,
immutable, versioned.
buffering, saturation response (waiting vs dropping), rate control.
g. Multiplexing and demultiplexing many relatively
slow I/O devices or parties; completion-based
and scheduler-based techniques; async-await,
select and polling APIs.
h. Formalization and analysis of channel
communication; for example, CSP.
i. Applications of queuing theory to model and
predict performance.
j. Memory models: sequential and
release/acquire consistency.
k. Memory management; including reclamation of
shared data; reference counts and alternatives.
l. Bulk data placement and transfer; reducing
message traffic and improving locality;
overlapping data transfer and computation;
impact of data layout such as array-of-structs
vs struct-of-arrays.
m. Emulating shared memory: distributed shared
memory, Remote Direct Memory Access
(RDMA).
n. Data store consistency: Atomicity,
linearizability, transactionality, coherence,
causal ordering, conflict resolution, eventual
consistency, blockchains.
o. Faults, partitioning, and partial failures; voting;
protocols such as Paxos and Raft.
p. Design tradeoffs among consistency,
availability, partition (fault) tolerance;
impossibility of meeting all at once.
q. Security and trust: Byzantine failures, proof of
work and alternatives.
a. Completion-based: Barriers, joins, including
termination control.
b. Data-enabled: Queues, producer-consumer
designs.
c. Condition-based: Polling, retrying, backoffs,
helping, suspension, signaling, timeouts
d. Reactive: enabling and triggering
continuations.
3. Atomicity
a. Atomic instructions, enforced local access
orderings.
b. Locks and mutual exclusion; lock granularity.
c. Deadlock avoidance: Ordering, coarsening,
randomized retries; encapsulation via lock
managers.
d. Common errors: Failing to lock or unlock when
necessary, holding locks while invoking
unknown operations.
e. Avoiding locks: replication, read-only,
ownership, and non-blocking constructions.
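Topics 2b and 3b combine in the producer-consumer pattern; in the Python sketch below a bounded queue coordinates the two threads, and the DONE sentinel is a hypothetical convention for signaling completion:

    import queue
    import threading

    q = queue.Queue(maxsize=4)   # bounded buffer: data-enabled coordination
    DONE = object()              # sentinel marking the end of the stream

    def producer():
        for item in range(8):
            q.put(item)          # blocks while the queue is full
        q.put(DONE)

    def consumer():
        while (item := q.get()) is not DONE:
            print("consumed", item)

    threading.Thread(target=producer).start()
    t = threading.Thread(target=consumer)
    t.start()
    t.join()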
j. Resource control using Semaphores and
condition variables
k. Control flow: Scheduling computations, Series-
parallel loops with (possibly elected) leaders,
Pipelines and Streams, nested parallelism.
l. Exceptions and failures. Handlers, detection,
timeouts, fault tolerance, voting.
f. Directed Acyclic Graph (DAG) model analysis
of algorithmic efficiency (work, span, critical
paths).
g. Testing and debugging; tools such as race
detectors, fuzzers, lock dependency checkers,
unit/stress/torture tests, visualizations,
continuous integration, continuous deployment,
and test generators.
h. Measuring and comparing throughput,
overhead, waiting, contention, communication,
data movement, locality, resource usage,
behavior in the presence of excessive numbers
of events, clients, or threads.
i. Application domain specific analyses and
evaluation techniques.
e. Computational Logic: Satisfiability (SAT),
concurrent logic programming.
f. Graphics and computational geometry:
Transforms, rendering, ray-tracing.
g. Resource management: Allocating, placing,
recycling and scheduling processors, memory,
channels, and hosts; exclusive vs shared
resources; static, dynamic and elastic
algorithms; Real-time constraints; Batching,
prioritization, partitioning; decentralization via
work-stealing and related techniques.
h. Services: Implementing web APIs, electronic
currency, transaction systems, multiplayer
games.
6. Combinational Logic, Sequential Logic, Registers,
Memories
7. Computers and Network Protocols as examples of
State Machines
8. Sequential vs parallel processing
9. Application-level sequential processing: single thread
10. Simple application-level parallel processing: request
level (web services/client-server/distributed), single
thread per server, multiple threads with multiple
servers, pipelining
SF- 2. Workloads and representative benchmarks, and
Evaluation methods of collecting and analyzing performance
figures of merit
3. CPI (Cycles per Instruction) equation as a tool for
understanding tradeoffs in the design of instruction
sets, processor pipelines, and memory system
organizations.
4. Amdahl’s Law: the part of the computation that cannot
be sped up limits the effect of the parts that can
5. Order of magnitude analysis (Big O notation)
6. Analysis of slow and fast paths of a system
7. Events and their effect on performance (e.g.,
instruction stalls, cache misses, page faults)
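Topic 4 admits a one-line worked formula: if a fraction p of the work can be sped up by a factor s, the overall speedup is 1 / ((1 - p) + p / s). The short Python sketch below evaluates it:

    def amdahl(p, s):
        """Overall speedup when fraction p of the work is sped up s times."""
        return 1.0 / ((1.0 - p) + p / s)

    print(round(amdahl(0.9, 10), 2))   # 5.26: 90% parallel, 10 processors
    print(round(amdahl(0.9, 1e9), 2))  # ~10.0: the 10% serial part caps the gain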
DM: Data Management
a. Decomposition of a schema; lossless-join and
dependency-preservation properties of a
decomposition
b. Normal forms (BCNF)
c. Denormalization (for efficiency)
DM-Security 5. Need for, and different approaches to Explain KA 2
securing data at rest, in transit, and during
processing
6. Database auditing and its role in digital
forensics
7. Data inferencing and preventing attacks
8. Laws and regulations governing data
security and data privacy
SEC: Security
7. Impact of AI on security and privacy: using AI to
bolster defenses as well as address increased
adversarial capabilities due to AI
SEC-Coding 1. Common vulnerabilities and weaknesses Develop CS 2
2. SQL injection and other injection attacks
3. Cross-site scripting techniques and mitigations
4. Input validation and data sanitization
5. Type safety and type-safe languages
6. Buffer overflows, stack smashing, and integer
overflows
7. Security issues due to race conditions
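Topics 2 and 4 are easiest to see side by side; the Python sketch below uses the standard sqlite3 module with throwaway data to show an injectable query and its parameterized fix:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    name = "x' OR '1'='1"   # hostile input

    # Vulnerable: string concatenation lets the input rewrite the query.
    rows = conn.execute(
        "SELECT secret FROM users WHERE name = '" + name + "'").fetchall()
    print(rows)             # leaks every row

    # Safe: a parameterized query treats the input purely as data.
    rows = conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()
    print(rows)             # []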
9. Private-key cryptosystems: substitution-permutation
networks, linear cryptanalysis, differential
cryptanalysis, DES, AES
10. Public-key cryptosystems: Diffie-Hellman, RSA
11. Data integrity and authentication: hashing, digital
signatures
12. Cryptographic protocols: challenge-response
authentication, zero-knowledge protocols,
commitment, oblivious transfer, secure two- or multi-
party computation, hash functions, secret sharing, and
applications
13. Attacker capabilities: chosen-message attack (for
signatures), birthday attacks, side channel attacks,
fault injection attacks
14. Quantum cryptography; Post Quantum/Quantum
resistant cryptography
15. Blockchain and cryptocurrencies
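Topic 10 can be demonstrated with a toy Diffie-Hellman exchange; the numbers below are deliberately tiny and purely illustrative, far below any secure parameter size:

    p, g = 23, 5               # public prime modulus and generator
    a, b = 6, 15               # private keys of the two parties

    A = pow(g, a, p)           # one party publishes g^a mod p
    B = pow(g, b, p)           # the other publishes g^b mod p

    shared_1 = pow(B, a, p)    # (g^b)^a mod p
    shared_2 = pow(A, b, p)    # (g^a)^b mod p
    print(shared_1, shared_2)  # identical shared secret: 2 2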
integrity attack; interception; phishing; protocol
analysis; privilege abuse; spoofing; and traffic
injection
9. Attestation of software products with respect to their
specification and adaptiveness
10. Design and development of cyber-physical systems
11. Considerations for trustworthy computing, e.g.,
tamper resistant packaging, trusted boot, trusted
kernel, hardware root of trust, software signing and
verification, hardware-based cryptography,
virtualization, and containers
AI Artificial Intelligence 12 12 18
Total 28 N/A
d. The importance of perception and
environmental interactions
e. Learning-based agents
f. Embodied agents
i. sensors, dynamics, effectors
a. Minimax search Apply
b. Alpha-beta pruning
i. Ply cutoff
8. Implementation of A* search
9. Constraint Satisfaction
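Topics a and b fit in a short sketch; the Python function below runs minimax with alpha-beta pruning over a toy game tree, where the tree encoding (lists for internal nodes, numbers for leaf payoffs) is a hypothetical convenience:

    def minimax(node, maximizing, alpha=float("-inf"), beta=float("inf")):
        if isinstance(node, (int, float)):   # leaf: payoff for the maximizer
            return node
        best = float("-inf") if maximizing else float("inf")
        for child in node:
            val = minimax(child, not maximizing, alpha, beta)
            if maximizing:
                best = max(best, val)
                alpha = max(alpha, best)
            else:
                best = min(best, val)
                beta = min(beta, best)
            if beta <= alpha:                # prune: subtree cannot matter
                break
        return best

    tree = [[3, 5], [2, [9, 1]], [6, 4]]
    print(minimax(tree, maximizing=True))    # 4; the [9, 1] subtree is pruned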
3. A simple statistics-based supervised learning method such as linear regression or decision trees Apply, Develop, Evaluate
a. Focus on how they work without going into mathematical or optimization details; enough to understand and use existing implementations correctly
4. The overfitting problem / controlling solution
complexity (regularization, pruning – intuition only)
a. The bias (underfitting) - variance (overfitting)
tradeoff
5. Working with Data
a. Data preprocessing
i. Importance and pitfalls of
preprocessing choices
b. Handling missing values (imputing, flag-as-
missing)
i. Implications of imputing vs flag-as-
missing
c. Encoding categorical variables, encoding real-
valued data
d. Normalization/standardization
e. Emphasis on real data, not textbook examples
6. Representations
a. Hypothesis spaces and complexity
b. Simple basis feature expansion, such as
squaring univariate features
c. Learned feature representations
7. Machine learning evaluation
a. Separation of train, validation, and test sets
b. Performance metrics for classifiers
c. Estimation of test performance on held-out
data
d. Tuning the parameters of a machine learning
model with a validation set
e. Importance of understanding what your model
is actually doing, where its
pitfalls/shortcomings are, and the implications
of its decisions
8. Basic neural networks
a. Fundamentals of understanding how neural
networks work and their training process,
without details of the calculations
b. Basic introduction to generative neural
networks (large language models, etc.)
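Topics 3, 6b, and 7 come together in a minimal end-to-end sketch; the numpy code below fits a least-squares line on synthetic data, holds out a test set, and estimates test error (all constants are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=(100, 1))             # one real-valued feature
    y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 1, 100)   # noisy linear target

    # Separate train and test sets (topic 7a).
    X_train, X_test, y_train, y_test = X[:80], X[80:], y[:80], y[80:]

    # Closed-form least squares on [x, 1]: a simple basis expansion (topic 6b).
    A = np.hstack([X_train, np.ones((80, 1))])
    w, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    print("learned slope and intercept:", np.round(w, 2))   # near (3.0, 2.0)

    # Estimate test performance on held-out data (topic 7c).
    pred = np.hstack([X_test, np.ones((20, 1))]) @ w
    print("test MSE:", round(float(np.mean((pred - y_test) ** 2)), 2))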
b. Privacy
c. Fairness
d. Intellectual property
e. Explainability
i. How to deal with underspecified or ill-
posed problems
b. Data availability/scarcity and cleanliness
i. Basic data cleaning and preprocessing
ii. Data set bias
c. Algorithmic bias
d. Evaluation bias
e. Assessment of societal implications of the
application
5. Additional depth on deployed deep generative models
a. Introduction to how deep image generative
models (e.g. as of 2023, DALL-E, Midjourney,
Stable Diffusion, etc.) work, including
discussion of attention
b. Introduction to how large language models
(e.g. as of 2023, ChatGPT, Bard, etc.) work,
including discussion of attention
c. Idea of foundational models, how to use them,
and the benefits / issues with training them
from big data
6. Analysis and discussion of the societal impact of AI
a. Ethics
b. Fairness
c. Trust / explainability
d. Privacy and usage of training data
e. Human autonomy and
oversight/regulations/legal requirements
f. Sustainability
8. Spatialization
9. Animation
2. Time (motion blur), lens position (focus), and
continuous frequency (color) and their impact on
rendering
3. Shadow mapping
4. Occlusion culling
5. Bidirectional Scattering Distribution function (BSDF)
theory and microfacets
6. Subsurface scattering
7. Area light sources
8. Hierarchical depth buffering
9. Image-based rendering
10. Non-photorealistic rendering
11. GPU architecture
12. Human visual systems including adaptation to light,
sensitivity to noise, and flicker fusion
10. Applications in medicine, simulation, training, and
visualization
11. Safety in immersive applications
d. needs assessment (techniques for uncovering
needs and gathering requirements - e.g.,
interviews, surveys, ethnographic and contextual
enquiry)
e. journey maps
f. evaluating the design
g. interfacing with stakeholders, as a team
h. risks associated with physical, distributed, hybrid
and virtual teams
3. Physical and cognitive characteristics of the user
a. physical capabilities that inform interaction design
(e.g., color perception, ergonomics)
b. cognitive models that inform interaction design
(e.g., attention, perception and recognition,
movement, memory)
c. topics in social/behavioral psychology (e.g.,
cognitive biases, change blindness)
4. Designing for diverse user populations
a. how differences (e.g., in race, ability, age, gender,
culture, experience, and education) impact user
experiences and needs
b. internationalization, other cultures, and cross-
cultural design
c. designing for users from other cultures
d. cross-cultural design
e. challenges to effective design evaluation (e.g.,
sampling, generalization; disability and disabled
experiences)
f. universal design
5. Collaboration and communication
a. understanding the user in a multi-user context
b. synchronous group communication (e.g., chat
rooms, conferencing, online games)
c. asynchronous group communication (e.g., email,
forums, social networks)
d. social media, social computing, and social network
analysis
e. online collaboration
f. social coordination and online communities
g. avatars, characters, and virtual worlds
c. Safety, security and privacy Evaluate, Develop
d. Harm and disparate impact
2. Ethics in design methods and solutions
a. the role of artificial intelligence
b. responsibilities for considering stakeholder
impact and human factors
c. the role of design to meet user needs.
3. Requirements in design
a. ownership responsibility
b. legal frameworks and compliance requirements
c. consideration beyond immediate user needs
including via iterative reconstruction of problem
analysis and “digital well-being” features
b. software engineering practices that enable
inclusion and accessibility.
8. Technologies
a. examples of accessibility-enabling features,
such as conformance to screen readers
9. Inclusive Design Frameworks
a. creating inclusive processes, such as participatory design
b. designing for larger impact
e. privacy
f. ethics
g. broader impacts
HCI-SEP 1. Universal and user-centered design Explain, Apply, Evaluate, Develop CS Shared with SEP
2. Accountability
3. Accessibility and inclusive design
4. Evaluating the design
5. System design
SPD-Web Apply KA 5
1. Web programming languages (e.g., HTML5,
JavaScript, PHP, CSS)
2. Web platforms, frameworks, or meta-frameworks:
a. Cloud services
b. API, Web Components
3. Software as a Service (SaaS).
4. Web standards such as document object model,
accessibility.
5. Security and Privacy Considerations.
SPD-Embedded
b. Resource constraints, such as memory profiles and deadlines.
c. API for custom architectures:
d. GPU technology. (See also: AR-
Heterogeneity, GIT-Shading)
e. Field Programmable Gate Arrays (FPGA).
f. Cross-platform systems.
2. Embedded Systems:
a. Microcontrollers.
b. Interrupts and feedback.
c. Interrupt handlers in high-level languages.
d. Hard and soft interrupts and trap-exits.
e. Interacting with hardware, actuators, and
sensors.
f. Energy efficiency.
g. Loosely timed coding and synchronization.
h. Software adapters.
3. Embedded programming.
4. Hard real-time systems vs soft real-time systems:
a. Timeliness.
b. Time synchronization/scheduling.
c. Prioritization.
d. Latency.
e. Compute jitter.
5. Real-time resource management.
6. Memory management:
a. Mapping programming construct (variable)
to a memory location.
b. Shared memory.
c. Manual memory management.
d. Garbage collection.
e. Safety considerations and safety analysis.
(See also: SEP-Context, SEP-Professional)
7. Sensors and actuators.
8. Analysis and verification.
9. Application design.
SPD-Game
b. Typical Game Platforms (e.g., Personal
Computer; Home Console; Handheld
Console; Arcade Machine; Interactive
Television; Mobile Phone; Tablet;
Integrated Head-Mounted Display;
Immersive Installations and Simulators;
Internet of Things enabled Devices; CAVE
Systems; Web Browsers; Cloud-based
Streaming Systems).
c. Characteristics and Constraints of Different
Game Platforms (e.g., Features (local
storage, internetworking, peripherals); Run-
time performance (GPU/CPU frequency,
number of cores); Chipsets (physics
processing units, vector co-processors);
Expansion Bandwidth (PCIe); Network
throughput (Ethernet); Memory types and
capacities (DDR/GDDR); Maximum stack depth; Power consumption; Thermal design; Endianness)
d. Typical Sensors, Controllers, and Actuators
(e.g., distinctive control system designs—
peripherals (mouse, keypad, joystick),
game controllers, wearables, interactive
surfaces; electronics and bespoke
hardware; computer vision, inside-out
tracking, and outside-in tracking; IoT-enabled electronics and I/O).
e. eSports Ecosystems (e.g., evolution of
gameplay across platforms; games and
eSports; game events such as LAN/arcade
tournaments and international events such
as the Olympic eSports Series; streamed
media and spectatorship; multimedia
technologies and broadcast management;
professional play; data and machine
learning for coaching and training)
2. Real-time Simulation and Rendering Systems
a. CPU and GPU architectures: (e.g., Flynn’s
taxonomy; parallelization; instruction sets;
standard components—graphics compute
array, graphics memory controller, video
graphics array basic input/output system;
bus interface; power management unit;
video processing unit; display interface).
b. Pipelines for physical simulations and
graphical rendering: (e.g., tile-based,
immediate-mode).
c. Common Contexts for Algorithms, Data
Structures, and Mathematical Functions
(e.g., game loops; spatial partitioning,
viewport culling, and level of detail; collision
detection and resolution; physical
simulation; behavior for intelligent agents;
procedural content generation).
d. Media representations (e.g., I/O, and
computation techniques for virtual worlds:
audio; music; sprites; models and textures;
text; dialogue; multimedia (e.g., olfaction,
tactile).
3. Game Development Tools and Techniques:
a. Programming Languages (e.g., C++; C#;
Lua; Python; JavaScript).
b. Shader Languages (e.g., HLSL, GLSL; ShaderGraph).
c. Graphics Libraries and APIs (e.g., DirectX;
SDL; OpenGL; Metal; Vulkan; WebGL).
d. Common Development Tools and
Environments (e.g., IDEs; Debuggers;
Profilers; Version Control Systems including
those handling binary assets; Development
Kits and Production/Consumer Kits;
Emulators).
4. Game Engines
a. Open Game Engines (e.g., Unreal; Unity;
Godot; CryEngine; Phyre; Source 2;
Pygame and Ren’Py; Phaser; Twine;
SpringRTS)
b. Techniques (e.g., Ideation, Prototyping,
Iterative Design and Implementation,
Compiling Executable Builds, Development
Operations and Quality Assurance—Play
Testing and Technical Testing, Profiling;
Optimization, Porting; Internationalization
and Localization, Networking).
5. Game Design
a. Vocabulary (e.g., game definitions;
mechanics-dynamics-aesthetics model;
industry terminology; experience design;
models of experience and emotion).
b. Design Thinking and User-Centered
Experience Design (e.g., methods of
designing games; iteration, incrementing,
and the double-diamond; phases of pre-
and post-production; quality assurance,
including alpha and beta testing;
stakeholder and customer involvement;
community management).
c. Genres (e.g., adventure; walking simulator; first-person shooter; real-time strategy; multiplayer online battle arena (MOBA); role-playing game (RPG)).
d. Audiences and Player Taxonomies (e.g., people who play games; diversity and broadening participation; pleasures, player types, and preferences; Bartle, Yee).
e. Proliferation of digital game technologies to
domains beyond entertainment (e.g.,
Education and Training; Serious Games;
Virtual Production; eSports; Gamification;
Immersive Experience Design; Creative
Industry Practice; Artistic Practice;
Procedural Rhetoric).
The core topics in Society, Ethics, and the Profession (SEP) and Mathematical and Statistical
Foundations (MSF) may be covered across the curriculum or in dedicated courses and benefit all the
competency areas.
Total 73 159
SEP: Society, Ethics, and the Profession
3. Recognition of the role culture plays in our
understanding, adoption, design, and use of
computing technology
4. Why ethics is important in computing, and how Explain
ethics is similar to, and different from, laws and
social norms
7. Strategies for recognizing and reporting designs, systems, software, and professional conduct (or their outcomes) that may violate law or professional codes of ethics Apply
databases, data warehouses, surveillance
systems, cloud computing, and artificial
intelligence
2. Conceptions of anonymity, pseudonymity, and identity Evaluate
3. Technology-based solutions for privacy protection (e.g., end-to-end encryption and differential privacy) Evaluate
4. Civil liberties, privacy rights, and cultural differences Explain
raising ethical concerns or addressing accessibility
issues
SEP-History 3. Age I (Pre-digital): Ancient analog computing Explain KA 1
(Stonehenge, Antikythera mechanism, Salisbury
Cathedral clock, etc.), human-calculated number
tables, Euclid, Lovelace, Babbage, Gödel, Church,
Turing, pre-electronic (electro-mechanical and
mechanical) hardware
4. Age II (Early modern computing): ENIAC, Explain
UNIVAC, Bombes (Bletchley Park and
codebreakers), computer companies (e.g., IBM),
mainframes, etc.
5. Age III (PC era): PCs, modern computer hardware Explain
and software, Moore’s Law
6. Age IV (Internet): Networking, internet Explain
architecture, browsers and their evolution,
standards, born-on-the-internet companies, and
services (e.g., Google, Amazon, Microsoft, etc.),
distributed computing
7. Age V (Mobile & Cloud): Mobile computing and Explain
smartphones, cloud computing and models thereof
(e.g., SaaS), remote servers, security and privacy,
social media
8. Age VI (AI): Decision making systems, Explain
recommender systems, generative AI and other
machine learning driven tools and technologies
2. Social engineering, computing-enabled fraud, Explain
identity theft and recovery from these
3. Cyber terrorism, criminal hacking, and hacktivism Explain
4. Malware, viruses, worms Explain
5. Attacks on critical infrastructure such as electrical Explain
grids and pipelines
6. Non-technical fundamentals of security (e.g., Explain
human engineering, policy, confidentiality)
SEP-DEIA 7. Experts and their practices that reflect the Evaluate KA 2
identities of the classroom and the world through
practical DEIA principles
8. Historic marginalization due to systemic social Explain
mechanisms, technological supremacy and global
infrastructure challenges to diversity, equity,
inclusion and accessibility
9. Cross-cultural differences in, and needs for, Explain
diversity, equity, inclusion and accessibility
KU Topic Skill Level Core Hours
MSF-Discrete 1. Sets, relations, functions, cardinality Apply, Develop, Explain CS/KA 29-40
2. Recursive mathematical definitions
3. Proof techniques (induction, proof by contradiction)
4. Permutations, combinations, counting, pigeonhole principle
5. Modular arithmetic
6. Logic: truth tables, connectives (operators), inference rules, formulas, normal forms, simple predicate logic
7. Graphs: basic definitions
8. Order notation
MSF-Probability 1. Basic notions: sample spaces, events, probability, conditional probability, Bayes' rule CS Core: Apply; KA Core: Apply, Develop, Explain CS/KA 11-40
2. Discrete random variables and distributions
3. Continuous random variables and distributions
4. Expectation, variance, law of large numbers, central limit theorem
5. Conditional distributions and expectation
6. Applications to computing, the difference between probability and statistics (as subjects)
MSF-Statistics 1. Basic definitions and concepts: populations, samples, measures of central tendency, variance Develop CS 10
2. Univariate data: point estimation, confidence intervals
MSF-Statistics 3. Multivariate data: estimation, correlation, regression Apply, Explain KA 30
4. Data transformation: dimension reduction, smoothing
5. Statistical models and algorithms
6. Hypothesis testing
MSF-Linear 1. Vectors: definitions, vector operations, geometric interpretation, angles. Matrices: definition, matrix operations, meaning of Ax=b. Develop CS 5
MSF-Linear 2. Matrices, matrix-vector equation, geometric interpretation, geometric transformations with matrices Apply, Explain KA 35
3. Solving equations, row-reduction
4. Linear independence, span, basis
5. Orthogonality, projection, least-squares, orthogonal bases
6. Linear combinations of polynomials, Bezier curves
7. Eigenvectors and eigenvalues
8. Applications to computer science: PCA, SVD, page-rank, graphics
MSF-Calculus 1. Sequences, series, limits Apply, Develop KA 40
2. Single-variable derivatives: definition, computation rules (chain rule, etc.), derivatives of important functions, applications
3. Single-variable integration: definition, computation rules, integrals of important functions, fundamental theorem of calculus, definite vs indefinite, applications (including in probability)
4. Parametric and polar representations
5. Taylor series
6. Multivariate calculus: partial derivatives, gradient, chain rule, vector-valued functions
7. Optimization: convexity, global vs local minima, gradient descent, constrained optimization and Lagrange multipliers
8. ODEs: definition, Euler method, applications to simulation, Monte Carlo integration
9. CS applications: gradient descent for machine learning, forward and inverse kinematics, applications of calculus to probability
Curricular Packaging
A few curricular packaging options of various sizes are presented here. These can be adapted to local
strengths and needs to create a customized computer science curriculum. In each case, effort should
be made to include all the CS Core topics in required courses in the curriculum. The more KA Core
topics covered, the greater the breadth of the curriculum. The more hours dedicated to KA Core topics,
the greater the depth of the curriculum. Non-core topics add to the richness of the curriculum. In each
curricular model, a capstone course is included to emphasize the importance of an integrative hands-on
experience. It may also serve as the course where CS Core topics not covered elsewhere in the
curriculum can be incorporated.
8 Course Model
This is a minimal course configuration that covers all the CS Core topics. However, it does not leave
much room for exploration.
10 Course Model
1. CS I (SDF, SEP)
2. CS II (SDF, FPL-4, AL-12, SEP)
3. Mathematical and Statistical Foundations (MSF)
4. Data Structures and Algorithms (AL-20, AI, MSF, SEP)
5. Introduction to Computing Systems (SF, OS, AR, NC)
6. Programming Languages (FPL-17, AL, PDC, SEP)
7. Software Engineering (SE, HCI, GIT, PDC, SPD, DM, SEP)
8. One Systems elective:
a. Operating Systems (OS, PDC)
b. Computer Architecture (AR)
c. Parallel and Distributed Computing (PDC)
d. Networking (NC, SEC, SEP)
e. Databases (DM, SEP)
9. One elective from Applications:
a. Artificial Intelligence (AI, MSF, SPD, SEP)
b. Graphics (GIT, HCI, MSF, SEP)
c. Application Security (SEC, SEP)
d. Human-Centered Design (HCI, GIT, SEP)
10. Capstone (SE, SEP)
12 Course Model
1. CS I (SDF, SEP)
2. CS II (SDF, AL-12, DM, SEP)
3. Mathematical and Statistical Foundations (MSF)
4. Algorithms (AL-20, AI, MSF, SEC, SEP)
5. Introduction to Computing Systems (SF, OS, AR, NC)
6. Programming Languages (FPL, AL, PDC, SEP)
7. Software Engineering (SE, HCI, GIT, PDC, SPD, DM, SEP)
8. Two from Systems electives:
a. Operating Systems (OS, PDC)
b. Computer Architecture (AR)
c. Parallel and Distributed Computing (PDC)
d. Networking (NC, SEC, SEP)
e. Databases (DM, SEP)
9. Two electives from Applications:
a. Artificial Intelligence (AI, MSF, SPD, SEP)
b. Graphics (GIT, HCI, MSF, SEP)
c. Application Security (SEC, SEP)
d. Human-Centered Design (HCI, GIT, SEP)
10. Capstone (SE, SEP)
16 Course Model
Three different models are presented here, each with its own benefits.
Model 1:
1. CS I (SDF, SEP)
2. CS II (SDF, AL-12, DM, SEP)
3. Mathematical and Statistical Foundations (MSF)
4. Algorithms (AL-20, SEP)
5. Introduction to Computing Systems (SF, SEP)
6. Programming Languages (FPL, AL, PDC, SEP)
7. Theory of Computation (AL-32, SEP)
8. Software Engineering (SE, HCI, GIT, PDC, SPD, DM, SEP)
9. Operating Systems (OS, PDC, SEP)
10. Computer Architecture (AR, SEP)
11. Parallel and Distributed Computing (PDC, SEP)
12. Networking (NC, SEP)
13. Pick one of:
a. Introduction to Artificial Intelligence (AI, MSF, SEP)
b. Machine Learning (AI, MSF, SEP)
c. Robotics (AI, SPD, SEP)
14. Pick one of:
a. Graphics (GIT, MSF, SEP)
b. Human-Centered Design (GIT, SEP)
c. Animation (GIT, SEP)
d. Virtual Reality (GIT, SEP)
15. Security (SEC, SEP)
16. Capstone (SE, SEP)
Model 2:
1. CS I (SDF, SEP)
2. CS II (SDF, AL, DM, SEP)
3. Mathematical and Statistical Foundations (MSF, AI, DM)
4. Algorithms (AL, MSF, SEP)
5. Introduction to Computing Systems (SF, SEP)
6. Programming Languages (FPL, AL, PDC, SEP)
7. Theory of Computation (AL, SEP)
8. Software Engineering (SE, HCI, GIT, PDC, SPD, DM, SEP)
9. Operating Systems (OS, PDC, SEP)
10. Two electives from:
a. Computer Architecture (AR, SEP)
b. Parallel and Distributed Computing (PDC, SEP)
c. Networking (NC, SEP)
d. Network Security (NC, SEC, SEP)
e. Security (SEC, SEP)
11. Pick three of:
a. Introduction to Artificial Intelligence (AI, MSF, SEP)
b. Machine Learning (AI, MSF, SEP)
c. Deep Learning (AI, MSF, SEP)
d. Robotics (AI, SPD, SEP)
e. Data Science (AI, DM, GIT, MSF)
f. Graphics (GIT, MSF, SEP)
g. Human-Computer interaction (HCI, SEP)
h. Human-Centered Design (GIT, HCI, SEP)
i. Animation (GIT, SEP)
j. Virtual Reality (GIT, SEP)
k. Physical Computing (GIT, SPD, SEP)
12. Society, Ethics and Professionalism (SEP)
13. Capstone (SE, SEP)
Model 3:
1. CS I (SDF, SEP)
2. CS II (SDF, AL, DM, SEP)
3. Mathematical and Statistical Foundations (MSF)
4. Algorithms (AL, AI, MSF, SEC, SEP)
5. Introduction to Computing Systems (SF, OS, AR, NC)
6. Programming Languages (FPL, AL, PDC, SEP)
7. Software Engineering (SE, HCI, GIT, PDC, SPD, DM, SEP)
8. Two from Systems electives:
a. Operating Systems (OS, PDC)
b. Computer Architecture (AR)
c. Parallel and Distributed Computing (PDC)
d. Networking (NC, SEC, SEP)
e. Databases (DM, SEP)
9. Two electives from Applications:
a. Artificial Intelligence (AI, MSF, SPD, SEP)
b. Graphics (GIT, HCI, MSF, SEP)
c. Application Security (SEC, SEP)
d. Human-Centered Design (HCI, GIT, SEP)
10. Three open CS electives
11. Society, Ethics and Professionalism (SEP) course
12. Capstone (SE, SEP)