What Every Engineer Should Know About Python - Raymond J. Madachy
What Every Engineer Should Know
about Python
Engineers across all disciplines can benefit from learning Python. This
powerful programming language enables engineers to enhance their skill
sets and perform more sophisticated work in less time, whether in
engineering analysis, system design and development, integration and
testing, machine learning and other artificial intelligence applications,
project management, or other areas. What Every Engineer Should Know
About Python offers students and practicing engineers a straightforward and
practical introduction to Python for technical programming and broader
uses to enhance productivity. It focuses on the core features of Python most
relevant to engineering tasks, avoids computer science jargon, and
emphasizes writing useful software while effectively leveraging generative
AI.
His research has been funded by diverse agencies across the Department of
Defense, the National Security Agency, NASA, and various companies. His
research interests include systems engineering tool environments for digital
engineering, modeling and simulation of systems and software engineering
processes, including generative AI usage, integrating systems and software
engineering disciplines, system cost modeling, and affordability and
tradespace analysis. He has developed widely used cost estimation tools for
systems and software engineering, and serves as the lead developer of the
open-source Systems Engineering Library (se-lib).
He previously authored Software Process Dynamics and What Every
Engineer Should Know about Modeling and Simulation, and co-authored
Software Cost Estimation with COCOMO II and Software Cost Estimation
Metrics Manual for Defense Systems.
What Every Engineer Should Know
Series Editor
Phillip A. Laplante
Pennsylvania State University
What Every Engineer Should Know about Cyber Security and Digital
Forensics
Joanna F. DeFranco
What Every Engineer Should Know
About Python
Raymond J. Madachy
Designed cover image: © 2025 Raymond J. Madachy
Reasonable efforts have been made to publish reliable data and information, but the author and
publisher cannot assume responsibility for the validity of all materials or the consequences of their
use. The authors and publishers have attempted to trace the copyright holders of all material
reproduced in this publication and apologize to copyright holders if permission to publish in this
form has not been obtained. If any copyright material has not been acknowledged please write and let
us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced,
transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or
hereafter invented, including photocopying, microfilming, and recording, or in any information
storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, access
www.copyright.com or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive,
Danvers, MA 01923, 978-750-8400. For works that are not available on CCC please contact
[email protected]
Trademark notice: Product or corporate names may be trademarks or registered trademarks and are
used only for identification and explanation without intent to infringe.
DOI: 10.1201/9781003331070
Publisher’s Note
This book has been prepared from camera-ready copy provided by the author.
Dedication
This book wouldn't exist without the many past pioneers of computing.
Contents
List of Figures
List of Tables
List of Listings
Foreword
Preface
Chapter 0 Introduction
2.1 NumPy
2.1.1 Array Creation
2.1.2 Array Operations
2.1.3 Array Functions
2.1.4 Statistics and Probability Distributions
2.2 Matplotlib
2.2.1 Plotting Fundamentals
2.2.2 Plot Function
2.2.3 Line and Scatter Plots
2.2.4 Histograms and Distributions
2.2.5 Box Plots
2.2.6 3-Dimensional Plots
2.2.7 Animation
2.3 Pandas
2.3.1 Creating Datasets
2.3.2 File Reading
2.3.3 Dataframe Iteration
2.3.4 Applying Functions and Vectorized Operations
2.4 SciPy
2.4.1 Statistics
2.4.2 Optimization
2.4.3 Hypothesis Testing
2.4.4 Signal Processing
2.5 NetworkX
2.5.1 Graph Types
2.5.2 Graph Creation and Visualization
2.5.3 Graph Layouts
2.5.4 Graph Analysis
2.5.5 Path Computations and Edge Attributes
2.5.6 Customization
2.6 Graphviz
2.6.1 Graph Types
2.6.2 Graph Creation and Visualization
2.6.3 Layout Engines
2.6.4 NetworkX Compatibility
2.6.5 Customization
2.7 Summary
2.8 Exercises
2.9 Advanced Exercises
References
Index
List of Figures
List of Tables
List of Listings
Foreword
Preface
AUDIENCE
ACKNOWLEDGMENTS
This book would not have been possible without the support and assistance
of many people. Many thanks for the peer reviews and helpful ideas from
Dr. Barclay Brown at Rockwell Collins, Dr. Timothy Ferris at Cranfield
University, Dr. Mike Green at Naval Postgraduate School (NPS), PhD
students Ryan Bell and Ryan Longshore at NPS, and Jay Mackey. Dr. Ron
Giachetti initially encouraged me to use Python in our NPS curriculum, and
Dr. Oleg Yakimenko at NPS has been very supportive. The most enduring
conversations were had with Joe Raby at IBM who was a captive audience
while hiking hundreds of miles in the mountains.
Allison Shatkin at CRC Press has been instrumental throughout the
writing process. She was always responsive to help while enduring mid-
course volatility and more than a few delays. I'm indebted for her great
patience and unflagging support.
My wife Nancy endured the writing of yet another book with lots of
patience. She supported me and sacrificed our time together to let me finish
it. Her support was critical, and I am most grateful.
FOR INSTRUCTORS
Needless to say, technology is advancing faster all the time. Not every
recent development could be included before finishing this book. However,
some emerging areas are evident for a next edition. These include using
Python with Large Language Models (LLMs) for generative AI (which the
author is already undertaking), more of a focus on modern web frameworks,
and mobile device applications. It is an exciting time that calls for
continuous learning.
Comments on this book and experiences with Python are of great interest
to the author, and feedback will help in developing the next edition. You are
encouraged to send any ideas, improvement suggestions, new and enhanced
examples, or exercises. They may be incorporated in future editions with
due credit given.
0 Introduction
Table 0.1
Comparison of Python Desktop vs. Cloud-Based IDE Platforms
Aspect        Desktop        Cloud-Based
Requirements  Python, IDE    Web browser, internet connection, cloud account
Deployment    (local deployment diagram)   (cloud deployment diagram)
An IDE allows one to both write and run Python scripts within the same
interface. It typically combines a code editor, an interactive Python console,
and additional window panes, such as for help. The code editor enables one
to write and edit scripts with features like syntax highlighting, code
completion, and debugging. The interactive Python console acts as an
interpreter, allowing commands to be executed one at a time with
immediate results. This console usually replaces the need for a separate
terminal window for most tasks.
Open source tools are widely used in the Python community. Modern
IDEs are built with the open-source IPython console, an enhanced console
with advanced features for syntax highlighting, more informative error
messages, inline plotting, and shell command access. Many IDEs also
include features such as library lookups, AI assistance, GitHub integration,
auto-documentation, and customization options. These components work
together to provide a unified environment.
Table 0.1 includes deployment diagrams of local and cloud-based
platforms, illustrating how software components are physically deployed on
hardware. On a desktop, all resources are provided locally; for cloud-based
platforms, a browser is required to access resources on web servers
communicating over the Internet using the HTTP protocol. The diagrams,
from the Unified Modeling Language (UML), visualize how the IDE Python
interpreters and working code files are distributed across different physical
devices, and how they are connected. Nodes for physical devices and
software execution environments appear as boxes, where different node
types are indicated as stereotypes surrounded by guillemets (« »). An
interpreter serves as an execution environment because it runs the Python
code residing on the same device (see the next section). Devices contain the
physical computing resources, with the processing, memory, and services
necessary to run the interpreter.
Any of the environments listed below is sufficient to get started, though
the level of sophistication in the code editors and available tools will vary.
A bare minimum setup can be achieved with any text editor and basic
Python installation. In some cases such as embedded hardware, this might
be the only choice of development platform with limited or no debugging
support.
Figure 0.2 shows that, after execution, the code cells are numbered (by
most recent execution order) and the printed output appears below each
cell. Each cell may contain a full script or act as an interpreter for
single lines. New code cells are generated automatically at the end.
Beyond the Python interpreter itself, one will need a text editor at
minimum. It is best to use a more comprehensive tool environment like
Visual Studio, Anaconda Spyder, JupyterLab, Jupyter Notebook, or another
IDE. Desktop IDEs can be highly customizable. They allow installation of
any custom packages, IDE configuration, and setting up of alternate
environments.
A desktop Python deployment is diagrammed in Table 0.1, requiring a
Python interpreter running under the local operating system to read and
execute local files. Everything runs on local hardware without the need for
an internet connection. The files may be individual Python files (*.py) or
Jupyter Notebook files (*.ipynb). Files are managed using the operating
system's file system, and performance depends on local hardware
resources (CPU, RAM, etc.).
For engineering applications on the desktop, it is recommended to use the
Anaconda Python distribution, which comes with standard scientific libraries
and development tools including the Scientific Python Development
Environment (Spyder), Jupyter Notebook, JupyterLab with additional
utilities beyond Jupyter Notebook, and other programs. It will also install
Python.
The Spyder editor comes with an IDE specifically designed for scientific
computing, as shown in Figure 0.5. It contains sophisticated features such
as code completion, a variable explorer, integrated documentation, and
debugging capabilities. Figure 0.5 shows a program script in the editor on
the left, and execution on the right side interpreter console with its output.
User inputs are also provided in the console, and it can be used
independently as an interactive shell. The interpreter console shows the
default IPython numbered prompts In [#]: and Out [#]:. Not shown are
several other window views available in Spyder.
The Visual Studio IDE is shown in Figure 0.6. The script on the top pane
is executed in the interpreter window below it. Multiple interpreter tabs are
shown open in this example, each of which can be executed separately at
the command line.
Figure 0.6 Visual Studio IDE
One can also embed REPLs on local or internet web pages without
requiring an account. The open-source tool PyScript provides a browser-
based interpreter in HTML files. An open source JavaScript implementation
of a Python interpreter using Pyodide is instantiated on the web page. This
method provides code syntax coloring but currently doesn't have smart
completion features within the editor like other IDEs. See Section 4.2.4 for
more detail.
Table 0.2
Introductory Python Syntax
Characters Description and Examples
= Assignment operator used to assign a value to a variable.
reliability = .98
message = "Hello engineers."
measurements = [67.8, 73.1, 79.3, 81.4, 69.9]
pyscript_code = """
<py-repl auto-generate="true">
print("Hello engineers around the world!")
</py-repl>
"""
The syntax characters are invariant across IDEs, but styles for coloring
and fonts can vary across environments, and normally several styles are
available to choose from. The coloring scheme in this book matches the
Spyder IDE light color scheme default. This scheme shows keywords in
blue (for), built-in functions in magenta (range), character strings in green
(’weight’) and numbers in dark red (99). All source code examples and
program output are displayed in Courier text, which is conventional for
code and is the default Python interpreter font.
Other basic Python constructs are introduced next before a fuller
treatment in Chapter 1:
Variables store data values, and they are assigned with the = operator.
A list data structure is an iterable sequence of values contained in square
brackets separated by commas such as [1, 2, 4, 8, 16].
Data structure indices start at zero (0) as exemplified in this Chapter
number.
Computation loops can be controlled with a for statement that iterates
across a sequence.
Comprehensions are powerful single line statements of for loops
demonstrating the Pythonic way, e.g. squares = [num * num for num
in range(100)].
Variables are used to store data values. A variable can be assigned any valid
Python data type, such as numbers, strings, and more complex data
structures like lists. Assignment is done using the = operator, where the
variable on the left-hand side is assigned the value on the right-hand side.
The examples below assign values for a floating point number, integer, and
string respectively. These variables are assigned literal values because they
take on the data type as explicitly written.
safety_factor = 3.5
number_tanks = 12
material = "tungsten"
0.3.2 LISTS
List elements may be numbers, strings, mixed data types, other lists, or
other entire data structures. Lists can be populated programmatically in
various ways, including through list comprehensions as shown later.
Lists and other data structures are also indexed starting at zero. Functions
like range that provide counting sequences start at zero. These constructs
are shown next for controlling loops.
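The listings for these loops did not survive in this copy; sketches consistent with the output shown below, with the component names and counting range taken from that output, would be:

```python
# Looping over a list of strings (names taken from the output below)
components = ["controller", "frame", "battery"]
for component in components:
    print(component)

# Looping over a counting sequence that starts at zero
for number in range(3):
    print("Car number", number)
```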
controller
frame
battery
Car number 0
Car number 1
Car number 2
0.3.5 COMPREHENSIONS
Lists and other iterable sequences can be computed via a set of looping and
filtering instructions called a comprehension. A comprehension is a handy
shortcut method for creating and populating sequences. It consists of a
single expression followed by at least one for statement clause, and it
optionally includes one or more if clauses (conditional statements
introduced in Chapter 1). The basic form of a list comprehension is shown
below. The square bracket syntax indicates a list is to be created.
Comprehensions for other iterable data structures are similar except for
having different outer brackets per Chapter 1.
List Comprehension Syntax
[0, 10, 20, 30, 40, 50, 60, 70, 80, 90]
The resulting values can now be input directly to a math function for
computation with radians. As a best practice, the angle variable names were
changed to be more explicit and to avoid any unit misinterpretations.
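The degree sequence above and its conversion to radians can be produced with comprehensions of this form (a sketch; the variable names are assumptions):

```python
import math

# Degrees in steps of 10, producing [0, 10, 20, ..., 90]
angles_degrees = [10 * n for n in range(10)]
print(angles_degrees)

# Convert each angle to radians for use with math functions
angles_radians = [math.radians(angle) for angle in angles_degrees]
```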
More complete details and examples of these introductory features are in
Chapter 1 and many applications in subsequent chapters. The next example
collectively demonstrates these introductory features as applied to a simple
engineering problem. It will be further elaborated to take advantage of
additional language features and Python scientific libraries.
H = vy² / (2g)

velocity height
25 31.887755102040813
50 127.55102040816325
75 286.98979591836735
100 510.204081632653
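The loop that produced the table above was presumably of this form (a sketch; g = 9.8 m/s² is assumed from the printed values):

```python
g = 9.8  # gravitational acceleration (m/s^2), assumed from the printed heights

print('velocity height')
for velocity in [25, 50, 75, 100]:
    height = velocity ** 2 / (2 * g)  # H = vy^2 / (2g)
    print(velocity, height)
```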
The output of velocity and height pairs could be used to populate a more
complete table or graph for summarizing the analysis. The above for loop
for height computation can be collapsed into a single line list
comprehension to create a list of lists. The next example uses finer
gradations of velocity and an automated way to specify the velocities in a
sequence without writing each one manually.
The outer list velocity_heights is created below with an element for
every 10 m/s division of velocity. Each element is a list containing two
values for velocity and height. Since list indices start at zero, the parameters
can be identified with the values 0 for velocity and 1 for height in the print
statement.
velocity height
0 0.0
25 31.887755102040813
50 127.55102040816325
75 286.98979591836735
100 510.204081632653
In the comprehension, square bracket syntax was used to define the main
outer list and, within that, the contained lists of two values for each
outer element.
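A comprehension of this form would build the list of lists described above (a sketch; a step of 25 m/s matches the printed table, though the text mentions 10 m/s divisions, so the full listing may differ):

```python
g = 9.8  # gravitational acceleration (m/s^2)

# Outer list with one [velocity, height] pair per velocity step
velocity_heights = [[v, v ** 2 / (2 * g)] for v in range(0, 101, 25)]

print('velocity height')
for pair in velocity_heights:
    print(pair[0], pair[1])  # index 0 is velocity, index 1 is height
```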
In subsequent chapters, this example will be progressively augmented for
more complex geometry, launch conditions including air resistance, and
fine tuning of the formatted output. The parameter set of the problem will
be increased, and accordingly we'll learn alternative ways to specify them in
code with more convenient methods. Dictionaries are alternative data
structures to lists that use keywords for this purpose as described in Chapter
1.
0.6 SUMMARY
LANGUAGE COMPARISONS
class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello engineers around the world!");
    }
}
#include <stdio.h>

int main()
{
    printf("Hello engineers around the world!");
}
The equivalent Python code is just a single line, without the need for
additional structural syntax.
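That single line is simply:

```python
print("Hello engineers around the world!")
```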
The other languages will not be going away, and Python must co-exist
with them. It behooves the engineer to understand how they work in
general in order to compare language alternatives. For example, if
execution speed is the primary concern, then a compiled language may
be faster than Python due to compiler optimizations.
0.7 GLOSSARY
assignment:
A statement that assigns a value to a variable.
argument:
A value passed to a function (or method) when calling it.
comprehension:
A concise way to create lists or other sequences by defining an
expression followed by for and if clauses.
expression:
A combination of variables, operators, and values that produce a single
result.
iterable:
An object capable of returning its members one at a time. Iterables
include all data types containing sequences (including lists) and other
non-sequence data types. Iterables can be used in a for loop and other
places where a sequence is needed.
keyword:
A reserved word that is used by the interpreter to parse a program.
Keywords are in Appendix Table A.1.
list:
An ordered collection of items. Lists are defined by square brackets []
and can store elements of different data types.
statement:
An instruction that executes a command or action including
assignment statements and control statements.
1 Language Overview
A driving goal for Python is to enable concise and highly readable code. It
achieves this through visual code structure and its language features. The
code layout rules enforce legible, readable code through indentation to
delineate logical blocks of code with no extraneous characters. Statements
simply end with a return key.¹
Highly readable code that is self evident and well documented makes for
easier, quicker, and more reliable changes when inevitably modifying the
code later. The term Pythonic refers to a program generally as short as
possible, but not at the expense of legibility.
print(kinetic_energy(100, 10))
The first example shows indented conditional blocks of code using the if
statement described later in Section 1.5.1. There are two logical paths based
on the value of weight to print the result. It is clear in the second example
that the function computes kinetic energy as ½mv² in the indented lines.
The function definition for kinetic_energy takes inputs for mass and
velocity and returns the kinetic energy. It is prefaced with the def keyword
and ends with a colon (:), indicating a following block of code. The
indented code in the function is run each time it is called (see Section 1.2 on
functions).
¹ Optionally one can add a semicolon (;) to designate the end of a statement in order to put
multiple statements on one line. This is not recommended except for special cases to conserve lines.
Some example language features described later that make for powerful
and elegant code include: iterable data structures where looping and global
operations are automated (e.g., sequences including lists and dictionaries),
single line comprehensions to create data structures using for loops and
logical if conditions, nesting of expressions, passing of functions to other
functions, handy functions to operate on data structures, and more.
Variables are named containers for storing data values. The names of
variables, functions, and classes are all case-sensitive. Variable names are
conventionally written in snake case using lower case letters with spaces
between words replaced with underscore (_) characters, e.g.
kinetic_energy. The name visually evokes a snake between words.
Camel case has new words starting with uppercase letters, e.g.,
electricCar, resembling the hump of a camel, and is sometimes used for
naming Python classes.
Uppercase characters are frequently used for variable names of fixed
constants that won't vary during execution, such as for physical parameters.
This practice isn't universal as is demonstrated in examples throughout this
book.
An assignment statement uses the equals sign = to assign values to
variables, as well as data structures containing collections of values (e.g.
lists).
number_robots = 14
material_type = "tungsten"
GRAVITY = 9.8
potential_energy = mass * GRAVITY * height
pressure_settings = [220, 240, 260, 280, 300]
1.1.3 STRINGS
test_data = """
60, 63, 55, 67
46, 56, 35, 74
16, 83, 35, 27
"""
In the above each line may represent a data point of multiple values.
Another use case for triple quote strings is writing multi-line files without
resorting to specially escaped line ending characters (see below).
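For example, a triple-quoted string can be written to a file directly, with its embedded newlines preserved (the filename here is an assumption for illustration):

```python
test_data = """
60, 63, 55, 67
46, 56, 35, 74
16, 83, 35, 27
"""

# Write the multi-line string to a file; no escaped \n characters are needed
with open("test_data.csv", "w") as f:
    f.write(test_data)
```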
Some special characters cannot easily be entered directly into strings and
must be “escaped” using a backslash character (\) per:
\n represents a newline character
\t represents a tab character
\' represents a single quote (inside a singly-quoted string)
\" represents a double quote (inside a doubly-quoted string)
1.1.4 NUMBERS
There are numeric types for integers, floating point numbers, and complex
numbers. Numbers are created by numeric literals or as the result of built-in
functions and operators. The functions int(), float(), and complex() can
be used to produce numbers of a specific type (also called casting).
Booleans are considered a subtype of integers. Complex numbers have a
real and imaginary part, both of which are floating point numbers.
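A brief sketch of the numeric types and the Boolean subtype relationship:

```python
count = 12         # integer literal
ratio = 3.5        # floating point literal
z = complex(2, 3)  # complex number with real and imaginary parts

print(z.real, z.imag)         # both parts are floats: 2.0 3.0
print(isinstance(True, int))  # True: Booleans are a subtype of integers
print(True + True)            # 2: Booleans behave as the integers 1 and 0
```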
1.1.5 LITERALS
>>> str(3.14159)
'3.14159'
>>> int(3.14159)
3
>>> float('3.14159')
3.14159
1.2 FUNCTIONS
A Python function is a reusable named piece of code that can return value(s)
or perform other operations with given input. Functions are fundamental
building blocks in programming and are a best practice to encapsulate
frequently used code in a single place, making the code more modular and
manageable. When an algorithm, or a set of statements, needs to be
executed for multiple inputs or in various sections of a program, it is
inefficient and error-prone to repeat the code each time. This not only
makes it difficult and time-consuming to read but also complicates
maintenance, as any updates or bug fixes would need to be applied in
multiple locations manually.
A function, also called a subroutine, can be written once and called many
times with different parameterized inputs. This modular approach saves
program space, improves readability, and allows one to better
understand the overall structure by breaking it down into logical
components. Functions make testing and debugging easier, as errors can be
isolated to specific code blocks rather than scattered across the program.
These practices collectively reduce errors and enhance code quality, which
is especially important in larger programs.
The syntax for a function is shown next. The keyword def introduces a
function definition. It is followed by the function name and a parenthesized
list of comma-separated input parameters. The colon at the end indicates
that the following indented lines form the function code block. The
indentation helps maintain a clear structure.
Function Syntax
def sphere_volume(radius):
    """ Calculates the volume of a sphere given its radius. """
    PI = 3.14159
    volume = 4 / 3 * PI * radius ** 3
    return volume
Calling the function can be done with the code sphere_volume(radius) and
the calculated volume will be returned. Below, the call is embedded within a
print statement, which is covered in Section 1.3. It demonstrates printing an
f-string, as prefixed with the f, which is covered in Section 1.9.1.
radius = 5
volume = sphere_volume(radius)
print(f'Volume of a sphere with radius {radius} is {volume:.1f}')
The projectile function is used next to compute the flight time, height,
and distance for given inputs. Here the call is embedded within a print
statement that prints the returned values.
print(projectile(100, 45))
(14.4286123690312, 255.02644724730615, 1020.4081184645057)
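The projectile function is defined earlier in the book; a minimal reconstruction under standard constant-gravity kinematics is sketched below (g = 9.8 is an assumption, and the book's printed values differ slightly, so its constants may vary):

```python
import math

def projectile(v0, angle_degrees, g=9.8):
    """Return flight time, max height, and distance for a launch
    at speed v0 (m/s) and the given angle (degrees)."""
    angle = math.radians(angle_degrees)
    vy = v0 * math.sin(angle)     # vertical velocity component
    flight_time = 2 * vy / g
    max_height = vy ** 2 / (2 * g)
    distance = v0 * math.cos(angle) * flight_time
    return flight_time, max_height, distance
```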
v0 = 120
angle = 60
flight_time, max_height, distance = projectile(v0, angle)
print(f'A projectile with initial velocity {v0} m/s and angle ⤦
{angle} degrees will fly for {flight_time:.1f} seconds to
a max ⤦
height of {max_height:.1f} and distance {distance:.1f}
meters.')
The function can be called with just mandatory arguments or also include a
subset of the optional parameters to override their defaults.
axis_settings = {
    'xlabel': 'Time',
    'ylabel': 'Height',
    'title': 'Projectile Motion at given launch angles',
}
axis.set(**axis_settings)
The above can be useful to set inputs once in a dictionary for sending to a
function in many places. It then only needs to be modified in one place.
This function introduction established how code can be structured,
modularized, and reused. Subsequent introductory topics, such as loops and
conditionals, should be viewed with the mindset of creating reusable
components. Knowing functions will also help in manipulating data
structures. See Section 1.14 for more advanced function topics.
The print function has already been demonstrated with simple examples.
The larger syntax details are introduced here and demonstrated later. The
print function is used as a statement with the basic syntax:
print(arguments)
weight = 450
print("The weight is", weight, "kilograms")
The full syntax for print is shown below and detailed in Table 1.1.
Table 1.1
Print Function Parameters
Parameter Description
arguments Arguments to print separated by commas.
sep Optional. Character(s) used to separate the parameters, if there
is more than one. Default is a space.
end Optional. What to print at the output end. Default is a line feed (\n).
file Optional. An object with a write method. Default is sys.stdout
(normally the screen).
flush Optional. A Boolean specifying if the output is flushed (True)
or buffered (False). Default is False.
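A brief sketch of the sep and end parameters:

```python
print("temp", "pressure", "humidity", sep=", ")  # custom separator
print("same", end=" ")  # suppress the default line feed
print("line")           # continues on the same output line
```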
Print Syntax
PI = 3.14159
radius = 5
volume = 4/3 * PI * radius**3
print('The volume of a sphere with radius', radius, 'is', volume)
Table 1.2
Arithmetic Operators
Operator Name Example
+ Addition x + y
- Subtraction x - y
* Multiplication x * y
/ Division x / y
% Modulus x % y
** Exponentiation x ** y
// Floor division x // y
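A quick sketch of the less familiar operators:

```python
print(7 % 3)   # modulus: remainder of division, 1
print(2 ** 5)  # exponentiation, 32
print(7 // 2)  # floor division: quotient rounded down, 3
print(7 / 2)   # true division always returns a float, 3.5
```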
Comparison operators in Table 1.3 are used to compare two values and to
evaluate the result as True or False. These are typically used in Boolean
expressions to check conditions for alternate logic paths. They can control
conditional execution with if statements described in Section 1.5.1 and
while loops in Section 1.8.2 and for loops in Section 1.8.1.
>>> 5 ** 2 > 20
True
>>> 5 * 2 == 20
False
Table 1.3
Comparison Operators
Operator Name Example
== Equal x == y
!= Not equal x != y
> Greater than x > y
< Less than x < y
>= Greater than or equal to x >= y
<= Less than or equal to x <= y
Table 1.4
Logical Operators
Operator Description Example
and True if both statements are true x < 5 and y < 10
or True if at least one statement is true x < 5 or y < 10
not Inverts the result; True becomes False not(x < 5)
Table 1.5
Assignment Operators
Operator Description Example Equivalence
= Simple assignment x = 5 x = 5
+= Add and assign x += 5 x = x + 5
-= Subtract and assign x -= 5 x = x - 5
*= Multiply and assign x *= 5 x = x * 5
/= Divide and assign x /= 5 x = x / 5
Identity operators are used to compare objects, not to determine if they are
equal values but if they are actually the same object residing in the same
memory location. They are listed in Table 1.6. They may be used to
compare different representations of the same data, help to debug code by
checking if two objects are the same object, or optimize execution speed by
avoiding unnecessary object creation.
Table 1.6
Identity Operators
Operator Description Example
is True if both variables are the same object x is y
is not True if both variables are not the same object x is not y
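A sketch contrasting identity (is) with equality (==):

```python
a = [1, 2, 3]
b = a           # b refers to the same list object as a
c = [1, 2, 3]   # c is a separate object with equal values

print(a is b)   # True: same object in memory
print(a is c)   # False: different objects
print(a == c)   # True: equal values
```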
Table 1.7
Membership Operators
Operator Description Example
in True if a value is present in a sequence x in list1
not in True if a value is not present in a sequence x not in list1
1.5.1 IF
If Syntax
if condition:
    # block of code to be executed if the condition is true
An example to check a weight constraint is:
Use the else statement to specify a block of code to be executed when the
if condition is false.
If Else Syntax
if condition:
    # block of code to be executed if the condition is true
else:
    # block of code to be executed if the condition is false
The else can be used with a single if condition as above, or after other
conditions using the elif statement described next. The else keyword will
catch anything which isn't caught by preceding condition(s).
The elif keyword checks another condition when the previous condition
is false. The following example uses it to check for three ranges of a
variable.
if complexity > 10:
    risk = "High"
elif complexity > 5 and complexity <= 10:
    risk = "Medium"
else:
    risk = "Low"
Multiple elif statement blocks can be used under the top if. The last
else statement isn't required with the other conditional checks and its use
depends on the situational logic. Each condition is checked in order. If one
is true, the corresponding block executes and no more conditions are
checked. If more than one condition is true, only the first true one executes.
SHORTHAND IF
If there is only one statement to execute, it can be put on the same line as
the if statement to save space. A single line if statement is:
If there is one statement to execute for if and another one for else, it can
all be put on the same line as below.
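Sketches of the two single-line forms, using the weight variable from earlier examples (the specific values and thresholds are assumptions):

```python
weight = 450

# Single-line if: one statement on the same line as the condition
if weight > 100: print("Weight limit exceeded")

# Single-line if-else (a conditional expression)
risk = "High" if weight > 500 else "Low"
print(risk)
```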
Multiple conditions can be chained on the same line. This example
replicates the multiple-line logic of the three-condition example in 1.5.2.
One line contains the if and else expressions for all three conditions. They
are checked in the order shown.
>>> complexity = 3
>>> risk = "High" if complexity > 10 else "Medium" if complexity > 5 else "Low"
>>> risk
'Low'
Complexity is Low
1.5.5 AND
1.5.6 OR
Iterable data structures include lists, dictionaries, tuples, and sets. Each has
its own methods or functions for performing operations on the data. These
data structures are compared in Table 1.8 and detailed in following sections.
Table 1.8
Comparison of Lists, Dictionaries, Tuples, and Sets
Type Notation Element Access Mutable
list [23, 49, 18] numeric index: list_name[1] Yes
dictionary {’temp’: 23, ’pressure’: 49, ’humidity’: 22} key: dict_name[’pressure’] Yes
tuple (23, 49, 18) numeric index: tuple_name[1] No
set {’gimbal’, ’gps’, ’frame’}¹ none; loop through or check for item membership Yes
1.6.1 LISTS
A list is a sequence of values which can be of any data type. The values in a
list are called elements or items. Lists are very useful as ordered collections
of elements which may include numbers, strings, other lists, dictionaries,
tuples, and other data types.
Lists are mutable because the elements can be changed. They are ideal
for storing and accessing data for which the number and value of elements
are not known until run-time, such as a simulation or real-time application.
There are many available built-in functions that return list information
and methods that operate on lists including those in Table 1.9. The methods
are accessed via the dot notation, as shown in the examples for append and
remove methods. The functions take list names as inputs such as len.
Following are examples of creating and manipulating lists using some
standard operations.
Table 1.9
List Methods and Functions

Method / Function     Description
append(item)          Adds an item to the end of the list.
extend(list)          Adds all the elements of a list to the end of the current list.
insert(index, item)   Inserts an item at the specified index.
remove(item)          Removes the first occurrence of an item.
pop()                 Removes and returns the last item in the list.
pop(index)            Removes and returns the item at the specified index.
sort()                Sorts the items of the list in ascending order.
reverse()             Reverses the order of the items in the list.
index(item)           Returns the index of the first occurrence of an item.
count(item)           Returns the number of times an item appears in the list.
len(list)             Returns the length of a list.
A list can be created using square brackets to enclose elements that are
separated by commas. The following creates two lists.
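The two lists themselves were not preserved; a sketch consistent with the temperature values used later in this section (the pressure list is illustrative):

```python
temperature_data = [60, 63, 55, 67]
pressure_data = [101.3, 100.8, 99.5]  # illustrative values
```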
There are other ways to create a new list. An empty list contains no
elements and can be created with empty brackets: []. This would be
necessary to initialize a list prior to a loop where data values are appended
to it.
ACCESSING ELEMENTS WITHIN A LIST
Elements within a list can be accessed using numeric indices. The index of
an element in a list starts from 0, so to access the first element in the list use
the following:
first_temperature = temperature_data[0]
print(first_temperature)
60
Negative indices can also be used which start from the end of the list, with
-1 being the last element, -2 the second to last, etc. The same notation is
used for slicing of list elements by ranges and steps overviewed in Section
1.13.
temperatures[0] = 295
Elements can be added to the end of a list using the append method.
temperature_data.append(64)
print(temperature_data)
[60, 63, 55, 67, 64]
Lists can be generated from other lists. Consider the problem of
calculating the internal energy of a gas at different temperatures using the
equation U = n ∗ R ∗ T , where n is the number of moles of gas, R is the
ideal gas constant, and T is the temperature. Lists of temperatures and their
corresponding internal energies can be created for analysis and plotting as
below.
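The construction code was not preserved; a sketch, where the mole count and temperature values are assumptions consistent with output shown later in this section:

```python
n = 1.0    # moles of gas (assumed)
R = 8.314  # ideal gas constant, J/(mol*K)
temperatures = [295, 310, 330, 340]  # K (assumed values)
internal_energies = []
for T in temperatures:
    internal_energies.append(n * R * T)
```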
print(internal_energies)
Elements can be removed from a list with the remove method:

temperature_data.remove(67)

CONCATENATING LISTS
Alternatively, one can use the plus sign syntax (+) to concatenate lists of one
or more elements each:

SORTING
The sort method arranges the elements of the list from low to high:
>>> temperature_data.sort()
>>> temperature_data
[55, 60, 63, 64, 67]
REPEATING A LIST
The * operator repeats a list a given number of times. It can be useful for
initializing list values.
>>> num_values = 4
>>> [0] * num_values
[0, 0, 0, 0]
>>> [1, 2, 3] * 3
[1, 2, 3, 1, 2, 3, 1, 2, 3]
Lists can be easily iterated over. The most common way to traverse the
elements of a list is with a standard for loop:
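The loop itself was not preserved; a sketch consistent with the printed values:

```python
temperatures = [295, 310, 330, 340]
for temperature in temperatures:
    print(temperature)
```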
295
310
330
340
Note a more elegant and concise way is to use a list comprehension and
redefine the list as follows. See Section 1.6.5 for more details on
comprehensions.
1.6.2 DICTIONARIES
Python dictionaries are data structures that store key-value pairs. They are
similar to lists, but instead of having a numerical index, dictionaries have
keys that can be any immutable type such as strings or numbers or tuples.
The values can be of any data type including lists or additional dictionaries
at any level of nesting. This makes dictionaries a powerful data structure for
storing and accessing data in a logical, organized manner. They are essential
for many applications.
A dictionary provides a mapping where each key maps to a value. Each
key is unique and is used to access its value, analogous to an index. The
association of a key and a value is called a key-value pair or an item. When
viewing a dictionary, the items are displayed in a certain order, but the order
is irrelevant for access: data is extracted by key either way, which is why
dictionaries are described as unordered, in contrast to ordered lists.
These features make dictionaries more flexible than lists. They don't
displace the usage of lists because lists can be the associated values in a
dictionary.
Table 1.10 summarizes the available dictionary methods and functions to
perform operations on them.
Table 1.10
Dictionary Methods and Functions

Method/Function            Description
dict()                     Creates a new dictionary
get(key, default=None)     Returns the value of the specified key, default if key doesn't exist
keys()                     Returns a view object containing the keys of the dictionary
values()                   Returns a view object containing the values of the dictionary
items()                    Returns a view object containing the key-value pairs of the dictionary
pop(key, default=None)     Removes and returns the value of the specified key, default if key doesn't exist
popitem()                  Removes and returns an arbitrary key-value pair from the dictionary
clear()                    Removes all key-value pairs from the dictionary
copy()                     Returns a shallow copy of the dictionary
update(other_dict)         Updates the dictionary with key-value pairs from other dictionary
fromkeys(seq, value=None)  Creates a new dictionary from a sequence with specified value
len(dict)                  Returns the number of key-value pairs in the dictionary
key in dict                Returns True if the key exists in the dictionary, False otherwise
dict[key] = value          Assigns a value to the specified key
del dict[key]              Removes the key-value pair from the dictionary
CREATING A DICTIONARY
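The creation code was not preserved; a sketch of the two dictionaries used in the following examples, with assumed values:

```python
# Values are assumptions; the original dictionaries were not preserved
sensor_datapoint = {'temperature': 23.5, 'pressure': 101.3, 'humidity': 45}
satellite_part_masses = {'structure': 450, 'payload': 360}
```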
temperature = sensor_datapoint['temperature']
satellite_part_masses['battery'] = 834
satellite_part_masses['thermal_camera'] = 56
To remove a key-value pair from a dictionary, you can use the del keyword
and the key name:
del satellite_part_masses['thermal_camera']
Data can be extracted from a dictionary by iterating over its keys, values,
or key-value pairs. For example, the following sums the part masses by
iterating over the values:

satellite_mass = 0
for mass in satellite_part_masses.values():
    satellite_mass += mass
satellite_mass
1644
NESTED DICTIONARIES
projects = {
'Laser Calibration': {
'start_date': '2022-04-01',
'end_date': '2022-06-30',
'tasks': {
'setup': {
'duration': 14,
'assigned_to': 'Jose Omega'
},
'data analysis': {
'duration': 11,
'assigned_to': 'Ada Knuth'
}
}
},
'Board Development': {
'start_date': '2022-07-01',
'end_date': '2022-09-30',
'tasks': {
'design': {
'duration': 23,
'assigned_to': 'Data Pascal'
},
'fabrication': {
'duration': 18,
'assigned_to': 'Rosetta Stone'
}
}
}
}
The following example will use the project dictionary to compute the
total number of person-days in the collective tasks. It sums the durations
from the individual task dictionaries where each value is referenced with
the duration key. This dictionary structure will be revisited in Chapter 2
for network analysis and graphing applications.
total_person_days = 0
for project in projects.values():
    for task in project['tasks'].values():
        total_person_days += task['duration']
total_person_days
66
1.6.3 TUPLES
Tuples are similar to lists, being sequences of values that can be of different
data types. The primary difference is that tuples are immutable because
their values cannot be changed once they are created, as opposed to lists
where items can be changed, deleted, or added. This makes tuples useful for
representing data that should not be modified. A summary of tuple methods
and functions is in Table 1.11.
Table 1.11
Tuple Methods and Functions

Method / Function  Description
count(x)           Returns the number of times x appears in the tuple.
index(x)           Returns the index of the first occurrence of x in the tuple. Raises a ValueError if x is not found.
len()              Returns the length of the tuple.
sorted()           Returns a sorted list of the elements in the tuple.
max()              Returns the largest element in the tuple. Raises a TypeError if the elements are not comparable.
min()              Returns the smallest element in the tuple. Raises a TypeError if the elements are not comparable.
tuple(iterable)    Converts an iterable (e.g., a list or a string) into a tuple.
CREATING A TUPLE
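The creation code was not preserved; a sketch consistent with the output line that follows (the compound names are taken from the dictionary shown later in this section):

```python
compounds = ('Silicon Dioxide', 'Boron Nitride', 'Gallium Arsenide')
print(compounds[0])
```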
Silicon Dioxide
Due to the restrictions of immutability, tuples may seem less capable than
lists but they are computationally more efficient. They use less memory and
can be operated on faster. They are often used as the return type from
functions that return more than one value. There is also a benefit to limiting
possible errors by restricting data structure updates, which becomes more
important with large and complex software.
In general it is a good idea to use a tuple instead of a list, unless a list is
the only way to process the data. When necessary, a list can always be created
from a tuple, or vice versa.
list(compounds)
1.6.4 SETS
Table 1.12
Set Methods and Functions

Method / Function            Description
add(x)                       Adds the element x to the set. If x is already present, no change occurs.
remove(x)                    Removes the element x from the set. Raises a KeyError if x is not found.
discard(x)                   Removes the element x from the set if it is present. No error is raised if x is not found.
union(other)                 Returns a new set containing all elements from the set and other. Equivalent to | operator.
intersection(other)          Returns a new set containing elements common to the set and other. Equivalent to & operator.
difference(other)            Returns a new set containing elements in the set but not in other. Equivalent to - operator.
symmetric_difference(other)  Returns a new set containing elements in either set or other but not both. Equivalent to ^ operator.
len()                        Returns the number of elements in the set.
clear()                      Removes all elements from the set.
set(iterable)                Converts an iterable (e.g., a list) into a set.
Sets are efficient for membership tests, allowing one to check whether an
element is in the set quickly. They are useful to perform operations on data
collections where the sequence or frequency of elements is irrelevant, such
as when identifying unique values from a dataset or comparing different
groups.
A set is defined using curly braces {} or the set() function. In the next
example, uav_components is a set with four elements, and unique_numbers
will contain only the unique values {1, 2, 3, 4}, as duplicates are
automatically removed.
# Defining sets
uav_components = {'GPS', 'Camera', 'Battery', 'Propeller'}
unique_numbers = set([1, 2, 2, 3, 4])
Python provides several methods for working with sets, such as adding
elements, removing elements, or getting the number of elements:
# Union
all_equipment = uav_components.union({'Parachute', 'Antenna'})
# Intersection
common_parts = uav_components.intersection({'Camera', 'Propeller'})
# Difference
missing_parts = uav_components.difference({'Battery', 'Antenna'})
# Symmetric Difference
diff = uav_components.symmetric_difference({'Camera', 'GPS'})
print('union:', all_equipment)
print('intersection:', common_parts)
print('difference:', missing_parts)
print('symmetric difference:', diff)
if 'GPS' in uav_components:
print('GPS is installed on the UAV.')
import random
# initialize list of windspeeds
random_windspeeds = []
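The loop that populates the list was not preserved; a self-contained sketch, where the sample count and wind-speed range are assumptions:

```python
import random

# initialize list of windspeeds, then append random samples
random_windspeeds = []
for _ in range(10):
    random_windspeeds.append(random.uniform(0, 30))
```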
The above can be expressed in a single line to initialize and populate the
list by enclosing the comprehension in brackets:
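A sketch of the equivalent one-line comprehension (same assumed count and range as the loop form):

```python
import random

random_windspeeds = [random.uniform(0, 30) for _ in range(10)]
```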
Entering the same comprehension in the interpreter prints all the values
from the single statement: the interpreter evaluates the expression and
echoes the result per its read-eval-print (REPL) loop.
print(compound_weights)
compound_data = {
"Silicon Dioxide": {
"symbol": "SiO2",
"molecular_weight": 60.088,
        "melting_point": 1610, # °C
"thermal_conductivity": 1.4, # W/(m*K) (average value)
},
"Boron Nitride": {
"symbol": "BN",
"molecular_weight": 24.813,
        "melting_point": 3000, # °C (approximate)
"thermal_conductivity": 27.6, # W/(m*K)
},
"Gallium Arsenide": {
"symbol": "GaAs",
"molecular_weight": 144.645,
        "melting_point": 1238, # °C
"thermal_conductivity": 50, # W/(m*K) (approximate)
},
}
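The dictionary comprehension itself was not preserved; a sketch consistent with the printed result, where the filtering threshold is an assumption and the dictionary is trimmed to the one field the comprehension needs:

```python
# Trimmed copy of compound_data with only the melting_point field
compound_data = {
    "Silicon Dioxide": {"melting_point": 1610},
    "Boron Nitride": {"melting_point": 3000},
    "Gallium Arsenide": {"melting_point": 1238},
}
# Keep only compounds melting below 2000 °C (threshold assumed)
melting_points = {name: data["melting_point"]
                  for name, data in compound_data.items()
                  if data["melting_point"] < 2000}
```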
print(melting_points)
{'Silicon Dioxide': 1610, 'Gallium Arsenide': 1238}
The subject of a comprehension is very general and can be used for more
than assigning values. If there is no assignment with an equals sign, the
leading expression is simply evaluated. The following illustrates how
expressions and functions can be nested. Legibility and maintenance should
always be considered before nesting too many levels, however. Here, for
example, the computation may be better placed outside of the print
statement in a larger program.
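The nested-expression code was not preserved; a sketch consistent with the output that follows, which appears to compute the height v²/(2g) for a range of velocities (g and the range are assumptions):

```python
g = 9.8  # m/s^2 (assumed)
for velocity in range(0, 50, 10):
    # the computation is nested directly inside the print call
    print(velocity, velocity**2 / (2 * g))
```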
0 0.0
10 5.1020408163265305
20 20.408163265306122
30 45.91836734693877
...
1.7 SLICING
Slicing is a way to extract a subset of elements from data types such as lists,
tuples, and strings. It allows one to create new sequences by selecting
specific elements from the original sequence based on their indices. It uses
the following syntax below with the parameters in Table 1.13.
Table 1.13
Slicing Parameters

Parameter  Description
start      The index at which the slicing begins (inclusive). If omitted, it defaults to 0.
stop       Optional. The index at which the slicing ends (not inclusive). If omitted, it defaults to the length of the sequence.
step       Optional. The step size between elements. If omitted, it defaults to 1. It can be negative.
Slicing Syntax
sequence[start:stop:step]
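A brief sketch of the syntax, using the sorted temperature list from earlier in the chapter:

```python
temperature_data = [55, 60, 63, 64, 67]
print(temperature_data[1:4])    # elements at indices 1 through 3
print(temperature_data[::2])    # every second element
print(temperature_data[::-1])   # reversed copy via negative step
```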
Table 1.14
Iteration Methods for Lists, Dictionaries, and Tuples

Data Structure / Iteration Method   Example

Lists
for loop                            for item in list_name:
                                    for temperature in temperature_list:
for loop with enumerate()           for datapoint, temperature in enumerate(temperature_list):

Dictionaries
for loop over keys                  for key in dict_name:
                                    for chemical in chemical_data.keys():
for loop over values                for value in dict_name.values():
                                    for molecular_weight in chemical_data.values():
for loop over key-value pairs       for key, value in dict_name.items():
                                    for chemical, molecular_weight in chemical_data.items():
dictionary comprehension            {key:value for key, value in dict_name.items()}

Tuples
for loop                            for item in tuple_name:
while condition:
# block of code to be executed if the condition is true
When deciding between while and for loops, the choice depends on the
nature of the iteration and the control needed over the loop's execution. A
for loop is suited for situations where the number of iterations is known in
advance, such as iterating over elements in a list, range, or other iterable
where the number of repetitions is fixed. In contrast, a while loop is
appropriate when the number of iterations is not predetermined and is
instead dependent on a condition being met. This is useful when execution
must continue until a certain logical condition changes, such as in a random
simulation or reading data until the end of a file.
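A minimal sketch of the while case, sampling random wind speeds until a threshold is crossed (the threshold and range are illustrative):

```python
import random

# Keep sampling until a gust above the threshold appears;
# the number of iterations is not known in advance
gusts = 0
windspeed = 0.0
while windspeed < 25.0:
    windspeed = random.uniform(0, 30)
    gusts += 1
print(gusts, windspeed)
```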
object_spacing = 3
object_x_coords = {}  # dictionary holds x coordinates for action nodes and edges
for actor_object_number, actor_object in enumerate(actors+objects, start=1):
    x = int(actor_object_number-1)*object_spacing
    if actor_object in actors:
        g.node(actor_object, pos=f"{x},.2!")  # graphviz node
    object_x_coords[actor_object] = x
Engineers need to convey data and results in a clear and efficient format,
both for human interpretation and for further processing. The output data
may be file input for additional analysis or serve as AI training data for
machine learning. Significant digits should be considered for display while
avoiding accuracy loss in the underlying data. This section provides a
distillation of numerous formatting options with some recommendations.
Formatting of numeric and string output can be specified three ways as
summarized in Table 1.15. F-strings are highly recommended and used
throughout this book. The string format method is more limited and harder
to use. The % string formatting is not recommended but retained in legacy
code and some generated solutions, so one should be aware of it.
Table 1.15
Formatting Methods Summary and Equivalent Examples
All examples produce the same output: Volume of a sphere with radius 5 is 523.6

Method     Summary
f-strings  Recommended for all new development. The most intuitive,
           flexible and powerful formatting technique. Expressions are
           embedded in a string with curly brackets {…} prefixed with f
           or F with an optional format specifier after the expression.
           The variables and expressions inside the brackets form strings
           to be evaluated at runtime.
1.9.1 F-STRINGS
{expression[=]:[flags][width][.precision][type]}
The flags are optional parameters for text alignment and number
notations. The parameters and examples are shown in Tables 1.16 through
1.19. The width parameter is used to denote the total space to write an
output element, the precision after the decimal point refers to how many
numbers to display after the decimal point (valid for floating point
numbers), and type identifies the expression format type (e.g., floating point
options, integers, etc.).
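As a sketch of the specifier fields, width and precision can be combined with the fixed-point float type:

```python
pi = 3.141592653589793
# width 10, precision 3, fixed-point float type
print(f"{pi = :10.3f}")
```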
Table 1.16
Float Number Character Formatting Examples for 6.2f

Value      Display (field width 6)
723.566    723.57
472.7      472.70
0.029        0.03
13          13.00
9423.17    9423.17
Table 1.17
Float and Decimal Number Format Options
Given: c = 299792458 # m/s, pi = 3.141592653589793

Option  Description                                                  Examples
:f      Fixed point number with decimal (default precision 6)       print(f'{pi = :f}')
                                                                     pi = 3.141593

:.nf    Fixed point with n digits of precision after decimal point  print(f'{c = :.0f} m/s\n{pi = :.2f}')
                                                                     c = 299792458 m/s
                                                                     pi = 3.14

:e      Exponential notation with lowercase e                        c = 2.997920e+08 m/s
                                                                     pi = 3.142e+00

:E      Exponential notation with uppercase E                        c = 2.997920E+08 m/s
                                                                     pi = 3.142E+00

:g      General format                                               print(f'{c = :g} m/s\n{pi = :.3g}')
                                                                     c = 2.99792e+08 m/s
                                                                     pi = 3.14

:G      General format with uppercase E                              c = 2.99792E+08 m/s
                                                                     pi = 3.14159
Table 1.18
Integer Number Format Options

Option  Description     Example
:d      Decimal format  print(f'{c = :d} m/s')
                        c = 299792458 m/s
:b      Binary format   print(f'255 integer = {255:b} binary')
:o      Octal format    print(f'255 octal = {255:o}')
Table 1.19
Number Alignment Options

Description                           Option
Left align                            :<
Right align                           :>
Center align                          :^
Space before positive numbers         : (space), e.g. {x: }
String is the default formatting type and does not need to be specified. If
one needs to print a simple string variable, then no modifiers or data type
specification are necessary. The variable or fuller expression will be printed
starting at the left bracket position within the f-string.
engine_type = "turbocharged"
print(f"Engine type: {engine_type}")
Figure 1.1 Example Fixed Point Number Spacing for Width = 6, Precision = 2
The .2f format specifier alone will print a floating point number rounded
off to two decimal points. The field width can also be specified to align the
output with the full :6.2f to print a floating point number six spaces wide.
print(f"{kinetic_energy = :6.2f}")
kinetic_energy = 371.31
A string is a Python class object with a format method that is invoked with
the dot prefix notation. The basic syntax for the format method is below.
string.format(expression(s))
The string contains replacement fields enclosed in curly brackets {…} for all
the expression(s) contained in the call separated by commas (a tuple). The
expressions are either positional or keyword arguments.
Table 1.15 shows an example using positional arguments for the radius
and volume variables. The same result can be obtained with keyword
arguments per the following.
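A sketch of the keyword-argument form for the sphere example of Table 1.15 (the keyword names are illustrative):

```python
import math

radius = 5
volume = 4/3 * math.pi * radius**3
print("Volume of a sphere with radius {r} is {v:.1f}".format(r=radius, v=volume))
```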
This more primitive format uses the string modulo operator % where on the
left side is the string to format and on the right side is a tuple with the
content to insert. The content is interpolated into the format string similar to
the format method. The fields and expressions must line up by position on
the left and right sides of the modulo operator.
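A sketch of the % form, again reproducing the sphere example of Table 1.15:

```python
import math

radius = 5
volume = 4/3 * math.pi * radius**3
print("Volume of a sphere with radius %d is %.1f" % (radius, volume))
```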
When using several parameters and longer strings, the code will quickly
become less readable and messy.
Many formatting options are available for numeric display styles and
alignment. The options in Table 1.17 for floating point numbers and Table
1.18 for integers apply to f-strings and the string .format() method. A
subset will work with % string formatting.
Basic number alignment options are in Table 1.19. Examples for the
alignments in a field of six characters are in Table 1.20. An example left
alignment without specifying a field width is:
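The loop producing the output below was not preserved; a sketch consistent with it (the values printed are a velocity and its square):

```python
for v in (0, 20, 40):
    # left alignment with no field width: no padding is added
    print(f"{v:<} {v**2:<}")
```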
0 0
20 400
40 1600
Table 1.20
Basic Number Alignment Examples

Option                               Examples (field width 6)
Left-align within field (default)    '12    '    '3.14  '
Center-align within field            '  35  '    ' -14  '
Right-align within field             ' 3.141'    '3.0e+08'
Table 1.18 shows format options for integer numbers. There are
additional format notations available, which can be adjusted based on
precision needs. Case sensitivity affects how special values are displayed,
and formatting can adapt between fixed-point and scientific styles
depending on the number's size. Locale-aware formatting ensures numbers
are displayed according to regional conventions. These more detailed
formatting options for float, decimal numbers, and strings are at
https://docs.python.org/3/library/string.html including unicode conversion,
hex format, and others.
The next example left justifies output for a fixed field width including a
header row.
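The listing itself was not preserved; a sketch consistent with the table that follows, which appears to compute height as v²/(2g) (g in ft/s² and the field width are assumptions):

```python
g = 32.0  # ft/s^2 (assumed, consistent with the heights shown)
print(f"{'velocity':<10} {'height':<10}")
for v in range(0, 50, 10):
    # left-justify each value in a field ten characters wide
    print(f"{v:<10.2f} {v**2 / (2*g):<10.2f}")
```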
velocity height
0.00 0.00
10.00 1.56
20.00 6.25
30.00 14.06
40.00 25.00
0.00 0.00
20.00 400.00
40.00 1600.00
The next shows using a space instead of a sign for positive numbers as
detailed in Table 1.19.
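The code was not preserved; a sketch consistent with the sine values printed below (the angle step of π/4 is inferred from the output):

```python
import math

# space flag: positive numbers get a leading space where a minus sign would go
for i in range(8):
    print(f"{math.sin(i * math.pi / 4): }")
```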
0.0
0.7071067811865475
1.0
0.7071067811865476
1.2246467991473532e-16
-0.7071067811865475
...
1.10 IMPORTING MODULES
import module_name
For example, the following import loads the math module into the current
namespace. It can then be used as a prefix to its functions. For example, the
value of the pi constant can be accessed using the dot notation.
import math
print(math.pi)
As an example the random library will be imported and given the name rnd
with the as keyword. Its function to generate a random uniform number is
called with the alias name using the dot notation.
The from statement can be used to import specific objects from a module.
Its syntax is:

from module_name import object_name
This allows one to access the objects defined in a module without having
to specify the full module name. For example, the following from statement
imports the pi constant from the math module. Once it is imported, it can be
accessed directly without having to write the math module prefix.
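A sketch of the described import:

```python
from math import pi

print(pi)  # 3.141592653589793
```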
The import and from statements are similar in that they both load
modules into the current namespace. While the import statement imports
all of the objects in a module, the from statement imports specific objects
from a module.
The input function is used to read input from the user. There are two forms,
with and without an argument:
The following displays prompts and assigns the user inputs to variables:
Python provides built-in functions for creating, writing, and reading files.
Access modes dictate the type of operations available with an opened file.
The file may be read-only, write, append, read-write, or other modes as
summarized in Table 1.21.
Table 1.21
File Access Modes

Access Mode  Description
r            Opens a file for reading in text mode. End-of-file (EOF) is encountered when the internal pointer reaches the end of the file.
w            Opens a file for writing in text mode. Existing content is truncated, and if the file doesn't exist, it's created.
a            Opens a file for appending in text mode. The file is created if it doesn't exist. New data is written to the end of the file.
rb           Opens a file for reading in binary mode. Useful for handling non-text files like images or data.
wb           Opens a file for writing in binary mode. Existing content is truncated, and if the file doesn't exist, it's created.
ab           Opens a file for appending in binary mode. The file is created if it doesn't exist. New data is written to the end of the file.
x            Opens a file for exclusive creation. Fails if the file already exists. Useful for preventing accidental overwrites.
t            Opens a text file in text mode. This is the default mode for 'r' and 'w' on some systems.
b            Opens a binary file in binary mode. This is the default mode for 'rb' and 'wb' on some systems.
+            Opens a file for both reading and writing. The file must exist. Use 'r+' for reading and writing from the beginning of the file, or 'a+' for appending and reading/writing from the end.
The contents of an entire file can be read with the open function. It opens
a named file in read mode (the default mode if not specified) and returns a
file object representing the opened file. The read() method is called to read
the entire file content as a single string which is assigned to a variable.
temperature_data = open("temperatures.csv").read()
A file can alternatively be iterated over line by line. The with statement
is used for working with files to ensure proper handling of file resources,
including automatically closing a file when there are exceptions during
processing. The open function opens the file in read mode and returns a file
object. The returned file object is assigned to the variable f with the as
keyword. The for loop iterates over the lines present in the file object,
where the current line is assigned to the variable line.
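The loop itself was not preserved; a sketch, which first writes a small file so the example is self-contained (the file contents are illustrative):

```python
# Create a small file first so the example is self-contained
with open("temperatures.csv", "w") as f:
    f.write("60\n63\n55\n67\n")

# Iterate over the file line by line
with open("temperatures.csv") as f:
    for line in f:
        print(line.strip())
```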
Files can be written to after specifying the write mode, using basic file
functions or additional libraries to handle file formatting. The writing can be
done either all at once or line by line. The next example writes a multi-line
string to a csv file in a single operation where the line endings in the string
are purposely retained.
import csv
# Data lists
times = ["00:00", "00:05", "00:10"]
temperatures = [235.3, 237.1, 237.0]
pressures = [128.6, 132.1, 131.9]
# Open the CSV file for writing in write mode (will overwrite existing file)
with open('temperature_data.csv', 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    # (writing rows assumed; the original listing was truncated)
    writer.writerow(['time', 'temperature', 'pressure'])
    for row in zip(times, temperatures, pressures):
        writer.writerow(row)
The previous examples use very short and simple data structures. More
complex data and multiple formats can be handled with the exhaustive
functions available in Pandas. See the Pandas Section 2.3 and subsequent
examples.
def get_total_weight(system_dictionary):
    """Returns the total weight of all the components in a system."""
    total_weight = 0
    # Recursively sum 'weight' entries in nested component dictionaries
    # (traversal body assumed; the original listing was truncated)
    for key, value in system_dictionary.items():
        if key == "weight":
            total_weight += value
        elif isinstance(value, dict):
            total_weight += get_total_weight(value)
    return total_weight
UAV = {
"Payload": {
"Camera": {
"weight": .7
},
"Thermal Sensor": {
"weight": 1.2
},
"Lidar": {
"weight": 1.5
}
},
"Onboard Computer": {
"weight": 0.6
},
...
"Power": {
"Battery": {
"weight": 3
},
"Harness": {
"weight": 0.2
}
}
get_total_weight(UAV)
10.7
1.14.2 FUNCTIONS AS INPUTS TO FUNCTIONS
y(t_n+1) = y(t_n) + (dy/dt)(t_n) * Δt    (1.1)

where
y is a function over time
dy/dt is the time derivative of y
n is the time index
Δt is the time step size to increment t_n to t_n+1.
The routine in Listing 1.1 will accept any arbitrary derivative function
with its own set of variables. This generality is afforded by the unpacking of
arguments and keyword arguments with the * and ** syntax. The signature
of the function never has to change for new derivatives to accommodate
different parameters.
It is demonstrated with two separate functions. It passes the differential
equation for Newton's law of cooling to the euler function to integrate and
secondly models exponential growth. Parameters specific to each derivative
function are sent as inputs.
def euler(f, y0, t0, tf, dt, *args, **kwargs):
    """Integrates dy/dt = f over time using Euler's method.

    Args:
        f (function): A function that returns the derivative of y with respect to time.
        y0 (float): The initial value of y.
        t0 (float): The start time of the integration.
        tf (float): The end time of the integration.
        dt (float): The step size for the integration.
        *args: Additional positional arguments to pass to the derivative function f.
        **kwargs: Additional keyword arguments to pass to the derivative function f.

    Returns:
        list of tuple: A list of tuples where each tuple contains a time t and the corresponding value of y(t).
    """
    # (signature and result accumulation reconstructed from the docstring)
    results = []
    time, y = t0, y0
    print("time y")
    while time <= tf:
        print(f"{time:.1f} {y:.1f}")
        results.append((time, y))
        y += f(y, *args, **kwargs) * dt
        time += dt
    return results
def newtons_cooling(temp, external_temp, coefficient):  # (function name assumed)
    """Returns the rate of temperature change per Newton's law of cooling.

    Args:
        temp (float): Current temperature of the object.
        external_temp (float): Ambient or external temperature.
        coefficient (float): Heat transfer coefficient for the rate of heat loss.

    Returns:
        float: The rate of temperature change.
    """
    return -coefficient * (temp - external_temp)
def exponential_growth(x, rate):  # (function name assumed)
    """
    Returns:
        float: The value after applying exponential growth.
    """
    return x * rate
1.14.3 DOCSTRINGS
After the function is imported or read into the interpreter, the
docstring can be accessed with the __doc__ attribute:
projectile.__doc__
Args:
    v0 (float): Initial velocity of the projectile in meters per second.
    angle (float): Launch angle of the projectile in degrees.

Returns:
    tuple: A tuple containing:
        - flight_time (float): Projectile flight time in seconds.
        - max_height (float): Maximum height reached by the projectile in meters.
        - distance (float): Horizontal distance traveled by the projectile in meters.
"""
...
Many tools and environments parse the docstrings to generate help and
automated documentation in multiple formats. For example in smart editors,
when hovering over or typing in a function (or class) name, the
documentation is displayed as a tooltip popup. Figure 1.2 demonstrates a
tooltip that displays with the cursor focused on the projectile() function
statement. See further material on docstrings and automated documentation
in Chapter 5.
1.14.4 LAMBDAS
The following lambda function takes two arguments for mass and
velocity and returns the kinetic energy. The function will be bound to the
values of the arguments when it is called, the expression will be evaluated,
and the result will be returned as follows:
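The lambda itself was not preserved; a sketch, with argument values chosen to reproduce the 750.0 result shown:

```python
kinetic_energy = lambda mass, velocity: 0.5 * mass * velocity**2
print(kinetic_energy(15, 10))  # 750.0 (argument values are assumptions)
```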
750.0
0.7647058823529411
1.15 CLASSES AND OBJECT-ORIENTED PROGRAMMING
class Car:
    def __init__(self, name, speed=0):
        self.name = name
        self.speed = speed

    def drive(self):
        # (method body assumed; it is referenced in the text below)
        print(f"The {self.name} is driving at {self.speed} mph.")

    def stop(self):
        print(f"The {self.name} has stopped.")

To create a car object, its name and speed are passed to the class as
Delorean = Car("Delorean", 90). Once it is created, its attributes can be
accessed and methods can be called. For example, call the
Delorean.drive() method to make the car drive.
import math

class Projectile:
    """
    A class representing a moving projectile that calculates its trajectory.

    Attributes:
        angle (float): The initial angle of the projectile in degrees.
        velocity (float): The initial velocity of the projectile in meters per second.
        g (float): The acceleration due to gravity in meters per second squared.

    Methods:
        calculate_trajectory(time_step: float, end_time: float) -> list[tuple[float, float]]:
            Calculates the trajectory of the projectile over a specified time period with a given time step.
    """

    def __init__(self, angle, velocity, g=9.81):
        """
        Args:
            angle (float): The initial angle of the projectile in degrees.
            velocity (float): The initial velocity of the projectile in meters per second.
            g (float, optional): The acceleration due to gravity in meters per second squared. Defaults to 9.81.
        """
        self.angle = angle
        self.velocity = velocity
        self.g = g

    def calculate_trajectory(self, time_step, end_time):
        """
        Args:
            time_step (float): The time step in seconds.
            end_time (float): The end time in seconds.

        Returns:
            A list of tuples representing the (x, y) coordinates of the projectile at each time step.
        """
        # Convert the angle to radians
        angle_rad = math.radians(self.angle)
        time = 0
        x, y = 0.0, 0.0
        # Initialize the trajectory list with the initial time and coordinates
        trajectory = [(time, x, y)]
        # (loop body assumed; the original listing was truncated)
        while time < end_time:
            time += time_step
            x = self.velocity * math.cos(angle_rad) * time
            y = self.velocity * math.sin(angle_rad) * time - 0.5 * self.g * time**2
            trajectory.append((time, x, y))
        return trajectory
1.15.1 INHERITANCE
class Car:
    def __init__(self, name, speed):
        self.name = name
        self.speed = speed
        self.distance = 0

    def stop(self):
        print(f"The {self.name} has stopped after {self.distance} miles.")
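The subclass in the original listing was not preserved. A minimal sketch of inheritance from this Car class follows; the ElectricCar subclass, its attributes, and its values are hypothetical, and the Car class is repeated so the sketch is self-contained:

```python
class Car:
    def __init__(self, name, speed):
        self.name = name
        self.speed = speed
        self.distance = 0

    def stop(self):
        print(f"The {self.name} has stopped after {self.distance} miles.")

class ElectricCar(Car):
    """Hypothetical subclass illustrating inheritance."""
    def __init__(self, name, speed, battery_kwh):
        super().__init__(name, speed)  # reuse the base class initializer
        self.battery_kwh = battery_kwh

    def stop(self):  # override and extend the base class method
        super().stop()
        print(f"Battery remaining: {self.battery_kwh} kWh.")
```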
class ElectricEngine:
def __init__(self, power_output_kw, battery_capacity_mah):
self.power_output_kw = power_output_kw
self.battery_capacity_mah = battery_capacity_mah
def start(self):
return f"Electric engine started with
{self.power_output_kw} ⤦
kW power and {self.battery_capacity_mah} mAh
battery ⤦
capacity."
class Battery:
def __init__(self, capacity_mah, voltage_v):
self.capacity_mah = capacity_mah
self.voltage_v = voltage_v
def get_status(self):
return f"Battery capacity: {self.capacity_mah} mAh,
Voltage: ⤦
{self.voltage_v}V"
class UAV:
    def __init__(self, model, max_range_km, engine, battery,
                 propeller_type):
        self.model = model
        self.max_range_km = max_range_km
        self.engine = engine    # Aggregation: UAV "has an" ElectricEngine
        self.battery = battery  # Aggregation: UAV "has a" Battery
        self.propeller_type = propeller_type

    def start_uav(self):
        engine_status = self.engine.start()
        battery_status = self.battery.get_status()
        return (f"UAV {self.model} (Max Range: {self.max_range_km} km, "
                f"Propeller: {self.propeller_type}):\n"
                f"{engine_status}\n{battery_status}")
# Create the parts and assemble the UAV (values are illustrative)
engine = ElectricEngine(power_output_kw=50, battery_capacity_mah=22000)
battery = Battery(capacity_mah=22000, voltage_v=48)
uav = UAV("Surveyor", 120, engine, battery, "carbon fiber")
print(uav.start_uav())
1.15.3 ASSOCIATION
The example in Listing 1.5 combines the previous Car class with a new
Race class. It demonstrates how any number of car instances can be added
to a race, illustrating a class association relationship. Class associations are
used to model relationships where objects of one class are linked to objects
of another. The Race class has a method to add Car objects along with their
attributes, modeling participants in an actual race environment.
As shown in the class diagram in Figure 1.6, an association between
classes is represented by a solid line connecting them, indicating the
presence of a relationship. Multiplicity values at the ends of the connection
illustrate how many instances of one class can be associated with instances
of another. In this case, the multiplicity indicates that a single Race object
can be associated with one or more instances of the Car class, represented
by the notation 1..*. This means that a race must have at least one car but
can include multiple cars.
import random
import time

# Time interval
dt = .1

class Car:
    def __init__(self, name):
        self.name = name
        self.speed = 0
        self.distance = 0  # To track how far the car has gone

    def move(self):
        # Move the car based on its speed
        self.distance += self.speed * dt

class CarRace:
    def __init__(self, race_distance):
        self.cars = []
        self.race_distance = race_distance

    def add_car(self, car):
        # Associate a Car object with this race
        self.cars.append(car)

    def run(self):
        print(f"The race has started. The distance is "
              f"{self.race_distance} miles.\n")
        # Race loop (reconstructed): advance cars until one finishes
        while True:
            for car in self.cars:
                car.speed = random.randint(50, 120)  # vary speed each step
                car.move()
                if car.distance >= self.race_distance:
                    print(f"The {car.name} wins the race!")
                    return
            time.sleep(dt)

# Create cars
delorean = Car("Delorean")
fiat = Car("Fiat")

# Create a race
race = CarRace(race_distance=500)
race.add_car(delorean)
race.add_car(fiat)
race.run()
1.16 GENERATORS
T-Minus
10
9
8
...
2
1
and we have liftoff!
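The countdown output above is produced by a generator listing elided in this excerpt; a minimal generator that would produce it is sketched below (the function name is an assumption):

```python
def countdown(n):
    # Yield n, n-1, ..., 1, pausing between values until next() is called
    while n > 0:
        yield n
        n -= 1

print("T-Minus")
for count in countdown(10):
    print(count)
print("and we have liftoff!")
```

Each call to next() (performed implicitly by the for loop) resumes the function at the yield statement, so the values are produced one at a time rather than all at once.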
for item in s:
    if condition:
        yield expression
The following two generator expressions perform the same task as nested for
loops that read each line with an underlying conditional check for valid
values. The first seconds generator yields seconds by splitting each line
using a comma delimiter to access the last column data. The
production_time generator creates integer values from the seconds
generator only for successful cards in a pipelining approach.
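The two generator expressions themselves are not shown here. A sketch under the assumption that each line is comma-delimited with the production seconds in the last column, left empty for unsuccessful cards (the sample data is hypothetical):

```python
# Hypothetical production log lines: card id, status, seconds
lines = [
    "card_1,pass,32",
    "card_2,fail,",
    "card_3,pass,41",
]

# First stage: yield the last comma-delimited column of each line
seconds = (line.split(',')[-1] for line in lines)

# Second stage: convert valid entries to integers, skipping empty values;
# the first generator is consumed lazily in a pipelining approach
production_time = (int(s) for s in seconds if s.strip().isdigit())

result = list(production_time)
print(result)
```

No intermediate list is built between the stages; each line flows through both generators one at a time.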
import numpy as np

def generate_nuclear_decay_time(mean):
    while True:
        yield np.random.exponential(mean)

# Mean decay time in years (illustrative value)
generator = generate_nuclear_decay_time(mean=0.02)
time = 0
while time < 100:  # 100 years
    # Get the next value from the generator
    next_decay_time = next(generator)
    print(next_decay_time)
    time += next_decay_time
0.03631109190180292
0.023005702411883964
0.005083286542328262
...
More can be accomplished with generators. They can be used to read and
process very large datasets in chunks rather than loading the entire dataset
into memory at once. They can be composed with other generators or
iterable objects using functions such as zip() and map() for powerful data
manipulation and analysis.
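As a sketch of such composition (the sensor streams are illustrative), zip() pairs two lazy generators and map() transforms a stream without materializing intermediate lists:

```python
# Two lazy streams: sample times and simulated temperature readings
times = (0.5 * i for i in range(5))
temps_c = (20.0 + 0.1 * i for i in range(5))

# zip() consumes both generators together, one element at a time
pairs = list(zip(times, temps_c))

# map() applies a conversion lazily to another stream of readings
temps_f = list(map(lambda c: c * 9 / 5 + 32,
                   (20.0 + 0.1 * i for i in range(5))))
print(pairs[0], temps_f[0])
```

Only the final list() calls materialize values; everything upstream stays lazy, which is what makes this pattern scale to very large datasets.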
1.17 EXCEPTIONS
Traceback (most recent call last):
  File "/var/folders/nb/lqpqgz253d7bh24zdjqk_vvr0000gq/T/
ipykernel_43145/1806623527.py", line 1, in <module>
    mean = sum(input_data) / len(input_data)
ZeroDivisionError: division by zero
Exception handling blocks can be set up to catch errors using the keywords
try and except. When an error occurs within the try block, the interpreter
looks for a matching except block to handle it. If there is one, execution
jumps there. The following handler will catch a ZeroDivisionError
exception in the calculation and inform the user.
temperature_data = []
try:
mean = sum(temperature_data) / len(temperature_data)
except ZeroDivisionError:
print("You can't divide by zero. The input data is blank.")
The next example will catch non-numeric input for calculating a square
number and inform the user to re-input until it is valid. Without the handler
a ValueError would result if the input was not numeric.
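That listing is not shown in this excerpt; a sketch of the described loop, wrapped in a function for reuse (the function name and prompt text are assumptions):

```python
def read_and_square(prompt="Enter a number to square: "):
    # Re-prompt until the input converts cleanly to a number
    while True:
        try:
            value = float(input(prompt))
        except ValueError:
            print("Invalid input, please enter a number.")
        else:
            return value ** 2
```

Calling read_and_square() keeps prompting on non-numeric input and returns the square once a valid number is entered.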
If you don't specify an exception type on the except line, it will catch all
exceptions. See https://docs.python.org/3/tutorial/errors.html for further
information on errors and exceptions.
1.18 SUMMARY
1.19 GLOSSARY
argument:
A value passed to a function or method when it is called. There are two
kinds:
keyword argument:
An argument that is passed to a function by name, preceded by a
keyword identifier in a function call (e.g., name=value), or passed in a
dictionary preceded by **.
positional argument:
An argument that is passed to a function by position without a
keyword. Positional arguments can appear at the beginning of an
argument list and/or be passed as elements of an iterable preceded by
*.
assignment:
A statement that assigns a value to a variable using the equals sign =.
Boolean:
A data type with two possible values of True or False.
class:
A template for creating objects. A class defines a set of attributes and
methods that the created objects will have. Classes are defined using
the class keyword.
comment:
Line(s) in a program not executed and used to provide explanations or
notes. Single line comments begin with a hash mark # and multiline
comments are surrounded by triple quotes.
data type:
A category of data items, such as integers, floats, strings, and
Booleans, that defines the kind of operations that can be performed on
the data.
dictionary:
An unordered collection of key-value pairs that is mutable, and can be
changed after creation. Dictionaries are defined by curly braces {} and
use keys to map to their corresponding values. Each key in a dictionary
must be unique.
exception:
An error that is detected while a program is running, often causing the
program to halt unless the error is handled.
expression:
A combination of variables, operators, and values that produces a
single result.
function:
A block of reusable code that performs a specific task. Functions are
defined using the def keyword and can take arguments to operate on
and return data.
immutable:
A data object whose state cannot be modified after it is created
including tuple data structures.
indentation:
Leading whitespace (spaces or tabs) at the beginning of a line of code
used to define the structure and hierarchy of code blocks.
indexing:
The process of accessing an element of a sequence, such as a list,
tuple, or string, using its position or index. Indexing starts at zero.
iteration:
The process of looping through elements in a sequence or repeatedly
executing a block of code.
iterable:
An object capable of returning its members one at a time, allowing it
to be looped over in a for loop.
keyword:
A reserved word that is used by the interpreter to parse a program (as
opposed to keyword referring to key names in keyword arguments).
Keywords are in Appendix Table A.1.
lambda:
An anonymous, inline function defined using the lambda keyword.
list:
An ordered collection of items that is mutable, which can be changed
after it is created. Lists are defined by square brackets [] and can store
elements of different data types.
literal:
A notation for representing a fixed value and data type.
method:
A function that is defined inside a class and is associated with an
object. Methods are used to perform operations on objects of that class.
mutable:
A data object whose state or contents can be modified after it is created
including lists and dictionaries.
object:
An instance of a class. Objects are created using a class definition and
have attributes (data) and methods (functions) defined by the class.
operator:
A symbol that represents a computation or operation, such as addition
+, subtraction -, or comparison ==.
set:
An unordered collection of unique items that is mutable. Sets are
defined by curly braces {} without key-value pairs.
slicing:
A mechanism for extracting a portion of a sequence, such as a list,
tuple, or string, using a specific range of indices. Slicing is performed
using the colon : operator within square brackets [].
statement:
An instruction that executes a command or action including
assignment statements and control statements.
string:
A sequence of characters enclosed in quotes (single, double, or triple)
used to represent text.
tuple:
An ordered collection of items that is immutable, whose elements
cannot be changed after it is created. Tuples are defined by parentheses
() and can store elements of different data types.
variable:
A name that refers to a value stored in memory.
1.20 EXERCISES
The major libraries for general purpose scientific and engineering usage are
introduced in this chapter. The big four are NumPy as a numerical computing
basis, Pandas for data analysis, SciPy for advanced scientific functions, and
Matplotlib for plotting and data visualization. They are often used together in
various combinations depending on the specific task. They provide generic
capabilities for all areas of engineering without specific domain functionality
(the only exception is that SciPy has a module for signal processing).
These libraries serve as components or building blocks for other open-source
libraries in the larger ecosystem. NumPy is the primary example, being the
foundational upstream library that the others depend upon. This design helps
simplify integration and data exchange with other Python libraries since they
are designed to be interoperable. For example, NumPy arrays and Pandas
DataFrames can be used as input in lieu of lists.
The libraries first need to be installed into a working Python development
environment. They are normally already installed in modern tools (if not,
use the pip install or conda install commands).
It can be helpful to understand library dependencies. Pandas uses NumPy
extensively for its underlying data structures and operations. Its DataFrame and
Series objects are built on NumPy arrays. It also uses SciPy and Matplotlib for
some functions, but they are not core dependencies. SciPy builds on top of
NumPy and extends it with additional scientific functions. Matplotlib depends
on NumPy for numerical operations. It also uses SciPy for some functions, but
it's not a required library.
All these open-source libraries are free to use, distribute, and modify, making
them accessible to everyone. They are designed to be reusable and can be
customized to fit specific needs and requirements. They are supported by large,
active communities. Code is subject to intense peer review and scrutiny before
being put into a public baseline.
Open-source development is at the forefront of innovation, as the libraries
are maintained by large communities of developers who are passionate about
advancing the state of the art. Users can benefit from ongoing support,
documentation, and bug fixes. It is even better to help and participate in the
communities.
2.1 NUMPY
NumPy (for Numerical Python) is the de-facto library for scientific computing
with powerful tools for multi-dimensional arrays and matrices as essential data
structures for engineering applications [14]. NumPy arrays resemble standard
Python lists and are used for similar operations, but NumPy is optimized for
fast performance on large datasets involving complex calculations. It is
written in C and compiled, allowing for faster computation.
Beyond its core functionalities in handling arrays and matrices, it has a vast
collection of mathematical functions to operate on arrays, capabilities in
probability and statistics, and other advanced features. Importantly, NumPy is a
basis for other scientific libraries in which it is used internally, including
SciPy, Pandas, and Matplotlib, and it integrates with many others.
A major advantage of using NumPy is that array vector operations are
generally much faster than iterating through lists. Vectorized operations apply
functions to entire arrays at once, rather than element by element. This allows
for efficient parallel processing. Operations incur execution overhead only once
for an array operation compared to the interpretation overhead for each
iteration. Furthermore, NumPy arrays are stored in contiguous blocks of
memory for faster memory access and better cache utilization.
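A quick timing comparison illustrates the point (the array size is illustrative; absolute timings vary by machine):

```python
import time
import numpy as np

n = 200_000
data = list(range(n))
arr = np.arange(n)

# Interpreted loop: one Python-level operation per element
start = time.perf_counter()
squares_list = [x ** 2 for x in data]
loop_time = time.perf_counter() - start

# Vectorized: a single compiled operation over the whole array
start = time.perf_counter()
squares_arr = arr ** 2
vector_time = time.perf_counter() - start

print(f"loop: {loop_time:.4f} s, vectorized: {vector_time:.4f} s")
```

The vectorized version is typically faster by an order of magnitude or more on arrays of this size.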
NumPy's primary object is the multidimensional array as a table of elements,
all of the same type, indexed by a tuple of non-negative integers. The
dimensions of an array are called axes. A summary of common tasks using
NumPy is in Table 2.1. A small sample of basic tasks follow, with dozens more
in subsequent chapters.
Table 2.1
Common NumPy Operations

Task                          NumPy Functions and Operations
Array Creation and Attributes
  Create from list            arr = np.array([1, 2, 3])
Operations
  Element-wise operations     arr1 + arr2, arr * 2
Array Manipulation
  Reshape arrays              arr.reshape(2, 3)
  Concatenate arrays          np.concatenate([arr1, arr2])
Statistical Functions
  Mean, Median, etc.          np.mean(arr), np.median(arr)
Random Numbers
  Generate random numbers     np.random.rand(2, 3) (uniform)
  Generate random integers    np.random.randint(1, 10, size=5)
Linear Algebra
  Matrix multiplication       np.dot(arr1, arr2)
After importing NumPy, arrays can be created from Python lists or tuples using
the array function. The elements of a multi-dimensional array must have the
same size.
import numpy as np
Arrays can be indexed and sliced similarly to Python lists with additional
options. Use standard Python syntax for the elements starting at number zero.
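The array definitions used in the following examples are elided in this excerpt. The values below are illustrative, but chosen to be consistent with the shapes, dtypes, and pressure results printed later:

```python
import numpy as np

# 1-D integer load arrays for a beam (values illustrative)
beam_load_internal = np.array([50, 75, 100, 125])
beam_load_external = np.array([100, 150, 200, 250])

# 2-D array of surface pressures (Pa); chosen so that multiplying by the
# 0.5 square-meter grid cells yields the forces shown later
pressures = np.array([[100.0, 200.0],
                      [300.0, 400.0]])
```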
>>> pressures[0, 1]
200.0
Array attributes include ndim (the number of axes), shape (the size of each
axis), and dtype (the data type of the array).
print(beam_load_external.ndim, beam_load_external.shape,
      beam_load_external.dtype)
print(pressures.ndim, pressures.shape, pressures.dtype)
1 (4,) int64
2 (2, 2) float64
[[0 1 2]
[3 4 5]
[6 7 8]]
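The 3x3 output above comes from an elided listing; it is what reshaping a range of integers produces:

```python
import numpy as np

# Nine sequential integers rearranged into a 3x3 array
grid = np.arange(9).reshape(3, 3)
print(grid)
```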
Arrays can also be populated in many other ways, such as with zeros to
initialize them, with random values for probability distributions, or with the
frequently used linspace function to generate an array of evenly spaced values.
# Array addition
beam_load_total = beam_load_internal + beam_load_external
print(beam_load_total)
# Array multiplication
# Areas of square surface grid cells (square meters)
areas = np.array([[0.5, 0.5],
[0.5, 0.5]])
# Calculate force on each surface element
forces = pressures * areas
print(forces)
[[ 50. 100.]
[150. 200.]]
# Criteria weights
weights = np.array([3, 10, 4])
criteria_performance = np.array([[86.4, 89.2, 75.0],
[90.2, 89.0, 82.1 ]])
# Calculate weighted performance for each criteria
weighted_performance = weights * criteria_performance
print(weighted_performance)
print("Fahrenheit:", temperatures_fahrenheit)
print("Celsius:", temperatures_celsius)
# Discretize a beam
L = 15 # Beam length (m)
n_segments = 10 # Number of discrete segments
x = np.linspace(0, L, n_segments)
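The load and cumulative-sum calculations that produce the output below are elided in this excerpt. A sketch consistent with the printed values (the 90 N/m load factor is inferred from them):

```python
import numpy as np

L = 15            # Beam length (m)
n_segments = 10   # Number of discrete segments
x = np.linspace(0, L, n_segments)

# Linearly increasing distributed load along the beam (factor inferred)
load = 90 * x

# Running totals approximate the shear force and bending moment
shear_force = np.cumsum(load)
bending_moment = np.cumsum(shear_force)
```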
load
[ 0. 150. 300. 450. 600. 750. 900. 1050. 1200. 1350.]
shear force
[ 0. 150. 450. 900. 1500. 2250. 3150. 4200. 5400. 6750.]
bending moment
[ 0. 150. 600. 1500. 3000. 5250. 8400. 12600. 18000.
24750.]
The next example creates arrays for projectile time and distance. It includes
NumPy trigonometric functions, the linspace function to create an array of
evenly spaced discrete time points, and generates a distance array of the same
length created by multiplication. These arrays will be used later for plotting
trajectories vs. angle.
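The creation of the time_points and distances arrays is elided here; a sketch consistent with the description (launch parameter values are illustrative):

```python
import numpy as np

angle = 45      # launch angle (degrees)
velocity = 50   # initial velocity (m/s)
g = 9.81

# Evenly spaced time points across the flight, via linspace
flight_time = 2 * velocity * np.sin(np.radians(angle)) / g
time_points = np.linspace(0, flight_time, 20)

# Horizontal distance array of the same length, created by multiplication
distances = velocity * np.cos(np.radians(angle)) * time_points
```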
import numpy as np
print(f'{time_points=} \n {distances=}')
NumPy has basic statistical functions like np.mean, np.median, np.var, np.std,
and more for summarizing and understanding data distributions. These can be
applied to both NumPy arrays and standard lists.
Aggregation functions like sum, prod, cumsum, and cumprod are helpful in
computing cumulative and multiplicative statistics on arrays. Functions such as
np.corrcoef and np.cov are available for calculating correlation and
covariance matrices to help understand relationships between variables.
The random module includes methods to generate random numbers from
probability distributions which are useful in simulations and random sampling.
The following creates an array of 1000 values of a Rayleigh distribution to
model wind velocity. The scale is the mean value, and the size parameter is the
number of samples. This method of generating random samples is used to drive
a simulation of wind turbine power output in Section 3.13.
np.random.rayleigh(scale=5, size=1000)
NumPy can compute histograms and bin data for statistical analysis,
visualization, and understanding of data distributions. Matplotlib can do much
of the same, though NumPy offers more control. For example, continuous
cumulative distributions can be computed and custom bins specified precisely
with NumPy, compared to Matplotlib histogram defaults. See Section 2.2.
Data transformation functions for sorting, filtering, and applying Boolean
conditions are available. The following sorts testing data to generate a
cumulative probability distribution. The arange function returns evenly spaced
values within a given interval. This example continues in Section 3.1.3 to plot
the distribution and use it for an inverse transform method to generate random
variates.
# sort data
sorted_durations = np.sort(durations)
cum_probabilities = (np.arange(len(sorted_durations))
                     / float(len(sorted_durations) - 1))
print(cum_probabilities)
2.2 MATPLOTLIB
As illustrated, a Figure is the larger canvas that Axes are placed within. Each
pyplot function makes some change to a figure, such as create a figure, create a
set of axes within a figure, plot some lines, decorate the plot with labels, etc.
Examples in Figure 2.1 show lines are drawn with ax.plot and scatter plots
with ax.scatter. The axis labels can be set with the ax.set_xlabel and
ax.set_ylabel methods respectively, and other components similarly as
shown.
Matplotlib has both an object-based Axes interface and a function-based
pyplot interface. The Axes interface is used to create a figure and one or more
axes objects; then methods are used explicitly on the named objects to add data,
configure limits, set labels etc. The pyplot interface consists of functions to
manipulate figures and axes that are implicitly present (also called state-based).
pyplot is mainly intended for rapid plots and simple cases of plot generation.
Many functions are identical in both interfaces though the object-oriented API
provides more options and flexibility.
Table 2.2 contrasts the two interfaces to draw an identical line plot. The
essential Matplotlib module is typically imported with import
matplotlib.pyplot as plt. The primary statement in both is the identical
plot method. Note that the end statement show() isn't necessary in many newer
environments to display the plots. A plt.savefig() function would be used to
save a graphic file.
Table 2.2
Matplotlib Interfaces

Axes interface                pyplot interface
fig, ax = plt.subplots()      plt.plot(x, y)
ax.plot(x, y)
An extra statement is needed with the Axes interface to instantiate figure and
axis objects, but the pyplot approach has limitations and will require more
bookkeeping overhead with additional plots. Complex and detailed plots are
usually simpler with the explicit O-O interface compared to using the implicit
pyplot interface.
Table 2.3 shows common Matplotlib operations. These examples use the
Axes interface and assume the figure and axis object are already created with
fig, ax = plt.subplots(). The plot type functions work the same with the
pyplot interface using the plt name instead of ax. For more examples, a set of
Matplotlib cheatsheets for beginning to advanced users is available on GitHub
at https://github.com/matplotlib/cheatsheets. They contain numerous options
and detailed reference beyond this introduction.
Table 2.3
Common Matplotlib Operations

Task                            Examples
Line and Scatter Plots
  Line plot with X and Y        ax.plot(x, y)
Bar Plot
  Bar plot                      ax.bar(x, height)
  Horizontal bar plot           ax.barh(y, width)
3D Plot
  3D scatter plot               ax.scatter3D(x, y, z)
Plot Customization
  Add title                     ax.set_title('Title')
Matplotlib Defaults Customization
  Change global default settings  plt.rcParams['font.size'] = 14
  Use a custom style            plt.style.use('seaborn-darkgrid')
The basic plot function is versatile and will take an arbitrary number of
arguments per the call signature:

plot([x], y, [fmt], *, data=None, **kwargs)
The x and y data sequences are the coordinates for lines or points. The
optional parameter fmt specifies basic formatting like color, marker, and
linestyle (e.g., ’r’ or ’red’ specifies red color). The optional data parameter is
an object with labelled data in lieu of x and y sequences (e.g., a dictionary or
Pandas DataFrame), and the labels are provided for plotting. Optional keyword
arguments (kwargs) provided as a dictionary will be unpacked to specify plot
properties such as a line label, linewidth, marker face colors, etc. These inputs
are shown in successive examples.
Matplotlib can plot y versus x as lines and/or unconnected markers for scatter
plots. The minimum input is a set of y values. When provided a single
sequence, it assumes y values and automatically generates an x sequence
starting with 0 of the same length. In this example, a list of 24 measurements is
sent to generate the line plot in Figure 2.2. After importing pyplot into the
conventional namespace plt, only plt.plot() is necessary to draw a line.
Labels for the axes are also added.
temperatures = [303, 341, 315, 320, 301, 320, 330, 330, 323, 309, 310,
                330, 333, 320, 310, 330, 323, 299, 310, 309, 293, 300,
                310, 314]

# input y values
plt.plot(temperatures)
plt.xlabel('Hour')
plt.ylabel('Tank Temperature (K)')
Figure 2.2 Line Plot Given Y Values
g = 9.81 # m/s^2
velocities = np.linspace(20, 100, 9)
heights = [velocity**2/g/2 for velocity in velocities]
The pyplot API used in the previous examples is rapid but less flexible than
the object-oriented Axes interface primarily used in this book. When generating
multiple plots in a script, pyplot becomes trickier and more verbose. The plot
state needs re-initializing between plots with a clf() to clear the current figure
or a cla() to clear the current axes.
With the Axes interface, Figure and Axes objects are created using plt.
subplots(). The subplots method returns figure and axis objects that are
typically assigned to fig and ax respectively. Object methods are then used to
draw data such as ax.plot() or ax.scatter() and set properties such as
ax.set_xlabel() or ax.set() which can take multiple attribute settings.
The Axes interface is used to plot multiple flight time curves against varying
launch angles in Figure 2.4. A dictionary is created with trajectory data for a
specified parameter space and plotted. The dictionary contains lists of flight
times for each angle keyed by velocity values. Each ax.plot in the loop adds
another velocity curve.
# Parameter space
angles = np.linspace(10, 90, 10)
velocities = np.linspace(50, 80, 4)
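The dictionary construction and plotting loop are elided here; a sketch consistent with the description (the level-ground flight-time formula is assumed):

```python
import numpy as np
import matplotlib.pyplot as plt

g = 9.81
angles = np.linspace(10, 90, 10)
velocities = np.linspace(50, 80, 4)

# Lists of flight times for each angle, keyed by velocity
flight_times = {v: [2 * v * np.sin(np.radians(a)) / g for a in angles]
                for v in velocities}

fig, ax = plt.subplots()
for v, times in flight_times.items():
    ax.plot(angles, times, label=f'{v:.0f} m/s')  # one curve per velocity
ax.set(xlabel='Angle (degrees)', ylabel='Flight Time (s)')
ax.legend()
```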
A scatter plot can be drawn with the same plot command used for a line by
specifying a marker type (e.g., with a shorthand parameter ’o’ or marker=’x’)
or with the scatter method. The next example uses a scatter plot to visualize
and characterize cost functions for different types of engines from historical
data. Each engine type is plotted as a separate call and color coded. The NumPy
line fitting function polyfit is used to derive a best fit regression line and
plotted for each engine type.
This example reads input from a CSV file with the following structure. Note
that the manual file processing and data analysis could be simpler using a
general-purpose Pandas function per the next section.
cost,horsepower,engine_type
21343.6,326,diesel
34990.2,658,diesel
54330.3,586,gas_turbine
...
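The full listing is elided here. A sketch of the described approach, using a few in-memory rows in place of the file (the first three rows come from the sample above; the fourth is hypothetical):

```python
import numpy as np
import matplotlib.pyplot as plt

# (cost, horsepower, engine_type) rows standing in for the CSV file
rows = [
    (21343.6, 326, 'diesel'),
    (34990.2, 658, 'diesel'),
    (54330.3, 586, 'gas_turbine'),
    (61240.0, 640, 'gas_turbine'),   # hypothetical fourth row
]

fig, ax = plt.subplots()
fits = {}
for engine_type, color in [('diesel', 'tab:blue'),
                           ('gas_turbine', 'tab:orange')]:
    hp = np.array([r[1] for r in rows if r[2] == engine_type])
    cost = np.array([r[0] for r in rows if r[2] == engine_type])
    ax.scatter(hp, cost, color=color, label=engine_type)
    # Degree-1 polyfit gives the best-fit regression line per engine type
    slope, intercept = np.polyfit(hp, cost, 1)
    fits[engine_type] = (slope, intercept)
    ax.plot(hp, slope * hp + intercept, color=color)
ax.set(xlabel='Horsepower', ylabel='Cost ($)')
ax.legend()
```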
import numpy as np
import matplotlib.pyplot as plt
# Histogram
fig, axis = plt.subplots()
axis.hist(velocities, bins=100)
axis.set(xlabel='Wind Velocity', ylabel='Frequency', xlim=[0, 200])
Figure 2.6 Histogram
# Cumulative distribution
fig, axis = plt.subplots()
axis.hist(velocities, histtype='step', cumulative=True, bins=100)
axis.set(xlabel='Wind Velocity', ylabel='Cumulative Probability',
         xlim=[0, 200])
Box plots are also created from data sequences in lists, arrays, or labeled series.
In this example the targeting errors for different types of projectile launchers
are modeled as normal distributions. Both arrays are sent and displayed
alongside each other for easy comparison as seen in Figure 2.8.
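The error arrays are generated in an elided listing, presumably along these lines (the distribution parameters and sample size are assumptions):

```python
import numpy as np

# Targeting errors (m) drawn from normal distributions; manual launchers
# are assumed to have a larger spread
errors_auto = np.random.normal(loc=0, scale=2, size=500)
errors_manual = np.random.normal(loc=0, scale=5, size=500)
```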
fig, ax = plt.subplots()
ax.boxplot([errors_auto, errors_manual])
ax.set_xticklabels(['Electronic', 'Manual'])
ax.set(xlabel = "Launcher Type", ylabel="Targeting Error (m)")
Three dimensional plots require input arrays for the X, Y, and Z axes.
Matplotlib can visualize either surface or volumetric data in 3D space, as well
as lines, histograms, and other plot types. They can also be animated (see the
next section). Many examples of 3D plotting can be found at
https://matplotlib.org/stable/gallery/mplot3d/index.html. Color maps for 3D
plotting are described in detail at
https://matplotlib.org/stable/users/explain/colors/colormaps.html.
In the next example, a response surface visualization of projectile distance by
angle and velocity is created with the plot_surface method. A color map is
specified with the cmap parameter to visualize the response surface in Figure
2.9.
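The grid construction is elided here; a sketch using meshgrid and the level-ground range formula (grid resolution and limits are illustrative):

```python
import numpy as np

g = 9.81
angle_vals = np.linspace(10, 80, 50)       # degrees
velocity_vals = np.linspace(20, 100, 50)   # m/s

# 2-D grids so distance can be evaluated at every (velocity, angle) pair
velocity, angle = np.meshgrid(velocity_vals, angle_vals)

# Projectile range on level ground for each grid point
distances = velocity**2 * np.sin(np.radians(2 * angle)) / g
```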
# Plotting
fig, ax = plt.subplots(subplot_kw={'projection': '3d'})
ax.plot_surface(velocity, angle, distances, cmap='viridis')
ax.set(xlabel='Velocity (m/s)', ylabel='Angle (degrees)',
       zlabel='Distance (m)')  # 'Response Surface of Distance of a Projectile'
ax.view_init(elev=20, azim=35, roll=0)
2.2.7 ANIMATION
...
fig, ax = plt.subplots()
ax.set(xlabel='Distance (m)', ylabel='Height (m)', xlim=[0, x_max+1],
       ylim=[0, y_max+1], title='Projectile Animation')
# Draw projectile moving over time as scatter plot points
for time in np.arange(0, flight_time, dt):
    x, y = projectile_position(time)
    ax.scatter(x, y, marker='o', color='b')
    plt.draw()
    plt.pause(0.1)  # animation time delay between points (seconds)
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
# Parameters
v0 = 60
theta = np.pi / 4 # Launch angle (radians)
g = 9.81
# Plot
fig, ax = plt.subplots()
trajectory, = ax.plot([], [])
projectile, = ax.plot([], [], marker='o', color='red')
display_text = ax.set_title('')
ax.set(xlim=[0, max(x)], ylim=[0, max(y)], xlabel='Distance (m)',
       ylabel='Height (m)')
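The frame-update function and the FuncAnimation call are elided above. A self-contained sketch of that pattern, with the trajectory arrays recomputed from the stated parameters:

```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation

v0, theta, g = 60, np.pi / 4, 9.81

# Precompute the full trajectory
t = np.linspace(0, 2 * v0 * np.sin(theta) / g, 100)
x = v0 * np.cos(theta) * t
y = v0 * np.sin(theta) * t - 0.5 * g * t**2

fig, ax = plt.subplots()
trajectory, = ax.plot([], [])
projectile, = ax.plot([], [], marker='o', color='red')
ax.set(xlim=[0, max(x)], ylim=[0, max(y)],
       xlabel='Distance (m)', ylabel='Height (m)')

def update(frame):
    # Each frame extends the trajectory line and moves the marker
    trajectory.set_data(x[:frame], y[:frame])
    projectile.set_data([x[frame]], [y[frame]])
    return trajectory, projectile

ani = animation.FuncAnimation(fig, update, frames=len(t), interval=30)
```

FuncAnimation calls update once per frame, which is the clock-driven update procedure referred to in the text below.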
Note that the animation API call method mirrors the logic of a general
purpose time-based simulation. As described in [11], a simulation framework
requires a time clock, parameter initialization routine, and update procedure for
each time step which corresponds to a frame in Matplotlib. Thus the animation
template can be easily and naturally integrated with dynamic simulations in
various ways (see Chapter exercises). They can also be made interactive with
user controls.
2.3 PANDAS
Pandas is the Python Data Analysis Library used to explore and analyze data in
many formats. It provides data structures for efficient data manipulation and
rapid analysis with functions for statistics, time series analysis, data cleaning,
merging, reshaping and more. It is widely used in engineering applications for
large datasets including simulations and machine learning. Being compatible
with NumPy, SciPy, Matplotlib, statsmodels, scikit-learn, and many other
libraries, it saves effort with its richness of underlying data structures and
functions.
Central to Pandas are data structures for a Series as a one-dimensional array
holding data of any type and a DataFrame as a two-dimensional table similar to
a spreadsheet. A DataFrame object contains rows and columns with additional
labelling metadata. DataFrames are powerful for data manipulation and
analysis involving multiple related variables.
Data structures are labeled with row indices and column names. Pandas by
default will create its own index for each Series or DataFrame row starting
from 0 and incrementing by 1 for each element. Other indices can also be
specified. Each DataFrame column is labeled with a column name and acts like
a Series.
A summary cheatsheet of common tasks using Pandas is in Table 2.4
showing the breadth of available operations. The reader can find a thorough
coverage of using DataFrames for data manipulation for engineering
applications in [13]. Examples of basic tasks for creating datasets, selecting and
indexing data, generating statistics, and vectorized operations follow.
Table 2.4
Common Pandas Operations

Task                            Examples
Data Loading and Saving
  Load CSV                      df = pd.read_csv('data.csv')
Iteration
  By row                        for index, row in df.iterrows():
                                for i, row_index in enumerate(df.index):
  By column                     for column in df:
Data Manipulation
  Add new column                df['new_column'] = ...
Missing Values
  Check for missing values      df.isnull().sum()
  Drop rows with missing values df.dropna()
  Fill missing values           df.fillna(value)
Aggregation and Statistics
  Describe data                 df.describe()
  Statistics                    df['column'].mean(), df['column'].median(),
                                df['column'].std()
  Group-by operations           df.groupby('column').mean()
Time Series
  Set time as index             df.set_index('time_column')
Apply Functions
  Apply function to each element  df['column'] = df['column'].apply(function)
  Vectorized operations         Use operators and functions directly on
                                DataFrame or Series
Other Useful Functions
  Show first, last few rows     .head(), .tail()
  DataFrame information         .info()
import pandas as pd

# Pressure readings (PSIA); the Series creation is reconstructed from the
# output below
df = pd.Series([22, 24, 24, 25, 27, 29, 30, 27, 26, 26])
print(df)
print(f"Third pressure reading: {df[2]} PSIA")
# Pressure mean
print(f"Mean pressure: {df.mean()} PSIA")
0 22
1 24
2 24
3 25
4 27
5 29
6 30
7 27
8 26
9 26
dtype: int64
Third pressure reading: 24 PSIA
Mean pressure: 26.0 PSIA
Often a custom index is desired that may be part of the data itself or created
independently. Next an index is defined and a series element is accessed with
its custom index.
import pandas as pd

# Temperature settings (C) keyed by chemical name (values from the output)
df = pd.Series([21, -45, 14, 18],
               index=['Naphtha', 'Liquefied Petroleum Gas', 'Benzene',
                      'Toluene'])
print(df)
print(f"Temperature setting of Benzene: {df['Benzene']}C")
Naphtha 21
Liquefied Petroleum Gas -45
Benzene 14
Toluene 18
dtype: int64
Temperature setting of Benzene: 14C
External files can be imported and used for datasets. The next example reads an
Excel file containing detailed altitude and air density values for atmospheric
calculations (this is used in Section 3.16 for meteor trajectory analysis). The air
density data points are measured every 2000 feet per the input spreadsheet
shown in Figure 2.12.
Figure 2.12 Altitude Density Spreadsheet
This example also demonstrates the use of labels for each column. In this
case they come with the data in a header row, but could also be specified
manually.
The file contents are put directly into the DataFrame using the read_excel
function.
import pandas as pd
# Create dataframe
df = pd.read_excel('air densities.xlsx')
print(df)
Altitude Density
0 0.0 1.230000e+00
1 2000 1.010000e+00
2 4000 8.190000e-01
3 6000 6.600000e-01
4 8000 5.260000e-01
.. ... ...
96 192000 3.600000e-10
97 194000 3.600000e-10
98 196000 3.600000e-10
99 198000 3.600000e-10
100 200000 2.500000e-10
One can then select DataFrame columns with their column names.
print(df['Altitude'])
In this case it is desired to set the index appropriately for altitude
dependent calculations. The next example defines the altitude column as the
index, which is seen in the printed summary. It can now be used to relate
density for given altitudes in simulations.
df = df.set_index('Altitude')
print(df['Density'])
Altitude
0.0       1.230000e+00
2000      1.010000e+00
...
198000    3.600000e-10
200000    2.500000e-10
Name: Density, Length: 101, dtype: float64
import pandas as pd
time,tank_1,tank_3,tank_4,tank_5
0:00,415,302,245,303
0:05,414,301,216,309
0:10,417,308,225,317
...
23:50,417,308,232,320
23:55,417,319,215,319
The rows for each data collection time point can be iterated over with
.iterrows().
df = pd.read_csv('tank_levels.csv', index_col='time')
for index, row in df.iterrows():
    print(f"index = {index}")
    print(f"row = {row}")
index = 0:00
row = tank_1 415
tank_3 302
tank_4 245
tank_5 303
Name: 0:00, dtype: object
index = 1:00
row = tank_1 414
tank_3 301
tank_4 216
tank_5 309
...
The columns for each tank can be iterated over and accessed for calculations by
iterating directly over the dataframe.
import pandas as pd
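The column-iteration listing is elided here. A sketch with a small illustrative DataFrame standing in for the CSV data:

```python
import pandas as pd

# Small stand-in for the tank_levels.csv data
df = pd.DataFrame({'tank_1': [415, 414], 'tank_3': [302, 301]},
                  index=['0:00', '0:05'])

# Iterating directly over a DataFrame yields its column names
for column in df:
    mean_level = df[column].mean()   # each column behaves like a Series
    print(f"{column} mean level: {mean_level}")
```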
2.4 SCIPY
SciPy is a general purpose library for scientific computing with a wide range of
tools for numerical optimization, statistics, interpolation, signal processing,
linear algebra, integration, differential equations, and more [18]. The SciPy
functions are segmented across modules, and are generally imported one
module at a time and namespaced accordingly (as opposed to all Pandas
functions imported at once as pd). Hence Table 2.5 showing common
operations performed with SciPy includes the different module import names
for usage.
Table 2.5
Common SciPy Tasks
Task Examples
Optimization
Non-linear least squares      from scipy.optimize import curve_fit
                              popt, pcov = curve_fit(func, xdata, ydata)
Signal Processing
Fast Fourier Transform        from scipy.fft import fft, ifft
Filtering                     from scipy.signal import filtfilt
Sparse Matrices
Creating sparse matrices      from scipy.sparse import csr_matrix
Sparse matrix operations      Use similar syntax as dense matrices
Other Useful Functions
Distance metrics              from scipy.spatial.distance import cdist
2.4.1 STATISTICS
import numpy as np
import scipy.stats as stats
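As a minimal sketch of the module in use (the sample data and the distribution choice are illustrative assumptions), scipy.stats can summarize a sample and fit a distribution to it:

```python
import numpy as np
import scipy.stats as stats

# Illustrative measurement sample
rng = np.random.default_rng(0)
data = rng.normal(loc=100, scale=5, size=1000)

# Summary statistics
print(f"mean = {np.mean(data):.2f}, std = {np.std(data, ddof=1):.2f}")

# Fit a normal distribution to the sample
mu, sigma = stats.norm.fit(data)
print(f"fitted mu = {mu:.2f}, sigma = {sigma:.2f}")
```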
2.4.2 OPTIMIZATION
It is desired to minimize the cost of building holding tanks, where the material
cost depends on the exterior surface area. The SciPy optimize module is used
in Listing 2.3 to design a cylindrical tank for a specific volume. First, the
minimize method is chosen, which is suitable for finding the minimum value of
a scalar function with or without constraints. The optimization is then
formulated by defining the objective function as the surface area dependent on
radius and height, and the constraint for the total volume.
An initial guess for the radius and height of the tank is provided to start the
optimization algorithm. The constraint is then defined as a dictionary with the
type eq indicating an equality constraint, and specifying the function that
ensures the volume of the tank is exactly 1000 units. Bounds are set for the
radius and height to ensure both remain positive throughout the optimization.
The optimization uses the Sequential Least Squares Programming (SLSQP)
method, which is well-suited for many constrained optimization problems.
The result returned when calling minimize is an OptimizeResult object that
contains information about the outcome of the optimization process. The
attribute for the success Boolean flag is checked, and then the optimized
values are printed out.
The message attribute is a string that provides a description of the exit status of
the optimizer.
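A sketch consistent with the formulation described above (the function names, bounds, and initial guess are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize

# Objective: exterior surface area of a closed cylinder (two ends plus side)
def surface_area(x):
    r, h = x
    return 2 * np.pi * r**2 + 2 * np.pi * r * h

# Equality constraint: volume must equal 1000 units
def volume_constraint(x):
    r, h = x
    return np.pi * r**2 * h - 1000

x0 = [1.0, 1.0]  # initial guess for radius and height
constraints = {'type': 'eq', 'fun': volume_constraint}
bounds = [(0.001, None), (0.001, None)]  # keep radius and height positive

result = minimize(surface_area, x0, method='SLSQP',
                  bounds=bounds, constraints=constraints)

if result.success:
    r, h = result.x
    print(f"Optimal radius = {r:.3f}, height = {h:.3f}")
    print(f"Minimum surface area = {result.fun:.3f}")
else:
    print(result.message)
```

The optimum for a closed cylinder occurs where the height equals the diameter, which the optimizer recovers numerically.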
distance,intensity
0.0,241.571
0.08,202.015
0.16,179.053
0.24,158.883
...
The curve_fit method takes a defined model function with x and y data points,
then finds the optimal values for the function parameters (a, b, and c) that
minimize the difference between the function's output and the actual data
points. The fitted parameter values are returned in the array popt.
The script first imports the experimental data with Pandas, uses Matplotlib to
draw a scatter plot of the data alongside a continuous fitted curve, and displays
the calibrated decay curve equation written with LaTeX.
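A sketch of the fitting step (the decay model form and the synthetic data are illustrative assumptions standing in for the distance/intensity measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed exponential decay model with parameters a, b, c
def func(x, a, b, c):
    return a * np.exp(-b * x) + c

# Synthetic data standing in for the measured distance/intensity points
rng = np.random.default_rng(1)
xdata = np.linspace(0, 4, 50)
ydata = func(xdata, 240.0, 1.5, 20.0) + rng.normal(0, 2, 50)

# Fit the model; optimal parameter values are returned in popt
popt, pcov = curve_fit(func, xdata, ydata)
a, b, c = popt
print(f"a = {a:.1f}, b = {b:.2f}, c = {c:.1f}")
```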
A hypothesis test of the exponential model fit can be performed with the SciPy
stats function ttest_ind. It calculates the T-test for the means of two independent
samples of scores, assuming the null hypothesis that the samples have identical
average values. Here the residuals are compared to a normal distribution with a
mean of zero.
# Significance level
alpha = 0.05
print(f"t-statistic: {t_statistic:.4f}")
p_value = abs(p_value) # two-tailed test
print(f"p-value (two-tailed): {p_value:.4f}")
t-statistic: -0.0000
p-value (two-tailed): 1.0000
Fail to reject the null hypothesis that the model fits the data
(residuals are zero).
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt
2.5 NETWORKX
The NetworkX library is widely used to create, modify, and analyze network
models for engineering systems or processes involving multiple components,
connections, or interactions. Network data structures are particularly useful for
modeling, analyzing, and optimizing such systems because they represent them
as graphs consisting of nodes (vertices) and edges (connections) between
nodes. In a graph, nodes represent system entities, and edges represent
relationships between those entities.
NetworkX includes algorithms to analyze graph, node, and edge properties
such as connectivity, flow, and centrality. Paths through networks can be
computed and optimized. These analyses can help identify bottlenecks,
optimize routes, and improve overall system performance in areas such as
communication networks, electrical circuits, supply chains, and project
management.
Constructing a network in NetworkX fundamentally involves adding nodes
and edges. There is flexibility to add them individually, in groups, or from
external data sources such as lists or datasets. Additionally, attributes can be
assigned to graphs, nodes, and edges to provide contextual or quantitative data
for computations. NetworkX uses a default weight attribute for graph
computations, though any named attribute can be used, as is demonstrated later.
Table 2.6 provides examples of common NetworkX operations. It includes
functions for creating different types of network graphs, adding nodes and
edges, computing various network properties and algorithms such as shortest
paths, maximum flow, and centrality measures, as well as visualization and file
I/O. Full documentation is available at
https://networkx.org/documentation/stable/reference/index.html, with
additional resources on graph theory and network analysis at
https://networkx.org/nx-guides/index.html.
Table 2.6
Common NetworkX Operations
Task Examples
Graph Creation
Create a regular graph           G = nx.Graph()
Create a directed graph          G = nx.DiGraph()
Create a multigraph              G = nx.MultiGraph()
Add a single node                G.add_node('A')
Add multiple nodes               G.add_nodes_from([2, 3])
Add nodes with attributes        G.add_node('A', role='Router')
Add a single edge                G.add_edge(1, 2)
Add multiple edges               G.add_edges_from([('A', 'B'), ('B', 'C')])
Add edges with attributes        G.add_edge('A', 'B', latency=10, label='Wi-Fi')
                                 G.add_edges_from([(3, 4, {'weight': 20, 'label': 'Route1'})])
Import nodes/edges from a list   G.add_edges_from(edge_list), G.add_nodes_from(node_list)
Add node or edge attributes      G.add_node(1, label="Start"), G.add_edge(1, 2, weight=10)
Get nodes and edges              G.nodes(), G.edges()
Get the degree of a node         G.degree(1)
Graph Operations
Find all paths between nodes     list(nx.all_simple_paths(G, 1, 3))
Find the shortest path           nx.shortest_path(G, 1, 3)
Calculate total path weight      nx.path_weight(G, path, weight='attribute')
Check if nodes are connected     nx.has_path(G, 1, 3)
Find all neighbors of a node     list(G.neighbors(1))
Graph Analysis
Clustering coefficient           nx.clustering(G, 1)
Degree centrality                nx.degree_centrality(G)
Betweenness centrality           nx.betweenness_centrality(G)
Find connected components        nx.connected_components(G)
Layout and Visualization
Compute layout positions         pos = nx.spring_layout(G)
Draw the graph                   nx.draw(G, pos)
Draw with labels                 nx.draw(G, pos, with_labels=True)
File Input and Output
Read graph from a file           G = nx.read_edgelist('edges.txt')
Write graph to a file            nx.write_edgelist(G, 'output.txt')
Graph Properties
Check graph density              nx.density(G)
Get graph diameter               nx.diameter(G)
Check if graph is connected      nx.is_connected(G)
NetworkX provides flexible methods for adding nodes and edges to a graph.
Per Table 2.6, nodes can be added one at a time or as a group from any iterable
such as a list. Similarly, edges can be added individually, or in groups.
Attributes can be assigned to nodes or edges using keyword arguments or
dictionaries. Nodes and edges can also be imported from external lists or files.
The example in Listing 2.5 models a Local Area Network (LAN) where
nodes represent devices (modems, routers, and switches), and edges represent
bi-directional communication links between them. In this example, a graph is
initialized using nx.Graph(), nodes are added for the devices, and edges are
defined for the communication links.
After creating the graph, it can be visualized as shown in Figure 2.15 using
the nx.draw() function as an undirected network. The graph rendering
illustrates the structure of the nodes and edges, aiding in the understanding of
the system layout. The graph is rendered internally using Matplotlib.
import networkx as nx
import matplotlib.pyplot as plt
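A sketch of the LAN model construction (the device names and links are illustrative):

```python
import networkx as nx
import matplotlib.pyplot as plt

# Devices as nodes, bi-directional communication links as edges
G = nx.Graph()
G.add_nodes_from(['Modem', 'Router1', 'Router2',
                  'Switch1', 'Switch2', 'Switch3'])
G.add_edges_from([('Modem', 'Router1'), ('Router1', 'Router2'),
                  ('Router1', 'Switch1'), ('Router2', 'Switch2'),
                  ('Router2', 'Switch3')])

# Render the undirected network with Matplotlib
nx.draw(G, with_labels=True)
plt.show()
```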
The displayed graph using the default layout places the top node of the
hierarchy in the center, with radial connections spreading out as successive
levels in the hierarchy. One can see which main branches go down one or two
levels of decomposition by counting the connections. However, a directed tree
visualization is desirable. Note that there is an imperfection due to a long label
being clipped off; NetworkX adjusts layouts for node placement only. This case
can be remedied with Matplotlib parameters to add margin (used in Section
2.5.3).
Listing 2.7 NetworkX Directed Hierarchical Graph
import networkx as nx
There are several layout options for positioning nodes and edges within a
graph. The layout determines the coordinates of nodes in a two-dimensional
plane. Common layout functions in NetworkX include the following. A
complete list of graph layouts is at
https://networkx.org/documentation/stable/reference/drawing.html#module-
networkx.drawing.layout.
spring_layout is a layout where node positions are computed using a
force-directed algorithm where nodes repel each other, while edges act as
springs to keep connected nodes close. This often results in an aesthetically
pleasing layout, but the node locations are random unless a seed is
provided. This is the default.
circular_layout places the nodes in a circle, useful for highlighting cyclic
structures.
shell_layout arranges nodes in concentric circles, useful for layered graph
structures.
spectral_layout is based on the eigenvectors of the graph Laplacian
matrix; it positions nodes in a way that minimizes a quadratic energy
function.
random_layout places nodes randomly within a unit square.
Table 2.7
NetworkX Layout Examples
Per Table 2.7, the default spring layouts evenly spread the nodes in random
directions. The hierarchical system is seen with the top node in the middle with
radial edges. The circular layouts put all nodes in a ring, and in these cases the
shell layout produces mirror images of the circular layout (which isn't the case
for all networks). The spectral layout minimizes an energy function but places
some nodes on top of each other; the occlusion is undesirable for visualizing
these situations. The beginning and end nodes of directed graphs are difficult to
follow in some of the cases.
The node positions in layouts such as spring_layout are random due to the
initial conditions used in force-directed algorithms. These algorithms are
iterative, calculating forces between nodes and edges to determine the final
layout. As a result, the outcome can vary between runs unless a seed is
explicitly provided. Layouts like random_layout are designed to place nodes
randomly by default.
The optional pos argument in the nx.draw function allows for specifying
exact node positions, offering more control over the layout. By default,
NetworkX automatically generates node positions based on the selected layout
algorithm (e.g., spring_layout). For simple visualizations, the pos argument
can often be omitted. The following example demonstrates how to generate a
consistent layout by providing a fixed seed to the nx.spring_layout()
function for the randomization:
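A sketch of a reproducible layout (the small graph here is illustrative):

```python
import networkx as nx
import matplotlib.pyplot as plt

G = nx.Graph([('A', 'B'), ('B', 'C'), ('A', 'C'), ('C', 'D')])

# A fixed seed makes the force-directed node positions repeatable
pos = nx.spring_layout(G, seed=42)
nx.draw(G, pos, with_labels=True)
plt.show()
```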
NetworkX provides basic functionality for visualizing all types of graphs, but
its primary purpose is graph analysis rather than visualization. In the future,
graph visualization features may be removed from NetworkX or made available
as an add-on package. For more advanced graph visualization, using a
dedicated tool such as Graphviz, described in Section 2.6, is recommended.
One common task in engineering is finding the shortest path between two
nodes in a network. In a communication network, for instance, the shortest path
represents the minimum number of links (edges) required to transmit data
between two devices (nodes). The following example for the LAN network
model finds the shortest path between Router1 and Switch2 using the
nx.shortest_path function. It returns a list of nodes along the path with the
minimum number of hops. In this simple case, there is a direct route between
the nodes, with one alternative path involving three hops.
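A sketch of the call (the link topology is illustrative, including a direct Router1 to Switch2 link):

```python
import networkx as nx

# Illustrative LAN links, including a direct Router1 to Switch2 link
G = nx.Graph()
G.add_edges_from([('Modem', 'Router1'), ('Router1', 'Router2'),
                  ('Router1', 'Switch1'), ('Router1', 'Switch2'),
                  ('Router2', 'Switch2'), ('Router2', 'Switch3')])

# Minimum-hop path between the two devices
path = nx.shortest_path(G, 'Router1', 'Switch2')
print(path)  # direct link: ['Router1', 'Switch2']
```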
The example in Listing 2.9 calculates basic measurements for each node in
the LAN model and outputs a summary table.
import networkx as nx
import pandas as pd
def calculate_node_measures(G):
"""Calculates degree, betweenness, and clustering coefficients
for ⤦
each node in a graph and returns a combined pandas
DataFrame.
Args:
G: A NetworkX graph.
Returns:
A pandas DataFrame containing node names, degree centrality, ⤦
betweenness centrality, and clustering coefficient values.
"""
    degree_centrality = nx.degree_centrality(G)
    betweenness_centrality = nx.betweenness_centrality(G)
    clustering_coefficient = nx.clustering(G)
    data = {'Node': list(G.nodes()),
            'Degree Centrality': [degree_centrality[n] for n in G.nodes()],
            'Betweenness Centrality': [betweenness_centrality[n] for n in G.nodes()],
            'Clustering Coefficient': [clustering_coefficient[n] for n in G.nodes()]}
    df = pd.DataFrame(data)
    df.sort_values(by='Degree Centrality', ascending=False, inplace=True)
    return df
import networkx as nx
start = 'Router1'
end = 'Switch1'
# Find the shortest path and compute its total cost (latency)
path = nx.shortest_path(G, start, end)
path_cost = nx.path_weight(G, path, weight='latency')
print(f"Shortest path from {start} to {end} is {path}")
print(f"Total latency for this path = {path_cost} ms")
import networkx as nx

# Manufacturing process
G = nx.DiGraph()

# Add edges with duration attribute
G.add_edge("Board Stock", "Fabrication", duration=4)
G.add_edge("Fabrication", "Assembly 1", duration=3)
G.add_edge("Fabrication", "Assembly 2", duration=3)
G.add_edge("Fabrication", "Assembly 3", duration=5)
G.add_edge("Assembly 1", "Test Station 1", duration=3)
G.add_edge("Assembly 1", "Test Station 2", duration=5)
G.add_edge("Assembly 2", "Test Station 1", duration=3)
G.add_edge("Assembly 2", "Test Station 2", duration=6)
G.add_edge("Assembly 3", "Test Station 1", duration=4)
G.add_edge("Assembly 3", "Test Station 2", duration=5)
G.add_edge('Test Station 1', 'Packaging', duration=2)
G.add_edge('Test Station 2', 'Packaging', duration=2)
# Find paths and their durations
paths = nx.all_simple_paths(G, 'Board Stock', 'Packaging')
print('Path Durations')
for path in paths:
    path_duration = nx.path_weight(G, path, 'duration')
    print(path, path_duration)
# Find the minimum-duration path
path = nx.shortest_path(G, 'Board Stock', 'Packaging', weight='duration')
print(f"Shortest path = {path}")
print(f"Duration = {nx.path_weight(G, path, 'duration')}")
nx.draw(G, with_labels=True)
Path Durations
['Board Stock', 'Fabrication', 'Assembly 1', 'Test Station 1',
'Packaging'] 12
['Board Stock', 'Fabrication', 'Assembly 1', 'Test Station 2',
'Packaging'] 14
['Board Stock', 'Fabrication', 'Assembly 2', 'Test Station 1',
'Packaging'] 12
['Board Stock', 'Fabrication', 'Assembly 2', 'Test Station 2',
'Packaging'] 15
['Board Stock', 'Fabrication', 'Assembly 3', 'Test Station 1',
'Packaging'] 15
['Board Stock', 'Fabrication', 'Assembly 3', 'Test Station 2',
'Packaging'] 16
Shortest path = ['Board Stock', 'Fabrication', 'Assembly 1',
'Test Station 1', 'Packaging']
Duration = 12
Node and edge attributes can be useful for many calculations. NetworkX has
ample facilities for computing with edge data, but it does not provide
comparable built-in functions for quantitative node attributes. Node attribute
calculations are demonstrated in the project network critical path application in
Section 3.7.
2.5.6 CUSTOMIZATION
import networkx as nx
import matplotlib.pyplot as plt
2.6 GRAPHVIZ
Graphviz is an extensive library for creating and visualizing graph diagrams. It
can generate visual representations of various graph types, including directed
and undirected graphs, trees, and flowcharts. It uses the DOT language for
describing the graph, which is then rendered into an image file.
While both Graphviz and NetworkX are used for graph manipulation and
visualization, Graphviz excels in rendering, offering more advanced visuals. In
contrast, only NetworkX has graph computational capabilities. The common
graph structure as a network of nodes and edges enables compatibility between
them.
Graphviz provides a simple, intuitive syntax for creating graph diagrams. It
supports a wide range of graph types, making it versatile for many applications,
including system modeling, data visualization, flowchart creation, and network
diagramming. The library allows for extensive customization of the graph's
appearance, including colors, shapes, and labels, to meet specific visualization
needs. Additionally, it integrates well with other libraries and tools.
Table 2.8 summarizes common Graphviz operations for creating, modifying,
visualizing, and managing graph layouts. A complete API reference is available
at https://graphviz.readthedocs.io/en/stable/api.html.
Table 2.8
Common Graphviz Operations
Task Examples
Graph Creation
Create a new graph            import graphviz as gv
Initialize directed graph     dot = gv.Digraph()
Initialize undirected graph   dot = gv.Graph()
Set graph attributes          dot.attr(rankdir='LR')
                              dot = gv.Graph(graph_attr={'rankdir': 'LR'},
                                  node_attr={'shape': 'none'}, edge_attr={'penwidth': '2'})
                              dot.attr('node', fontname="arial", fontcolor='blue', color='invis')
                              dot.edge_attr.update(color="gray50", arrowsize="0.5")
Node and Edge Operations
Add node                      dot.node('A')
Graphviz supports both undirected and directed graphs, with multiple edges
between the same pair of nodes allowed by default. Unlike NetworkX, no
additional configuration is needed to handle multigraphs. The graphs can be
either cyclic or acyclic.
In the next example in Listing 2.13, Graphviz is applied to visually illustrate the
initial structure of a satellite system design as a hierarchy in Figure 2.19. This
hierarchy represents and communicates the system's engineering design, project
work allocation, and computation of weight and cost based on its structural
decomposition.
import graphviz as gv
# Display graph
G
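A sketch of the initial hierarchy construction (the subsystem names are illustrative assumptions):

```python
import graphviz as gv

# Top level of the satellite system hierarchy
G = gv.Digraph()
for subsystem in ['Payload', 'Bus', 'Ground Segment']:
    G.edge('Satellite', subsystem)

# In a notebook the graph object renders inline; otherwise inspect the DOT source
print(G.source)
```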
In the next revision in Listing 2.14, more layers, nodes, and edges are added
to fully represent the satellite system structure. The G.edges() function for
adding edges implicitly adds the nodes in defined edge pairs, reducing
redundancy in defining nodes separately using G.node(). An external list of
edges could also be passed to G.edges() to draw the entire tree. The resulting
graph, shown in Figure 2.20, illustrates the hierarchical structure.
import graphviz as gv
# Display graph
G
Graphviz provides several layout engines for positioning nodes and edges
within a graph. Each engine uses a different algorithm to determine the
placement of nodes in a two-dimensional space, and some allow for custom
node placements. These layout engines produce visualizations with distinct
structural properties. A full list of layout engines and their options is available
at https://graphviz.org/docs/layouts/.
dot is the default layout engine, used for directed graphs. It arranges nodes
hierarchically, minimizing edge crossings.
neato implements a spring model (force-directed) layout for undirected
graphs. It minimizes a global energy function to produce visually balanced
layouts, which are less structured compared to dot. Custom node placement
is supported for greater control.
fdp is similar to neato, but optimized for larger graphs, using a multi-scale
approach to generate layouts more efficiently.
sfdp is a multi-scale version of fdp, specifically designed for very large
graphs with a focus on scalability. Custom node placement is supported for
greater control.
twopi arranges nodes in concentric circles, with one node at the center and
other nodes placed in layers radiating outward. This layout is useful for
visualizing radial structures or hierarchical data that fans out from a single
central point.
circo creates circular layouts, useful for visualizing graphs with cyclic
structures or feedback loops.
osage is designed for clustering and layered graphs. Unlike dot, it focuses
on preserving node proximity within clusters rather than enforcing a strict
hierarchy. Custom node placement is supported.
patchwork generates a squarified treemap-like layout where nodes are
represented as rectangles arranged in a hierarchical structure. This is useful
for visualizing hierarchical data where node size represents importance or
weight.
import networkx as nx
import graphviz as gv
def networkx_to_graphviz(graph):
    """Converts a NetworkX graph into a Graphviz Digraph.
    Args:
        graph: A NetworkX graph object.
    Returns:
        A Graphviz Digraph object.
    """
    dot = gv.Digraph()
    for node in graph.nodes():
        dot.node(str(node))
    for u, v in graph.edges():
        dot.edge(str(u), str(v))
    return dot
# Manufacturing process
G = nx.DiGraph()

# Add edges with duration attribute
G.add_edge("Board Stock", "Fabrication", duration=4)
G.add_edge("Fabrication", "Routing", duration=4)
G.add_edge("Routing", "Assembly 1", duration=4)
G.add_edge("Routing", "Assembly 2", duration=4)
G.add_edge("Assembly 1", "Test Station 1", duration=1)
G.add_edge("Assembly 1", "Test Station 2", duration=4)
G.add_edge("Assembly 2", "Test Station 1", duration=4)
G.add_edge("Assembly 2", "Test Station 2", duration=4)
G.add_edge('Test Station 1', 'Packaging', duration=2)
G.add_edge('Test Station 2', 'Packaging', duration=2)
Rendering of the shortest path is added in the next script in Listing 2.16.
NetworkX is used to identify the path, which is sent to the
draw_shortest_path function with the rest of the graph. The function converts
the model to Graphviz while highlighting edges contained in the shortest path
seen in the resulting Figure 2.22.
Figure 2.22 Circuit Board Manufacturing Process Shortest Path with Graphviz
import networkx as nx
import graphviz as gv

def draw_shortest_path(graph, shortest_path):
    """Converts a NetworkX graph to a Graphviz Digraph, highlighting
    edges contained in the shortest path.
    Args:
        graph: A NetworkX graph object.
        shortest_path: A list of nodes representing the shortest path.
    Returns:
        A Graphviz Digraph object.
    """
    path_edges = adjacent_list(shortest_path)  # edge color choice is illustrative
    dot = gv.Digraph()
    for u, v in graph.edges():
        color = 'red' if (u, v) in path_edges else 'black'
        dot.edge(str(u), str(v), color=color)
    return dot
def adjacent_list(input_list):
"""Creates an adjacent list of tuples from a given input list.
Args:
input_list: A list of elements.
Returns:
A list of tuples representing adjacent elements in the
input ⤦
list.
"""
adjacent_list = []
for i in range(len(input_list) - 1):
adjacent_list.append((input_list[i], input_list[i + 1]))
return adjacent_list
2.6.5 CUSTOMIZATION
Graphviz has a wide variety of customizations for overall graph, node, and
edge attributes. In conjunction with the underlying DOT language, graph
clusters can be defined as groups of graphs. Edge relationships are possible
within clusters, between clusters, or between nodes in separate clusters.
Advanced graphic markup with HTML labels are possible. For more
information see graph attributes at https://graphviz.org/docs/graph/, node
attributes at https://graphviz.org/docs/nodes/, and edge attributes at
https://graphviz.org/docs/edges/. The DOT language reference is at
https://graphviz.org/doc/info/lang.html.
In this example, the LAN network diagram will be customized with
Graphviz. It matches some of the custom styling from the NetworkX example
in Section 2.5.6, but it also adds images for the nodes and explicitly sets the
graph layout direction from left to right.
As shown in Listing 2.17, the graph object is created with gv.Graph() and
customized with attributes for layout, node appearance, and edge properties.
The graph_attr dictionary is used to set global attributes for the entire graph,
such as the rank direction ’LR’ for a left-to-right layout. The node_attr
dictionary defines attributes for all nodes, where the shape is set to none to use
images instead. The edge_attr dictionary specifies the edge width as 2 units.
When adding nodes with G.node(), the image attribute is used to specify the
image file for each node, and the labelloc=’b’ option positions the label at the
bottom of the node. The G.edge() calls define edge labels and set attributes
including color and style. The resulting graph is shown in Figure 2.23.
Figure 2.23 Customized LAN Network Graph with Graphviz
import graphviz as gv
# Add edges
G.edge('Modem', 'Router1', label="coax")
G.edge('Router1', 'Router2', label="WiFi", color='green', ⤦
style='dashed')
G.edge('Router1', 'Switch1', label="ethernet", color='blue')
G.edge('Router2', 'Switch2', label="ethernet", color='blue')
G.edge('Router2', 'Switch3', label="ethernet", color='blue')
import graphviz
import textwrap
Parameters
----------
system : tuple or string
The name of the system to label the diagram. If it's a
tuple, ⤦
the first element is the text label and the second is
the ⤦
unicode icon.
external_systems : list of tuples or strings
Names of the external systems that interact with the system
⤦
in a list.
filename : string, optional
A filename for the output not including a filename
extension. ⤦
The extension will be specified by the format
parameter.
format : string, optional
The file format of the graphic output.
Returns
-------
g : graph object view
Save the graph source code to file, and open the rendered
⤦
result in its default viewing application. PyML calls
the ⤦
Graphviz API for this.
"""
wrap_width = 12
def wrap(text): return textwrap.fill(
text, width=wrap_width, break_long_words=False)
node_attr = {'color': 'black', 'fontsize': '11', 'fontname': ⤦
'arial',
'shape': 'box'}
c = graphviz.Graph('G', node_attr=node_attr,
filename=filename, format=format,
engine=engine)
if isinstance(system, tuple):
# Write html label for unicode font size and label
placement
system_name = wrap(system[0])
c.node(system_name, label=f'''<<font ⤦
point-size="30">{system[1]}</font><br/><font ⤦
point-size="11">{system[0]} </font>>''', labelloc="b",
⤦
shape='none')
else:
system_name = wrap(system)
c.node(wrap(system_name))
return c
2.7 SUMMARY
2.8 EXERCISES
The standard ordinary least squares (OLS) method for linear regression will
be used, which minimizes the sum of squared residuals between actual
observed values and predicted values of the dependent variable. The linear
regression equation for a single independent variable is:
y = a + bx
Where:
y is the dependent variable
x is the independent variable
a is the intercept
b is the linear coefficient.
import statsmodels.api as sm
import matplotlib.pyplot as plt
temperatures = [0, 50, 100, 150, 200, 250, 300, 350, 400]
resistances = [93, 122, 153, 248, 326, 363, 482, 518, 584]
X = temperatures
Y = resistances
model = sm.OLS(Y, sm.add_constant(X)).fit()  # fit with intercept term
predicted = model.predict(sm.add_constant(X))
print("R-squared:", model.rsquared)
print("Prob (F-statistic):", model.f_pvalue)
print("p-values:", model.pvalues)
print("Coefficients:", model.params)
print("Predicted Values:", predicted)
R-squared: 0.9844126608032431
Prob (F-statistic): 1.384471289860644e-07
p-values: [5.15302813e-03 1.38447129e-07]
Coefficients: [59.33333333 1.30833333]
Predicted Values: [ 59.33333333 124.75 190.16666667
255.58333333 321.
386.41666667 451.83333333 517.25 582.66666667]
The next script plots the results with the predicted relationship against the
data points shown in Figure 3.1.
fig, ax = plt.subplots()
ax.scatter(X, Y, label='data points')
ax.plot(X, predicted, color='red', label='predicted')
ax.set(xlabel='Temperature (C)', ylabel= 'Resistance (Ohms)')
ax.legend()
In this example, the OLS results show the generic variable names y and
x1 since the input lists had no data labels. The next Listings 3.2 and 3.3 use
Pandas dataframes, whose richer data structure produces explicit variable
names labeled on the output. This helps in understanding and communicating
the results, especially with more independent variables.
Prolaunch desires to better estimate the testing effort for new launchers
and software update releases. Data has been automatically collected on
previous developments on the counts of product features and actual testing
effort. It is written to a csv file to be read by Pandas with the following
structure:
Features,Testing Hours
14,129
25,277
9,122
...
Per Listing 3.2 that reads the csv file, the desired columns from the data
frame are specified for the X and Y variables. This is sufficient to explicitly
label the output results with the appropriate names.
import pandas as pd
import statsmodels.api as sm
y = a + b1 x1 + b2 x2 + … + bn xn
Custom drones are designed and built for a variety of applications, and a
cost model is desired to estimate the development cost of future drones.
Listing 3.3 reads data from previous drone projects for cost, drone weight,
and data rate (as total sensor throughput rate) in a csv file with Pandas. The
file has three columns with a header row of labels.
Two independent variables are specified for X in the OLS model creation.
The results show a strong cost model with both independent variables of the
form cost = 113.3 ∗ weight + 69.3 ∗ data rate.
A 3-dimensional scatter plot can be generated to show the regression line
and visualization of the residuals against the data points. The script
constructs a slightly customized plot resulting in Figure 3.2.
Figure 3.2 3D Scatter Plot and Regression
import pandas as pd
import statsmodels.api as sm
# Create figure
fig = plt.figure()
ax = plt.axes(projection ="3d")
The standard Python random library, NumPy, and SciPy provide functions
for generating random numbers from many different probability
distributions which can be used in simulations or other engineering
applications. Single values are generated with the standard random library.
NumPy and SciPy can generate arrays which are more suitable for large
data and advanced applications. The standard library has basic distributions,
NumPy has an extensive variety, while SciPy has even more available.
For example, the random.random function generates a random number
uniformly distributed between 0 and 1 where all values are equally likely to
occur. The np.random.uniform(min, max) function can generate an array
of uniformly distributed random numbers in a given range,
np.random.normal() generates random numbers from a normal
distribution, and np.random.exponential() function can be used to
generate random numbers from an exponential distribution.
NumPy distribution functions can generate a single random value or an
array of values. By default a single value will be drawn and returned, while
passing the optional size parameter will specify the number of values to
generate and return in an array of that size. For example,
np.random.normal(size=5000) will generate an array of 5000 normally
distributed random values.
Random distribution examples using the standard random library for
uniform and normal distributions are in Listing 3.4. NumPy examples with
arrays containing uniform, normal, triangular, and Rayleigh distributions
used in later examples are in Listing 3.5.
uniform_number = 0.5988667056940171
random_integer = 146
random_normal = 95.72
import numpy as np
import matplotlib, matplotlib.pyplot as plt
matplotlib.rcParams['axes.spines.top'] = False
matplotlib.rcParams['axes.spines.right'] = False
number_samples = 10000
num_bins = 12
plt.subplots_adjust(hspace=1, wspace=0.5)
# Sort data
sorted_data = np.sort(cycles)
# Cumulative probabilities for the empirical CDF
cum_probabilities = np.arange(1, len(sorted_data) + 1) / len(sorted_data)
# CDF
fig, axis = plt.subplots()
axis.set(xlabel='# Cycles to Failure',
ylabel='Cumulative Probability',
yticks=np.linspace(0, 1.0, 11)
)
plt.grid(True)
axis.plot(sorted_data, cum_probabilities)
The CDF can be used for the inverse transform method by indexing it
with a random number r that is uniformly distributed between 0 and 1,
denoted as U(0, 1). It is set equal to the cumulative distribution, F(x) = r,
and x is solved for. For Monte Carlo analysis, a particular value r_i gives a
value x_i, which is a particular sample value of X per:

x_i = F^{-1}(r_i)
This will generate values xi of the random variable X with a distribution of
values matching the empirical CDF. With enough samples, a uniform
distribution for r will produce an X distribution that matches the empirical
CDF for number of cycles until failure.
# inverse transform
# generate random values of cycles using empirical CDF with r_i = U(0, 1)
1027
4363
2765
...
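A sketch of the sampling step (the failure-cycle sample here is synthetic, standing in for the collected data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic failure-cycle data standing in for the measurements
cycles = rng.exponential(scale=3000, size=200)

# Empirical CDF from the sorted data
sorted_data = np.sort(cycles)
cum_probabilities = np.arange(1, len(sorted_data) + 1) / len(sorted_data)

# Inverse transform: r ~ U(0,1) mapped through the inverse empirical CDF
r = rng.uniform(size=5)
samples = np.interp(r, cum_probabilities, sorted_data)
for value in samples:
    print(int(value))
```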
FastCircuit has a production line goal of 80% average yield of circuit cards
on first pass. It measures the percentage of cards started that pass all tests
the first time without defects or rework. Yield data has been collected on a
small sample of production runs to be subjected to hypothesis testing for the
entire population.
Hypothesis testing is used when it is desirable to know if the mean of a
variable is equal to, less than, or greater than a specific value; or if there is a
significant difference between two means. It is based on the assumption of
normality of sample means. A null hypothesis, H0, states a certain
relationship that may or may not be true at a given significance level, α,
which is the probability of rejecting a hypothesis when it is true. The
hypothesis is not rejected if the relationship holds true statistically.
Hypothesis testing is employed with the scipy.stats module in Listing
3.8. The null hypothesis H0 states that the average yield is less than the goal
of 80%. A standard α of 5% is specified. The output indicates rejecting the
hypothesis and accepting that the average is equal to or greater than the goal.
# Significance level
alpha = .05
t-statistic: 1.7542295178350589
p-value: 0.04607666303535189
Reject null hypothesis and accept HA: mean yield > 80
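A self-contained version of the test might look like the following; the yield sample is hypothetical (not the book's data), and the one-sided alternative parameter requires SciPy 1.6 or later:

```python
import numpy as np
from scipy import stats

# Hypothetical first-pass yield percentages from a small sample of runs
yields = np.array([82.1, 79.5, 84.3, 81.0, 78.8, 85.2, 83.6, 80.9])
goal = 80

# Significance level
alpha = .05

# H0: mean yield <= 80; HA: mean yield > 80 (one-sided test)
t_stat, p_value = stats.ttest_1samp(yields, goal, alternative='greater')
print(f't-statistic: {t_stat}')
print(f'p-value: {p_value}')
if p_value < alpha:
    print("Reject null hypothesis and accept HA: mean yield > 80")
else:
    print("Fail to reject the null hypothesis")
```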
# Observed frequencies
observed_counts = [151, 89, 72, 31, 19, 13, 9]
N = sum(observed_counts) # Total number of observations
# float('inf') ensures 100% coverage of the asymptotic CDF
interval_edges = [0, 10, 20, 30, 40, 50, 60, float('inf')]
# Hypothesis testing
# H0: there is no significant difference between the observed
and ⤦
expected frequencies.
alpha = .05 # significance level
if p_value < alpha:
    print("Reject the null hypothesis")
else:
    print("Fail to reject the null hypothesis")
X2=10.629398914521413 p_value=0.10052814836317078
Fail to reject the null hypothesis
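The expected counts are elided above; a sketch that fills them in with an assumed exponential model (the mean of 18 is illustrative, not the book's fitted value):

```python
import numpy as np
from scipy import stats

# Observed frequencies
observed_counts = np.array([151, 89, 72, 31, 19, 13, 9])
N = observed_counts.sum()  # Total number of observations

# np.inf ensures 100% coverage of the asymptotic CDF
interval_edges = np.array([0, 10, 20, 30, 40, 50, 60, np.inf])
mean_est = 18.0  # assumed exponential mean (illustrative)
cdf = 1 - np.exp(-interval_edges / mean_est)  # exponential CDF at the edges
expected_counts = N * np.diff(cdf)            # expected frequency per interval

X2, p_value = stats.chisquare(observed_counts, f_exp=expected_counts)
print(f'{X2=} {p_value=}')
```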
3.2 PROJECTILE MOTION WITH AIR RESISTANCE
x_drag(t) = v_0 cos(θ) t − (1/2) C_d ρ A v_0² t / m
h_drag(t) = v_0 sin(θ) t − (1/2) g t² − (1/2) C_d ρ A v_0² t / m
where:
– x drag (t) is the horizontal position with air resistance at time t
– h drag (t) is the vertical position with air resistance at time t
– Cd is the drag coefficient
– ρ is the air density (kg/m3)
– A is the projectile cross-sectional area (m2)
– m is the projectile mass (kg).
import math
import matplotlib.pyplot as plt
# Constants
g = 9.81 # m/s^2
# Initial conditions
v0 = 32 # m/s
theta = 50 * math.pi / 180 # radians
height = 0
# Drag parameters
cd = 0.25 # drag coefficient
rho = 1.225 # air density (kg/m^3)
# Projectile parameters
A = 0.2 # projectile cross-sectional area (m^2)
m = 10 # projectile mass (kg)
def trajectory(v0, theta, height, use_drag, cd, rho, A, m):
"""
Computes the trajectory of a projectile with or without air
drag.
    Parameters:
    v0 (float): Initial velocity (m/s)
    theta (float): Launch angle (radians)
    height (float): Initial height (m)
    use_drag (bool): Whether to include air drag
    cd (float): Drag coefficient
    rho (float): Air density (kg/m^3)
    A (float): Cross-sectional area of the projectile (m^2)
    m (float): Mass of the projectile (kg)
Returns:
t (list): Time points (s)
x (list): Horizontal distances (m)
h (list): Heights (m)
"""
    # Data output lists
    t, h, x = [], [], []
    time = 0
    dt = .1
    while height >= 0:
        # drag term is zero when use_drag is off
        drag = 0.5 * cd * rho * A * v0**2 * time / m if use_drag else 0
        height = v0 * math.sin(theta) * time - 0.5 * g * time**2 - drag
        distance = v0 * math.cos(theta) * time - drag
        t.append(time)
        h.append(height)
        x.append(distance)
        time += dt
    return (t, x, h)
fig, axis = plt.subplots(figsize=(5,4))
axis.set(xlabel='Distance (m)', ylabel='Height (m)')
use_drag = True
_, x, h = trajectory(v0, theta, height, use_drag, cd, rho, A,
m)
axis.plot(x, h, label="With drag")
use_drag = False
_, x, h = trajectory(v0, theta, height, use_drag, cd, rho, A,
m)
axis.plot(x, h, label="No drag")
plt.legend()
PV = FV / (1 + r)^n
where:
PV is the present value of a cash flow
FV is the future value of a cash flow
r is the discount rate
n is the period (time step) for compounding.
NPV = Σ_{n=0}^{N} FV_n / (1 + r)^n
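These two formulas translate directly into a present_value helper like the one called later in the listing; the cash flows and rate below are illustrative:

```python
def present_value(cash_flow, rate, n):
    """Discount a future cash flow n periods: PV = FV / (1 + r)**n"""
    return cash_flow / (1 + rate) ** n

# NPV as the sum of discounted cash flows (period 0 is undiscounted)
cash_flows = [-1000, 400, 400, 400]  # illustrative values
rate = 0.07
npv = sum(present_value(cf, rate, n) for n, cf in enumerate(cash_flows))
print(f'{npv:.2f}')  # 49.73
```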
import numpy as np
Returns:
Net present values
"""
present_values = [present_value(cash_flow, rate, time) for
time, ⤦
cash_flow in enumerate(cash_flows)]
cumulative_npvs = list(np.cumsum(present_values))
print(f'{NPV_purchase=:.0f}\n{NPV_rent=:.0f}')
NPV_purchase=-48247
NPV_rent=-41749
Returns:
The cash flow present values and net present values for
each ⤦
period.
"""
effective_rate = annual_rate/n
present_values = [present_value(cash_flow, effective_rate,
time) ⤦
for time, cash_flow in enumerate(cash_flows)]
cumulative_npvs = [sum(present_values[:i+1]) for i in ⤦
range(len(present_values))]
if plot:
plot_cashflows(cash_flows, present_values,
cumulative_npvs, ⤦
plot_pv, show_values)
The positive NPV in this example justifies the purchase. The laser
measurement system is the backdrop for machine learning in Section 3.11.2
for identification of electronic circuit cards.
This script is also contained in a web application in Section 4.8. It is the
subject of automated testing described in Section 5.4.1. It was iteratively
developed with a testing script that was run after each change. This ensured
it passed test cases by comparing the outputs with expected results without
manual testing effort.
NetworkX has ample facilities for computing with edge data. However, it
doesn't provide functions for quantitative node attributes, so this
implementation was developed using the duration node attribute. The
function find_max_path_key finds the maximum value from a newly
created dictionary of path data. The task dictionary tree structure unpacking
replaces numerous hand-coded NetworkX and Graphviz function statements
for each node and edge and the resulting redundancies (e.g., single nodes
with many successors would require many edge statements).
import networkx as nx
import graphviz as gv
import textwrap
wrap_width = 12
def wrap(text): return textwrap.fill(
text, width=wrap_width, break_long_words=False)
return dot
def find_max_path_key(dictionary):
    """Finds the dictionary key for the maximum value in dictionary"""
    max_value = max(dictionary.values())
    max_path_key = None
    for key, value in dictionary.items():
        if value == max_value:
            max_path_key = key
    return max_path_key
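The same lookup can be expressed with the built-in max and a key function; note that on ties this returns the first matching key, while the loop above returns the last:

```python
def find_max_path_key(dictionary):
    """Finds the dictionary key for the maximum value in dictionary"""
    return max(dictionary, key=dictionary.get)

# illustrative path-duration dictionary
print(find_max_path_key({'a-b-d': 12, 'a-c-d': 17, 'a-d': 5}))  # a-c-d
```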
def create_edges_from_nodes(nodes):
    """Returns a list of edges from a tuple of nodes."""
    edges = []
    for i in range(len(nodes) - 1):
        edges.append((nodes[i], nodes[i + 1]))
    return edges
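create_edges_from_nodes can equivalently pair each node with its successor using zip, a compact sketch:

```python
def create_edges_from_nodes(nodes):
    """Returns a list of edges from a tuple of nodes."""
    return list(zip(nodes, nodes[1:]))

print(create_edges_from_nodes(('start', 'design', 'build', 'test')))
# [('start', 'design'), ('design', 'build'), ('build', 'test')]
```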
A = π r²
P_max = (1/2) ρ π r² v³
P = P_max ⋅ C_p
where:
A is the swept area
r is the blade radius
ρ is the air density
v is the wind speed
Cp is the power coefficient
P max is the theoretical maximum power using the Betz limit
P is the actual power output.
Listing 3.13 implements the simulation with the function
calculate_power taking blade parameters and wind speeds. A Rayleigh
distribution is used to match collected field data of 10-minute average wind
speeds. It performs a day-long simulation of intervals and computes the total
energy generated by the wind turbine.
import numpy as np
# Constants
PI = 3.14159
AIR_DENSITY = 1.225 # kg/m^3 (standard air density)
Args:
blade_radius (float): Radius of the wind blade in meters.
power_coefficient (float): Power coefficient (Cp) of blade
design
wind_speed (float): Wind speed in meters per second.
Returns:
float: Power output of the wind blade in watts.
"""
return power_output
# Blade parameters
blade_radius = 50 # meters
# Power coefficient (Cp) between 0.3 and 0.45
power_coefficient = 0.4
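The simulation loop is elided above; a sketch under assumed parameters (the Rayleigh scale of 8 m/s, the seed, and 144 ten-minute intervals are illustrative, not the book's values):

```python
import numpy as np

# Constants
PI = 3.14159
AIR_DENSITY = 1.225  # kg/m^3 (standard air density)

def calculate_power(blade_radius, power_coefficient, wind_speed):
    """Power output in watts: P = (1/2) rho pi r^2 v^3 * Cp"""
    swept_area = PI * blade_radius ** 2  # A = pi r^2
    p_max = 0.5 * AIR_DENSITY * swept_area * wind_speed ** 3
    return p_max * power_coefficient

rng = np.random.default_rng(7)
wind_speeds = rng.rayleigh(scale=8, size=144)       # 144 ten-minute intervals
powers = calculate_power(50, 0.4, wind_speeds)      # vectorized over speeds
total_energy_kwh = powers.sum() * (10 / 60) / 1000  # each interval is 1/6 hour
print(f'Total energy: {total_energy_kwh:,.0f} kWh')
```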
Q_added = m_fuel ⋅ HV,  ΔT = Q_added / (m_air ⋅ c_p),  Q_rejected = m_air ⋅ γ ⋅ (T_final − T_initial)
η_ideal = 1 − 1 / r^(γ−1),  η_actual = η_ideal ⋅ 0.75,  m_air = V_d ⋅ ρ_air,  m_fuel = m_air / AFR
where:
Qadded Heat added from fuel (J) mair Mass of air (kg)
mfuel Mass of fuel (kg) cp Heat capacity of air (J/kg·K)
HV Heat value of fuel (J/kg) Qrejected Heat rejected (J)
ΔT Temperature increase (K) γ Heat ratio of air
Tfinal Final temperature (K) Tinitial Initial temperature (K)
ηideal Ideal thermal efficiency ηactual Actual thermal efficiency
r Compression ratio Vd Displacement volume (m³)
ρair Air density (kg/m³) AFR Air-fuel ratio
ηv Volumetric efficiency
P = E_cycle ⋅ N / 120,  E_cycle = HV ⋅ m_fuel ⋅ η_v ⋅ η_actual,  τ = P ⋅ 60 / (2πN)
where:
import math
class Engine:
"""
The Engine class models the thermodynamic properties and ⤦
performance of an internal combustion engine.
It calculates thermal efficiency, estimates torque, and
computes ⤦
power output based on engine parameters.
Attributes include displacement, bore, stroke, compression
ratio, ⤦
and initial temperature and pressure.
"""
def __init__(self, displacement, bore, stroke,
compression_ratio):
self.displacement = displacement # engine displacement
in liters
self.bore = bore # diameter of the cylinder bore in mm
self.stroke = stroke # length of the piston stroke in
mm
self.compression_ratio = compression_ratio #
compression ⤦
ratio of the engine
self.temperature = 298 # initial temperature of the
engine in K
self.pressure = 101325 # initial pressure of the engine
in Pa
self.thermal_efficiency = None
    def get_thermal_efficiency(self, fuel_heat_value):
        """Calculate the thermal efficiency of the engine"""
        specific_heat_ratio = 1.4  # specific heat ratio of air at ⤦
            constant pressure and constant volume
        air_mass = self.displacement * 1.225  # mass of air in the ⤦
            engine at standard temperature and pressure
        fuel_mass = air_mass / 14.7  # stoichiometric air-fuel ratio
        heat_added = fuel_mass * fuel_heat_value  # heat added to the ⤦
            engine from the combustion of fuel
return torque
rpm = 6000
estimated_torque = new_engine.estimate_torque(45e6, rpm)
print(f'Estimated torque = {estimated_torque:.1f} Nm')
power_output = new_engine.get_power_output(estimated_torque,
rpm)
print(f'Estimated power output = {power_output:.1f}
horsepower')
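The ideal-efficiency equation can be spot-checked on its own; a minimal sketch (the compression ratio of 10 is illustrative, and the 0.75 derating factor follows the text):

```python
def thermal_efficiency(compression_ratio, gamma=1.4, derating=0.75):
    """eta_actual = (1 - 1 / r**(gamma - 1)) * derating"""
    eta_ideal = 1 - 1 / compression_ratio ** (gamma - 1)
    return eta_ideal * derating

print(f'{thermal_efficiency(10):.3f}')  # 0.451
```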
import selib as se
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
import pandas as pd
# Simulation parameters
time = 0
dt = 0.1
# Output lists
air_density_plot = []
height_over_ground_plot = []
vertical_velocity_plot = []
drag_plot = []
time_plot = []
# Time loop
while height_over_ground > 0:
time += dt
displacement += vertical_velocity * dt
vertical_velocity += acceleration * dt
# Auxiliaries
gravitation = mass * g # N
height_over_ground = height_at_start - displacement # m
air_density = get_density(height_over_ground, df)
cross_section = (3 * mass / (4 * np.pi * meteor_density))
** ⤦
(2/3) * np.pi # m^2
drag = 0.5* cross_section * air_density * shape_factor * ⤦
vertical_velocity ** 2 # N
total_force = gravitation - drag # N
# Rates
acceleration = total_force / mass # m/s^2
# 2 x 2 figure
figure, ((axis1, axis2), (axis3, axis4)) = plt.subplots(2, 2)
axis1.plot(time_plot, vertical_velocity_plot)
axis2.plot(time_plot, height_over_ground_plot)
axis3.plot(time_plot, air_density_plot)
axis4.plot(time_plot, drag_plot)
axis1.set(xlabel = 'Time (seconds)', ylabel = 'Vertical
Velocity ⤦
(m/s)')
axis2.set(xlabel = 'Time (seconds)', ylabel = 'Height (m)')
axis3.set(xlabel = 'Time (seconds)', ylabel = 'Air Density
(kg/m^3)')
axis4.set(xlabel = 'Time (seconds)', ylabel = 'Drag (N)')
plt.subplots_adjust(hspace=0.5, wspace=0.5)
Orbits can also be plotted with poliastro Matplotlib utilities. The script
produces the static orbit trajectories in Figure 3.11 for the Hohmann
transfer. The simulation also calculates the time required for each
maneuver, illustrating the duration between the impulses. Additionally,
perturbative forces like atmospheric drag or non-uniform gravity can be
introduced to explore more complex orbital scenarios.
Figure 3.11 Hohmann Transfer Orbits
z_c = P_c μ / (R ρ_c T_c)
where:
Pc is the critical pressure of the gas
μ is the mean molecular weight of a gas particle in units of the mass of a
hydrogen atom
R is the ideal gas constant
ρc is the critical density of the gas
Tc is the critical temperature of the gas.
Cantera is a library for chemical kinetics, thermodynamics, and transport
processes used in this example. In the following, zc is calculated and then
used to plot the contour lines for the compressibility factor isolines at
different temperature and pressure values. The output graph in Figure 3.12
shows the compressibility for the desired range of temperatures and
pressures.
import cantera as ct
import numpy as np
import matplotlib.pyplot as plt
# Define fluid
fluid = ct.Heptane()
tc = fluid.critical_temperature
pc = fluid.critical_pressure
rc = fluid.critical_density
mw = fluid.mean_molecular_weight
zc = pc * mw / (rc * ct.gas_constant * tc) # critical ⤦
compressibility factor
z = np.zeros((num_T, num_p))
for i in range(num_T):
for j in range(num_p):
T_curr = T[i]
p_curr = pressures[j]
fluid.TP = T_curr, p_curr
rho = fluid.density
z[i, j] = p_curr * mw / (rho * ct.gas_constant *
T_curr)
plt.xlabel('Temperature (K)')
plt.ylabel('Pressure (MPa)')
Python has several powerful machine learning libraries for a wide range of
engineering applications. Scikit-Learn [17] is widely-used with extensive
tools for supervised and unsupervised learning tasks such as classification,
regression, clustering, and dimensionality reduction. TensorFlow is oriented
toward deep learning, with an ecosystem of tools and libraries often used for
image recognition, natural language processing, and speech recognition.
PyTorch [16] is also used for deep learning and is known for its dynamic
computation graph and a wide range of pre-trained models that can be
easily used for transfer learning. Keras is a high-level neural network API
that can run on top of TensorFlow and others. It provides an easy-to-use
interface for building and training deep learning models that can be fine-
tuned for specific tasks.
Regression and classification for prediction purposes are two common
types of machine learning algorithms demonstrated here. Regression is used
to predict continuous values. It minimizes the difference between the
predicted values and the actual values, by finding the line or curve that best
fits the data. Classification is used to predict categorical values. It aims to
separate data into different classes based on the features, by finding the
decision boundary that separates the classes.
3.11.1 REGRESSION
This example uses Pandas to load a dataset from a CSV file. The target
variable is separated from the feature variables, and the data is split into
training and testing sets using the train_test_split function. A linear
regression model is then trained on the training data using the fit method of
the LinearRegression class. Finally, the model is evaluated on the test data
by calculating the mean squared error (MSE) between the predicted values
and the actual values.
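A self-contained sketch of that workflow on synthetic data (the feature names, coefficients, and noise level are invented for illustration, standing in for the book's CSV dataset):

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Synthetic dataset standing in for the CSV file
rng = np.random.default_rng(0)
data = pd.DataFrame({'speed': rng.uniform(3, 25, 200),
                     'load': rng.uniform(0, 10, 200)})
data['output'] = 5 * data['speed'] + 2 * data['load'] + rng.normal(0, 2, 200)

X = data[['speed', 'load']]  # feature variables
y = data['output']           # target variable

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
model = LinearRegression().fit(X_train, y_train)
mse = mean_squared_error(y_test, model.predict(X_test))
print(f'MSE: {mse:.2f}')
```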
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
Classification Report:
precision recall f1-score support
Confusion Matrix:
[[44 0 0]
[ 0 43 5]
[ 0 3 40]]
Accuracy: 0.9407407407407408
ẋ = v cos(θ)
ẏ = v sin(θ)
θ̇ = w
θ(t) = θ(t − 1) + θ̇ dt
where:
x is x position of the robot
y is y position of the robot
ẋ is x velocity of the robot
ẏ is y velocity of the robot
θ is heading angle of the robot
v is linear velocity of the robot
w is angular velocity of the robot
dt is time step.
The simulate_robot() function calls the forward_kinematics function in
a loop to update the robot state over time per the equations. It returns the x,
y, and theta values for each time step given the control inputs.
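A minimal sketch of those two functions with constant control inputs (the bodies follow the equations above, not necessarily the book's exact implementation):

```python
import numpy as np

def forward_kinematics(x, y, theta, v, w, dt):
    """One Euler step of the unicycle model: xdot = v cos(theta), ydot = v sin(theta)."""
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    theta += w * dt
    return x, y, theta

def simulate_robot(x0, y0, theta0, v, w, dt, steps):
    """Returns the x, y, and theta histories for constant control inputs."""
    xs, ys, thetas = [x0], [y0], [theta0]
    for _ in range(steps):
        x, y, th = forward_kinematics(xs[-1], ys[-1], thetas[-1], v, w, dt)
        xs.append(x)
        ys.append(y)
        thetas.append(th)
    return xs, ys, thetas

xs, ys, thetas = simulate_robot(0, 0, 0, v=1.0, w=0.0, dt=0.1, steps=10)
print(round(xs[-1], 2), round(ys[-1], 2))  # 1.0 0.0
```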
import numpy as np
import matplotlib.pyplot as plt
import selib as se
import numpy as np
import matplotlib.pyplot as plt
def board_testing_model(num_testers):
"""
A discrete event simulation model for testing electronic
boards ⤦
in a manufacturing process.
Args:
num_testers (int): The number of testers (servers)
available ⤦
for testing the boards.
Returns:
tuple:
- model_data (dict): A dictionary containing the
overall ⤦
model data, including
timing and resource usage statistics.
- entity_data (list of dict): A list of
dictionaries, each ⤦
containing detailed
data for each entity (board) that was processed in
the ⤦
simulation.
"""
se.init_de_model()
se.add_source('incoming boards',
entity_name="board",
num_entities = 100,
connections={'firmware installation': 1},
interarrival_time='np.random.exponential(.6)')
se.add_delay('firmware installation',
connections={'testing station': 1},
delay_time='np.random.uniform(.5, 1)')
se.add_server(name='testing station',
connections={'passed boards': .95, 'failed boards':
.05},
service_time='np.random.uniform(.1, .5)',
capacity = num_testers)
se.add_terminate('passed boards')
se.add_terminate('failed boards')
model_data, entity_data = se.run_model(verbose=False)
return model_data, entity_data
# Simulation parameters
num_runs = 1000
testers = [1, 2, 3]
scenario_average_waiting_times.append(average_waiting_time)
average_waiting_times.append(scenario_average_waiting_times)
se.draw_model_diagram()
se.run_model(verbose=True)
The Monte Carlo simulation results are displayed in Figure 3.16. For
each scenario, it shows a boxplot of 1000 samples to summarize the
distribution of waiting times.
Long Description for Figure 3.16
dH_i(t)/dt = − Σ_{j=1}^{N} P_ij(t) A_j H_j(t)
where:
H i (t) is the health of robot i at time t
N is the total number of robots
P ij (t) is the probability that robot j chooses robot i as its target at time t
Aj is the attack power of robot j
H j (t) is the health of robot j at time t.
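As an Euler-integration sketch of this equation (the targeting matrix P and attack powers A below are assumptions for illustration):

```python
import numpy as np

def health_step(H, P, A, dt):
    """One Euler step of dH_i/dt = -sum_j P_ij(t) A_j H_j(t), floored at zero."""
    dH = -(P @ (A * H))
    return np.maximum(H + dH * dt, 0)

# Two robots that always target each other, with equal attack power
H = np.array([100.0, 100.0])
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
A = np.array([10.0, 10.0])
print(health_step(H, P, A, dt=0.01))  # [90. 90.]
```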
The simulation model in Listing 3.23 has two major classes for the
simulation. The Robot class enables the creation of multiple robots with
health and attack power attributes. The Battle class allows for inclusion
of any number of robots and runs a simulated battle. Outputs are shown for
a simple battle with three robots. A plot of the robot health over salvos is in
Figure 3.17.
class Robot:
"""
A robot class with health, attack, and fire_weapon methods.
"""
def __init__(self, name, health, attack):
"""
Initializes a robot with the given name, health, and
attack ⤦
power.
"""
self.name = name
self.health = health
self.max_health = health
self.attack = attack
class Battle:
"""
A class to manage the robot battle.
"""
def __init__(self):
self.robots = []
self.health_history = []
def attack(self):
"""
Simulates a round attack by having each robot fire at a
⤦
random target.
"""
for attacker in self.robots:
if attacker.health <= 0:
continue
possible_targets = [robot for robot in self.robots if ⤦
robot != attacker and robot.health > 0]
if possible_targets:
target = random.choice(possible_targets)
attacker.fire_weapon(target)
def record_health(self):
"""
Records the current health of all robots.
"""
self.health_history.append([robot.health for robot in ⤦
self.robots])
def plot_health_chart(self):
"""
Plots a step chart of robot health over rounds
"""
rounds = range(1, len(self.health_history) + 1)
robot_names = [robot.name for robot in self.robots]
fig, ax = plt.subplots()
ax.set_xlabel('Round')
ax.set_ylabel('Robot Health')
ax.set_xticks(rounds)
ax.legend()
def run(self):
"""
Runs the battle until only one robot remains.
"""
round_num = 0
while sum(1 for robot in self.robots if robot.health > 0) >
1:
round_num += 1
print(f"\nRound {round_num}:")
self.attack()
self.record_health()
self.plot_health_chart()
Round 1:
Trojan fires at Bruin and causes 11 damage
Bruin fires at Fighting Irishman and causes 15 damage
Fighting Irishman fires at Trojan and causes 2 damage
Round 2:
Trojan fires at Bruin and causes 0 damage
Bruin fires at Trojan and causes 8 damage
Fighting Irishman fires at Bruin and causes 7 damage
Round 3:
Trojan fires at Fighting Irishman and causes 10 damage
Bruin fires at Trojan and causes 7 damage
Fighting Irishman fires at Bruin and causes 12 damage
...
Round 12:
Trojan fires at Fighting Irishman and causes 9 damage
Fighting Irishman has been destroyed!
Trojan wins the battle!
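The fire_weapon method is elided in the listing; a sketch consistent with the transcript above (the uniform damage model is an assumption):

```python
import random

class Robot:
    """Minimal robot for this sketch, mirroring the attributes above."""
    def __init__(self, name, health, attack):
        self.name = name
        self.health = health
        self.attack = attack

    def fire_weapon(self, target):
        # assumed damage model: uniform damage from 0 up to attack power
        damage = random.randint(0, self.attack)
        target.health -= damage
        print(f"{self.name} fires at {target.name} and causes {damage} damage")
        if target.health <= 0:
            print(f"{target.name} has been destroyed!")

trojan, bruin = Robot("Trojan", 100, 15), Robot("Bruin", 100, 15)
trojan.fire_weapon(bruin)
```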
3.15 EXERCISES
Modify the projectile with air resistance example for any of the
following.
Modify the economic present value routines for any of the following:
1. Integrate fault tree analysis with Monte Carlo simulation and perform
a probabilistic analysis for a given system. Use probability
distributions to model basic event probabilities. These can be simple
uniform, triangular, or PERT distributions. Use Monte Carlo analysis
to run the model for at least 1,000 iterations to develop distributions
for the AND gates, the OR gates, and the top-level fault probability.
Plot the resulting distributions.
2. Add additional gate types to the se-lib fault tree functions.
3. Develop the capability to compute and visualize reliability block
diagrams as an alternative to fault trees.
4. Optimize the design of a wind turbine by extending the example to
vary the power coefficient, location (wind speed), and blade radius to
maximize energy output over a year. Use generated or actual wind
speed data for multiple locations to simulate realistic wind conditions.
Visualize the results as 3D surface plots for each location, showing
energy output on the z-axis against power coefficient and blade radius
on the x- and y-axes. Note that structural limits are not modeled. Vary
the blade radius in a feasible range from 30m to 70m and power
coefficient from 0.3 to 0.5.
5. Add a cost-benefit analysis to the extended wind turbine exercise from
above. Incorporate estimated costs for increasing the blade radius,
building turbines in high-wind locations, and optimizing the turbine
design.
a. Make the critical path analysis stochastic by using probability
distributions for task durations. Allow for multiple distribution
types, including uniform, triangular, normal, lognormal, and
others.
Conduct a Monte Carlo analysis and gather statistics on
individual task durations, slack times, and total project duration.
Generate frequency histograms for the output variables and a
cumulative distribution for the total project duration.
b. Create a master class for the dynamics of orbiting objects, with
derived classes covering human-engineered satellites (as in the
Hohmann transfer program) and meteors in the Earth's vicinity.
c. Create a master class for the dynamics of orbiting objects, with
derived classes for controlled objects like human-engineered
satellites (as in the Hohmann transfer program) and natural
objects like meteors. For satellites, account for the ability to
change orbits using delta-v maneuvers. For both meteors and
defunct satellites, model their uncontrolled descent through the
atmosphere.
d. Extend the Hohmann transfer program to include a bi-elliptic
transfer option to assess fuel and time trade-offs. Implement the
bi-elliptic transfer for the same initial and final orbits, comparing
the total delta-v, time required for each transfer, the number of
impulses, and the positions of the burns. Vary the ratio of the
final orbit's radius to the initial orbit's radius over a wide range
(e.g., LEO to GEO), and determine when the bi-elliptic transfer
is more efficient than the Hohmann transfer. Visualize the trade-
offs by plotting delta-v versus transfer time for varying final
orbit radii. Include annotations to highlight when the bi-elliptic
transfer becomes more efficient than the Hohmann transfer.
e. Apply the machine learning classification task to another
engineering application, such as classifying different types of
objects based on physical measurements. Use the machine
learning process of data preprocessing, training, and testing to
implement a classification. Experiment with different training
and testing data splits (e.g., 70/30, 80/20). Visualize the results
using a confusion matrix and performance metrics.
f. Extend the robot path control to simulate obstacle avoidance.
Introduce obstacles along the robot's path, and modify the control
inputs dynamically to avoid collisions. The robot should adjust
its angular velocity to navigate around obstacles without
deviating from its overall objective.
g. Develop an optimization algorithm for controlling the robot's
path to minimize energy consumption or travel time. Use it to
find the most efficient route between two points in the
workspace, taking obstacles into account.
h. Create a real-time control interface that allows users to adjust the
robot's velocity and angular velocity during the simulation.
Implement keyboard controls or graphical sliders that directly
change the robot's movement parameters during execution.
i. Extend the orbital mechanics program to simulate a Hohmann
transfer between Earth and Mars using poliastro. Calculate the
necessary delta-v and time for the interplanetary transfer.
Additionally, plot the transfer trajectory and the orbits of Earth
and Mars.
j. Compute the estimated engineering costs of a complex project
over time using cost models implemented in Python. Assess
different project options, and make a decision using present
value analysis by passing then-year costs to the net present value
routine.
OceanofPDF.com
4 Python Applications
Table 4.1
Technologies for Graphical User Interfaces

                                     Browser             Desktop       Mobile
Technology   Description             Internet   Local    Application   Application
PyQT         Python library                              ✓
tkinter      Python library                              ✓
Kivy         Python library                              ✓             ✓
Pyodide      JavaScript library      ✓          ✓
PyScript     JavaScript library      ✓          ✓
CGI          web server protocol     ✓          ✓
Flask        web framework           ✓          ✓
Django       web framework           ✓          ✓
<!DOCTYPE html> is a declaration specifying the document type to the
browser.
<html lang="en"> represents the root of an HTML document and
indicates the language used in the content.
<head> contains meta-information about the document, such as
character encoding, title, and stylesheets.
<body> contains the visible content of the document.
The file would be saved with an .html extension and rendered when opened
in a browser from either a local directory or an Internet server URL.
<!DOCTYPE html>
<html lang="en">
<head>
<title>Hello Engineers</title>
</head>
<body>
Hello Engineers on the Internet
</body>
</html>
¹ HTML5 compliance can always be checked at https://validator.w3.org/
This example shows an essential template to be used for HTML pages
and how Python can write strings to generate any portions of a page. It can
be a concatenation of multiple strings. There is no provision yet for user
input to provide data or control the script execution. Other methods are
described next that support data fields, interface controls and provisions for
accessing them in Python.
Listing 4.2 HTML Web Page with Python
#!/usr/bin/env python3
import datetime

title = "Test Reports"  # values assumed; the originals are elided
heading = f"Test Reports as of {datetime.date.today()}"
html_text = f"""
<!DOCTYPE html>
<html>
<head><title>{title}</title></head>
<body>
<h3>{heading}</h3>
<a href="test_report.html">Add Test Report</a>
</body>
</html>
"""
print(html_text)
4.2.1 CGI
A user enters values for velocity and mass in the input fields and clicks
the "Calculate" button to submit the form. The browser sends an HTTP
request to the specified URL, including the form data, using the POST
method. The form action calls itself to handle the input data and display the
calculation result. This generalized example uses the environment variable
os.environ['SCRIPT_URI'] for the URL of the current script to call itself.
The cgi.FieldStorage class is used to parse the incoming form data. It
extracts form field names and values from the request in order to access the
submitted data. The getvalue() function is used to access data based on
field names. The resulting web page is shown in Figure 4.2 after a
calculation.
Figure 4.2 Kinetic Energy Calculator Application with CGI
#!/usr/bin/env python3
import cgi
import os
The kinetic energy calculator has the input field names hardwired in.
Adding more inputs the same way would be laborious and cumbersome. A
generalized approach is desired to handle any number of fields. The
form.keys() method returns a list of all the form field names which can be
iterated over. This method for form handling is shown next.
The HTML in Listing 4.4³ presents the test report page filled out in
Figure 4.3. When submitted it sends form inputs to the script in Listing 4.5.
The HTML line < form action="save_data.py" method="post" >
specifies the script filename to send the form data when it is submitted. The
script processes the user data, writes it to a database, and replies with the
web page in Figure 4.4.
³ Repeated form patterns are truncated.
<b>Additional Details</b><br>
<textarea name="additional_details" rows="4" ⤦
cols="50"></textarea><br>
<br>
<input type="submit" value="Submit Test Results">
</form>
</body>
</html>
The processing of form data when the HTML page is submitted is done
per Listing 4.5. It loops over form.keys() to get all the field names, and
uses form.getvalue(key) to extract the values and add them to the data
dictionary. The time entry to the data dictionary is explicitly written as
another field.
It is straightforward to make responsive web applications combining CGI
with JavaScript and CSS, but modern frameworks should be considered,
especially for high-traffic and data-intensive applications. CGI is a legacy
technology, and the standard library cgi module was deprecated in Python
3.11 and removed in Python 3.13. CGI isn't efficient for large data streams,
but it remains suitable for many use cases and prevalent in existing
implementations. However, scripts that depend on the cgi module will need
a replacement on newer Python versions, so compatibility with other
evolving libraries an application depends on cannot be assumed.
#!/usr/bin/env python3
import cgi
import csv
import os
import datetime
print("Content-Type: text/html")
print()
form = cgi.FieldStorage()
data = {}
data["Time"] = now
for key in form.keys():
data[key] = form.getvalue(key)
csv_file = "test_report_data.csv"
file_exists = os.path.isfile(csv_file)
4.2.2 FLASK
/project-directory
|-- app.py
|-- /templates
| |-- index.html
| |-- layout.html
|-- /static
| |-- style.css
| |-- script.js
|-- requirements.txt
In Flask and other templating systems, using the Jinja2 syntax {{}} is the
recommended way to embed variables into the HTML string as is done for
the input fields here. Flask templates offer additional functionality such as
loops, conditionals, and filters that work with the {{}} syntax but not with
f-strings.
import numpy as np
from flask import Flask, render_template_string, request
app = Flask(__name__)
index_template = """
<!DOCTYPE html>
<html>
<head>
<title>Projectile Distance Calculator</title>
</head>
<body>
<h1>Projectile Distance Calculator</h1>
<form method="post">
<label for="velocity">Velocity (m/s):</label>
<input type="text" name="velocity" id="velocity"
value="{{ ⤦
velocity }}"><br><br>
<label for="angle">Angle (degrees):</label>
<input type="text" name="angle" id="angle" value="{{
angle ⤦
}}"><br><br>
<input type="submit" value="Calculate">
</form>
<p>Distance: {{ distance }} meters</p>
</body>
</html>
"""
if __name__ == '__main__':
app.run(debug=True, port=0)
4.2.3 PYODIDE
src="https://cdn.jsdelivr.net/pyodide/v0.20.0/full/pyodide.js"
>
</script>
</head>
<body>
<h1>Welcome to the Projectile Calculator</h1>
<label for="velocity">Velocity (m/s):</label>
<input type="text" id="velocity" style="width: 50px;">
<br>
<label for="angle">Angle (degrees):</label>
<input type="text" id="angle" style="width: 50px;">
<br>
<button id="calculate_button">Calculate</button>
<br>
<p id="result"></p>
<script type="text/javascript">
// Asynchronous function allowing for non-blocking
operations
async function main() {
// Load Pyodide and import the necessary packages
let pyodide = await loadPyodide({
indexURL: ⤦
'https://cdn.jsdelivr.net/pyodide/v0.20.0/full/'
});
await pyodide.loadPackage(['numpy']);
// Add click event listener to the "Calculate"
button
document.getElementById("calculate_button").addEventListener
("click", function() {
// Get the values of velocity and angle entered
by the ⤦
user
            let velocity =
                document.getElementById("velocity").value;
            let angle =
                document.getElementById("angle").value;
            // compute the distance in Python (reconstructed; the
            // original computation is elided in the text)
            let result = pyodide.runPython(`
import numpy as np
v = float(${velocity}); a = np.radians(float(${angle}))
round(v**2 * np.sin(2*a) / 9.81, 1)`);
            document.getElementById("result").innerText = ⤦
                'Distance = ' + result + ' meters';
        });
    }
    main();
    </script>
</body>
</html>
4.2.4 PYSCRIPT
<!DOCTYPE html>
<html lang="en">
<head>
<link rel="stylesheet" ⤦
href="https://pyscript.net/latest/pyscript.css" />
<script defer src="https://pyscript.net/latest/pyscript.js">
</script>
</head>
<body>
<py-repl auto-generate="true">
print("Hello engineers around the world!")
</py-repl>
</body>
</html>
PyScript with Pyodide can tap into the full Python ecosystem. Listing 4.9
scales up the functionality with PyScript for an online present value
calculator with graphics. The code from Listing 3.6 is placed in the script,
which can then be called within a REPL cell. Explanatory descriptions of the
library calls are placed on the web page for users, and a calculated page is
shown in Figure 4.8.
Figure 4.8 Present Value Calculator with PyScript and Matplotlib
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width,initial-scale=1" />
<p>
<b>present_value</b>(<i>cash_flow, rate, n</i>)<br>
Calculates the present value of a cash flow for given rate and time period n.<br>
• <i>cash_flow</i>: A future cash flow value.<br>
• <i>rate</i>: The discount rate per period as a fraction.<br>
• <i>n</i>: The future time period.<br>
</p>
<p>
<b>net_present_value</b>(<i>cash_flows, rate, plot=True, plot_pv=False, show_values=False</i>)<br>
Calculates net present value of cash flows and optionally plots them. It
returns a tuple containing the cash flow present values and net present
values for each period, and displays the plot.<br>
</p>
mpl.rcParams['axes.spines.top'] = False
mpl.rcParams['axes.spines.right'] = False
<py-repl auto-generate='True'>
cash_flows = [0, 20, 23, 30]
annual_rate = .07
net_present_value(cash_flows, annual_rate, plot=True, plot_pv=True, show_values=True)
</py-repl>
</body>
</html>
There are various Python frameworks that can be used to create desktop
applications. They typically use the native GUI widgets of the operating
system on which the application is running. PyQt is a cross-platform
framework with an API for creating Qt-based applications. Qt is a widely
used, cross-platform application framework for developing GUIs. Tkinter is
a simple but powerful GUI framework that is built into the Python standard
library. Kivy is a cross-platform GUI framework for creating mobile and
desktop applications.
4.3.1 PYQT
import sys
from PyQt5.QtWidgets import (QApplication, QWidget, QLabel,
    QPushButton, QVBoxLayout, QLineEdit, QGridLayout)
from PyQt5.QtGui import QFont
import math
class ProjectileCalculator(QWidget):
    """
    A PyQt-based GUI application for calculating and displaying the
    trajectory of a projectile.

    Inherits:
        QWidget: Base class for all UI objects in PyQt.

    Methods:
        __init__(): Initializes the GUI components and sets up the
            application window.
        initUI(): Configures the UI elements, including labels, input
            fields, and buttons.
        on_click(): Handles the click event of the calculate button,
            performs the calculations, and updates the result labels.

    Attributes:
        lbl (QLabel): A label that displays the title of the application.
        label_velocity (QLabel): A label for the velocity input field.
        line_edit_velocity (QLineEdit): A text field for the user to
            input the initial velocity.
        label_angle (QLabel): A label for the angle input field.
        line_edit_angle (QLineEdit): A text field for the user to input
            the launch angle.
        lbl_time (QLabel): A label to display the calculated flight time.
        lbl_height (QLabel): A label to display the calculated maximum
            height.
        lbl_distance (QLabel): A label to display the calculated distance.
    """
def __init__(self):
super().__init__()
self.initUI()
    def initUI(self):
        self.setWindowTitle('Projectile Calculator with PyQT')
        self.setGeometry(300, 300, 250, 250)  # Set initial position and size
        self.lbl = QLabel('Welcome to the Projectile Calculator', self)
        self.lbl.setFont(QFont('Arial', 16, QFont.Bold))  # Set the font to bold
        top_label_container = QWidget()
        top_layout = QVBoxLayout()
        top_layout.addWidget(self.lbl)
        top_label_container.setLayout(top_layout)
        # Input labels and fields in a grid (widget creation was elided
        # in the excerpt and is reconstructed here so the listing runs)
        self.label_velocity = QLabel('Velocity (m/s):')
        self.line_edit_velocity = QLineEdit()
        self.label_angle = QLabel('Angle (degrees):')
        self.line_edit_angle = QLineEdit()
        grid = QGridLayout()
        grid.addWidget(self.label_velocity, 0, 0)
        grid.addWidget(self.line_edit_velocity, 0, 1)
        grid.addWidget(self.label_angle, 1, 0)
        grid.addWidget(self.line_edit_angle, 1, 1)
        container = QWidget()
        container.setLayout(grid)
        button = QPushButton('Calculate')
        button.clicked.connect(self.on_click)
        self.lbl_time = QLabel('')
        self.lbl_height = QLabel('')
        self.lbl_distance = QLabel('')
        vbox = QVBoxLayout()
        vbox.addWidget(top_label_container)  # Add the container with the top label
        vbox.addWidget(container)  # Add the container with the labels and text inputs
        vbox.addWidget(button)
        vbox.addWidget(self.lbl_time)
        vbox.addWidget(self.lbl_height)
        vbox.addWidget(self.lbl_distance)
        self.setLayout(vbox)
        self.show()
    def on_click(self):
        velocity = float(self.line_edit_velocity.text())
        angle = float(self.line_edit_angle.text())
        flight_time, max_height, distance = projectile(velocity, angle)
        self.lbl_time.setText(f'Flight Time = {flight_time:.1f} seconds')
        self.lbl_height.setText(f'Max Height = {max_height:.1f} meters')
        self.lbl_distance.setText(f'Distance = {distance:.1f} meters')
if __name__ == '__main__':
    app = QApplication(sys.argv)
    calculator = ProjectileCalculator()
    sys.exit(app.exec_())
Listing 4.11 Button Sensor Reading and LED Control with GPIO
import time
import RPi.GPIO as GPIO

# Pin assignments (BCM numbering; example values, adjust for your wiring)
BUTTON_PIN = 17
LED_PINS = [18, 23, 24]

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
for pin in LED_PINS:
    GPIO.setup(pin, GPIO.OUT)

# Global variables
blinking = False
last_toggle_time = 0

def toggle_blinking():
    global blinking, last_toggle_time
    current_time = time.time()
    if current_time - last_toggle_time > 0.3:  # Debounce
        blinking = not blinking
        last_toggle_time = current_time
        if blinking:
            print("Starting LED blinking")
        else:
            print("Stopping LED blinking")
            for pin in LED_PINS:
                GPIO.output(pin, GPIO.LOW)

try:
    print("Press the button to toggle LED blinking (Ctrl+C to exit)")
    led_state = False
    last_blink_time = 0
    while True:
        current_time = time.time()
        if GPIO.input(BUTTON_PIN) == GPIO.LOW:  # button pulls the pin low
            toggle_blinking()
        if blinking and current_time - last_blink_time > 0.5:
            led_state = not led_state  # alternate the LEDs on and off
            for pin in LED_PINS:
                GPIO.output(pin, GPIO.HIGH if led_state else GPIO.LOW)
            last_blink_time = current_time
        time.sleep(0.01)
except KeyboardInterrupt:
    print("\nProgram stopped by user")
finally:
    GPIO.cleanup()
Listing 4.12 Potentiometer Sensor Reading and Motor Control with GPIO
import time
import RPi.GPIO as GPIO

# Example pin assignments and limits (adjust for your wiring)
input_pin = 4
motor_pin = 18
max_voltage = 3.3

GPIO.setmode(GPIO.BCM)
GPIO.setup(input_pin, GPIO.IN)
GPIO.setup(motor_pin, GPIO.OUT)
motor_pwm = GPIO.PWM(motor_pin, 1000)  # 1 kHz PWM for motor speed control
motor_pwm.start(0)

try:
    # Main loop to poll the voltage and adjust the motor speed
    while True:
        # Read analog voltage from potentiometer
        voltage = GPIO.input(input_pin) * max_voltage  # fraction of max voltage on pin
        # Scale the voltage to a 0-100% PWM duty cycle
        motor_pwm.ChangeDutyCycle(100 * voltage / max_voltage)
        time.sleep(0.1)
except KeyboardInterrupt:
print("Program stopped by user")
finally:
motor_pwm.stop()
GPIO.cleanup()
import time
import urllib.request
from time import gmtime, strftime
while True:
    # Update every 60 seconds
    time.sleep(60.0)
# Fly to a waypoint
target_latitude = 33.2170
target_longitude = -117.3514  # negative for west longitude
target_altitude = 20
target_location = LocationGlobalRelative(target_latitude, target_longitude, target_altitude)
vehicle.simple_goto(target_location)
5.1 INTRODUCTION
Table 5.1
Common Magic Commands in Jupyter Notebooks
Syntax                   Description
%ls                      Lists files in the current directory.
%pwd                     Prints the current working directory.
%cd <directory>          Changes the current working directory.
%time <command>          Times the execution of a single command.
%timeit <command>        Runs a command multiple times and reports the best time.
%run <script.py>         Runs an external Python script.
%matplotlib <mode>       Sets up Matplotlib to work interactively.
%%writefile <filename>   Writes the contents of the cell to a file.
Line magics are prefixed by a single percent sign (%) and apply to a
single line of code. Commands like %ls to list directory contents and %pwd
to show the current directory allow for quick file management within the
notebook. Magics like %time and %timeit can be used to measure the
execution time of code snippets.
Cell magics are prefixed by two percent signs (%%) and apply to an entire
cell. For example, %%timeit will time the execution of all code in a cell.
Examples of magic commands in a collapsed Google Colab notebook are
in Figure 5.1. These were performed after running selected scripts to
generate figures. The %pwd shows the current directory, which here is at the
top level of the virtual file system. The command helps to confirm the
directory context in which the notebook is operating, which is important to
determine when working with relative file paths. The %ls command lists the
contents of the directory as files and subdirectories, similar to the ls
command on Unix-like systems. It shows three PDF files that were generated
and the default sample_data folder that is created in Colab notebooks.
Figure 5.1 Magic Commands in Notebook
Magic commands also include utilities for timing the execution of code.
The %time and %timeit commands are used to measure how long a piece of
code takes to run. The %time command returns the execution time of a
single run of the code. For more rigorous performance analysis, the
%timeit command runs it multiple times and provides statistical outputs.
The %timeit command is used next to compare execution times for the
generation of one million random variates for delay time uniformly
distributed between 0 and 10. It measures a list comprehension using the
Python random.random function vs. the creation of a NumPy array. These
timing commands are invaluable for performance optimization to identify
and improve slow sections of code.
import random
%timeit random_delays = [random.random() * 10 for i in range(1000000)]

311 ms ± 8.67 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

import numpy as np
%timeit random_delays_np = np.random.rand(1000000) * 10
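The same comparison can be made outside a notebook with the standard library's timeit module; this sketch times both approaches for five runs each (the repetition count is chosen only to keep the run short).

```python
import timeit

# Time one million uniform(0, 10) delay values: list comprehension with
# random.random versus a vectorized NumPy array, 5 runs each
list_time = timeit.timeit(
    "[random.random() * 10 for i in range(1000000)]",
    setup="import random", number=5)
numpy_time = timeit.timeit(
    "np.random.rand(1000000) * 10",
    setup="import numpy as np", number=5)

print(f"list comprehension: {list_time / 5:.3f} s per loop")
print(f"NumPy array:        {numpy_time / 5:.3f} s per loop")
print(f"NumPy speedup:      {list_time / numpy_time:.0f}x")
```

The setup string is executed once before timing begins, so the import cost is excluded from the measurement, just as with the %timeit magic.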
Table 5.2
Common Shell Commands in Jupyter Notebooks
Syntax                       Description
!ls                          Lists files in the current directory.
!pwd                         Prints the current working directory.
!cd <directory>              Changes the current working directory temporarily for the cell.
!mkdir <directory>           Creates a new directory.
!rm <file>                   Removes a file.
!rmdir <directory>           Removes a directory.
!cp <source> <destination>   Copies a file from source to destination.
!mv <source> <destination>   Moves or renames a file or directory.
!echo <text>                 Prints text to the output.
!cat <file>                  Displays the content of a file.
!git <command>               Executes Git version control commands. E.g., !git clone, !git add <file>
!pip <command>               Executes pip commands for installing packages. E.g., !pip install numpy
!python <script.py>          Runs an external Python script using the Python interpreter.
!top                         Displays system processes and resource usage.
!ps                          Shows the currently running processes.
!history                     Displays the command history.
!git clone https://github.com/madachy/what-every-engineer-should-know-about-python-data-files
!cp -r what-every-engineer-should-know-about-python-data-files/*.* .
!ls
The shell syntax also supports the combination of Python variables with
shell commands. The following example in Listing 5.1 is run at the
beginning of a notebook that automates the batch running of multiple
scripts and generation of figure files. It sets style defaults, connects to
Google Drive, and defines a writing function. A Python variable is used to
specify a filename for writing and passed to the cp shell command. This
integration of Python and shell commands within a single environment is a
key strength of Jupyter Notebooks to leverage both the power of Python
and the flexibility of the command line.
import os
import shutil

def backup_project(project_dir, backup_dir):
    # Minimal sketch of the writing function described above:
    # copy the entire project directory tree to a backup location
    shutil.copytree(project_dir, backup_dir, dirs_exist_ok=True)

backup_project('/project_dir', '/backup_dir')
import unittest

class TestNPVFunction(unittest.TestCase):
    def test_npv(self):
        test_cases = [
            {
                "cash_flows": [-100000] + [0] * 23 + [50000],
                "annual_rate": 12 * .0075,
                "n": 12,
                "expected": -58208
            },
            {
                "cash_flows": [0, 20, 23, 30],
                "annual_rate": .07,
                "n": 1,
                "expected": 18.69 + 20.09 + 24.49
            }
            # Add more test cases here as dictionaries
        ]
        for case in test_cases:
            # n compounding periods per year gives the per-period rate;
            # net_present_value returns (present values, cumulative NPVs)
            pvs, npvs = net_present_value(case["cash_flows"],
                                          case["annual_rate"] / case["n"],
                                          plot=False)
            self.assertAlmostEqual(npvs[-1], case["expected"], delta=1)

suite = unittest.TestLoader().loadTestsFromTestCase(TestNPVFunction)
unittest.TextTestRunner().run(suite)
----------------------------------------------------------------------
Ran 1 test in 0.015s

OK
<unittest.runner.TextTestResult run=1 errors=0 failures=0>

----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (failures=1)
<unittest.runner.TextTestResult run=1 errors=0 failures=1>
This example only checked a single output parameter for each test case.
Multiple parameters and entire data structures can also be compared. A
more thorough and rigorous test harness would check the output lists of
present values and cumulative net present values for different sequence
lengths.
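Such a harness might look like the following sketch. The simplified present_value and net_present_value stand-ins are assumptions included only to make the example self-contained; the chapter's versions also handle plotting.

```python
import unittest

def present_value(cash_flow, rate, n):
    # Discount a single future cash flow back n periods
    return cash_flow / (1 + rate) ** n

def net_present_value(cash_flows, rate):
    # Simplified stand-in: returns the per-period present values and the
    # cumulative net present values as two parallel lists
    pvs = [present_value(cf, rate, n) for n, cf in enumerate(cash_flows)]
    npvs = [sum(pvs[:n + 1]) for n in range(len(pvs))]
    return pvs, npvs

class TestNPVLists(unittest.TestCase):
    def test_value_lists(self):
        pvs, npvs = net_present_value([0, 20, 23, 30], 0.07)
        expected_pvs = [0, 18.69, 20.09, 24.49]
        # Check the full output lists, not just a single scalar
        self.assertEqual(len(pvs), len(npvs))
        for pv, expected in zip(pvs, expected_pvs):
            self.assertAlmostEqual(pv, expected, places=2)
        self.assertAlmostEqual(npvs[-1], sum(expected_pvs), places=2)

unittest.main(argv=[''], exit=False)
```

Comparing element by element with assertAlmostEqual keeps the test robust to floating-point rounding while still exercising every period of the output.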
import pdb
projectile(55, 45)
> <ipython-input-4-7fd730a5c1e3>(6)projectile()
      4     """ Returns the projectile flight time, maximum height and
            distance given initial velocity in meters per second and
            launch angle in degrees. """
      5     pdb.set_trace()
----> 6     g = 9.8  # gravity (meters per second squared)
      7     angle_radians = 0.01745 * angle  # convert degrees to radians
      8     flight_time = 2 * v0 * sin(angle_radians) / g

ipdb> help
ipdb> a
v0 = 55
angle = 45
ipdb> step
> <ipython-input-4-7fd730a5c1e3>(7)projectile()
      5     pdb.set_trace()
      6     g = 9.8  # gravity (meters per second squared)
----> 7     angle_radians = 0.01745 * angle  # convert degrees to radians
      8     flight_time = 2 * v0 * sin(angle_radians) / g
      9     max_height = (1 / (2 * g)) * (v0 * math.sin(0.01745 * angle)) ** 2
Effort = A × Size^E × ∏ EM_i  (i = 1 ... n)    (5.1)
where
Effort is measured in Person-Months
Size is a weighted sum of requirements, interfaces, algorithms, and
scenarios
A is a calibration constant derived from project data
E represents a diseconomy of scale
EM_i is the effort multiplier for the i-th cost driver.
The geometric product ∏ EM_i is called the Effort Adjustment Factor
(EAF), which is multiplicative to the nominal effort. The overall effort is
then decomposed by phase and activities into constituent portions using
average percentages.
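Equation 5.1 translates directly into a few lines of Python. In this sketch, A and E are placeholder values rather than the model's calibrated constants, and math.prod computes the geometric product of the effort multipliers.

```python
from math import prod

def cosysmo_effort(size, effort_multipliers, A=1.0, E=1.06):
    """Equation 5.1: Effort = A * Size**E * product of the EM_i.
    A and E are placeholder values, not calibrated constants."""
    return A * size ** E * prod(effort_multipliers)

# All drivers nominal, so the effort adjustment factor is 1.0
nominal = cosysmo_effort(100, [1.0] * 14)
# Two non-nominal drivers scale the nominal effort multiplicatively
adjusted = cosysmo_effort(100, [1.32, 0.81] + [1.0] * 12)
print(round(adjusted / nominal, 3))  # 1.069 = 1.32 * 0.81
```

Because E > 1, doubling the size more than doubles the effort, which is the diseconomy of scale described above.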
The application was built from the inside-out beginning with a main
equation, incrementally replacing stubbed out parameters with more
complex calculations. ChatGPT generated new functions and populated
data structures, while the author integrated its outputs and performed
testing. The author is familiar with the application and planned the
increments based on experience.
Preparation prior to starting the chat session included getting armed with
the following:
class TestNPVFunction(unittest.TestCase):
    def test_npv(self):
        test_cases = [
            {
                "cash_flows": [-100000] + [0] * 23 + [50000],
                "annual_rate": 12 * .0075,
                "n": 12,
                "expected": -58208
        ...
In this step the structure of a cost factor dictionary is explicitly defined with
examples. The dictionary will be needed to calculate the eaf parameter
currently stubbed at its nominal value. It is shown as cost_factors_dict in
Listing 5.4.
populate the new dictionary with the cost factor names, ratings and effort
multipliers copied below.
                                Very Low   Low    Nominal  High   Very High  Extra High
Requirements Understanding      1.87       1.37   1.00     0.77   0.60
Architecture Understanding      1.64       1.28   1.00     0.81   0.65
Level of Service Requirements   0.62       0.79   1.00     1.36   1.85
Migration Complexity                              1.00     1.25   1.55       1.93
Technology Risk                 0.67       0.82   1.00     1.32   1.75
...
The function to calculate eaf using the cost factor dictionary is specified.
Model default values are specified for factors when they aren't set by the
user.
The eaf function is integrated with the existing code to replace the
stubbed value of 1. It is ready for testing to be set up in the next step.
Q. now modify this class so the "eaf" test case inputs are
replaced by the
dictionary inputs. the first case shall be a blank dictionary
(with all
cost factors defaulted to nominal) and the second case uses the
cosysmo_case_study_2 dictionary input from above:
class TestCOSYSMOFunction(unittest.TestCase):
def test_cosysmo(self):
...
The last step to complete the full model is to use the size weights to
compute the total aggregate size. A detailed cost estimate can be generated
with this last function integrated.
def eaf(ratings_dict):
    effort_adjustment_factor = 1.0
    return effort_adjustment_factor
cost_factors_dict = {
    "Requirements Understanding": {"Very_Low": 1.87, "Low": 1.37,
        "Nominal": 1.00, "High": 0.77, "Very_High": 0.60},
    "Architecture Understanding": {"Very_Low": 1.64, "Low": 1.28,
        "Nominal": 1.00, "High": 0.81, "Very_High": 0.65},
    "Level of Service Requirements": {"Very_Low": 0.62, "Low": 0.79,
        "Nominal": 1.00, "High": 1.36, "Very_High": 1.85},
    "Migration Complexity": {"Nominal": 1.00, "High": 1.25,
        "Very_High": 1.55, "Extra_High": 1.93},
    "Technology Risk": {"Very_Low": 0.67, "Low": 0.82, "Nominal": 1.00,
        "High": 1.32, "Very_High": 1.75},
    "Documentation": {"Very_Low": 0.78, "Low": 0.88, "Nominal": 1.00,
        "High": 1.13, "Very_High": 1.28},
    "Diversity of Installations/Platforms": {"Nominal": 1.00, "High": 1.23,
        "Very_High": 1.52, "Extra_High": 1.87},
    "Recursive Levels in Design": {"Very_Low": 0.76, "Low": 0.87,
        "Nominal": 1.00, "High": 1.21, "Very_High": 1.47},
    "Stakeholder Team Cohesion": {"Very_Low": 1.50, "Low": 1.22,
        "Nominal": 1.00, "High": 0.81, "Very_High": 0.65},
    "Personnel/Team Capability": {"Very_Low": 1.50, "Low": 1.22,
        "Nominal": 1.00, "High": 0.81, "Very_High": 0.65},
    "Personnel Experience/Continuity": {"Very_Low": 1.48, "Low": 1.22,
        "Nominal": 1.00, "High": 0.82, "Very_High": 0.67},
    "Process Capability": {"Very_Low": 1.47, "Low": 1.21, "Nominal": 1.00,
        "High": 0.88, "Very_High": 0.77, "Extra_High": 0.68},
    "Multisite Coordination": {"Very_Low": 1.39, "Low": 1.18, "Nominal": 1.00,
        "High": 0.90, "Very_High": 0.80, "Extra_High": 0.72},
    "Tool Support": {"Very_Low": 1.39, "Low": 1.18, "Nominal": 1.00,
        "High": 0.85, "Very_High": 0.72}
}
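A completed version of the eaf function stubbed earlier might look like the following sketch: it multiplies the effort multipliers for each rated factor, with unrated factors defaulting to Nominal (1.0). A trimmed copy of the cost factor table is included so the sketch stands alone.

```python
# Trimmed copy of the chapter's cost factor table
cost_factors = {
    "Requirements Understanding": {"Very_Low": 1.87, "Low": 1.37,
        "Nominal": 1.00, "High": 0.77, "Very_High": 0.60},
    "Technology Risk": {"Very_Low": 0.67, "Low": 0.82,
        "Nominal": 1.00, "High": 1.32, "Very_High": 1.75},
}

def eaf(ratings_dict):
    """Effort Adjustment Factor: the product of the effort multipliers
    for each rated factor. Unrated factors default to Nominal (1.0)."""
    effort_adjustment_factor = 1.0
    for factor, rating in ratings_dict.items():
        effort_adjustment_factor *= cost_factors[factor][rating]
    return effort_adjustment_factor

print(eaf({}))                             # 1.0, all factors nominal
print(eaf({"Technology Risk": "High"}))    # 1.32
```

Because unrated factors contribute a multiplier of exactly 1.0, they can simply be skipped, which is why an empty ratings dictionary yields the nominal EAF.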
def phase_effort(effort):
    # Define the effort distribution percentages for each activity
    activity_percentages = {
        "Acquisition and Supply": .07,
        "Technical Management": .17,
        "System Design": .30,
        "Product Realization": .15,
        "Technical Evaluation": .31
    }
    # Allocate the total effort to each activity by its percentage
    sub_efforts = {activity: percentage * effort
                   for activity, percentage in activity_percentages.items()}
    return sub_efforts
size_weights = {
    "System Requirements": {
        "Easy": 0.5,
        "Nominal": 1.00,
        "Difficult": 5.0
    },
    "Interfaces": {
        "Easy": 1.1,
        "Nominal": 2.8,
        "Difficult": 6.3
    },
    "Critical Algorithms": {
        "Easy": 2.2,
        "Nominal": 4.1,
        "Difficult": 11.5
    },
    "Operational Scenarios": {
        "Easy": 6.2,
        "Nominal": 14.4,
        "Difficult": 30
    }
}
def total_size(driver_counts):
    total = 0
    for driver, complexities in driver_counts.items():
        for complexity, count in complexities.items():
            total += count * size_weights[driver][complexity]
    return total
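As a usage sketch, hypothetical driver counts for a small system produce an aggregate size. The weights and the function are repeated here so the example stands alone; the counts themselves are made up for illustration.

```python
size_weights = {
    "System Requirements": {"Easy": 0.5, "Nominal": 1.00, "Difficult": 5.0},
    "Interfaces": {"Easy": 1.1, "Nominal": 2.8, "Difficult": 6.3},
    "Critical Algorithms": {"Easy": 2.2, "Nominal": 4.1, "Difficult": 11.5},
    "Operational Scenarios": {"Easy": 6.2, "Nominal": 14.4, "Difficult": 30},
}

def total_size(driver_counts):
    # Weighted sum of requirements, interfaces, algorithms, and scenarios
    total = 0
    for driver, complexities in driver_counts.items():
        for complexity, count in complexities.items():
            total += count * size_weights[driver][complexity]
    return total

# Hypothetical driver counts for a small system
driver_counts = {
    "System Requirements": {"Easy": 10, "Nominal": 40, "Difficult": 5},
    "Interfaces": {"Nominal": 6},
    "Critical Algorithms": {"Nominal": 2, "Difficult": 1},
    "Operational Scenarios": {"Nominal": 3},
}
print(round(total_size(driver_counts), 1))  # 149.7
```

The aggregate size then feeds the Size term of Equation 5.1 to produce the effort estimate.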
In summary, the entire chat, integration, and testing through the final
iteration took 65 minutes. Doing it manually would have probably taken
more than a day. All generated solutions were correct in this instance, and
only minor refinements were made for clarity. However, generative AI
solutions are non-deterministic even when using the same LLM. The caveat
is always Your Mileage May Vary.
The cost model was subsequently extended into an online CGI
application shown in Figure 5.3. In this case, generative AI was imperfect
when converting it into a web-based application. It helped to create much of
the underlying HTML but was not as effective integrating it with Python.
Figure 5.3 Online COSYSMO Tool Partially Developed with Generative AI
The software produced from a process has various attributes that affect its
reliability and utility, which are crucial to consider. Attributes like execution
speed, memory usage, portability, and maintainability are critical for
producing software that can scale and adapt. Optimizing these attributes
during the process is not straightforward, as choosing language features or
libraries for one attribute often involves trade-offs with others. For example,
using NumPy can provide significant speed gains, but it may increase
memory consumption and introduce platform dependencies. Conversely,
using generators can reduce memory usage, but might slow execution due
to the overhead of generating items on demand. Similarly, prioritizing
maintainability may lead to more readable code, though potentially slower.
Understanding these trade-offs allows for more informed decisions to strike
the right balance between attributes based on the specific goals and
requirements of the project.
Speed is often the most critical factor in many engineering applications.
Table 5.3 provides a prioritized list of techniques that offer the highest
potential for speed optimization (though not all techniques may be feasible
in every scenario). Speed optimization focuses on reducing code execution
time, especially for tasks involving large datasets or complex calculations.
Techniques such as using NumPy for numerical computations or Just-in-
Time (JIT) compilation with Numba can significantly speed up code by
either converting Python into machine code or leveraging optimized C
implementations. Additionally, built-in Python functions, list
comprehensions, and asynchronous programming can all enhance
performance.
Table 5.3
Top Recommendations for Speed Optimization
Use NumPy arrays for numerical computations
    Significant speed improvement compared to lists due to vectorized
    operations implemented in C.
Use just-in-time compilation with Numba
    Substantial speed boost for numerical code through just-in-time
    compilation, providing near C-like performance for array operations
    and loops.
Implement multiprocessing and multithreading for parallelism
    Significant speed improvements for CPU-bound tasks by utilizing
    multiple cores.
Use built-in functions and libraries
    Built-in functions offer faster performance, as they are optimized in
    C, making them much quicker than manual implementations for common
    tasks.
Utilize asynchronous programming with asyncio
    Significant speed improvement for I/O-bound tasks by enabling
    non-blocking operations and concurrent task management. This is ideal
    for network and I/O-heavy applications.
Use list comprehensions and generator expressions
    Faster than traditional for loops, especially for simple
    transformations or filtering tasks.
Optimize data structures
    Choosing appropriate data structures improves efficiency. For example,
    dictionary lookups use a hash table and are faster than searching
    unsorted lists. Sets are the most efficient for membership checks, and
    tuples are faster when iterated over due to their immutability, which
    allows interpreter optimizations.
Minimize use of OOP for performance-critical code
    Eliminates the overhead of method lookups and dynamic attribute access
    to improve execution speed.
Avoid unnecessary data copies in processing
    When manipulating large datasets (e.g., NumPy arrays or Pandas
    dataframes), avoid creating unnecessary data copies to enhance
    performance. Use in-place operations on existing arrays or views where
    possible.
Use local variables over global variables
    Local variables are quicker to access than global variables due to
    scope handling, resulting in improved performance, especially for
    frequently accessed variables within functions.
import numpy as np
import time

# List operation
def list_operation(n):
    return [i**2 for i in range(n)]

# NumPy operation
def numpy_operation(n):
    return np.arange(n)**2

sizes = [100, 1000, 10000, 100000]
multiplier = []
for n in sizes:
    list_times = []
    numpy_times = []
    for run in range(100):
        start = time.time()
        list_result = list_operation(n)
        list_times += [time.time() - start]
        start = time.time()
        numpy_result = numpy_operation(n)
        numpy_times.append(time.time() - start)
    multiplier += [np.mean(list_times) / np.mean(numpy_times)]
Figure 5.4 shows the distributions for 1000 runs to generate 1,000 and
10,000 values, respectively. In both cases, NumPy is significantly faster and
exhibits less variability. As the number of generated values increases, the
execution speed distribution for the list-based approach shifts higher by an
order of magnitude, while the NumPy dispersion remains close to zero on
the scale.
Optimizing solely for speed often comes with trade-offs. For instance,
parallel processing or JIT compilation can increase memory usage or limit
portability. Sets and other optimized structures can consume more memory
than lists due to the need for hashing and other optimizations, but they trade
memory for performance.
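That membership trade-off can be seen directly. This sketch times a worst-case lookup in a list against the same lookup in a set, then compares the containers' reported sizes; the element counts and repetition numbers are arbitrary illustration values.

```python
import sys
import timeit

data_list = list(range(100_000))
data_set = set(data_list)

# Worst-case membership check: a linear scan of the list versus a
# constant-time hash lookup in the set
list_time = timeit.timeit(lambda: 99_999 in data_list, number=100)
set_time = timeit.timeit(lambda: 99_999 in data_set, number=100)
print(f"list: {list_time:.4f} s  set: {set_time:.6f} s")

# The set buys that speed with a larger container (hash table slots)
print(sys.getsizeof(data_set) > sys.getsizeof(data_list))  # True
```

The list scan grows linearly with the container size while the set lookup stays essentially constant, which is why the gap widens as the data grows.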
Memory usage is sometimes a greater constraint than execution speed.
Prioritized recommendations for reducing memory usage are provided in
Table 5.4. Applications that handle large datasets or run on resource-
constrained devices, such as embedded systems, will especially benefit
from these optimizations. Reducing memory consumption can ensure that a
program scales effectively. One of the most efficient techniques for memory
optimization is using generators instead of lists, as they generate data on-
the-fly without storing it all in memory. Similarly, in-place operations in
libraries like NumPy help avoid unnecessary memory copies, and more
streamlined class designs can reduce the memory overhead of object-
oriented code. However, some techniques, such as using generators, may
lead to slower performance since they calculate values on demand rather
than upfront.
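The generator-versus-list difference is easy to measure with sys.getsizeof, which reports the container's own footprint (for the list, that is the array of object references; the integer objects themselves add more).

```python
import sys

# A list materializes all one million squares at once; an equivalent
# generator expression holds only its current iteration state
squares_list = [i ** 2 for i in range(1_000_000)]
squares_gen = (i ** 2 for i in range(1_000_000))

print(f"list:      {sys.getsizeof(squares_list):,} bytes")
print(f"generator: {sys.getsizeof(squares_gen):,} bytes")

# Both yield the same values, but the generator computes them on demand
print(sum(squares_gen) == sum(squares_list))  # True
```

The generator reports a few hundred bytes regardless of how many items it will yield, while the list grows with its length, at the cost of recomputing values if they are needed more than once.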
Table 5.4
Top Recommendations for Memory Optimization
Use generator expressions instead of lists for large data streams
    Generators use minimal memory because they yield one item at a time
    instead of holding all items in memory.
Use list comprehensions instead of loops
    List comprehensions are more memory-efficient than manually appending
    to lists in a loop.
Optimize data structures
    Choosing the appropriate data structure can improve memory efficiency.
    For example, using a set for membership testing avoids the overhead of
    storing duplicates.
Minimize use of OOP for performance-critical code
    Reducing reliance on classes and objects can save memory by avoiding
    the overhead of storing instance attributes.
Use NumPy arrays instead of native Python lists for large datasets
    NumPy arrays are more memory-efficient than Python lists for storing
    large datasets because they use contiguous memory and avoid Python's
    object overhead.
Use in-place operations
    Perform in-place operations where possible (e.g., arr += 1 in NumPy)
    to reduce memory consumption by avoiding the creation of new arrays
    or dataframes.
Table 5.5
Top Recommendations for Portability
Use built-in functions and libraries
    Fully portable across all Python platforms and versions, with no
    external dependencies required.
Use list comprehensions and generator expressions
    Core language features are fully portable across all Python platforms
    and versions.
Optimize data structures using Python's standard library
    Data structures from Python's standard library are highly portable and
    work consistently across all Python versions and platforms.
Use asyncio for asynchronous programming
    Fully portable for systems running Python 3.4 or later. Simplifies
    cross-platform asynchronous programming.
Ensure cross-platform compatibility for system-level tasks with os and subprocess
    Python's os and subprocess modules ensure portability across different
    operating systems.
Write version-agnostic code
    Use features compatible with multiple Python versions, especially for
    older systems, or utilize the six library to ease transitions between
    versions.
Avoid platform-specific libraries
    Some libraries or tools may not be available across all platforms.
    Stick to widely supported libraries like NumPy, Pandas, and Python's
    standard library.
Use Python's virtual environments for dependency management
    Ensures that projects are portable across different environments by
    isolating dependencies using virtualenv or Python's built-in venv
    module.
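The virtual environment recommendation reduces to a short command sequence. This sketch assumes a Unix shell with python3 on the path; on Windows the activation script is .venv\Scripts\activate.

```shell
# Create an isolated environment with the standard library's venv module
python3 -m venv .venv
# Activate it so pip and python resolve inside .venv
. .venv/bin/activate
# Confirm the interpreter now runs from the project's own environment
python -c "import sys; print(sys.prefix)"
# Record exact dependency versions so the project is reproducible elsewhere
pip freeze > requirements.txt
```

Another machine can then recreate the same environment with pip install -r requirements.txt inside its own venv.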
5.9 SUMMARY
5.10 EXERCISES
Python keywords are reserved words that form the core syntax and structure
of the language, shown in Table A.1. The Python built-in functions available
are listed alphabetically in Table A.2. Additional information on any
function may be obtained from its docstring via the function's __doc__
attribute (for example, print.__doc__).
Index
A
Anaconda, 5–7, 9, 19
animation, 115–117
applications, 225
browser, 227, 228, 235, 238
desktop, 241
Ardupilot, 251
artificial intelligence, see also ChatGPT, see also machine learning, see also generative AI
assignment statement, 11, 12
C
cantera (library), 204
cgi (library), 228, 229, 231
CGI (protocol), 225, 228–233
CGI form service, 231
ChatGPT, xxi, 267–272
chi square test, 179
class
aggregation, 79
association, 80
diagram, 73, 77, 79, 81
inheritance, 77
classes, 73–83, 194, 216, 241
classification (machine learning), 208
Colab, 9, 259
command line, 255–260, see also magic commands, see also shell commands
comprehensions, 12, 14, 16, 44, 49–51, 53, 66, 83
conditional execution, 34–37
control statement, 11, 12
COSYSMO cost model, 267, 270, 272
critical path analysis, 188
csv (library), 66, 231
D
data structures, 37–51, see also dictionaries, see also lists, see also sets, see also tuples
debugging, 261, 264–265, see also pdb (library)
deployment diagram, 2
dictionaries, 42–46
nested, 45, 67, 268, 270
distributions, 101, 111, 175, 179
cumulative, 101, 176
random, 173
Django, 225
docstrings, 70–71
drone flight control, 251
dronekit (library), 251
E
embedded systems, 225
engine performance analysis, 193, 194
exceptions, 86–87
execution speed, 273–275
F
f-strings, 57
fault tree analysis, 197
file reading and writing, 64, 66, 170, 171, 198, 231
Flask, 225, 228, 233–235
flask (library), 234
fluid compressibility, 203, 204
for loop, 13, 16
formatting, 55–62
functions
built-in, 9, 285, 287
custom, 26–30, 67–70, 184, 185, 211
G
generative AI, 265–272
generators, 83–86
GitHub, 255, 258, 259, 267, 278
GPIO, 246–248
GPIO (library), 247, 249
graph diagrams, 150
Graphviz, 150–161, 188
H
histograms, 111, 175
history, 16–18
HTML, 225–228, 230, 236, 238
forms, 228–233
hypothesis testing, 178
I
IDLE, 7
importing modules, 62–64
indexing, 13
Integrated Development Environment (IDE), 2, 5, 12
interpreter, 1–5
inverse transform, 177
IPython, 2, 4, 17
iterables, 12, 14, 53–55
iterative development, 261, 265
J
JavaScript, 17, 225, 227, 228, 231, 233, 235, 236, 238
Jinja2, 234
Jupyter Labs, 7
Jupyter Notebook, 5, 7, 9, 17, 19
K
keywords, 9, 285
kinetic energy calculator, 229
Kivy (library), 225
L
lambdas, 71
LED control, 247
linear algebra, 100
lists, 11–14, 16, 38–41
M
machine learning, 206
classification, 208
regression, 207
magic commands, 256–257
manufacturing discrete event simulation, 212
Matplotlib, 103–117, 131, 168, 175, 181, 185, 201, 208, 211, 239, 259
memory usage, 275, 277
meteor entry simulation, 198
motor control, 249
N
NetworkX, 135–149, 188–192
notebook with PyScript, 238
numbers, 55–62
NumPy, 95–102, 174–176, 179, 184, 192, 198, 201, 204, 207, 208, 211, 212, 274
O
object-oriented, see classes
operators, 31–34
optimization, 129, 130
orbital mechanics, 198
os (library), 229, 260
P
Pandas, 119–126, 131, 170, 171, 198, 207, 208
path weight, 144
pdb (library), 264
poliastro (library), 198, 201
portability, 277
present value, 183, 185, 262
present value calculator, 239
print statements, 30
projectile calculator, 234, 236, 241
projectile motion, 15, 74, 181
projectile trajectory, 115
Pyodide, 225, 235–238
PyQT (library), 225, 241
PyScript, 9, 225, 238, 239
R
random number generation, 173–175, 177
Raspberry Pi, 246, 249
reading user input, 64
regression, 167–173
linear, 167–171, 207
machine learning, 207
REPL, 4, 9, 239
Replit, 9
robot competition simulation, 216
robot path control, 210, 211
robotics, 215
S
satellite orbital mechanics, 201
scikit-learn (library), 206–208
SciPy, 127–133, 178, 179, 198
se-lib (library), 197, 212, 215
seaborn (library), 208
sensor reading
button, 247
potentiometer, 249
sets, 48
shell commands, 258–260
shortest path, 143
shutil (library), 260
signal processing, 133
simulation, 192, 198, 210, 212, 215
software attributes, see also execution speed, see also memory usage, see also portability
trade-offs, 272
spreadsheets, 122
Spyder, 1, 5, 7, 12
statsmodels (library), 167–170
strings, 55–62
sys (library), 241
T
test portal web page, 227
testing, 261, 262
time (library), 274
timing speed, 257, 274, 275
tkinter (library), 225
tuples, 46–47
U
Unified Modeling Language (UML), see class diagrams, see deployment diagrams
unittest (library), 262
urllib (library), 250
V
variables, 12, 24
version control, 278
Visual Studio, 5–7, 9, 19
W
weather station reporting, 250
wind turbine power simulation, 192