Traffic Engineering Combined
Looking from an aeroplane at a freeway, one can visualize the vehicular traffic as a stream
or a continuum fluid. It therefore seems quite natural to associate traffic with fluid flow
and to treat it similarly. Because of this analogy, traffic is often described in terms of flow,
concentration (density) and speed.
In the fluid flow analogy, the traffic stream is treated as a one-dimensional compressible
fluid. This leads to two basic assumptions:
a) traffic flow is conserved; and
b) there is a one-to-one relationship between speed and density, or between flow and density.
Flow - density
Assumptions
The first assumption is expressed by the conservation or continuity equation. In more practical
traffic engineering terms, the conservation equation implies that in any traffic system, input is
equal to output plus storage. This principle is generally accepted and there is no controversy
over its validity.
The second assumption, however, has raised many objections in the literature, partly because it is
often contradicted by measurements. Specifically, if the speed u is a function of density k, it
follows that drivers adjust their speed according to the density k, i.e. as density increases along
the road, speed decreases. This is usually correct, but it can theoretically lead to negative
speeds or densities. In addition, it has been observed that many different values of speed can be
measured for the same value of density.
Evidently the assumption has to be qualified: speed (or flow) is a function of density only at
equilibrium. Since the conservation equation describes flow and density as functions of distance
and time, continuum modelling is superior to the input-output models used in practice.
In addition, because flow is a function of density, continuum models have the major advantage of
accounting for compressibility.
The simple continuum model referred to here consists of the conservation equation and an
equation of state, i.e. a speed-density or flow-density relationship. If these equations are solved
together with the basic traffic flow equation q = k·us, then speed, flow and density can be
obtained at any time and point of the roadway.
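For instance, under q = k·us, a section with a density of 20 veh/km and a space mean speed of 60 km/h carries a flow of q = 20 × 60 = 1200 veh/h (illustrative figures, not taken from these notes).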
Knowing these basic traffic flow variables, and knowing the state of the traffic system, we can
derive measures of effectiveness such as total travel time and total delay that allow engineers to
evaluate how well the system is performing.
The conservation equation can easily be derived by considering a unidirectional continuous
road section with two counting stations, 1 and 2 (upstream and downstream respectively), as
shown in the figure.
The spacing between the two stations is ∆x; furthermore, no sinks or sources are assumed within
∆x, i.e. there is no generation or dissipation of flow within this section. Let N1 be the number of
cars (volume) passing station 1 during time ∆t and q1 the flow passing station 1, with N2 and q2
defined likewise at station 2; ∆t is the duration of simultaneous counting at stations 1 and 2.
Without loss of generality, suppose that N1 > N2; because there is no loss of cars in ∆x (i.e.
no sink), this implies that there is a build-up of cars between stations 1 and 2.
Let (N2 − N1) = ∆N; for a build-up, ∆N will be negative.
Based on these definitions, the build-up of cars between the stations during ∆t is obtained as
follows (remember that flow is the number of vehicles per unit time):
N1/∆t = q1 (flow rate at station 1)
N2/∆t = q2 (flow rate at station 2)
∆N/∆t = (N2 − N1)/∆t = ∆q, so that ∆N = ∆q∆t
The build-up of cars, −∆N, also appears as an increase in density ∆k over the length ∆x, i.e.
∆k∆x = −∆N = −∆q∆t, which gives
∆q∆t + ∆k∆x = 0
If the medium is now considered continuous and the discrete increments are allowed to become
infinitesimal, then dividing by ∆x∆t and taking the limit we obtain:
∂q/∂x + ∂k/∂t = 0 ……………………………………………………….(1)
Equation 1 expresses the law of conservation of a traffic stream and is known as the
conservation or continuity equation. If, however, sinks or sources exist within the section of
the roadway then the conservation equation takes the more general form:
∂q/∂x + ∂k/∂t = g(x, t)…………………………………………………..(2)
where g(x, t) is the generation (or dissipation) rate, in vehicles per unit time per unit length of road.
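Conceptually, equation (1) can be marched forward in time numerically once an equilibrium speed-density relation is chosen. The short sketch below does this with a simple first-order finite-difference update, assuming a Greenshields linear speed-density relation; the relation, the numbers and all names are illustrative assumptions, not prescriptions from these notes.

```python
# Minimal sketch: advancing dk/dt + dq/dx = 0 along a road segment,
# assuming Greenshields' linear speed-density relation u(k) = uf * (1 - k/kj).
# All names and figures are illustrative only.

UF = 100.0   # assumed free-flow speed, km/h
KJ = 150.0   # assumed jam density, veh/km

def speed(k):
    """Equilibrium speed for a given density (Greenshields assumption)."""
    return UF * (1.0 - k / KJ)

def flow(k):
    """Fundamental relation q = k * u_s."""
    return k * speed(k)

def step(k, dx, dt):
    """One conservation-equation update: k_i <- k_i - dt/dx * (q_i - q_{i-1})."""
    q = [flow(ki) for ki in k]
    new_k = k[:]
    for i in range(1, len(k)):
        new_k[i] = k[i] - dt / dx * (q[i] - q[i - 1])
    return new_k

if __name__ == "__main__":
    # Ten cells of 0.5 km, with denser traffic in the middle of the stretch.
    k = [20.0] * 4 + [60.0] * 2 + [20.0] * 4
    for _ in range(20):
        k = step(k, dx=0.5, dt=0.005)   # dt in hours, kept small for stability
    print([round(ki, 1) for ki in k])
```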
TRAFFIC ASSIGNMENT
The process of allocating a given set of trip interchanges to a specified transportation system is
usually referred to as traffic assignment. The fundamental aim of the traffic assignment process
is to reproduce, on the transportation system, the pattern of vehicular movements which would be
observed when the travel demand represented by the trip matrix (or matrices) to be assigned is
satisfied.
To carry out a trip assignment, the following data are required:
The number of trips that will be made from one zone to another
Available highway or transit routes between zones
The length of time it takes to travel on each route.
The frequently used assignment models are all-or-nothing assignment, user equilibrium
assignment and system optimum assignment.
All-or-nothing assignment
In this method, the trips from any origin zone to any destination zone are loaded onto a single,
minimum cost path between them.
This model is unrealistic because only one path between every O-D pair is utilized, even when
there is another path with the same or nearly the same travel cost. Also, traffic is assigned to
links without consideration of whether there is adequate capacity or heavy congestion; travel
time is a fixed input and does not vary with the congestion on a link. However, this model may
be reasonable in sparse and uncongested networks where there are few alternative routes and they
differ greatly in travel cost. The method may also be used to identify the desired path, i.e. the
path the drivers would like to travel in the absence of congestion. In fact, the model's most
important practical application is as a building block for other types of assignment technique.
Its main limitation is that it ignores the fact that link travel time is a function of link volume.
Trip assignment models are intended to predict the number of travelers using the various routes,
i.e. the traffic on the links of the transportation network.
Based on this principle, traffic can be assigned to the various routes between any O-D pair. In
cases in which travel time is independent of the volume of traffic on each link, all trips are
assigned to their minimum time paths. For example, in the network below, trips would be assigned
to route 2 rather than route 1 if route 2 is shorter than route 1. The shortness of a path is
defined in terms of the cost of traversing the link or set of links within the network. Shortest
paths in networks are determined using procedures or methods called minimum path algorithms,
e.g. Moore’s method.
In practice, traffic assignment models tend to employ either a simple all-or-nothing assignment
to the minimum path or constrained all-or-nothing assignment in which the network is
incrementally loaded.
In this method, least cost journey trees are first built for all interzonal journeys. All vehicle
trips are then loaded onto the least cost routes, and the total number of trips on each link can
then be determined.
Example
To demonstrate how this assignment works, consider an example network with two nodes
connected by two paths (links). Suppose that travel time is not a function of flow, in other
words that it is constant, as shown in the figure below.
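The figure is not reproduced here, but the idea can be sketched in code: build the minimum-cost path for each O-D pair and load the entire demand onto it. The network, costs and demand below are hypothetical placeholders chosen only to illustrate the mechanics.

```python
# Minimal sketch of all-or-nothing assignment on a small hypothetical network.
# Link costs are fixed (travel time independent of flow), as assumed in this example.
import heapq
from collections import defaultdict

def shortest_path(links, origin, destination):
    """Minimum-cost path by a Dijkstra-style search; returns the list of links used."""
    graph = defaultdict(list)
    for (a, b), cost in links.items():
        graph[a].append((b, cost))
    best = {origin: (0.0, None)}          # node -> (cost from origin, predecessor)
    heap = [(0.0, origin)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == destination:
            break
        if cost > best[node][0]:
            continue
        for nxt, c in graph[node]:
            if nxt not in best or cost + c < best[nxt][0]:
                best[nxt] = (cost + c, node)
                heapq.heappush(heap, (cost + c, nxt))
    path, node = [], destination          # trace the path back to the origin
    while best[node][1] is not None:
        path.append((best[node][1], node))
        node = best[node][1]
    return list(reversed(path))

def all_or_nothing(links, demand):
    """Load every O-D volume entirely onto its current minimum-cost path."""
    volumes = defaultdict(float)
    for (o, d), trips in demand.items():
        for link in shortest_path(links, o, d):
            volumes[link] += trips
    return dict(volumes)

# Hypothetical two-route network between nodes A and B, via C or via D.
links = {("A", "C"): 6.0, ("C", "B"): 6.0, ("A", "D"): 4.0, ("D", "B"): 5.0}
demand = {("A", "B"): 4500}
print(all_or_nothing(links, demand))   # all 4500 trips take the cheaper route A-D-B
```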
This inter-relationship between route choice decisions and traffic flow forms the basis of route
choice theory and model development. To begin modelling a traveler’s route choice, a
mathematical relationship between route travel time and route traffic flow is needed. Such a
relationship is commonly referred to as a highway performance function.
The free-flow speed is generally computed assuming that vehicles travel at the speed limit of
the route. Although the linear highway performance function has the appeal of simplicity, it is
not a particularly realistic representation of the travel time/traffic flow relationship. Recall
that the relationship between traffic speed and flow is parabolic in nature. This parabolic
speed-flow relationship suggests a non-linear highway performance function. The figure below
shows that route travel time increases more quickly as traffic flow approaches capacity. In
developing theories of route choice, two important assumptions are usually made:
Assumptions:
a) It is assumed that travelers select routes between origins and destinations on the basis of
route travel time only, i.e. they select the route with the shortest travel time. This
assumption is not terribly restrictive, since travel time obviously plays a dominant role in
route choice, but other factors that may influence route choice (e.g. scenery) are not
accounted for.
b) Travelers are assumed to know the travel times that would be encountered on all available
routes between their origin and destination. This is a stronger assumption, since a traveler
may repeatedly, day after day, choose one route based only on the perception that travel times
on alternative routes are higher. However, in support of this assumption, studies have shown
that travelers’ perceptions of alternative route travel times are reasonably close to actually
observed travel times.
With these assumptions, the theory of user equilibrium route choice can be operationalized. The
rule of choice underlying user equilibrium is that travelers select routes so as to minimize
their travel time between their origin and destination. User equilibrium is said to exist when
individual travelers cannot improve their travel times by unilaterally changing routes.
Stated differently by Wardrop (1952), user equilibrium can be defined as follows: the travel time
between a specified origin and destination on all used routes is equal, and less than or equal to
the travel time that would be experienced by a traveler on any unused route.
Example
Two routes connect a city and a suburb. During the peak-hour morning commute, a total of
4500 vehicles travel from the suburb to the city. Route 1 has a 96 km/h speed limit and is
9.6 km in length; route 2 is 4.8 km in length with a 72 km/h speed limit. Studies show that the
total travel time on route 1 increases by 2 minutes for every additional 500 vehicles added,
while the travel time in minutes on route 2 increases with the square of the number of vehicles,
expressed in thousands of vehicles per hour. Determine the user equilibrium travel times.
Solution
Travel time = distance / speed, so the free-flow travel times are t1 = (9.6/96) × 60 = 6 min on
route 1 and t2 = (4.8/72) × 60 = 4 min on route 2. With x1 and x2 denoting the route flows in
thousands of vehicles per hour, the performance functions are therefore
t1 = 6 + 4x1 (2 min per 500 veh = 4 min per 1000 veh)
t2 = 4 + x2²
If q is the total traffic flow between the origin and destination in thousands of vehicles per
hour, then we also have the basic flow conservation identity
q = x1 + x2 = 4500/1000 = 4.5
From Wardrop’s definition of user equilibrium it is known that the travel times on all used routes
are equal. With both routes used, Wardrop’s user equilibrium definition gives
t1 = t2
6 + 4x1 = 4 + x2²
From flow conservation,
x1 + x2 = 4.5
x1 = 4.5 − x2; substituting,
6 + 4(4.5 − x2) = 4 + x2²
6 + 18 − 4x2 = 4 + x2²
24 − 4x2 = 4 + x2²
x2² + 4x2 − 20 = 0
x2 = 2.9, i.e. 2900 veh/h
x1 = 4.5 − 2.9 = 1.6, i.e. 1600 veh/h
t1 = 6 + 4(1.6) = 6 + 6.4 = 12.4 min
t2 = 4 + (2.9)² = 4 + 8.41 ≈ 12.4 min
so the average route travel time at user equilibrium is about 12.4 minutes on both routes.
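The same result can be reproduced numerically. The short sketch below solves t1(x1) = t2(x2) together with x1 + x2 = 4.5 for the example above by reducing it to a quadratic; the function and variable names are illustrative.

```python
# Sketch: user-equilibrium flows for the two-route example above.
# t1 = 6 + 4*x1 and t2 = 4 + x2**2 (minutes), flows x1, x2 in thousands of veh/h.
import math

def user_equilibrium(total_flow=4.5):
    # Setting t1 = t2 with x1 = total_flow - x2 gives x2**2 + 4*x2 - (2 + 4*total_flow) = 0.
    a, b, c = 1.0, 4.0, -(2.0 + 4.0 * total_flow)
    x2 = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)   # positive root only
    x1 = total_flow - x2
    return x1, x2, 6 + 4 * x1, 4 + x2 ** 2

x1, x2, t1, t2 = user_equilibrium()
print(f"x1 = {x1:.2f}, x2 = {x2:.2f} (thousand veh/h); t1 = {t1:.1f} min, t2 = {t2:.1f} min")
# Matches the hand calculation: x2 ≈ 2.9, x1 ≈ 1.6, travel time ≈ 12.4 min on both routes.
```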
iv) Incremental assignment
In this method the total demand is assigned in portions (increments), with link travel times
updated between increments. When many increments are used, the flows may resemble an equilibrium
assignment, but the method does not yield an equilibrium solution. Consequently, there will be
inconsistencies between link volumes and travel times that can lead to errors in evaluation
measures. Incremental assignment is also influenced by the order in which volumes for O-D pairs
are assigned, raising the possibility of additional bias in the results.
v) Capacity restraint
This method specifically takes account of the fact that as traffic is incrementally loaded onto a
link, the link becomes increasingly less attractive to travelers. This loading causes other links
to assume the role of least cost alternative, thereby amending the journey cost tree for the next
increment of load. Trips are then assigned to these alternative least cost routes after the
amendment, after which the assigned trips on each link are determined. A code sketch of this
incremental style of loading is given below.
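As a rough illustration of the incremental idea behind both of these methods, the sketch below splits the demand of the earlier two-route example into increments, assigns each increment all-or-nothing to the currently cheapest route, and updates travel times before the next increment. The increment fractions are an arbitrary assumption, and the output illustrates why incremental loading need not reproduce the equilibrium flows exactly.

```python
# Sketch: incremental loading of the two-route example, updating travel times
# after each increment so later increments see the congestion caused by earlier ones.

def t1(x1):  # route 1 travel time in minutes, x1 in thousands of veh/h
    return 6 + 4 * x1

def t2(x2):  # route 2 travel time in minutes, x2 in thousands of veh/h
    return 4 + x2 ** 2

def incremental_assignment(total=4.5, fractions=(0.4, 0.3, 0.2, 0.1)):
    x1 = x2 = 0.0
    for f in fractions:
        increment = f * total
        # assign the whole increment to whichever route is currently faster
        if t1(x1) <= t2(x2):
            x1 += increment
        else:
            x2 += increment
    return x1, x2, t1(x1), t2(x2)

print(incremental_assignment())
# The resulting travel times on the two routes are not equal, unlike the equilibrium solution.
```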
Shockwaves
The fundamental diagram of traffic flow for two adjacent sections of a highway with different
capacities is shown in the figure below.
The figure describes the phenomenon of backups and queuing on a highway due to a sudden
reduction of the capacity of the highway, known as a bottleneck condition (e.g. a drop from 4
lanes to 2 lanes). The sudden reduction in capacity could be due to a crash, a reduction in the
number of lanes, a restricted bridge size, a work zone, a signal turning red, etc., creating a
situation where the capacity of the highway suddenly changes from C1 to a lower value C2, with a
corresponding change in optimum density from k0a to k0b.
When such a condition exists and the normal flow and density on the highway are relatively
high, the vehicles will have to reduce speed while passing the bottleneck. The point at
which the speed reduction takes place can be noted approximately by the turning on of the brake
lights of the vehicles.
An observer will see this point move upstream as traffic continues to approach the vicinity of
the bottleneck, indicating an upstream movement of the point at which flow and density change.
This phenomenon is usually referred to as a shockwave in the traffic stream. The phenomenon
also exists when the capacity suddenly increases, but in this case the speeds of the vehicles
tend to increase as they pass the section of the road where the capacity increases.
Types of shockwaves
These can be:
i. Frontal stationary shockwaves
ii. Backward forming shockwaves
iii. Backward recovering shockwaves
iv. Rear stationary & forward recovering shockwaves
Frontal stationary shockwaves
These are formed when the capacity suddenly reduces to zero at an approach or set of lanes
having the red indication at a signalized intersection, or when the highway is completely closed
because of a serious incident. In this case a frontal stationary shockwave is formed at the stop
line of the approach or lanes that have the red signal indication. This type of shockwave occurs
at locations where the capacity is reduced to zero.
Backward forming shockwaves
These are formed when the capacity is reduced below the demand flow rate, resulting in the
formation of a queue upstream of the bottleneck. The shockwave moves upstream, with its location
at any time indicating the length of the queue at that time.
Backward recovering shockwaves
These are formed when the demand flow rate becomes less than the capacity of the bottleneck.
This would imply that the restriction causing the capacity reduction has been removed. This
would happen for example when the signals at an approach or set of lanes on a signalized
intersection change from red to green.
Rear stationary & forward recovering shockwaves
These are formed when the demand flow rate upstream of the bottleneck is at first higher than the
capacity of the bottleneck and then reduces to the capacity of the bottleneck. To illustrate
this, consider a 4-lane one-direction highway that leads to a 2-lane tunnel in an urban
area.
During the off-peak period, when the demand flow is less than the tunnel capacity, no
shockwave is formed. However, when the demand flow becomes higher than the tunnel
capacity during the peak hour, a backward forming shockwave is created; this shockwave
continues to move upstream of the bottleneck as long as the demand flow is higher than the
tunnel capacity. As the end of the peak period approaches, the demand flow rate decreases
until it equals the tunnel capacity.
At this point, a rear stationary shockwave is formed until the demand flow becomes less than
the tunnel capacity, resulting in the formation of a forward recovering shockwave, as shown in
the figure below:
Shockwaves due to a bottleneck
Velocity of shockwaves
Let us consider two different densities of traffic, k1 and k2, along a straight highway as shown
in the figure, where k1 > k2. Let us also assume that these densities are separated by the line w
representing the shockwave, moving at a speed uw. If the line w moves in the direction of the
arrow (i.e. the direction of flow of traffic), uw is positive.
With u1 equal to the space mean speed of vehicles in the area with density k1 (i.e. section
P), the speed of the vehicles in this area relative to the line w is
ur1 = (u1 − uw)
To find the number of vehicles N1 crossing line w from section P during a time period t, note
that relative to the line w these vehicles cover a distance ur1·t, so
k1 = N1 / distance = N1 / (speed × time) = N1 / (ur1·t)
and hence
N1 = k1·ur1·t = k1(u1 − uw)t
Similarly, with u2 the space mean speed in the area with density k2, the number of vehicles
crossing w from the other side during the same period is N2 = k2(u2 − uw)t. Since vehicles are
neither created nor destroyed at the wave, N1 = N2, so that
k1(u1 − uw) = k2(u2 − uw)
Solving for uw and using q1 = k1u1 and q2 = k2u2 gives the shockwave velocity
uw = (q2 − q1) / (k2 − k1)
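As a quick numerical illustration of this relation (with purely hypothetical traffic states):

```python
# Illustrative shockwave speed between two hypothetical traffic states.
q1, k1 = 1800.0, 30.0    # upstream state: flow in veh/h, density in veh/km
q2, k2 = 1200.0, 100.0   # congested downstream state

u_w = (q2 - q1) / (k2 - k1)   # shockwave speed in km/h
print(round(u_w, 2))          # about -8.57 km/h: negative, so the wave moves upstream
```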
Shock Waves and Queue Lengths Due to a Red Phase at a Signalized Intersection
The length of the queue at the end of the red phase is given as
distance = time × velocity = r·ω13 = r·q1 / (k1 − kj)
where r is the length of time of the red indication, q1 and k1 are the flow and density of the
arriving traffic, kj is the jam density of the stopped traffic, and ω13 is the speed of the
backward forming shockwave between the arriving traffic (state 1) and the stopped traffic
(state 3). Since k1 < kj, ω13 is negative (the queue grows upstream), and the queue length is
the magnitude of this distance.
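A brief numerical sketch of this expression is given below; the figures are hypothetical (deliberately different from the example that follows) and serve only to show the arithmetic and the units.

```python
# Queue length at the end of a red phase: distance = r * q1 / (k1 - kj).
# Hypothetical values, for illustration only.
q1 = 900.0            # arrival flow, veh/h
k1 = 18.0             # arrival density, veh/km
kj = 120.0            # jam density, veh/km
r = 30.0 / 3600.0     # a 30-second red time, expressed in hours

omega_13 = q1 / (k1 - kj)            # stopping-wave speed, km/h (negative: grows upstream)
queue_length_km = r * abs(omega_13)  # length of the stopped queue at the end of red
print(round(queue_length_km * 1000, 1), "m")   # roughly 74 m
```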
Example
The southbound approach of a signalized intersection carries a flow of 1000 veh/h/ln at a
speed of 50 mi/h. The duration of the red signal indication for this approach is 15 s. If the
saturation flow is 2000 veh/h/ln with a density of 75 veh/mi/ln, and the jam density is 150 veh/mi,
determine the following:
a) The length of the queue at the end of the red phase
b) The maximum queue length
c) The time it takes for the queue to dissipate after the end of the red phase
GRAPH THEORY
Definitions
A graph is formed by vertices and edges connecting the vertices
A graph is a pair of sets (V, E), where V is the set of vertices and E is the set of edges,
formed by pairs of vertices. E is a multiset, in other words, its elements can occur more
than once so that every element has a multiplicity. Often, we label the vertices with letters
(for example: a, b, c, . . . or v1, v2, . . .) or numbers 1, 2, . . .
Example. (Continuing from the previous example) We label the vertices as follows:
We have V = {v1, . . . , v5} for the vertices and E = {(v1, v2), (v2, v5),(v5, v5),(v5, v4),(v5, v4)}
for the edges.
Similarly, we often label the edges with letters (for example: a, b, c, . . . or e1, e2, . . .) or
numbers 1, 2, . . . for simplicity.
Example. (Continuing from the previous example) We label the edges as follows:
So E = {e1, . . . , e5}.
We have the following terminologies:
1. The two vertices u and v are end vertices of the edge (u, v).
2. Edges that have the same end vertices are parallel.
3. An edge of the form (v, v) is a loop.
4. A graph is simple if it has no parallel edges or loops.
5. A graph with no edges (i.e. E is empty) is empty.
6. A graph with no vertices (i.e. V and E are empty) is a null graph.
7. A graph with only one vertex is trivial.
8. Edges are adjacent if they share a common end vertex.
9. Two vertices u and v are adjacent if they are connected by an edge, in other words, (u, v) is an
edge.
10. The degree of the vertex v, written as d(v), is the number of edges with v as an end vertex.
By convention, we count a loop twice and parallel edges contribute separately.
11. A pendant vertex is a vertex whose degree is 1.
12. An edge that has a pendant vertex as an end vertex is a pendant edge.
13. An isolated vertex is a vertex whose degree is 0.
Example. (Continuing from the previous example)
• v4 and v5 are end vertices of e5.
• e4 and e5 are parallel.
• e3 is a loop.
• The graph is not simple.
• e1 and e2 are adjacent
• v1 and v2 are adjacent.
• The degree of v1 is 1 so it is a pendant vertex.
• e1 is a pendant edge.
• The degree of v5 is 5.
• The degree of v4 is 2.
• The degree of v3 is 0 so it is an isolated vertex.
A simple graph that contains every possible edge between all the vertices is called a complete
graph. A complete graph with n vertices is denoted as Kn. The first four complete graphs are
given as examples:
Example. We have the graph
A graph is a generalization of the simple concept of a set of dots (vertices) connected by links,
edges or arcs.
Representation: A graph G = (V, E) consists of a set of vertices, denoted by V or V(G), and a set
of edges, denoted by E or E(G).
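As a small illustration, the example graph above can be represented directly as the two sets V and E, and the vertex degrees computed from them; this edge-list representation is only one possible choice, written here as a sketch.

```python
# Sketch: representing the example graph G = (V, E) and computing vertex degrees.
# E is a multiset, so parallel edges appear more than once; a loop counts twice.
V = ["v1", "v2", "v3", "v4", "v5"]
E = [("v1", "v2"), ("v2", "v5"), ("v5", "v5"), ("v5", "v4"), ("v5", "v4")]

def degree(v, edges):
    """Number of edge ends incident to v (a loop contributes 2)."""
    return sum((a == v) + (b == v) for a, b in edges)

for v in V:
    print(v, degree(v, E))
# v1 1, v2 2, v3 0 (isolated), v4 2, v5 5 -- matching the example above
```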
Trees and Forests
A forest is a circuitless graph. (A graph is circuitless exactly when there are no loops and there
is at most one path between any two given vertices.) A tree is a connected forest. A subforest
is a subgraph of a forest. A connected subgraph of a tree is a subtree. Generally speaking, a
subforest (respectively subtree) of a graph is a subgraph of it which is also a forest
(respectively tree).
Example. Four trees which together form a forest:
A spanning tree of a connected graph is a subtree that includes all the vertices of that graph.
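A spanning tree can be extracted from any connected graph, for instance by a depth-first search; the sketch below does this for a small hypothetical graph (the adjacency list is an assumption, not one of the example graphs above).

```python
# Sketch: building a spanning tree of a connected graph with depth-first search.
# The adjacency list below is a hypothetical 5-vertex connected graph.
adj = {
    "a": ["b", "c"],
    "b": ["a", "d"],
    "c": ["a", "d"],
    "d": ["b", "c", "e"],
    "e": ["d"],
}

def spanning_tree(adj, root):
    """Return the edges of a spanning tree rooted at `root` (a DFS tree)."""
    visited, tree_edges, stack = {root}, [], [root]
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w not in visited:
                visited.add(w)
                tree_edges.append((v, w))
                stack.append(w)
    return tree_edges

print(spanning_tree(adj, "a"))   # 4 edges joining all 5 vertices, e.g. [('a','b'), ('a','c'), ('c','d'), ('d','e')]
```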
Types of Graphs
Graphs can be:
Undirected: if for every pair of connected nodes, you can go from one node to the other in both
directions.
Directed: if for every pair of connected nodes, you can only go from one node to another in a
specific direction. We use arrows instead of simple lines to represent directed edges.
Weighted Graphs
A weighted graph is a graph whose edges have a "weight" or "cost". The weight of an edge can
represent distance, time, or anything that models the "connection" between the pair of nodes it
connects.
The Shortest (Lightest) Path: DIJKSTRA’S ALGORITHM
Purpose and Use Cases
With Dijkstra's Algorithm, you can find the shortest path between nodes in a graph. Particularly,
you can find the shortest path from a node (called the "source node") to all other nodes in
the graph, producing a shortest-path tree.
This algorithm is used in GPS devices to find the shortest path between the current location and
the destination. It has broad applications in industry, especially in domains that require modeling
networks.
If there is a negative weight in the graph, the algorithm will not work properly. Once a node
has been marked as "visited", the current path to that node is taken to be the shortest path to
reach that node, and negative weights can invalidate this if the total weight can still be
decreased after this step has occurred.
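Before walking through the example, here is a compact sketch of the algorithm itself using a priority queue. The graph format ({node: [(neighbour, weight), ...]}) and the function name are illustrative choices; a usage example on the walkthrough graph appears after the step-by-step explanation below.

```python
# Sketch of Dijkstra's algorithm: shortest distances from a source node
# to every other node in a graph with non-negative edge weights.
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbour, weight), ...]}. Returns {node: shortest distance}."""
    dist = {source: 0}
    visited = set()
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)                 # once visited, its distance is final
        for neighbour, weight in graph.get(node, []):
            new_d = d + weight
            if neighbour not in dist or new_d < dist[neighbour]:
                dist[neighbour] = new_d
                heapq.heappush(heap, (new_d, neighbour))
    return dist
```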
🔹 Example of Dijkstra's Algorithm
Now that you know more about this algorithm, let's see how it works behind the scenes with a
step-by-step example.
The distance from the source node to itself is 0. For this example, the source node will be
node 0 but it can be any node that you choose.
The distance from the source node to all other nodes has not been determined yet, so we use the
infinity symbol to represent this initially.
We also have this list (see below) to keep track of the nodes that have not been visited yet (nodes
that have not been included in the path):
💡 Tip: Remember that the algorithm is completed once all nodes have been added to the path.
Since we are choosing to start at node 0, we can mark this node as visited. Equivalently, we cross
it off from the list of unvisited nodes and add a red border to the corresponding node in the diagram:
Now we need to start checking the distance from node 0 to its adjacent nodes. As you can see,
these are nodes 1 and 2 (see the red edges):
💡 Tip: This doesn't mean that we are immediately adding the two adjacent nodes to the shortest
path. Before adding a node to this path, we need to check if we have found the shortest path to
reach it. We are simply making an initial examination process to see the options available.
We need to update the distances from node 0 to node 1 and node 2 with the weights of the edges
that connect them to node 0 (the source node). These weights are 2 and 6, respectively:
Select the node that is closest to the source node based on the current known distances.
Mark it as visited.
Add it to the path.
If we check the list of distances, we can see that node 1 has the shortest distance to the source
node (a distance of 2), so we add it to the path.
In the diagram, we can represent this with a red edge:
We mark it with a red square in the list to represent that it has been "visited" and that we have
found the shortest path to this node:
We cross it off from the list of unvisited nodes:
Now we need to analyze the new adjacent nodes to find the shortest path to reach them. We will
only analyze the nodes that are adjacent to the nodes that are already part of the shortest path (the
path marked with red edges).
Node 3 and node 2 are both adjacent to nodes that are already in the path because they are
directly connected to node 1 and node 0, respectively, as you can see below. These are the nodes
that we will analyze in the next step.
Since we already have the distance from the source node to node 2 written down in our list, we
don't need to update the distance this time. We only need to update the distance from the source
node to the new adjacent node (node 3):
We add it to the path graphically with a red border around the node and a red edge:
We also mark it as visited by adding a small red square in the list of distances and crossing it off
from the list of unvisited nodes:
Now we need to repeat the process to find the shortest path from the source node to the new
adjacent node, which is node 3.
You can see that we have two possible paths 0 -> 1 -> 3 or 0 -> 2 -> 3. Let's see how we can
decide which one is the shortest path.
Node 3 already has a distance in the list that was recorded previously (7, see the list below). This
distance was the result of a previous step, where we added the weights 5 and 2 of the two edges
that we needed to cross to follow the path 0 -> 1 -> 3.
But now we have another alternative. If we choose to follow the path 0 -> 2 -> 3, we would need
to follow two edges 0 -> 2 and 2 -> 3 with weights 6 and 8, respectively, which represents a total
distance of 14.
Clearly, the first (existing) distance is shorter (7 vs. 14), so we will choose to keep the original
path 0 -> 1 -> 3. We only update the distance if the new path is shorter.
Therefore, we add this node to the path using the first alternative: 0 -> 1 -> 3.
We mark this node as visited and cross it off from the list of unvisited nodes:
We need to check the new adjacent nodes that we have not visited so far. This time, these nodes
are node 4 and node 5 since they are adjacent to node 3.
We update the distances of these nodes to the source node, always trying to find a shorter path, if
possible:
For node 4: the distance is 17 from the path 0 -> 1 -> 3 -> 4.
For node 5: the distance is 22 from the path 0 -> 1 -> 3 -> 5.
💡 Tip: Notice that we can only consider extending the shortest path (marked in red). We cannot
consider paths that will take us through edges that have not been added to the shortest path (for
example, we cannot form a path that goes through the edge 2 -> 3).
We need to choose which unvisited node will be marked as visited now. In this case, it's
node 4 because it has the shortest distance in the list of distances. We add it graphically in the
diagram:
For node 5:
The first option is to follow the path 0 -> 1 -> 3 -> 5, which has a distance of 22 from the source
node (2 + 5 + 15). This distance was already recorded in the list of distances in a previous step.
The second option would be to follow the path 0 -> 1 -> 3 -> 4 -> 5, which has a distance
of 23 from the source node (2 + 5 + 10 + 6).
Clearly, the first path is shorter, so we choose it for node 5.
For node 6:
The path available is 0 -> 1 -> 3 -> 4 -> 6, which has a distance of 19 from the source node (2 +
5 + 10 + 2).
We mark the node with the shortest (currently known) distance as visited. In this case, node 6.
We select the shortest path: 0 -> 1 -> 3 -> 5 with a distance of 22.
We mark the node as visited and cross it off from the list of unvisited nodes:
We have the final result with the shortest path from node 0 to each node in the graph.
In the diagram, the red lines mark the edges that belong to the shortest path. You need to follow
these edges to follow the shortest path to reach a given node in the graph starting from node 0.
For example, if you want to reach node 6 starting from node 0, you just need to follow the red
edges and you will be following the shortest path 0 -> 1 -> 3 -> 4 -> 6 automatically.
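For reference, the edge weights quoted in the walkthrough (the figure itself is not reproduced here) can be fed to the dijkstra() sketch given before the walkthrough; the adjacency list below is reconstructed from the distances mentioned in the text, so any edge not quoted there is omitted.

```python
# Reuses the dijkstra() function from the sketch given before the walkthrough.
# Adjacency list reconstructed from the weights quoted in the text above.
graph = {
    0: [(1, 2), (2, 6)],
    1: [(0, 2), (3, 5)],
    2: [(0, 6), (3, 8)],
    3: [(1, 5), (2, 8), (4, 10), (5, 15)],
    4: [(3, 10), (5, 6), (6, 2)],
    5: [(3, 15), (4, 6)],
    6: [(4, 2)],
}
print(dijkstra(graph, 0))
# Expected: {0: 0, 1: 2, 2: 6, 3: 7, 4: 17, 5: 22, 6: 19} -- matching the distances in the walkthrough
```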
🔸 In Summary
Graphs are used to model connections between objects, people, or entities. They have two main
elements: nodes and edges. Nodes represent objects and edges represent the connections between
these objects.
Dijkstra's Algorithm finds the shortest path between a given node (which is called the "source
node") and all other nodes in a graph.
This algorithm uses the weights of the edges to find the path that minimizes the total distance
(weight) between the source node and all other nodes.