Algorithm For Intelligent Systems and robotics-STUDEXGUD
Contents
Exercise 1. Time Domain and Frequency Domain Representation using Python
Exercise 2. Pendulum Simulation using Python: Example of a System
Exercise 3. 8 Queens Problem using Python
Exercise 4. Search Algorithms using Python
Exercise 5. Hill Climbing using Python
Exercise 6. Reinforcement Learning using Python
Exercise 7. Simple Neural Network Concept using Python
Exercise 8. Compare Various Learning Strategies for MLP Classifier
Exercise 9. Kalman Filtering using Python
Exercise 10. Installing ROS and Other Packages, Basic Programs
Exercise 11. Testing the Simulator
Exercise 12. Monitoring Robot Motion using the Simulator
Exercise 13. Teleoperating the Simulated Robot
Exercise 14. Avoiding Simulated Obstacles
Exercise 15. Multiple TurtleBot Simulation
Exercise 16. Speech-related Experiment
Exercise 1. Time Domain and Frequency Domain Representation using Python
Result:
End of Exercise.
Exercise 2. Pendulum Simulation using Python: Example of a System
where u is the force applied to the cart, ε = m2/(m1+m2), y is the position of the cart, v is the velocity of the cart, θ is the angle of the pendulum relative to the cart, q is the rate of change of the angle, m1 = 10, and m2 = 1. Tune the controller to minimize the force applied to the cart in either the forward or reverse direction (i.e., minimize the fuel consumed to perform the maneuver). Explain the tuning and the optimal solution with appropriate plots that demonstrate that the solution is optimal.
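For reference, the linearized pendulum-on-a-cart model implied by these definitions (and matching the Gekko equations in the code below) can be written as:

\dot{y} = v
\dot{v} = -\epsilon\,\theta + u
\dot{\theta} = q
\dot{q} = \theta - u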
Instructor exercise overview:
Python code:
# Contributed by Everton Colling
import matplotlib.animation as animation
import matplotlib.pyplot as plt
import numpy as np
from gekko import GEKKO
#Defining a model
m = GEKKO()
#################################
#Mass of the pendulum (m2)
m2 = 1
#################################
#Defining the time, we will go beyond the 6.2s
#to check if the objective was achieved
m.time = np.linspace(0,8,100)
end_loc = int(100.0*6.2/8.0)
#Parameters
m1a = m.Param(value=10)
m2a = m.Param(value=m2)
final = np.zeros(len(m.time))
for i in range(len(m.time)):
    if m.time[i] < 6.2:
        final[i] = 0
    else:
        final[i] = 1
final = m.Param(value=final)
#MV
ua = m.Var(value=0)
#State Variables
theta_a = m.Var(value=0)
qa = m.Var(value=0)
ya = m.Var(value=-1)
va = m.Var(value=0)
#Intermediates
epsilon = m.Intermediate(m2a/(m1a+m2a))

#State-space model (linearized pendulum on a cart)
m.Equation(ya.dt() == va)
m.Equation(va.dt() == -epsilon*theta_a + ua)
m.Equation(theta_a.dt() == qa)
m.Equation(qa.dt() == theta_a - ua)

#Objectives: drive all states to zero for t >= 6.2
#(final weights only the terminal portion of the horizon)
m.Obj(final*ya**2)
m.Obj(final*va**2)
m.Obj(final*theta_a**2)
m.Obj(final*qa**2)

m.fix(ya,pos=end_loc,val=0.0)
m.fix(va,pos=end_loc,val=0.0)
m.fix(theta_a,pos=end_loc,val=0.0)
m.fix(qa,pos=end_loc,val=0.0)
#Minimize the use of the MV (force) over the whole horizon
m.Obj(0.001*ua**2)
m.options.IMODE = 6 #MPC
m.solve() #(disp=False)
plt.figure(figsize=(12,10))
plt.subplot(221)
plt.plot(m.time,ua.value,'m',lw=2)
plt.legend([r'$u$'],loc=1)
plt.ylabel('Force')
plt.xlabel('Time')
plt.xlim(m.time[0],m.time[-1])
plt.subplot(222)
plt.plot(m.time,va.value,'g',lw=2)
plt.legend([r'$v$'],loc=1)
plt.ylabel('Velocity')
plt.xlabel('Time')
plt.xlim(m.time[0],m.time[-1])
plt.subplot(223)
plt.plot(m.time,ya.value,'r',lw=2)
plt.legend([r'$y$'],loc=1)
plt.ylabel('Position')
plt.xlabel('Time')
plt.xlim(m.time[0],m.time[-1])
plt.subplot(224)
plt.plot(m.time,theta_a.value,'y',lw=2)
plt.plot(m.time,qa.value,'c',lw=2)
plt.legend([r'$\theta$',r'$q$'],loc=1)
plt.ylabel('Angle')
plt.xlabel('Time')
plt.xlim(m.time[0],m.time[-1])
plt.rcParams['animation.html'] = 'html5'
x1 = ya.value
y1 = np.zeros(len(m.time))
..... 2-8
.....
Exercises Guide
#suppose that l = 1
x2 = 1*np.sin(theta_a.value)+x1
x2b = 1.05*np.sin(theta_a.value)+x1
y2 = 1*np.cos(theta_a.value)-y1
y2b = 1.05*np.cos(theta_a.value)-y1
fig = plt.figure(figsize=(8,6.4))
ax = fig.add_subplot(111,autoscale_on=False,\
xlim=(-1.5,0.5),ylim=(-0.4,1.2))
ax.set_xlabel('position')
ax.get_yaxis().set_visible(False)
crane_rail, = ax.plot([-1.5,0.5],[-0.2,-0.2],'k-',lw=4)
start, = ax.plot([-1,-1],[-1.5,1.5],'k:',lw=2)
objective, = ax.plot([0,0],[-0.5,1.5],'k:',lw=2)
mass1, = ax.plot([],[],linestyle='None',marker='s',\
markersize=40,markeredgecolor='k',\
color='orange',markeredgewidth=2)
mass2, = ax.plot([],[],linestyle='None',marker='o',\
markersize=20,markeredgecolor='k',\
color='orange',markeredgewidth=2)
line, = ax.plot([],[],'o-',color='orange',lw=4,\
markersize=6,markeredgecolor='k',\
markerfacecolor='k')
time_template = 'time = %.1fs'
time_text = ax.text(0.05,0.9,'',transform=ax.transAxes)
start_text = ax.text(-1.06,-0.3,'start',ha='right')
end_text = ax.text(0.06,-0.3,'objective',ha='left')
def init():
    mass1.set_data([],[])
    mass2.set_data([],[])
    line.set_data([],[])
    time_text.set_text('')
    return line, mass1, mass2, time_text
def animate(i):
    mass1.set_data([x1[i]],[y1[i]-0.1])
    mass2.set_data([x2b[i]],[y2b[i]])
    line.set_data([x1[i],x2[i]],[y1[i],y2[i]])
    time_text.set_text(time_template % m.time[i])
    return line, mass1, mass2, time_text

ani_a = animation.FuncAnimation(fig, animate,
                                np.arange(1,len(m.time)),
                                interval=40, blit=False, init_func=init)
#ani_a.save('Pendulum_Control.mp4',fps=30)
plt.show()
Result:
End of Exercise.
Exercise 3. 8 Queens Problem using Python
The eight queens puzzle, or the eight queens problem, asks how to place eight queens on a chessboard so that no two queens attack each other. In the figure below, you can see two queens with their attack patterns:
We can generate a solution to the problem by scanning each row of the board and placing one queen per column, while checking at every step that no two queens are in the line of attack of the other. A brute-force approach to the problem would be to generate all possible combinations of the eight queens on the chessboard and reject the invalid states. How many combinations of 8 queens on a 64-cell chessboard are possible?
We can further reduce the number of potential solutions if we observe that a valid solution can have only one queen per row, which means that we can represent the board as an array of eight elements, where each entry represents the column position of the queen in a particular row. Take as an example the next solution of the problem:
The queens' positions on the above board can be represented as the occupied positions of a two-dimensional 8x8 array: [0, 6], [1, 2], [2, 7], [3, 1], [4, 4], [5, 0], [6, 5], [7, 3]. Or, as described above, we can use a one-dimensional eight-element array: [6, 2, 7, 1, 4, 0, 5, 3].
If we look closely at the example solution [6, 2, 7, 1, 4, 0, 5, 3], we note that a potential solution to the eight queens puzzle can be constructed by generating all possible permutations of an array of eight numbers, [0, 1, 2, 3, 4, 5, 6, 7], and rejecting the invalid states (the ones in which any two queens can attack each other). The number of all permutations of n unique objects is n!, which for our particular case is 8! = 40,320, far more reasonable than the 4,426,165,368 states to analyze for the brute-force approach.
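Both counts are easy to verify with Python's math module:

from math import comb, factorial

# 8 queens on 64 squares, ignoring the one-queen-per-row observation
print(comb(64, 8))    # 4426165368
# permutations of one column index per row
print(factorial(8))   # 40320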
A slightly more efficient solution to the puzzle uses a recursive approach: assume that we've already generated all possible ways to place k queens on the first k rows. In order to generate the valid positions for the (k+1)-th queen, we place a queen on every column of row k+1 and reject the invalid states. We repeat the above steps until all eight queens are placed on the board. This approach will generate all 92 distinct solutions for the eight queens puzzle.
class NQueens:
    """Generate all valid solutions for the n queens puzzle"""
    def __init__(self, size):
        # Store the puzzle size and the number of valid solutions found
        self.size = size
        self.solutions = 0
        self.solve()

    def solve(self):
        """Solve the n queens puzzle and print the number of solutions"""
        positions = [-1] * self.size
        self.put_queen(positions, 0)
        print("Found", self.solutions, "solutions.")
    def put_queen(self, positions, target_row):
        """Try to place a queen on target_row by checking all N possible columns"""
        # Base (stop) case - all N rows are occupied
        if target_row == self.size:
            self.show_full_board(positions)
            # self.show_short_board(positions)
            self.solutions += 1
        else:
            # For all N columns positions try to place a queen
            for column in range(self.size):
                # Reject all invalid positions
                if self.check_place(positions, target_row, column):
                    positions[target_row] = column
                    self.put_queen(positions, target_row + 1)

    def check_place(self, positions, occupied_rows, column):
        """Check if (occupied_rows, column) is attacked by any queen already placed
        (same column, or same diagonal)"""
        for i in range(occupied_rows):
            if positions[i] == column or \
               positions[i] - i == column - occupied_rows or \
               positions[i] + i == column + occupied_rows:
                return False
        return True
print("\n")
def main():
"""Initialize and solve the n queens puzzle"""
NQueens(8)
End of Exercise.
Exercise 4. Search Algorithms using Python
Result:
End of Exercise.
Exercise 5. Hill Climbing using Python
import random
import math
import time
def cost1(x,y):
    if x==y:
        return 0
    elif x<3 and y<3:
        return 1
    elif x<3:
        return 200
    elif y<3:
        return 200
    elif (x%7)==(y%7):
        return 2
    else:
        return abs(x-y)+3
def cost2(x,y):
    if x==y:
        return 0
    elif (x+y)<10:
        return abs(x-y)+4
    elif ((x+y)%11)==0:
        return 3
    else:
        return abs(x-y)**2+10
def cost3(x,y):
    if x==y:
        return 0
    else:
        return (x+y)**2
# total cost of a closed tour under the selected cost function
def tour_cost(tours, cost_fun):
    total_cost = 0
    for i in range(len(tours)):
        # the last city connects back to the first, closing the tour
        if i == len(tours)-1:
            if(cost_fun=="c1"):
                cost_i = cost1(tours[i],tours[0])
            if(cost_fun=="c2"):
                cost_i = cost2(tours[i],tours[0])
            if(cost_fun=="c3"):
                cost_i = cost3(tours[i],tours[0])
            total_cost=total_cost+cost_i
        else:
            if(cost_fun=="c1"):
                cost_i = cost1(tours[i],tours[i+1])
            if(cost_fun=="c2"):
                cost_i = cost2(tours[i],tours[i+1])
            if(cost_fun=="c3"):
                cost_i = cost3(tours[i],tours[i+1])
            total_cost=total_cost+cost_i
    return total_cost
# mutation operator that swaps two cities randomly to create a new path
def mutation_operator(tours):
    r1= list(range(len(tours)))
    r2= list(range(len(tours)))
    random.shuffle(r1)
    random.shuffle(r2)
    for i in r1:
        for j in r2:
            if i < j:
                next_state =tours[:]
                next_state[i],next_state[j]=tours[j],tours[i]
                yield next_state
#This function implements randomized hill climbing for TSP
def randomized_hill_climbing(no_cities,cost_func,MEB,seed1):
    dict = {}   # cache of already-evaluated paths
    best_path=random_path(no_cities,seed1)
    best_cost = tour_cost(best_path,cost_func)
    evaluations_count=1
    while evaluations_count < MEB:
        for next_city in mutation_operator(best_path):
            if evaluations_count == MEB:
                break
            str1 = ''.join(str(e) for e in next_city)
            #Skip calculating the cost of repeated paths
            if str1 in dict:
                evaluations_count+=1
                continue
            next_tCost=tour_cost(next_city,cost_func)
            #store it in the dictionary
            dict[str1] = next_tCost
            evaluations_count+=1
            #Keep the better path (the hill-climbing step) and restart
            #the neighborhood from it
            if next_tCost < best_cost:
                best_cost=next_tCost
                best_path=next_city
                break
    return best_path,best_cost,evaluations_count
#This function implements simulated annealing for TSP
def simulated_annealing(no_cities,cost_func,MEB,seed1):
    start_temp=70
    cooling_constant=0.9995
    best_path = None
    best_cost = None
    current_path=random_path(int(no_cities),seed1)
    current_cost=tour_cost(current_path,cost_func)
    num_evaluations=1
    temp_schedule=cooling_schedule(start_temp,cooling_constant)
    for temperature in temp_schedule:
        flag = False
        #examining moves around our current path
        for next_path in mutation_operator(current_path):
            if num_evaluations == MEB:
                flag = True
                break
            next_cost=tour_cost(next_path,cost_func)
            num_evaluations+=1
            p=Probability_acceptance(current_cost,next_cost,temperature)
            if random.random() < p:
                current_path=next_path
                current_cost=next_cost
                break
        #Track the best tour seen so far
        if best_cost is None or current_cost < best_cost:
            best_cost=current_cost
            best_path=current_path
        if flag:
            break
    return best_path,best_cost,num_evaluations
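The listing calls three helpers (random_path, cooling_schedule, and Probability_acceptance) whose definitions are not shown above. A minimal sketch of plausible implementations, assuming a seeded random initial tour, a geometric cooling schedule, and the standard Metropolis acceptance rule:

def random_path(no_cities, seed1):
    # seeded random initial tour over cities 0..no_cities-1
    random.seed(seed1)
    path = list(range(no_cities))
    random.shuffle(path)
    return path

def cooling_schedule(start_temp, cooling_constant):
    # geometric cooling: T, T*c, T*c^2, ...
    temperature = start_temp
    while True:
        yield temperature
        temperature = temperature * cooling_constant

def Probability_acceptance(current_cost, next_cost, temperature):
    # Metropolis rule: always accept improvements, otherwise accept
    # with probability exp(-(increase in cost) / temperature)
    if next_cost < current_cost:
        return 1.0
    return math.exp(-(next_cost - current_cost) / temperature)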
keeprunning=True
while keeprunning:
    no_cities=int(input("Enter the number of cities: "))
    cost_func=input("Enter the cost function (c1, c2 or c3): ")
    MEB=int(input("Enter the maximum evaluations budget (MEB): "))
    seed1=int(input("Enter the random seed: "))
    search_strat=int(input("Enter 1 for randomized hill climbing or 2 for simulated annealing: "))
    if(search_strat==1):
        print("This is the output of randomized hill climbing - Simple Search \n", file=open("2runs.txt", "a"))
        best_path,best_cost,num_evaluations=randomized_hill_climbing(no_cities,cost_func,MEB,seed1)
    elif(search_strat==2):
        print("This is the output of simulated annealing - Sophisticated Search \n", file=open("2runs.txt", "a"))
        best_path,best_cost,num_evaluations=simulated_annealing(no_cities,cost_func,MEB,seed1)
    else:
        print("Please enter a valid option either 1 or 2 !!")
        break
Result:
The path of the best solution: [1, 8, 13, 19, 24, 30, 36, 40, 41, 42, 44, 45, 47, 49, 48, 46, 43, 39, 33, 27, 22, 18, 15, 16, 20, 23, 26, 29, 32, 35, 38, 37, 34, 31, 28, 25, 21, 17, 14, 12, 11, 10, 9, 7, 6, 3, 4, 5, 2, 0]
The cost of the best solution: 978
Value of MEB count is 2000000
********** 20.66306781768799 seconds **********
End of Exercise.
Exercise 6. Reinforcement Learning using Python
Here we attempt to teach a robot to reach its destination using the Q-learning technique.
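The Q-learning update used below has no separate learning-rate term; the Q value of a state-action pair is set directly from the immediate reward plus the discounted best value of the next state, matching the line Q[current_state, action] = M[current_state, action] + gamma * max_value in the code:

Q(s, a) \leftarrow R(s, a) + \gamma \cdot \max_{a'} Q(s', a')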
Instructor exercise overview:
Step 1: Importing the required libraries.
import numpy as np
import pylab as pl
import networkx as nx
edges = [(0, 1), (1, 5), (5, 6), (5, 4), (1, 2),
(1, 3), (9, 10), (2, 4), (0, 6), (6, 7),
(8, 9), (7, 8), (1, 7), (3, 9)]
goal = 10
G = nx.Graph()
G.add_edges_from(edges)
pos = nx.spring_layout(G)
nx.draw_networkx_nodes(G, pos)
nx.draw_networkx_edges(G, pos)
nx.draw_networkx_labels(G, pos)
pl.show()
MATRIX_SIZE = 11
M = np.matrix(np.ones(shape =(MATRIX_SIZE, MATRIX_SIZE)))
M *= -1
for point in edges:
    print(point)
    if point[1] == goal:
        M[point] = 100
    else:
        M[point] = 0
    # reverse of point
    if point[0] == goal:
        M[point[::-1]] = 100
    else:
        M[point[::-1]] = 0
# This function returns all available actions in the state given as an argument
def available_actions(state):
    current_state_row = M[state,]
    available_action = np.where(current_state_row >= 0)[1]
    return available_action

# This function chooses at random which action to be performed within the range
# of all the available actions
def sample_next_action(available_actions_range):
    next_action = int(np.random.choice(available_action,1))
    return next_action
# This function updates the Q matrix according to the path selected and the Q
# learning algorithm
def update(current_state, action, gamma):
    max_index = np.where(Q[action,] == np.max(Q[action,]))[1]
    if max_index.shape[0] > 1:
        max_index = int(np.random.choice(max_index, size = 1))
    else:
        max_index = int(max_index)
    max_value = Q[action, max_index]
    # Q learning formula
    Q[current_state, action] = M[current_state, action] + gamma * max_value
    # Normalize the trained Q matrix into a score for plotting
    if (np.max(Q) > 0):
        return (np.sum(Q / np.max(Q) * 100))
    else:
        return (0)
# Set up the Q matrix and the learning parameters,
# then update the Q matrix once for the initial state
Q = np.matrix(np.zeros([MATRIX_SIZE, MATRIX_SIZE]))
gamma = 0.75   # discount factor
initial_state = 1
available_action = available_actions(initial_state)
action = sample_next_action(available_action)

# Update Q matrix
update(initial_state,action,gamma)
#-------------------------------------------------------------------------------
# Training
scores = []
for i in range(1000):
    current_state = np.random.randint(0, int(Q.shape[0]))
    available_action = available_actions(current_state)
    action = sample_next_action(available_action)
    score = update(current_state, action, gamma)
    scores.append(score)

#-------------------------------------------------------------------------------
# Testing
# Goal state = 10; find the best sequence of steps starting from node 0
current_state = 0
steps = [current_state]
while current_state != goal:
    next_step_index = np.where(Q[current_state,] == np.max(Q[current_state,]))[1]
    if next_step_index.shape[0] > 1:
        next_step_index = int(np.random.choice(next_step_index, size = 1))
    else:
        next_step_index = int(next_step_index)
    steps.append(next_step_index)
    current_state = next_step_index
# Print selected sequence of steps
print("Most efficient path:")
print(steps)
pl.plot(scores)
pl.xlabel('No of iterations')
pl.ylabel('Reward gained')
pl.show()
#-------------------------------------------------------------------------------
# OUTPUT
#-------------------------------------------------------------------------------
#
# Trained Q matrix:
# [[ 0.    0.    0.    0.   80.    0. ]
#  [ 0.    0.    0.   64.    0.  100. ]
#  [ 0.    0.    0.   64.    0.    0. ]
#  [ 0.   80.   51.2   0.   80.    0. ]
#  [ 0.   80.   51.2   0.    0.  100. ]]
#
# Selected path:
# [2, 3, 1, 5]
Results:
End of Exercise.
Exercise 7. Simple Neural Network Concept using Python
                    Input     Output
Training data 1     0 0 1     0
Training data 2     1 1 1     1
Training data 3     1 0 1     1
Training data 4     0 1 1     0
New situation       1 0 0     ?
We are going to train the neural network so that it can predict the correct output value when provided with a new set of data.
As you can see in the table, the value of the output is always equal to the first value in the input section. Therefore, we expect the value of the output (?) to be 1.
Instructor exercise overview:
Python code:
import numpy as np
class NeuralNetwork():
    def train(self, training_inputs, training_outputs, training_iterations):
        # (only the final weight-update line of train survives in this listing;
        #  see the full sketch below)
        self.synaptic_weights += adjustments

    def think(self, inputs):
        # pass the inputs through the single neuron
        inputs = inputs.astype(float)
        output = self.sigmoid(np.dot(inputs, self.synaptic_weights))
        return output

if __name__ == "__main__":
    neural_network = NeuralNetwork()
    training_inputs = np.array([[0,0,1],
                                [1,1,1],
                                [1,0,1],
                                [0,1,1]])
    training_outputs = np.array([[0,1,1,0]]).T
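Only fragments of the NeuralNetwork class survive above. For reference, here is a minimal, self-contained sketch of the same kind of single-neuron network, assuming a sigmoid activation and the error-weighted-derivative update rule suggested by the surviving lines (synaptic_weights, adjustments, think):

import numpy as np

class NeuralNetwork():
    def __init__(self):
        np.random.seed(1)
        # 3 weights in [-1, 1) for the 3 inputs of a single neuron
        self.synaptic_weights = 2 * np.random.random((3, 1)) - 1

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(self, x):
        # derivative expressed in terms of the sigmoid output x
        return x * (1 - x)

    def train(self, training_inputs, training_outputs, training_iterations):
        for _ in range(training_iterations):
            output = self.think(training_inputs)
            error = training_outputs - output
            adjustments = np.dot(training_inputs.T,
                                 error * self.sigmoid_derivative(output))
            self.synaptic_weights += adjustments

    def think(self, inputs):
        inputs = inputs.astype(float)
        return self.sigmoid(np.dot(inputs, self.synaptic_weights))

if __name__ == "__main__":
    nn = NeuralNetwork()
    training_inputs = np.array([[0,0,1],[1,1,1],[1,0,1],[0,1,1]])
    training_outputs = np.array([[0,1,1,0]]).T
    nn.train(training_inputs, training_outputs, 10000)
    print(nn.think(np.array([1, 0, 0])))   # expected: close to 1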
Results:
End of Exercise.
Exercise 8. Compare Various Learning Strategies for MLP Classifier
Estimated time:
90.00 minutes
What this exercise is about:
Comparing various learning strategies for MLP classifier for various datasets.
What you should be able to do:
Compare various learning strategies for MLP classifier for various datasets.
Introduction:
This example visualizes some training loss curves for different stochastic learning strategies, including SGD
and Adam. Because of time-constraints, we use several small datasets, for which L-BFGS might be more
suitable. The general trend shown in these examples seems to carry over to large datasets, however.
Note that those results can be highly dependent on the value of learning_rate_init.
Instructor exercise overview:
Python code:
"""
========================================================
Compare Stochastic learning strategies for MLPClassifier
========================================================
This example visualizes some training loss curves for different stochastic
learning strategies, including SGD and Adam. Because of time-constraints, we
use several small datasets, for which L-BFGS might be more suitable. The
general trend shown in these examples seems to carry over to larger datasets,
however.
print( doc )
import warnings
from sklearn.exceptions import ConvergenceWarning
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MinMaxScaler

def plot_on_dataset(X, y, ax, name):
    # for each dataset, rescale the inputs to [0, 1] and plot the training
    # loss curve of every learning strategy
    print("\nlearning on dataset %s" % name)
    ax.set_title(name)
    X = MinMaxScaler().fit_transform(X)
    mlps = []
    if name == "digits":
        # digits is larger but converges fairly quickly
        max_iter = 15
    else:
        max_iter = 400

    for label, param in zip(labels, params):
        print("training: %s" % label)
        mlp = MLPClassifier(random_state=0, max_iter=max_iter, **param)
        # some parameter combinations will not converge (visible in the plots),
        # so ignore the convergence warnings here
        with warnings.catch_warnings():
            warnings.filterwarnings("ignore", category=ConvergenceWarning,
                                    module="sklearn")
            mlp.fit(X, y)
        mlps.append(mlp)
        print("Training set score: %f" % mlp.score(X, y))
        print("Training set loss: %f" % mlp.loss_)
    for mlp, label, args in zip(mlps, labels, plot_args):
        ax.plot(mlp.loss_curve_, label=label, **args)
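The definitions of labels, params and plot_args, and the loop that applies plot_on_dataset to each dataset, are not shown above. A condensed sketch, assuming the four datasets used by the scikit-learn example named in the docstring (iris, digits, circles, moons) and just three of the learning strategies:

import matplotlib.pyplot as plt
from sklearn import datasets

# one entry per learning strategy being compared
params = [{'solver': 'sgd', 'learning_rate': 'constant', 'momentum': 0,
           'learning_rate_init': 0.2},
          {'solver': 'sgd', 'learning_rate': 'constant', 'momentum': .9,
           'nesterovs_momentum': True, 'learning_rate_init': 0.2},
          {'solver': 'adam', 'learning_rate_init': 0.01}]
labels = ["constant learning-rate",
          "constant with Nesterov's momentum",
          "adam"]
plot_args = [{'c': 'red', 'linestyle': '-'},
             {'c': 'blue', 'linestyle': '-'},
             {'c': 'black', 'linestyle': '-'}]

fig, axes = plt.subplots(2, 2, figsize=(15, 10))
iris = datasets.load_iris()
digits = datasets.load_digits()
data_sets = [(iris.data, iris.target),
             (digits.data, digits.target),
             datasets.make_circles(noise=0.2, factor=0.5, random_state=1),
             datasets.make_moons(noise=0.3, random_state=0)]

for ax, data, name in zip(axes.ravel(), data_sets,
                          ['iris', 'digits', 'circles', 'moons']):
    plot_on_dataset(*data, ax=ax, name=name)

fig.legend(ax.get_lines(), labels, ncol=3, loc="upper center")
plt.show()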
Result:
End of Exercise.
Exercise 9. Kalman Filtering using Python
• A vector containing the most current control state (vector "u"). This is the system's guess as to what
it did to affect the situation (such as steering commands).
• A vector containing the most current measurements that can be used to calculate the state (vector
"z").
The Equations
NOTE: The equations are here for exposition and reference. You aren't expected to understand the equations on the first read.
The Kalman Filter is like a function in a programming language: it's a process of sequential equations with inputs, constants, and outputs. The filter equations are listed below; if you are using the Kalman Filter like a black box, you can ignore the intermediary variables.
State Prediction (predict where we're going to be):
    \hat{x}_n = A x_{n-1} + B u_n

Covariance Prediction (predict how much error):
    \hat{P}_n = A P_{n-1} A^T + Q

Innovation (compare reality against prediction):
    \tilde{y}_n = z_n - H \hat{x}_n

Innovation Covariance (compare real error against prediction):
    S_n = H \hat{P}_n H^T + R

Kalman Gain (moderate the prediction):
    K_n = \hat{P}_n H^T S_n^{-1}

State Update (new estimate of where we are):
    x_n = \hat{x}_n + K_n \tilde{y}_n

Covariance Update (new estimate of error):
    P_n = (I - K_n H) \hat{P}_n
Inputs:
Un = Control vector. This indicates the magnitude of any control system's or user's control on the situation.
Zn = Measurement vector. This contains the real-world measurement we received in this time step.
Outputs:
Xn = Newest estimate of the current "true" state.
Pn = Newest estimate of the average error for each part of the state.
Constants:
A = State transition matrix. Basically, multiply state by this and add control factors, and you get a prediction of the state for the next time step.
B = Control matrix. This is used to define linear equations for any control factors.
H = Observation matrix. Multiply a state vector by H to translate it to a measurement vector.
Q = Estimated process error covariance.
R = Estimated measurement error covariance. Finding precise values for Q and R is beyond the scope of this guide.
To program a Kalman Filter class, your constructor should have all the constant matrices, as well as an initial
estimate of the state (x) and error (P). The step function should have the inputs (measurement and control
vectors). In my version, you access outputs through "getter" functions.
We will take a single-variable example.
Situation:
We will attempt to measure a constant DC voltage with a noisy voltmeter. We will use the Kalman filter to filter
out the noise and converge toward the true value.
Since the voltage never changes, the state equation is very simple: x_n = x_{n-1} + w_n, where w_n is the process noise. The objective of the Kalman filter is to mitigate the influence of w_n in this equation.
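Because the state is a single scalar here, the filter matrices collapse to scalars as well (A = H = 1, B = 0), and each step of the general equations above reduces to:

\hat{x}_n = x_{n-1}
\hat{P}_n = P_{n-1} + Q
K_n = \hat{P}_n / (\hat{P}_n + R)
x_n = \hat{x}_n + K_n (z_n - \hat{x}_n)
P_n = (1 - K_n)\,\hat{P}_n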
Instructor exercise overview:
Python code:
# kalman1.py
# Note: This code is part of a larger tutorial "Kalman Filters for Undergrads"
# located at http://greg.czerniak.info/node/5.
import random
import numpy
import pylab
class KalmanFilterLinear:
  def __init__(self,_A,_B,_H,_x,_P,_Q,_R):
    self.A = _A                      # State transition matrix
    self.B = _B                      # Control matrix
    self.H = _H                      # Observation matrix
    self.current_state_estimate = _x # Initial state estimate
    self.current_prob_estimate = _P  # Initial covariance estimate
    self.Q = _Q                      # Estimated error in process
    self.R = _R                      # Estimated error in measurements
  def GetCurrentState(self):
    return self.current_state_estimate
  def Step(self,control_vector,measurement_vector):
    #---------------------------Prediction step-----------------------------
    predicted_state_estimate = self.A * self.current_state_estimate + self.B * control_vector
    predicted_prob_estimate = (self.A * self.current_prob_estimate) * numpy.transpose(self.A) + self.Q
    #--------------------------Observation step-----------------------------
    innovation = measurement_vector - self.H * predicted_state_estimate
    innovation_covariance = self.H * predicted_prob_estimate * numpy.transpose(self.H) + self.R
    #-----------------------------Update step-------------------------------
    kalman_gain = predicted_prob_estimate * numpy.transpose(self.H) * numpy.linalg.inv(innovation_covariance)
    self.current_state_estimate = predicted_state_estimate + kalman_gain * innovation
    # We need the size of the matrix so we can make an identity matrix
    size = self.current_prob_estimate.shape[0]
    # eye(n) = nxn identity matrix
    self.current_prob_estimate = (numpy.eye(size) - kalman_gain * self.H) * predicted_prob_estimate
class Voltmeter:
  def __init__(self,_truevoltage,_noiselevel):
    self.truevoltage = _truevoltage
    self.noiselevel = _noiselevel
  def GetVoltage(self):
    return self.truevoltage
  def GetVoltageWithNoise(self):
    return random.gauss(self.GetVoltage(),self.noiselevel)
numsteps = 60
A = numpy.matrix([1])
H = numpy.matrix([1])
B = numpy.matrix([0])
Q = numpy.matrix([0.00001])
R = numpy.matrix([0.1])
xhat = numpy.matrix([3])
P = numpy.matrix([1])
filter = KalmanFilterLinear(A,B,H,xhat,P,Q,R)
voltmeter = Voltmeter(1.25,0.25)
measuredvoltage = []
truevoltage = []
kalman = []
for i in range(numsteps):
    measured = voltmeter.GetVoltageWithNoise()
    measuredvoltage.append(measured)
    truevoltage.append(voltmeter.GetVoltage())
    kalman.append(filter.GetCurrentState()[0,0])
    filter.Step(numpy.matrix([0]),numpy.matrix([measured]))
pylab.plot(range(numsteps),measuredvoltage,'b',range(numsteps),truevoltage,'r',range(numsteps),kalman,'g')
pylab.xlabel('Time')
pylab.ylabel('Voltage')
pylab.legend(('measured','true voltage','kalman'))
pylab.show()
Result:
End of Exercise.
Exercise 10. Installing ROS and Other Packages, Basic Programs
rosdep update
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/
catkin_make
source devel/setup.bash
To confirm the workspace is overlaid, echo the package path (echo $ROS_PACKAGE_PATH); it should contain:
/home/youruser/catkin_ws/src:/opt/ros/melodic/share
SUMMARY
======
PARAMETERS
* /rosdistro: melodic
* /rosversion: 1.14.3
NODES
b) using rosnode:
Open up a new terminal, and let's use rosnode to see what running roscore did. Bear in mind to keep the previous terminal open, either by opening a new tab or simply minimizing it.
Key in the command rosnode list
we should see
/rosout
This showed us that there is only one node running: rosout. This is always running as it collects and logs
nodes' debugging output.
Let's try to get some info with the command rosnode info /rosout
we should see
------------------------------------------------------------------------
Node [/rosout]
Publications:
* /rosout_agg [rosgraph_msgs/Log]
Subscriptions:
* /rosout [unknown type]
Services:
* /rosout/get_loggers
* /rosout/set_logger_level
c) using rosrun:
rosrun allows you to use the package name to directly run a node within a package (without having to know
the package path).
Run the command: rosrun turtlesim turtlesim_node
We should have output in both terminals (the roscore terminal and the turtlesim terminal).
Now if we open another terminal and see the output for the command rosnode list
we will see:
/rosout
/turtlesim
Now let's use a remapping argument to change the node's name:
rosrun turtlesim turtlesim_node __name:=my_turtle
If we now ping the renamed node with rosnode ping my_turtle, we should see:
rosnode: node is [/my_turtle]
pinging /my_turtle with a timeout of 3.0s
xmlrpc reply from http://your home:42831/ time=0.482082ms
xmlrpc reply from http://your home:42831/ time=0.992060ms
xmlrpc reply from http://your home:42831/ time=0.972986ms
6. Understanding ROS Topics
a) Let's start by making sure that we have roscore running, in a new terminal:
roscore
If you left roscore running, you may get the error message that a roscore/master is already running. This is fine; only one roscore needs to be running.
When the turtle hits the walls, we also see warning output in the turtlesim terminal.
Now let's try to see the communication between the various nodes.
d) ROS topics
In a new terminal, run rosrun rqt_graph rqt_graph. We should see the node graph.
If we hover our mouse over /turtle1/cmd_vel, it will highlight the ROS nodes (here blue and green) and topics (here red). As you can see, the turtlesim_node and the turtle_teleop_key nodes are communicating on the topic named /turtle1/cmd_vel.
Now let's look at rqt_graph again. Press the refresh button in the upper-left to show the new node. As you can see, rostopic echo, shown here in red, is now also subscribed to the turtle1/cmd_vel topic.
e) rostopic list
This tells us the whole list of topics, both published and subscribed.
f) ROS Messages
Communication on topics happens by sending ROS messages between nodes. For the publisher (turtle_teleop_key) and subscriber (turtlesim_node) to communicate, the publisher and subscriber must send and receive the same type of message. This means that a topic type is defined by the message type published on it. The type of the message sent on a topic can be determined using rostopic type:
rostopic type /turtle1/cmd_vel
geometry_msgs/Twist
We can look at the details of the message using rosmsg show geometry_msgs/Twist:
g) rostopic pub
rostopic pub publishes data to a topic that is currently advertised.
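As an example from the standard turtlesim tutorial, the following publishes a single velocity message that sends the turtle moving in an arc:

rostopic pub -1 /turtle1/cmd_vel geometry_msgs/Twist -- '[2.0, 0.0, 0.0]' '[0.0, 0.0, 1.8]'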
We can also look at what is happening in rqt_graph. Press the refresh button in the upper-left. The rostopic pub node (here in red) is communicating with the rostopic echo node (here in green).
We can check the rate at which data is published using rostopic hz /turtle1/pose
We can use rqt_plot to display a scrolling time plot of the data published on topics.
End of Exercise.
Exercise 11. Testing the Simulator
Various settings, parameters, and views need to be adjusted, and the outputs and variations need to be recorded.
End of Exercise.
Exercise 12. Monitoring Robot Motion using the Simulator
We then use the keyboard to move the robot around; this is also teleoperating the robot, and the motion can be monitored in RViz.
End of Exercise.
Exercise 13. Teleoperating the Simulated Robot
Let's look at our TurtleBot3 in a different environment. This environment is often used for testing SLAM and navigation algorithms. Simultaneous localization and mapping (SLAM) concerns the problem of a robot building or updating a map of an unknown environment while simultaneously keeping track of its location in that environment.
In a new window type the command:
roslaunch turtlebot3_gazebo turtlebot3_world.launch
To move the TurtleBot with your keyboard, use this command in another terminal tab:
roslaunch turtlebot3_teleop turtlebot3_teleop_key.launch
End of Exercise.
Exercise 14. Avoiding Simulated Obstacles
We can open RViz to visualize the LaserScan topic while TurtleBot3 is moving about in the world. In a new
terminal tab type:
roslaunch turtlebot3_gazebo turtlebot3_gazebo_rviz.launch
End of Exercise.
Exercise 15. Multiple TurtleBot Simulation
Step 3. Use teleoperation to control the three turtle robots using the teleop commands.
End of Exercise.
Exercise 16. Speech-related Experiment
""" RATE"""
rate = engine.getProperty('rate') # getting details of current speaking rate
print (rate) #printing current voice rate
engine.setProperty('rate', 125) # setting up new voice rate
"""VOLUME"""
volume = engine.getProperty('volume') #getting to know current volume level
(min=0 and max=1)
print (volume) #printing current volume level
engine.setProperty('volume',1.0) # setting up volume level between 0 and 1
"""VOICE"""
voices = engine.getProperty('voices') #getting details of current voice
#engine.setProperty('voice', voices[0].id) #changing index, changes voices. o
for male
engine.setProperty('voice', voices[1].id) #changing index, changes voices. 1
for female
engine.say("Hello World!")
engine.say('My current speaking rate is ' + str(rate))
engine.runAndWait()
engine.stop()
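pyttsx3 can also render the speech to an audio file instead of the speakers. A minimal sketch (the output file name here is only an illustration):

engine.save_to_file('Hello World!', 'hello.mp3')   # queue text to be rendered to a file
engine.runAndWait()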
End of Exercise.