Tutorial AWR Robot
Resources Links
RobotName: Adeept_AWR
RobotURL: https://github.com/adeept/Adeept_AWR
RobotGit: https://github.com/adeept/Adeept_AWR.git
[Official Raspberry Pi website] https://www.raspberrypi.org/downloads/
[Official website] https://www.adeept.com/
[GitHub] https://github.com/adeept/Adeept_AWR
[Image file and Documentation for structure assembly] https://www.adeept.com/learn/detail-35.html
Components List
Acrylic Plates
The acrylic plates are fragile, so handle them carefully during assembly to avoid breaking them. Each plate is
covered with a layer of protective film, which you need to peel off first. Some holes in the acrylic may contain
residue, so clean them out before use.
Machinery Parts
Electronic Parts
Motor x4
Raspberry Pi Camera x1
Servo x1
Wheel x4
4-Pin Wire x1
5-Pin Wire x1
3-Pin Wire x2
Tools
Hex Wrench 2.0mm x1
Cross Screwdriver x1
Cross Socket Wrench x1
Winding Pipe x1
Ribbon x1
Self-prepared Parts
Requirements for the 18650 lithium batteries: 18650 lithium batteries are required for normal operation of the
robot, and their current output must be above 4A.
Contents
1. Premise
1.1 STEAM and Raspberry Pi
1.2 About The Documentation
2. Raspberry Pi System Installation and Development Environment Establishment
2.1 Install An Operating System for The Raspberry Pi
2.1.1 Method A: Write 'Raspbian' to The SD Card by Raspberry Pi Imager
2.1.2 Method B: Download The Image File Raspbian and Write It to The SD Card Manually
2.1.3 Method C: Manually Download The Image File Provided by Us and Write It to The SD Card (Not Recommended)
2.2 Enable SSH Server of Raspberry Pi
2.2.1 Method A: Enable SSH with Peripherals
2.2.2 Method B: Enable SSH without Peripherals
2.3 Configure WiFi on Raspberry Pi
2.3.1 Method A: WiFi Connection with Peripherals
2.3.2 Method B: WiFi Connection without Peripherals
3 Log In to The Raspberry Pi and Install The App
3.1 Log into Raspberry Pi (Windows 10)
3.2 Log into Raspberry Pi (Linux or Mac OS)
3.3 Log into Raspberry Pi (Windows 8 or Previous Version)
3.4 Download Program of The Raspberry Pi Robot
3.5 Install Corresponding Dependent Libraries
3.6 Run the Raspberry Pi Robot's Program
4 Assembly and Precautions
4.1 Structure Assembly
4.2 Tips for Structural Assemblage
4.3 Tips for Power Provision
5 Controlling Robot via WEB App
6 Common Problems and Solutions (Q&A)
7 Set The Program to Start Automatically
7.1 Set The Specified Program to Run Automatically at Boot
7.2 Change The Program That Starts Automatically
8 Remote Operation of Raspberry Pi Via MobaXterm
9 How to Control WS2812 RGB LED
10 How to Control The Servo
10.1 Control The Steering Gear to Rotate to A Certain Angle
10.2 Control The Slow Motion of The Steering Gear
10.3 Non-blocking Control
11 How to Control DC Motor
12 Ultrasonic Module
13 Line Tracking
14 Make A Police Light or Breathing Light
1. Premise
1.1 STEAM and Raspberry Pi
STEAM stands for Science, Technology, Engineering, Arts and Mathematics. It is a transdisciplinary,
practice-oriented approach to education. As a board designed for computer-programming education, the
Raspberry Pi has many advantages over other robot development boards, which is why it is used for the function
control of this robot.
1.2 About The Documentation
This documentation is a software installation and operation guide for the Python robot product. It describes the
whole process of building the robot project with Python and a Raspberry Pi from scratch, along with some
precautions. We hope it helps you get started with the Raspberry Pi robot in Python and make more creations of
your own.
The exact process varies with each user's situation; you can refer to the following flow:
2. Raspberry Pi System Installation and Development Environment Establishment
2.1 Install An Operating System for The Raspberry Pi
2.1.1 Method A: Write 'Raspbian' to The SD Card by Raspberry Pi Imager
Raspberry Pi Imager is a tool for writing operating-system images to an SD card, developed by the Raspberry Pi
Foundation. It comes in versions for different operating systems and is quite easy to use: all you need to do is
choose the operating system and the SD card, and Raspberry Pi Imager will download the corresponding image
file and write it to the card.
Step-by-Step Overview
1. Prepare an SD card (16GB or larger) and an SD card reader.
2. Download `Raspberry Pi Imager` from the official website [Official Raspberry Pi website].
3. Insert the SD card into the card reader and connect the reader to your computer.
4. Write the operating system for the Raspberry Pi to the SD card with `Raspberry Pi Imager` (`Raspbian Full -
A port of Debian with desktop and recommended software`).
5. Leave the SD card connected after writing is completed; we'll use it for configuring SSH and the WiFi
connection later.
Detailed Steps:
●Open a web browser on your computer, go to the Raspberry Pi website [Official Raspberry Pi website], find and
download the Raspberry Pi Imager for your computer OS, or click on the links above for the corresponding
system to directly download and install.
●Insert the SD card into the card reader and connect the card reader to your computer.
●Run the Raspberry Pi Imager, select CHOOSE OS -> Raspbian (other) -> Raspbian Full - A port of Debian
with desktop and recommended software.
●Click on CHOOSE SD CARD to select the SD card to write Raspbian Full to; note that writing the image will
automatically delete all files on the SD card, if any.
●Click on WRITE and wait for the writing to finish. The Raspberry Pi Imager needs to download the Raspbian
image file during the process; if the download is too slow, you can download the file manually following the steps
in 2.1.2.
●Do not remove the SD card after writing is completed; we'll use it for configuring SSH and the WiFi connection
later. If you remove the card, insert it into the Raspberry Pi, and boot it up, WiFi configuration without any
peripherals may fail in the following process.
2.1.2 Method B: Download The Image File Raspbian and Write It to The SD Card Manually
●Since the image file is downloaded by Raspberry Pi Imager in 2.1.1, it can take a long time on a slow network
connection. You may instead manually download the Raspbian image file and write it to the SD card with the
Raspberry Pi Imager.
Step-by-Step Overview
1. Prepare an SD card (16GB or larger) and an SD card reader.
2. Download `Raspberry Pi Imager` from the official website [Official Raspberry Pi Website].
3. Download the image file `Raspbian`:
- Torrent file:
- Zip file: [Raspbian - Raspbian Buster with desktop and recommended software]
4. Unzip the file; note that the path of the extracted `.img` file should contain only English characters, with no
special characters allowed.
5. Write the downloaded `Raspbian` image file to the SD card with `Raspberry Pi Imager`.
6. Leave the SD card connected after writing is completed; we'll use it for configuring SSH and the WiFi
connection later.
Detailed Steps:
●Open a web browser on your computer, go to the Raspberry Pi website [Official Raspberry Pi website], find
and download the Raspberry Pi Imager for your computer OS, or click on the links above for the corresponding
system to directly download and install.
●On the Raspberry Pi website [Official Raspberry Pi website], navigate through Downloads -> Raspbian ->
Raspbian Buster with desktop and recommended software, and click on the torrent or zip file to download. Unzip
the file after downloading; note that the path of the extracted .img file should contain only English characters,
with no special characters allowed, otherwise Raspberry Pi Imager may not open the .img file. It's recommended
to save the .img file to the root directory of the C:\ or D:\ drive, but do not save the .img on the SD card.
●Insert the SD card into the card reader and connect the card reader to your computer.
●Run the Raspberry Pi Imager, select CHOOSE OS, then Use custom to find the extracted .img, and click
Open.
●Select CHOOSE SD CARD to choose the SD card to write Raspbian to; note that writing the image will
automatically delete all files on the SD card, if any.
●Click on WRITE and wait for the writing to finish.
●Do not remove the SD card after writing is completed; we'll use it for configuring SSH and the WiFi connection
later. If you remove the card, insert it into the Raspberry Pi, and boot it up, WiFi configuration without any
peripherals may fail in the following process.
2.1.3 Method C: Manually Download The Image File Provided by Us and Write It to The SD Card (Not
Recommended)
●The Raspbian image file downloaded in 2.1.1 and 2.1.2 is the official release with some preinstalled software.
To operate the robot, you need many dependent libraries. Although we provide a simple script to install them
(see details later), installation can fail if a library is not the latest version. Therefore, although we also provide a
download of our own Raspbian image file, our image and its dependent libraries may not be the most up-to-date
versions. Please use this method only when you run into the most troublesome situations.
Step-by-Step Overview
1. Prepare an SD card (16GB or larger) and an SD card reader.
2. Download `Raspberry Pi Imager` from the official website [Official Raspberry Pi website].
3. Download and unzip the image file we provide [Image file for the Adeept_AWR Robot].
4. Write the extracted `.img` file to the SD card with `Raspberry Pi Imager`.
5. Leave the SD card connected after writing is completed; we'll use it for configuring the WiFi connection later.
Detailed Steps:
●Open a web browser on your computer, go to the Raspberry Pi website [Official Raspberry Pi website], find
and download the Raspberry Pi Imager for your computer OS, or click on the links above for the corresponding
system to directly download and install.
●Go to our [official website], find and download the image file [Image file for the Adeept_AWR Robot]. Unzip
the file; note that the path of the extracted .img file should contain only English characters, with no special
characters allowed, otherwise Raspberry Pi Imager may not open the .img file. It's recommended to save the
.img file to the root directory of the C:\ or D:\ drive, but do not save the .img on the SD card.
●Insert the SD card into the card reader and connect the card reader to your computer.
●Run the Raspberry Pi Imager, select CHOOSE OS, then Use custom to find the extracted .img, and click
Open.
●Select CHOOSE SD CARD to choose the SD card to write the image to; note that writing the image will
automatically delete all files on the SD card, if any.
●Click on WRITE and wait for the writing to finish.
●Do not remove the SD card after writing is completed; we'll use it for configuring the WiFi connection later. If
you remove the card, insert it into the Raspberry Pi, and boot it up, WiFi configuration without any peripherals
may fail in the following process.
2.2 Enable SSH Server of Raspberry Pi
●With an SSH (Secure Shell) server, you can use the command line of the Raspberry Pi remotely from another
device. In subsequent operations, and whenever you use the Raspberry Pi, you don't have to connect a mouse,
keyboard, or monitor to it; you can simply control it from a computer on the same LAN.
●As of the November 2016 release, Raspbian has the SSH server disabled by default. You will have to enable it
manually.
●The methods for enabling SSH in this documentation are based on the SSH (Secure Shell) page of the official
Raspberry Pi website.
●If you used 2.1.3 (manually download the image file we provide and write it to the SD card) to write the
Raspberry Pi's operating system to the SD card, you do not need to refer to this section to enable SSH; it is
already enabled in our image.
2.2.1 Method A: Enable SSH with Peripherals
●If you've connected a mouse, keyboard, and monitor to the Raspberry Pi, follow these steps to enable SSH.
1. Remove the SD card from the computer, insert it into the Raspberry Pi, connect a mouse, keyboard, and
monitor to the Raspberry Pi, and boot it up.
2. Open the main menu and select Preferences -> Raspberry Pi Configuration.
3. Select the Interfaces tab.
4. Set SSH to Enabled.
5. Click on OK.
●If you used 2.1.3 (manually download the image file we provide and write it to the SD card) to write the
Raspberry Pi's operating system to the SD card, you do not need to refer to this section to enable SSH; it is
already enabled in our image.
2.2.2 Method B: Enable SSH without Peripherals
●If you haven't connected any monitor to the Raspberry Pi, follow these steps to enable SSH.
1. Do not remove the SD card after `Raspberry Pi Imager` writes the image file.
2. Create a file named `ssh` in any directory, with no extension. You may create an `ssh.txt` and delete the
`.txt` (make sure the box Hide extensions for known file types under Folder Options is unchecked). Then you
have an `ssh` file without an extension.
3. Copy the `ssh` file and paste it into the root directory of the SD card. The Raspberry Pi will automatically
search for the `ssh` file when booting and enable SSH if the file is found. You only need to copy it once,
because the Raspberry Pi will then automatically enable SSH at every boot.
4. Do not remove the SD card if you still need to configure WiFi.
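On Linux or macOS, the steps above reduce to creating one empty file on the card's boot partition. The mount point below is an assumption; replace it with wherever your system actually mounts the card:

```shell
# Assumed mount point of the SD card's boot partition -- adjust for your system
# (e.g. /media/$USER/boot on Linux, /Volumes/boot on macOS).
BOOT="/media/$USER/boot"

# An empty file named exactly `ssh` (no extension) enables the SSH server at boot.
touch "$BOOT/ssh"
ls -l "$BOOT/ssh"
```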
2.3 Configure WiFi on Raspberry Pi
2.3.1 Method A: WiFi Connection with Peripherals
●If you've connected a mouse, keyboard, and monitor to the Raspberry Pi, follow these steps to configure
WiFi.
1. Remove the SD card from the computer, insert it into the Raspberry Pi, connect a mouse, keyboard, and
monitor to the Raspberry Pi, and boot it up.
2. Click the WiFi icon in the top right corner of the screen, then find and select the WiFi network to connect to.
3. Type in the password for the WiFi network and connect.
4. After a successful connection, the WiFi network is saved and the Raspberry Pi will auto-connect at the next
boot, so you don't need to connect peripherals every time.
2.3.2 Method B: WiFi Connection without Peripherals
●If you haven't connected any monitor to the Raspberry Pi, follow these steps to configure WiFi.
●This method is based on the [official documentation].
1. Do not remove the SD card after `Raspberry Pi Imager` has written the image file. (This method works when
the Raspbian image file has just been written to the SD card; if you've already plugged the SD card into the
Raspberry Pi and booted it after the image file was written, the configuration may fail.)
2. Create a file named `wpa_supplicant.conf` anywhere on your computer.
3. Open the `wpa_supplicant.conf` file with a text editor (such as Notepad) and enter the following code:
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=Insert country code here
network={
ssid="Name of your WiFi"
psk="Password for your WiFi"
}
4. Type in your own information for `Insert country code here`, `Name of your WiFi`, and `Password for
your WiFi`. Pay attention to the capitalization. Refer to the example below:
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=US
network={
ssid="MyName"
psk="12345678"
}
5. Save and exit. Copy the `wpa_supplicant.conf` to the root directory of the SD card.
6. If you've already copied the file `ssh` to the SD card as instructed in **2.2**, then both the WiFi and
SSH settings without peripherals are done. You may remove the SD card, insert it into the Raspberry Pi, and boot
it up.
7. For more about the file `wpa_supplicant.conf`, refer to the official documentation [WIRELESS-CLI]
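The template-filling in steps 3-4 can also be sketched as a small Python helper. This is our own convenience function, not part of the robot's code; it simply reproduces the file shown above with your values substituted:

```python
def make_wpa_conf(country, ssid, psk):
    """Return wpa_supplicant.conf text with the template fields filled in."""
    return (
        "ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev\n"
        "update_config=1\n"
        f"country={country}\n"
        "network={\n"
        f'    ssid="{ssid}"\n'
        f'    psk="{psk}"\n'
        "}\n"
    )

# Write the file anywhere on your computer, then copy it to the SD card root.
with open("wpa_supplicant.conf", "w") as f:
    f.write(make_wpa_conf("US", "MyName", "12345678"))
```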
●If you followed the steps in 2.2.1 and 2.3.1 for the SSH and WiFi configuration, you may remove the
peripherals now and use SSH to remotely control the Raspberry Pi later on.
●If you followed the steps in 2.2.2 and 2.3.2, you may now insert the SD card into the Raspberry Pi and boot it
up. The Raspberry Pi will boot and connect to WiFi automatically when powered on, with no need for
peripherals.
●If you used the steps in 2.1.3 to write the SD card, you only need to refer to 2.3.1 or 2.3.2 to configure the
WiFi. You can then install the SD card in the Raspberry Pi, and the robot program will run automatically, so you
can skip some content and refer to chapter 5 on using the WEB application to control the robot once the
structure is assembled.
●Some steps mentioned below are based on the official Raspberry Pi documentation SSH.
●For power supply of the Raspberry Pi, refer to the official documentation Power supply.
●The Motor HAT board of the Adeept Raspberry Pi Robot can supply power to the Raspberry Pi via the GPIO
port. However, since installing software on the Raspberry Pi can take a long time, it's not recommended to
power it from the batteries during this process. You may skip installing the Motor HAT board and camera during
software installation, but make sure the driver board and camera are connected to the Raspberry Pi before
running the installed software, or a program error will occur.
●When prompted, type in the default password of the Raspberry Pi, `raspberry` (pay attention to
capitalization). Nothing changes on the screen while you type, but that doesn't mean the input isn't being
received. Press 'Enter' when you finish typing.
●You've now logged into the Raspberry Pi.
●For lower versions of Windows, SSH is not built in; you may log into the Raspberry Pi by referring to the
official Raspberry Pi documentation [SSH using Windows].
●Before connecting to the Raspberry Pi via SSH, you need to know its IP address. Check the management
interface of your router, or download the app `Network Scanner` and search for a device named `RASPBERRY`
or `Raspberry Pi Foundation` to get the IP address.
●For other methods of obtaining the IP address of Raspberry Pi, refer to the official documentation [IP
Address]
●You may need to download the `PuTTY` version for your OS and log into Raspberry Pi with the tool.
[Click here to download PuTTY]
●Run `PuTTY`, type in the IP address of Raspberry Pi for `Host Name`, and click ‘Open’.
●If a prompt of `Network error: Connection timed out` appears, possibly you've entered an incorrect IP
address.
●When the connection works you will see the security warning shown below. You can safely ignore it,
and click the 'Yes' button. You will only see this warning the first time PuTTY connects to a Raspberry Pi that it
has not seen before.
●You will now see the usual login prompt. Log in with the same username and password you would
use on the Pi itself. The default login for Raspbian is `pi` with the password `raspberry`.
● You should now have the Raspberry Pi prompt which will be identical to the one found on the
Raspberry Pi itself.
●If it fails to load the page, log into the Raspberry Pi via SSH and type in the command below to end the
program that auto-runs at boot, in order to release resources; otherwise issues such as camera initialization
failure or occupied ports may occur.
sudo killall python3
●Type in the command below to run `webServer.py`:
sudo python3 Adeept_AWR/server/webServer.py
●Check whether there's any error and solve them based on instructions in the Q&A section below.
(The type of servo and the installation angle of the rocker arm are for reference only. Please refer to the actual
product and assembly part.)
Checking whether the servo has returned to the center position
You can test whether the servo has returned to the center position by gently pulling the rocker arm (don't pull
too hard, to prevent damage to the servo). A servo that has automatically returned to center cannot be pulled.
If your servo does not return to the center position automatically, you can manually run the server.py file and
then try connecting the servo again.
●Preparations before Assembly
Connect the Adeept Ultrasonic Module with 4-Pin wire.
1. Connect the 18650 Battery Holder Set to the Adeept Motor HAT.
2. Put two 18650 batteries in 18650 Battery Holder Set according to the following method.
4. Before switching on, you need to insert the configured SD card into the Raspberry Pi. For details, please
refer to the third chapter of this document. Otherwise, the servo will not rotate to the middle position after
booting; if no SD card is inserted, the servo needs to be rotated to the middle position manually.
After debugging, remove the servo and battery holder, and take the 18650 batteries out of the holder set. Do
not rotate the servo shaft before the rocker arm is fixed to the servo; otherwise, you'll need to re-center the
servo.
●Body parts.
1. Fix Raspberry Pi Camera on Acrylic Plates
Assemble the following components
11. Connect the Adeept Ultrasonic Module, Car Light, 3 Tracking Module Set and motor as shown below before
assembling the body part.
Assemble the following components
●You can also use a power lithium battery to power the Motor HAT. The Motor HAT supports supply voltages
below 15V.
●You can use a USB cable to power the Motor HAT when installing the servo's rocker arm during structural
assembly. Once the robot software is installed, the Raspberry Pi will, on startup, instruct the Motor HAT to
output the neutral-position signal on all servo ports. At this point you can connect a servo to any servo port, the
servo gear will turn to the neutral position, and you can then install the servo's rocker arm at the specified
angle. After the rocker arm is installed, the servo can be disconnected from the Motor HAT. When you need to
install the rocker arm of the second servo, just connect the second servo to any servo port on the driver board.
·`MOTION GET`: Motion-detection function based on OpenCV. When objects move within the camera's view,
the program will circle the moving part in the `Video` window, and the LED lights on the robot will change
accordingly.
·`AUTO MATIC`: Obstacle-avoidance function based on ultrasonic ranging. When the ultrasonic module on the
robot detects an obstacle, the robot will automatically turn left, taking a step backward before turning if it's too
close to the obstacle.
·`SPEECH`: Based on the multi-threaded WS2812 LED control function; the robot's WS2812 LEDs blink in
alternating colors.
●The `FC Control` window controls the robot's color-lock function:
·`START`: Enable or disable the color searching and tracking function.
·`COLOR`: Select the color to track.
·When this function is on, the robot will automatically lock onto one particular color in the camera view. By
default it tracks bright-yellow objects; you can change the color as you wish. When an object is locked, the LED
on the robot will turn orange. The robot's camera not only tilts up and down, but can also lock onto objects of a
given color horizontally.
●`PWM INIT SET`: Used to adjust the initial angle of each of the robot's servo ports. If you find that the angle of
one of your robot's servos is not right, you can use this function to fine-tune it to the correct angle.
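As a rough illustration of the color-lock idea described above (not the robot's actual implementation, which uses OpenCV), the sketch below masks the pixels that fall inside a color range and returns the centroid the camera would steer toward:

```python
def find_color_centroid(pixels, lower, upper):
    """pixels: a grid (list of rows) of (r, g, b) tuples.
    Return the (row, col) centroid of pixels whose channels all fall
    inside [lower, upper], or None if no pixel matches."""
    hits = [(y, x)
            for y, row in enumerate(pixels)
            for x, px in enumerate(row)
            if all(lo <= c <= hi for c, lo, hi in zip(px, lower, upper))]
    if not hits:
        return None
    ys, xs = zip(*hits)
    return sum(ys) // len(ys), sum(xs) // len(xs)

# A synthetic 100x100 frame with a bright-yellow patch to "lock onto".
frame = [[(0, 0, 0)] * 100 for _ in range(100)]
for y in range(40, 50):
    for x in range(60, 70):
        frame[y][x] = (255, 255, 0)
print(find_color_centroid(frame, (200, 200, 0), (255, 255, 60)))
```

In the real robot, the centroid's horizontal offset from the frame center would drive the pan servo, and the vertical offset the tilt servo.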
●The servo doesn't return to the central position when connected to the driver board.
In general, the Raspberry Pi auto-runs `webServer.py` when booting, and `webServer.py` controls the servo
ports to send the signal for rotating to the central position. When assembling a servo, you can connect it to any
servo port at any time. After connecting the servo to the port, its gears rotate to the central position; assemble
the rocker arm onto the servo, disconnect the servo from the port, and insert more servos to repeat the
rocker-arm assembly (all servos will be in the central position).
When the servo is powered on, try moving the rocker arm. If it can't be moved, the servo program is working;
otherwise there's an error in the servo program. Run the script `[RobotName]/initPosServos.py` (replace
`[RobotName]` with the folder name of your robot's program) to make the servos rotate to the central position.
When booting (which may take 30-50s), it takes a while for the Raspberry Pi to make the PCA9685 set the
central-position signal on all servo ports.
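The central-position signal the PCA9685 sends can be understood with a small calculation. The sketch below converts a servo angle to a 12-bit PCA9685 tick count; the 0.5-2.5 ms pulse range at 50 Hz is a common default assumed for illustration (check your servo's datasheet), not a value taken from the robot's code:

```python
def angle_to_ticks(angle, freq_hz=50, min_ms=0.5, max_ms=2.5):
    """Convert a servo angle (0-180 degrees) to a 12-bit PCA9685 tick count."""
    if not 0 <= angle <= 180:
        raise ValueError("angle must be within 0-180 degrees")
    pulse_ms = min_ms + (max_ms - min_ms) * angle / 180.0
    period_ms = 1000.0 / freq_hz  # 20 ms per cycle at 50 Hz
    return round(pulse_ms / period_ms * 4096)

print(angle_to_ticks(90))  # central position: a 1.5 ms pulse
```

On the real board, this tick count is what a PCA9685 driver call would be given for the servo's channel.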
●A `no module named cv2` error occurs when I manually run `server.py` or `webServer.py`.
OpenCV is not installed correctly. Type in the command `sudo pip3 install opencv-contrib-python` on the
Raspberry Pi to manually install OpenCV.
●When using a computer to copy `ssh` and `wpa_supplicant.conf` to the SD card, it prompts that there is no
SD card.
If this happens, unplug the card reader and connect it to the computer again.
●SSH can't connect, with the error WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
Enter the following in the command line, substituting the Raspberry Pi's IP address, and press Enter:
ssh-keygen -R <the Raspberry Pi's IP address>
For example:
ssh-keygen -R 192.168.3.157
Then you can SSH to the Raspberry Pi again.
●The Raspberry Pi automatically restarts after booting, or restarts as soon as the robot starts to move.
If your robot restarts automatically after powering on, or disconnects and restarts when it starts to move after a
normal power-on, it is likely that your power supply does not output enough current. The robot automatically
runs a program at startup that moves all servos to their neutral positions, and the voltage drop caused by this
process makes the Raspberry Pi restart.
In our tests with a 7.4V power supply, the robot's peak current is about 3.75A, so you need to use batteries that
support a 4A output.
●The direction of servo movement is incorrect.
Because servos come from different batches, their actual direction of motion may be opposite when given the
same angle-change command. We have provided an interface in the program to adjust the servo direction.
Open RPIservo.py and find the array sc_direction in the ServoCtrl class. If the direction of the servo on port 3 is
reversed, change the fourth 1 to -1 (array indices start from zero, so port 3 corresponds to the fourth element).
Before modification:
self.sc_direction = [1,1,1,1, 1,1,1,1, 1,1,1,1, 1,1,1,1]
After modification:
self.sc_direction = [1,1,1,-1, 1,1,1,1, 1,1,1,1, 1,1,1,1]
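The effect of `sc_direction` can be sketched as a sign applied to each angle change. This is a simplified illustration of the idea, not the actual `RPIservo.py` logic:

```python
# A -1 in a slot reverses that servo port's direction of travel.
sc_direction = [1, 1, 1, -1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]

def target_position(port, current, delta, directions=sc_direction):
    """Apply an angle change `delta` to `current`, flipped by the port's sign."""
    return current + directions[port] * delta

print(target_position(0, 300, 20))  # a normal port moves one way
print(target_position(3, 300, 20))  # reversed port 3 moves the other way
```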
●For example, if we want to replace webServer.py with server.py, we only need to edit the following:
Replace
sudo python3 [RobotName]/server/webServer.py
with
sudo python3 [RobotName]/server/server.py
●Save and exit so that the robot will automatically run server.py instead of webServer.py the next time
the robot is turned on.
●server.py is a socket server used with the Python GUI. We do not recommend it for novices, because you
need to manually install many dependent libraries on the controlling computer for the GUI to communicate with
it normally. It is recommended to use the WEB application to control the Raspberry Pi robot.
8 Remote Operation of Raspberry Pi Via MobaXterm
●To make daily use of the Raspberry Pi more convenient, we usually do not connect peripherals such as a
mouse, keyboard, or monitor to it. Since our Raspberry Pi is installed inside the robot, controlling it with
peripherals would seriously reduce the efficiency of programming and testing.
●There are many ways to program the Raspberry Pi. For example, you can use the methods in 3.x to log into
the Raspberry Pi without a third-party tool, create files on it, and perform almost all operations over an SSH
terminal connection. For many people, however, writing a lot of code in the terminal is a frustrating experience.
This chapter introduces a method that makes it easy to transfer files to the Raspberry Pi and edit programs on
it directly.
●This method requires the third-party software MobaXterm [Website address].
●MobaXterm is a terminal tool that can be used to remotely control the Raspberry Pi; remote control is
available once SSH is enabled. For the Raspberry Pi's method of enabling SSH and automatically connecting
to WiFi, please refer to sections 2.2 and 2.3.
●Download and install MobaXterm.
●To obtain the IP address of the Raspberry Pi, you can refer to the methods in 3.x and log into the Raspberry
Pi.
●To run MobaXterm, first create a new session: click Session in the upper-left corner, click SSH in the pop-up
window, fill in the IP address of the Raspberry Pi after Remote host, and click OK. The default account name of
the Raspberry Pi is `pi`, and the default password is `raspberry`. The password does not appear on the screen
as you type, and no * characters are shown, but that doesn't mean it isn't being entered; press Enter when you
finish. After you log in, MobaXterm will ask whether to save the password; choose as you prefer.
●After logging in with the correct user name and password, you can change them later according to your own
needs.
●After a successful login, MobaXterm automatically saves the session. The next time you connect, you only
need to double-click the IP address on the left to connect to the Raspberry Pi again. If the user name and
password were not saved, you'll need to enter them, and if the Raspberry Pi's IP address has changed, you'll
need to create a new session.
● After a successful login, the left column is replaced with a file transfer system, which allows you to
interact with the system inside the Raspberry Pi. If you want to return to session selection, just click Sessions.
●Programs you write on other devices can be transferred to the Raspberry Pi by a simple drag and drop,
and then the Raspberry Pi can be controlled in the terminal to execute the program; files on the Raspberry Pi
can be dragged out to other devices in the same way.
●If you want to use another IDE to edit files on the Raspberry Pi, find the file you want to edit in the file
transfer system on the left side of MobaXterm, right-click on the file and select your IDE. You can then use
your favorite IDE on another device to edit the Raspberry Pi's file; press CTRL+S to save the file when editing
is complete.
●However, when you use MobaXterm's file transfer system to edit files on the
Raspberry Pi, you need to pay attention to permissions: the file transfer system does not
have root permissions, so a permission denied error may prevent the file from being saved after editing.
In that case, use the following command on the Raspberry Pi to give yourself write permission on the file
(replace filename with the file you want to edit):
sudo chmod 777 filename
●You can learn more about Linux permissions through the Make Tech Easier article (see the article link).
●The WS2812 LED light is a commonly used module on our robot products. There are three WS2812 LEDs on
each module. Pay attention to the direction of the signal line when connecting: the signal line led out from the
Raspberry Pi must be connected to the IN end of the LED module, and when the
next WS2812 LED module needs to be connected, its signal line is led out from the OUT end of the previous
WS2812 module and connected to the IN end of the next one.
●When using the Raspberry Pi with the driver board Motor HAT installed, the WS2812 LED module can
be connected to the WS2812 interface on the Motor HAT using a 3pin cable.
●We use the third-party library rpi_ws281x to control the WS2812 LED lights; you can learn more about this
project on GitHub.
●If you connect the WS2812 LED module to Motor HAT's WS2812 interface, the signal line is connected
to the Raspberry Pi's GPIO 12. For information about the Raspberry Pi's pin numbering, refer to the official
GPIO documentation.
●Use the following command to install rpi_ws281x for the Raspberry Pi. Since the Raspberry Pi has two
versions of Python built in, the Python3 code is used as an example, so pip3 is adopted to install the library.
pip3 install rpi-ws281x
● Next, we will explain the program. This program is written in the Raspberry Pi and executed in the
Raspberry Pi. For the specific method, you can refer to 8 Programming in the Raspberry Pi.
●Import dependencies and initialize the LED strip:
import time
from rpi_ws281x import * # Provides Adafruit_NeoPixel and Color()
class LED:
    def __init__(self):
        self.LED_COUNT = 16 # Set to the total number of WS2812 LEDs on the robot product
        self.LED_PIN = 12 # Set to the input pin number of the LED group
        self.LED_FREQ_HZ = 800000
        self.LED_DMA = 10
        self.LED_BRIGHTNESS = 255
        self.LED_INVERT = False
        self.LED_CHANNEL = 0
        # Create the NeoPixel object and initialize the library
        self.strip = Adafruit_NeoPixel(self.LED_COUNT, self.LED_PIN,
            self.LED_FREQ_HZ, self.LED_DMA, self.LED_INVERT,
            self.LED_BRIGHTNESS, self.LED_CHANNEL)
        self.strip.begin()
    def colorWipe(self, R, G, B): # This function is used to change the color of the LED lights
        color = Color(R, G, B)
        for i in range(self.strip.numPixels()): # Only one LED color can be set at a time, so we loop
            self.strip.setPixelColor(i, color)
        self.strip.show() # The color only changes after calling the show method
●Instantiate the object and call the method. The function colorWipe() takes three
parameters, R, G, and B, which correspond to the brightness of the three primary color channels red, green,
and blue. The value range is 0-255; the larger the value, the higher the brightness of the corresponding color
channel. If the values of the three color channels are equal, white light is emitted. A specific example is as
follows:
if __name__ == '__main__':
    LED = LED()
    try:
        while 1:
            LED.colorWipe(255, 0, 0) # All the lights turn red
            time.sleep(1)
            LED.colorWipe(0, 255, 0) # All lights turn green
            time.sleep(1)
            LED.colorWipe(0, 0, 255) # All lights turn blue
            time.sleep(1)
    except KeyboardInterrupt: # Press CTRL+C to exit
        LED.colorWipe(0, 0, 0) # Turn off all lights
●The above code controls all WS2812 lights to cycle through the three colors; press CTRL+C to exit
the program.
●If you want to control the color of a single lamp, you can use the following code, where i is the serial
number of the lamp: the first lamp counted from the driver board is number 0, the second
lamp is number 1, and so on. R, G, B are the brightness values of the three color channels:
LED.strip.setPixelColor(i, Color(R, G, B))
LED.strip.show()
●Note: You must use the Color() method to pack the RGB values before passing them to setPixelColor().
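What Color() does is pack the three 8-bit channel values into one 24-bit integer (0xRRGGBB), which is the format setPixelColor() expects. The following sketch uses a hypothetical helper name, pack_color, purely for illustration; in real code use the library's own Color():

```python
def pack_color(r, g, b):
    """Pack 8-bit R, G, B values into a single 24-bit integer (0xRRGGBB),
    the format that rpi_ws281x's setPixelColor() expects."""
    return (r << 16) | (g << 8) | b

print(pack_color(255, 0, 0))      # 16711680 == 0xFF0000, pure red
print(pack_color(255, 255, 255))  # 16777215 == 0xFFFFFF, white
```

This is why setPixelColor(i, 255) would be interpreted as a dim blue, not red: the raw integer 255 only occupies the blue byte.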
●Since a servo's rotation angle can be controlled with a PWM signal, it is a
commonly used module on robot products: walking robots, robotic arms and gimbals are all driven by
servos. Our Raspberry Pi driver board Motor HAT has a dedicated PCA9685 chip for controlling servos.
The Raspberry Pi uses I2C to communicate with the PCA9685, so you only need to install the Motor HAT
on the Raspberry Pi and it is connected to the PCA9685; no other wires are needed.
●The Raspberry Pi uses Python code to control the servo and requires the third-party library
Adafruit_PCA9685 (see the Adafruit-PCA9685 project page). If you have run the installation script of the
robot software, you do not have to install it manually again; if you have not run the installation
script, use the following command to install Adafruit_PCA9685 for Python3 on the Raspberry Pi:
sudo pip3 install adafruit-pca9685
●After the installation, you can use Python3 code on the Raspberry Pi to control the servo:
import time
import Adafruit_PCA9685
pwm = Adafruit_PCA9685.PCA9685() # Instantiate the PCA9685 driver
pwm.set_pwm_freq(50) # Set the PWM frequency to 50Hz
while 1: # Make the servo connected to the No. 3 servo port on the Motor HAT driver board reciprocate
    pwm.set_pwm(3, 0, 300)
    time.sleep(1)
    pwm.set_pwm(3, 0, 400)
    time.sleep(1)
●In the above code, set_pwm_freq(50) sets the PWM frequency to 50Hz. This setting depends
on the model of the servo: the servos used by our robot products are controlled by a 50Hz PWM signal.
If you use other servos, set this value by referring to the specific servo's documentation.
●The method pwm.set_pwm(3, 0, 300) controls the rotation of a servo to a certain position.
Here 3 is the servo port number, which corresponds to the number printed on the Motor HAT driver board;
note that when the servo is connected to the driver board, do not reverse the ground wire, VCC and signal
wire: brown to black, red to red, yellow to yellow. The 0 is the offset used to correct the servo's deviation;
our program does not use this parameter for correction (for the cause of servo error, refer to 4.2
Structural Assembly Notes). The 300 is the PWM duty cycle value you want to set; depending on the servo,
this value represents different servo angles. The PWM duty cycle range of the servos we use is approximately
100 to 560, which corresponds to the servo's full rotation range.
●The above code does not control the rotation speed of the servo. If we want a servo to swing back
and forth slowly between two positions, we need to use an incremental loop:
while 1:
    for i in range(0, 100): # Slowly move the servo from 300 to 400
        pwm.set_pwm(3, 0, (300 + i))
        time.sleep(0.05)
    for i in range(0, 100): # Slowly move the servo from 400 to 300
        pwm.set_pwm(3, 0, (400 - i))
        time.sleep(0.05)
●The above code makes the servo rotate slowly back and forth between 300 and 400, but this way of
controlling the servo has a big drawback: while the servo is moving slowly, the program blocks, which
seriously affects performance. Therefore, our robot product program provides a multi-threaded solution
to this problem.
●You can find the RPIservo.py file in the server folder of the robot product, copy it to the same folder as
the program you want to run, and then you can use this method in your program.
import RPIservo # A library that uses multiple threads to control the servos
import time
sc = RPIservo.ServoCtrl() # Instantiate the multi-threaded servo controller (class name as used in the robot's RPIservo.py)
sc.start() # Start the servo control thread
while 1:
    sc.singleServo(3, -1, 2)
    time.sleep(1)
    sc.stopWiggle()
    sc.singleServo(3, 1, 2)
    time.sleep(1)
    sc.stopWiggle()
●The above code controls the servo to reciprocate; apart from time.sleep(), it does not block the main
thread. Call singleServo() to start the servo's movement and stopWiggle() to stop it. singleServo() takes
three parameters: the port number of the servo to be controlled (3 means the No. 3 servo port on the
Motor HAT), the direction of movement (1 or -1), and the movement speed (the greater the value, the
faster the movement).
●If the Raspbian image version you installed is Raspbian Full provided by the official website, you do not
need to install other dependent libraries. We only need to control the GPIO port of the Raspberry Pi for simple
high and low levels and PWM to control the L298N chip on Motor HAT, thus controlling the direction and speed
of the motor.
●When you use the Motor HAT driver board to drive motors, the current required by the motors is
relatively large, so try not to power the Raspberry Pi and the driver board from a USB cable alone; use
the battery instead.
import time
import RPi.GPIO as GPIO # Import the library used to control GPIO
GPIO.cleanup() # Reset the high and low levels of the GPIO ports
GPIO.setwarnings(False) # Ignore some insignificant warnings
GPIO.setmode(GPIO.BCM) # Of the Raspberry Pi's three GPIO numbering schemes, we choose BCM numbering to define the GPIO ports
'''
The following code defines the GPIO used to control the L298N chip. This definition is different for different Raspberry Pi
driver boards.
'''
Motor_A_EN = 4
Motor_B_EN = 17
Motor_A_Pin1 = 14
Motor_A_Pin2 = 15
Motor_B_Pin1 = 27
Motor_B_Pin2 = 18
def setup(): # GPIO initialization; the motors cannot be controlled without it
    global pwm_A, pwm_B
    GPIO.setwarnings(False)
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(Motor_A_EN, GPIO.OUT)
    GPIO.setup(Motor_B_EN, GPIO.OUT)
    GPIO.setup(Motor_A_Pin1, GPIO.OUT)
    GPIO.setup(Motor_A_Pin2, GPIO.OUT)
    GPIO.setup(Motor_B_Pin1, GPIO.OUT)
    GPIO.setup(Motor_B_Pin2, GPIO.OUT)
    # Create PWM objects on the enable pins to control the motor speed
    pwm_A = GPIO.PWM(Motor_A_EN, 1000)
    pwm_B = GPIO.PWM(Motor_B_EN, 1000)
def motor_A(direction, speed): # The function used to control the motor on port A
    if direction == 1:
        GPIO.output(Motor_A_Pin1, GPIO.HIGH)
        GPIO.output(Motor_A_Pin2, GPIO.LOW)
        pwm_A.start(100)
        pwm_A.ChangeDutyCycle(speed)
    if direction == -1:
        GPIO.output(Motor_A_Pin1, GPIO.LOW)
        GPIO.output(Motor_A_Pin2, GPIO.HIGH)
        pwm_A.start(100)
        pwm_A.ChangeDutyCycle(speed)
def motor_B(direction, speed): # The function used to control the motor on port B
    if direction == 1:
        GPIO.output(Motor_B_Pin1, GPIO.HIGH)
        GPIO.output(Motor_B_Pin2, GPIO.LOW)
        pwm_B.start(100)
        pwm_B.ChangeDutyCycle(speed)
    if direction == -1:
        GPIO.output(Motor_B_Pin1, GPIO.LOW)
        GPIO.output(Motor_B_Pin2, GPIO.HIGH)
        pwm_B.start(100)
        pwm_B.ChangeDutyCycle(speed)
def motorStop(): # Stop both motors
    GPIO.output(Motor_A_Pin1, GPIO.LOW)
    GPIO.output(Motor_A_Pin2, GPIO.LOW)
    GPIO.output(Motor_B_Pin1, GPIO.LOW)
    GPIO.output(Motor_B_Pin2, GPIO.LOW)
    GPIO.output(Motor_A_EN, GPIO.LOW)
    GPIO.output(Motor_B_EN, GPIO.LOW)
setup() # Initialize the GPIO ports before controlling the motors
'''
Control the A and B motors to rotate in opposite directions at full speed for 3 seconds
'''
motor_A(-1, 100)
motor_B(-1, 100)
time.sleep(3)
'''
Stop the motors on the A and B ports
'''
motorStop()
●The above code can be used to control the motor movement. The two functions motor_A and
motor_B have the same structure but control different motor ports. Each function takes two parameters:
the direction (1 or -1) and the speed (minimum 0, maximum 100). Since the speed is adjusted by PWM,
it is effectively equivalent to adjusting the voltage on the motor port. Because the motor drives a
reduction gearbox as a load, it will not rotate at all at very low values, so the speed value should not
be too low.
●In practice, changing the speed value of a motor-driven robot mainly makes the robot's starting speed
slower and has little effect on the maximum speed; when the given speed is too low, the motor will not
rotate at all.
12 Ultrasonic Module
● The camera used by our Raspberry Pi robot is monocular, which cannot collect depth information.
Therefore, many of our robot products use ultrasonic ranging modules to obtain depth information and detect
whether there is an obstacle in a certain direction to obtain the distance of the obstacle.
●The ranging formula is S = Vs × (T2 - T1) / 2, where S is the distance of the obstacle, T2 is the time
when the echo is received, T1 is the time when the sound wave is emitted, and Vs is the speed of sound in
air (about 340 m/s). We can use this formula to calculate the distance of the obstacle.
●The ultrasonic ranging module used in our robot products is the HC-SR04. The HC-SR04
module has four pins: VCC, GND, Echo and Trig. Its sensing range is 2cm-400cm and its ranging
accuracy can reach 3mm; the module includes an ultrasonic transmitter, a receiver and a control circuit.
Its basic working principle is as follows:
·Trigger a distance measurement with a high-level signal on the TRIG pin; the module then automatically
sends eight 40kHz square-wave pulses.
·When the echo returns, a high level is output on the ECHO pin. The duration of the high level is
the time from the transmission of the ultrasonic wave to its return.
●When the Motor HAT driver board is used, you need to connect the HC-SR04 to the Ultrasonic interface
on the driver board. You must not connect it to the IIC port to avoid burning the ultrasonic module. (IIC is an
interface used to connect I2C devices, and its VCC and GND pin positions are different from Ultrasonic)
import RPi.GPIO as GPIO
import time
Tr = 11 # Trig pin (BCM numbering, as wired through the Motor HAT Ultrasonic interface)
Ec = 8 # Echo pin
GPIO.setmode(GPIO.BCM)
GPIO.setup(Tr, GPIO.OUT, initial=GPIO.LOW)
GPIO.setup(Ec, GPIO.IN)
def checkdist():
    GPIO.output(Tr, GPIO.HIGH) # Set the trigger pin high to emit the initial sound wave
    time.sleep(0.000015)
    GPIO.output(Tr, GPIO.LOW)
    while not GPIO.input(Ec): # Wait for the echo pin to go high (sound wave sent)
        pass
    t1 = time.time() # Note the time when the sound wave is emitted
    while GPIO.input(Ec): # Wait for the echo pin to go low (echo received)
        pass
    t2 = time.time() # Note the time when the echo is captured
    return (t2 - t1) * 340 / 2 # Distance in meters: sound speed 340m/s, halved for the round trip
'''
Output the ultrasonic ranging result once per second, ten times
'''
for i in range(10):
    print(checkdist())
    time.sleep(1)
●The ultrasonic module is commonly used in many introductory projects. To keep the program simple,
this code contains a blocking part and we did not use multi-threading to avoid it. If your project has high
performance requirements, you can refer to the multi-threading approach introduced later.
●When your project needs the ultrasonic function, you do not need to rewrite the above code; just copy
ultra.py from the robot program's server folder into the same folder as your own project, then use the
following code to measure distance:
import ultra
distance = ultra.checkdist()
13 Line Tracking
●Some of our robot products are equipped with a three-channel infrared line-tracking module, which is
used to implement the robot's line-tracking function. The module contains 3 groups of sensors, where each
group consists of an infrared emitting LED and an infrared-sensitive phototransistor. The robot determines
whether a line is detected from the infrared light intensity measured by each phototransistor. The module
can detect a white line (which reflects infrared light) on a black background (which does not), and can also
detect a black line (which does not reflect infrared light) on a white background (which does).
●Since the Raspberry Pi can only read digital signals, the three-channel infrared tracking module is
equipped with a potentiometer. You can use a cross screwdriver to adjust the potentiometer on the module
to set its detection threshold.
●Our program defaults to finding a black line (which does not reflect infrared light) on a white background (which does).
●Before using the three-channel infrared line-tracking module, you need to connect it to the Tracking interface on the Motor HAT driver board.
●The three-channel infrared line-tracking module has an arrow pattern on the back of the sensor indicating its mounting direction.
●The code for the three-channel infrared line-tracking module is as follows:
import RPi.GPIO as GPIO
import time
line_pin_right = 19 # BCM pin numbers of the three sensors (as wired through the Motor HAT Tracking interface)
line_pin_middle = 16
line_pin_left = 20
def setup():
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(line_pin_right,GPIO.IN)
    GPIO.setup(line_pin_middle,GPIO.IN)
    GPIO.setup(line_pin_left,GPIO.IN)
def run():
    '''
    Read the values of the three infrared phototransistors (0 means no line detected, 1 means line detected)
    This routine takes a black line on a white background as an example
    '''
    status_right = GPIO.input(line_pin_right)
    status_middle = GPIO.input(line_pin_middle)
    status_left = GPIO.input(line_pin_left)
    if status_middle == 1:
        '''
        Control the robot to go forward
        '''
        print('forward')
    elif status_left == 1:
        '''
        Control the robot to turn left
        '''
        print('left')
    elif status_right == 1:
        '''
        Control the robot to turn right
        '''
        print('right')
    else:
        '''
        If no line is detected, the robot stops; you could also make the robot go backwards
        '''
        print('stop')
if __name__ == '__main__':
    setup()
    while 1:
        run()
●When your project needs the line-tracking function, you do not need to rewrite the above code; just
copy findline.py and move.py from the robot program's server folder into the same folder as your own
project, then use the following code:
import findline
findline.setup()
while 1:
    findline.run()
●The reason you also need move.py is that findline.py uses the methods in move.py to control the
robot's movement. If you use other movement methods, you only need to rewrite the relevant code in
findline.py.
●This chapter introduces the use of multi-threading to achieve some effects with the WS2812 LED
lights. Multi-threading is a commonly used technique in robot projects: because robots have high
requirements for real-time response, a long-running task should not block the main thread's communication.
·Using threads lets long-running tasks be processed in the background.
·It improves the operating efficiency of the program; the later real-time video and OpenCV
processing of video frames use multi-threading to greatly increase the frame rate.
·An encapsulated multi-threaded task is more convenient to call, similar to the non-blocking servo
control in 10.3.
●We use Python's threading library for thread-related work. Threads are the smallest unit of work in an
application. The current version of Python does not provide thread priorities or thread groups, and threads
cannot be destroyed or forcibly stopped from the outside. We use the following code to achieve
multi-threaded control of the LED lights; when the LEDs do not need to change, the thread blocks and waits.
●The wait() method is used here to block the thread, so the thread only runs when it needs to control
the lights.
import time
import sys
from rpi_ws281x import *
import threading
'''
Use the Threading module to create threads, inherit directly from threading.Thread, and
then override the __init__ method and the run method
'''
class RobotLight(threading.Thread):
    def __init__(self, *args, **kwargs):
        '''
        Initialize some settings of the LED lights here
        '''
        self.LED_COUNT = 16 # Number of LED pixels.
        self.LED_PIN = 12 # GPIO pin connected to the pixels (18 uses PWM!).
        self.LED_FREQ_HZ = 800000 # LED signal frequency in hertz (usually 800khz)
        self.LED_DMA = 10 # DMA channel to use for generating signal (try 10)
        self.LED_BRIGHTNESS = 255 # Set to 0 for darkest and 255 for brightest
        self.LED_INVERT = False # True to invert the signal (when using NPN transistor level shift)
        self.LED_CHANNEL = 0 # set to '1' for GPIOs 13, 19, 41, 45 or 53
        # Create the NeoPixel object and initialize the library
        self.strip = Adafruit_NeoPixel(self.LED_COUNT, self.LED_PIN,
            self.LED_FREQ_HZ, self.LED_DMA, self.LED_INVERT,
            self.LED_BRIGHTNESS, self.LED_CHANNEL)
        self.strip.begin()
        '''
        Set the brightness of the three RGB color channels; there is no need to change these
        values here, they are set automatically when the breathing light function is called
        '''
        self.colorBreathR = 0
        self.colorBreathG = 0
        self.colorBreathB = 0
        self.breathSteps = 10
        '''
        The mode variable: 'none' makes the thread block and hang so the lights do not change;
        'police' is a police light mode, red and blue flash alternately;
        'breath' is a breathing light with a color you specify.
        '''
        self.lightMode = 'none' #'none' 'police' 'breath'
        # Initialize the parent thread class and the Event used to block/resume the thread
        super(RobotLight, self).__init__(*args, **kwargs)
        self.__flag = threading.Event()
        self.__flag.clear()
    def setColor(self, R, G, B):
        '''
        Set the color of all the lights
        '''
        color = Color(int(R), int(G), int(B))
        for i in range(self.strip.numPixels()):
            self.strip.setPixelColor(i, color)
        self.strip.show()
    def setSomeColor(self, R, G, B, ID):
        '''
        Set the color of the lights whose serial numbers are in the list ID
        '''
        color = Color(int(R), int(G), int(B))
        for i in ID:
            self.strip.setPixelColor(i, color)
        self.strip.show()
    def pause(self):
        '''
        Calling this function clears __flag and blocks the thread
        '''
        self.lightMode = 'none'
        self.setColor(0, 0, 0)
        self.__flag.clear()
    def resume(self):
        '''
        Calling this function sets __flag and resumes the thread
        '''
        self.__flag.set()
    def police(self):
        '''
        Call this function to turn on the police light mode
        '''
        self.lightMode = 'police'
        self.resume()
    def policeProcessing(self):
        '''
        The specific implementation of the police light mode
        '''
        while self.lightMode == 'police':
            '''
            Blue flashes 3 times
            '''
            for i in range(0,3):
                self.setSomeColor(0,0,255,[0,1,2,3,4,5,6,7,8,9,10,11])
                time.sleep(0.05)
                self.setSomeColor(0,0,0,[0,1,2,3,4,5,6,7,8,9,10,11])
                time.sleep(0.05)
            if self.lightMode != 'police':
                break
            time.sleep(0.1)
            '''
            Red flashes 3 times
            '''
            for i in range(0,3):
                self.setSomeColor(255,0,0,[0,1,2,3,4,5,6,7,8,9,10,11])
                time.sleep(0.05)
                self.setSomeColor(0,0,0,[0,1,2,3,4,5,6,7,8,9,10,11])
                time.sleep(0.05)
            time.sleep(0.1)
    def breath(self, R_input, G_input, B_input):
        '''
        Call this function to turn on the breathing light mode and set its color
        '''
        self.lightMode = 'breath'
        self.colorBreathR = R_input
        self.colorBreathG = G_input
        self.colorBreathB = B_input
        self.resume()
    def breathProcessing(self):
        '''
        The specific implementation of the breathing light
        '''
        while self.lightMode == 'breath':
            '''
            All lights gradually brighten
            '''
            for i in range(0,self.breathSteps):
                if self.lightMode != 'breath':
                    break
                self.setColor(self.colorBreathR*i/self.breathSteps,
                              self.colorBreathG*i/self.breathSteps,
                              self.colorBreathB*i/self.breathSteps)
                time.sleep(0.03)
            '''
            All lights gradually darken
            '''
            for i in range(0,self.breathSteps):
                if self.lightMode != 'breath':
                    break
                self.setColor(self.colorBreathR-(self.colorBreathR*i/self.breathSteps),
                              self.colorBreathG-(self.colorBreathG*i/self.breathSteps),
                              self.colorBreathB-(self.colorBreathB*i/self.breathSteps))
                time.sleep(0.03)
    def lightChange(self):
        '''
        This function is used to select the task to perform
        '''
        if self.lightMode == 'none':
            self.pause()
        elif self.lightMode == 'police':
            self.policeProcessing()
        elif self.lightMode == 'breath':
            self.breathProcessing()
    def run(self):
        '''
        The function of the multi-threaded task
        '''
        while 1:
            self.__flag.wait()
            self.lightChange()
if __name__ == '__main__':
    RL = RobotLight() # Instantiate the object that controls the LED lights
    RL.start() # Start the thread
    '''
    Start breathing light mode and stop after 15 seconds
    '''
    RL.breath(70,70,255)
    time.sleep(15)
    RL.pause()
    '''
    Pause for 2 seconds
    '''
    time.sleep(2)
    '''
    Start the police light mode and stop after 15 seconds
    '''
    RL.police()
    time.sleep(15)
    RL.pause()
●When your project needs to use the LED lights as warning lights or breathing lights, you do not need to
rewrite the above code; just copy robotLight.py from the robot program's server folder into the same folder
as your own project, then use the following code:
import robotLight
import time
RL = robotLight.RobotLight() # Instantiate and start the LED control thread
RL.start()
'''
Start breathing light mode and stop after 15 seconds
'''
RL.breath(70,70,255)
time.sleep(15)
RL.pause()
'''
Pause for 2 seconds
'''
time.sleep(2)
'''
Start the police light mode and stop after 15 seconds
'''
RL.police()
time.sleep(15)
RL.pause()
●This chapter introduces the method of real-time video transmission. There are many ways to transfer the
images collected by the Raspberry Pi camera to other devices over the network. Our robot uses the open-source
project [flask-video-streaming] from GitHub (MIT license); you can click the link to view the source code.
●We chose flask-video-streaming because, of the many solutions we have tried, it is the most convenient
and the most efficient, and the OpenCV-related part has a good interface for rewriting as multi-threaded
processing.
●Since this project requires Flask and related dependent libraries, our robot software installation script
installs these dependencies. If your Raspberry Pi has not run the robot software installation script, use the
following commands to install them:
sudo pip3 install flask
sudo pip3 install flask_cors
●This chapter does not introduce the OpenCV part first, only introduces how to see the real-time picture
of the Raspberry Pi camera on other devices.
●First, download the flask-video-streaming project to the Raspberry Pi. You can clone it from GitHub or
download it on your computer and then transfer it to the Raspberry Pi. The download command in the
Raspberry Pi console is as follows:
sudo git clone https://github.com/miguelgrinberg/flask-video-streaming.git
● After downloading or transmitting flask-video-streaming in the Raspberry Pi, run the app.py in
flask-video-streaming:
cd flask-video-streaming
sudo python3 app.py
●Do not run it as sudo python3 flask-video-streaming/app.py from outside the folder; you will get an error
that *.jpeg is not found.
●Open the browser on the device on the same local area network as the Raspberry Pi (we use Google
Chrome to test), and enter the IP address of the Raspberry Pi plus the video streaming port number: 5000 in
the address bar, as shown in the following example:
192.168.3.157:5000
●Now you can see the page served by the Raspberry Pi in the browser of your computer or mobile
phone. Note that the default screen is not the image from the Raspberry Pi camera, but three pictures of the
digits 1, 2 and 3 playing in a loop.
●If the page loads and plays the 1/2/3 digit pictures in a loop, the Flask-related programs are running
normally. Next, make some modifications to app.py so that it displays the Raspberry Pi camera's image on
the page in real time.
sudo nano app.py
●Here we use nano, which comes with Raspbian, to open app.py for editing in the console. Since these
are just commenting and uncommenting operations, there is no need to use another IDE for editing.
●After opening the file, we comment out the code:
if os.environ.get('CAMERA'):
Camera = import_module('camera_' + os.environ['CAMERA']).Camera
else:
from camera import Camera
●You can comment out these lines of code by adding # at the beginning of each line, or you can put a
''' at the beginning and end of the whole block. The relevant code after the change is as follows:
# if os.environ.get('CAMERA'):
# Camera = import_module('camera_' + os.environ['CAMERA']).Camera
# else:
# from camera import Camera
or
'''
if os.environ.get('CAMERA'):
Camera = import_module('camera_' + os.environ['CAMERA']).Camera
else:
from camera import Camera
'''
●Finally, uncomment the line that imports Camera from camera_pi:
# from camera_pi import Camera
Delete the leading #; note that there is a space after the # which should also be deleted. The changed code
is as follows:
from camera_pi import Camera
from flask import Flask, render_template, Response
app = Flask(__name__)
@app.route('/')
def index():
    """Video streaming home page."""
    return render_template('index.html')
def gen(camera):
    """Video streaming generator function."""
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')
@app.route('/video_feed')
def video_feed():
    """Video streaming route. Put this in the src attribute of an img tag."""
    return Response(gen(Camera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')
if __name__ == '__main__':
    app.run(host='0.0.0.0', threaded=True)
●After editing, press CTRL+X to exit the editor; when prompted whether to save the changes, press Y
and then Enter to save.
●Then you can run app.py:
sudo python3 app.py
●Open the browser on the device on the same local area network as the Raspberry Pi (we use Google
Chrome to test), and enter the IP address of the Raspberry Pi plus the video streaming port number: 5000 in
the address bar, as shown in the following example:
192.168.3.157:5000
● Now you can see the page created by the Raspberry Pi on the browser of your computer or mobile
phone. After loading successfully, the page will display the real-time image of the Raspberry Pi camera.
●This feature uses projects from GitHub flask-video-streaming.
import ultra
import move
import time
import Adafruit_PCA9685
import RPIservo
# num_import_int() reads the preset servo center values from the robot's config file;
# it is defined in the robot's server code
pwm0_direction = 1
pwm0_init = num_import_int('init_pwm0 = ')
pwm0_max = 520
pwm0_min = 100
pwm0_pos = pwm0_init
pwm1_direction = 1
pwm1_init = num_import_int('init_pwm1 = ')
pwm1_max = 520
pwm1_min = 100
pwm1_pos = pwm1_init
pwm2_direction = 1
pwm2_init = num_import_int('init_pwm2 = ')
pwm2_max = 520
pwm2_min = 100
pwm2_pos = pwm2_init
# Initialization of the devices and scan state used below (example values; adjust them to your robot)
pwm = Adafruit_PCA9685.PCA9685() # PCA9685 servo driver on the Motor HAT
pwm.set_pwm_freq(50)
scGear = RPIservo.ServoCtrl() # Multi-threaded servo controller from RPIservo.py
scGear.start()
move.setup() # Initialize the motor GPIO ports
scanServo = 1 # Servo port of the pan-tilt servo used for scanning
scanPos = 1 # Current scan position: 1 = left, 2 = middle, 3 = right
scanDir = 1 # Scan direction
scanRange = 100 # PWM offset between scan positions
scanList = [0, 0, 0] # Distances measured at the left, middle and right positions
rangeKeep = 0.7 # Obstacle distance threshold in meters
while True:
    # The pan-tilt servo rotates to the left, middle and right and uses the ultrasonic module to measure and record the distance of obstacles
    if scanPos == 1:
        pwm.set_pwm(scanServo, 0, pwm1_init + scanRange)
        time.sleep(0.3)
        scanList[0] = ultra.checkdist()
    elif scanPos == 2:
        pwm.set_pwm(scanServo, 0, pwm1_init)
        time.sleep(0.3)
        scanList[1] = ultra.checkdist()
    elif scanPos == 3:
        pwm.set_pwm(scanServo, 0, pwm1_init - scanRange)
        time.sleep(0.3)
        scanList[2] = ultra.checkdist()
    scanPos += scanDir
    if scanPos > 3 or scanPos < 1: # Reverse the scan direction at both ends
        scanDir = -scanDir
        scanPos += scanDir * 2
    # If the distance of the nearest obstacle is less than the threshold
    if min(scanList) < rangeKeep:
        # If the closest obstacle is on the left
        if scanList.index(min(scanList)) == 0:
            # Then turn right
            scGear.moveAngle(2, -30)
        # If the closest obstacle is straight ahead
        elif scanList.index(min(scanList)) == 1:
            # Compare the left and right values to determine which side is clearer
            if scanList[0] < scanList[2]:
                # Full rudder to the right and move
                scGear.moveAngle(2, -45)
            else:
                # Otherwise, full rudder to the left and move
                scGear.moveAngle(2, 45)
        # If the closest obstacle is on the right
        elif scanList.index(min(scanList)) == 2:
            # Then turn left
            scGear.moveAngle(2, 30)
        if max(scanList) < rangeKeep or min(scanList) < rangeKeep/3:
            # Obstacles all around or very close: back up
            move.motor_left(1, 1, 80)
            move.motor_right(1, 1, 80)
    # If the nearest obstacle is farther than the threshold, the car moves forward
    else:
        move.motor_left(1, 0, 80)
        move.motor_right(1, 0, 80)
●Process explanation: first obtain a frame from the camera, then use OpenCV to analyze the content of
this frame. After the analysis is complete, the drawing information is generated, such as the position of the
center point of the target object and the text to be drawn on the screen. Those elements are then drawn
onto the frame according to the generated drawing information, and finally the processed and drawn frame
is displayed on the page.
●With such a processing flow, every captured frame has to wait for the OpenCV processing to finish and
be displayed before the second frame can be collected and analyzed, which significantly reduces the frame
rate.
● Process explanation: In order to improve the frame rate, we separate the analysis task of the video
frame from the process of acquisition and display, and place it in a background thread to execute and generate
drawing information.
●The complete multi-threaded code of camera_opencv.py is changed as follows (this code only
demonstrates the multi-threading principle; the OpenCV functions have been deliberately removed for the
sake of the demonstration):
import os
import cv2
from base_camera import BaseCamera
import numpy as np
import datetime
import time
import threading
import imutils
class CVThread(threading.Thread):
    '''
    This class processes the OpenCV analysis of video frames in the background. For the basic usage
    principles of the multi-threaded class, please refer to 14.2
    '''
    def __init__(self, *args, **kwargs):
        self.CVThreading = 0
        super(CVThread, self).__init__(*args, **kwargs)
        self.__flag = threading.Event() # Event used to block and resume the thread
        self.__flag.clear()
    def mode(self, imgInput):
        '''
        Receive a new video frame for background processing
        '''
        self.imgCV = imgInput
    def doOpenCV(self, frameImage):
        '''
        The actual OpenCV analysis would go here (removed in this demonstration);
        pause the thread when the frame has been processed
        '''
        self.pause()
    def elementDraw(self, imgInput):
        '''
        Draw elements on the picture
        '''
        return imgInput
    def pause(self):
        '''
        Block the thread and wait for the next frame
        '''
        self.__flag.clear()
        self.CVThreading = 0
    def resume(self):
        '''
        Resume the thread
        '''
        self.__flag.set()
    def run(self):
        '''
        Process video frames in the background thread
        '''
        while 1:
            self.__flag.wait()
            self.CVThreading = 1
            self.doOpenCV(self.imgCV)
class Camera(BaseCamera):
video_source = 0
def __init__(self):
if os.environ.get('OPENCV_CAMERA_SOURCE'):
Camera.set_video_source(int(os.environ['OPENCV_CAMERA_SOURCE']))
super(Camera, self).__init__()
@staticmethod
def set_video_source(source):
Camera.video_source = source
@staticmethod
def frames():
camera = cv2.VideoCapture(Camera.video_source)
if not camera.isOpened():
raise RuntimeError('Could not start camera.')
'''
Instantiation CVThread()
'''
cvt = CVThread()
cvt.start()
while True:
# read current frame
92
img = camera.read()
if cvt.CVThreading:
'''
If OpenCV is processing video frames, skip
'''
pass
else:
'''
If OpenCV is not processing video frames, give the video frame processing thread a new video frame
and resume the processing thread
'''
cvt.mode(img)
cvt.resume()
'''
Draw elements on the screen
'''
img = cvt.elementDraw(img)
●The above shows the principle of using multi-threading with OpenCV. The following introductions to specific OpenCV functions will skip the multi-threading explanation and directly introduce OpenCV's methods of processing video frames.
●The real-time video transmission function comes from flask-video-streaming, an MIT-licensed open-source project on GitHub.
●First, prepare two .py files in the same folder in the Raspberry Pi. The code is as follows:
·app.py
#!/usr/bin/env python3
# The two imports below are required but missing from the original listing;
# they follow the flask-video-streaming project this code is based on
from flask import Flask, Response
from camera_opencv import Camera

app = Flask(__name__)

def gen(camera):
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')

@app.route('/')
def video_feed():
    return Response(gen(Camera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    app.run(host='0.0.0.0', threaded=True)
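The gen() generator above streams Motion JPEG over HTTP by yielding one multipart part per frame. The byte layout of each part can be checked in isolation; the fake JPEG payload below is just an illustration:

```python
def mjpeg_part(frame: bytes) -> bytes:
    '''Build one part of a multipart/x-mixed-replace stream,
    with the same byte layout that gen() yields.'''
    return (b'--frame\r\n'
            b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')

part = mjpeg_part(b'\xff\xd8fake-jpeg\xff\xd9')   # placeholder JPEG bytes
# Each part is: boundary line, Content-Type header, blank line, JPEG bytes
header, _, body = part.partition(b'\r\n\r\n')
print(header.split(b'\r\n'))   # → [b'--frame', b'Content-Type: image/jpeg']
```

The browser keeps the connection open and replaces the displayed image every time a new `--frame` boundary arrives, which is what makes the video appear live.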
·base_camera.py
import time
import threading
try:
    from greenlet import getcurrent as get_ident
except ImportError:
    try:
        from thread import get_ident
    except ImportError:
        from _thread import get_ident

class CameraEvent(object):
    """An Event-like class that signals all active clients when a new frame is
    available.
    """
    def __init__(self):
        self.events = {}

    def wait(self):
        """Invoked from each client's thread to wait for the next frame."""
        ident = get_ident()
        if ident not in self.events:
            # this is a new client
            # add an entry for it in the self.events dict
            # each entry has two elements, a threading.Event() and a timestamp
            self.events[ident] = [threading.Event(), time.time()]
        return self.events[ident][0].wait()

    def set(self):
        """Invoked by the camera thread when a new frame is available."""
        now = time.time()
        remove = None
        for ident, event in self.events.items():
            if not event[0].isSet():
                # if this client's event is not set, then set it
                # also update the last set timestamp to now
                event[0].set()
                event[1] = now
            else:
                # if the client's event is already set, it means the client
                # did not process a previous frame
                # if the event stays set for more than 5 seconds, then assume
                # the client is gone and remove it
                if now - event[1] > 5:
                    remove = ident
        if remove:
            del self.events[remove]

    def clear(self):
        """Invoked from each client's thread after a frame was processed."""
        self.events[get_ident()][0].clear()

class BaseCamera(object):
    thread = None  # background thread that reads frames from camera
    frame = None  # current frame is stored here by background thread
    last_access = 0  # time of last client access to the camera
    event = CameraEvent()

    def __init__(self):
        """Start the background camera thread if it isn't running yet."""
        if BaseCamera.thread is None:
            BaseCamera.last_access = time.time()
            # start background frame thread
            BaseCamera.thread = threading.Thread(target=self._thread)
            BaseCamera.thread.start()

    def get_frame(self):
        """Return the current camera frame."""
        BaseCamera.last_access = time.time()
        return BaseCamera.frame

    @staticmethod
    def frames():
        """Generator that returns frames from the camera."""
        raise RuntimeError('Must be implemented by subclasses.')

    @classmethod
    def _thread(cls):
        """Camera background thread."""
        print('Starting camera thread.')
        frames_iterator = cls.frames()
        for frame in frames_iterator:
            BaseCamera.frame = frame
            BaseCamera.event.set()  # send signal to clients
            time.sleep(0)
●When you develop an OpenCV-related function with a follow-up tutorial, you only need to put the corresponding camera_opencv.py in the same folder as this app.py and base_camera.py, and then run app.py in the Raspberry Pi console.
● Open a browser on a device in the same local area network as the Raspberry Pi, enter the Raspberry Pi's IP address in the address bar, and access port 5000, as in the following example:
192.168.3.161:5000
# Imports restored from the listings above; the pages defining them are missing here
import os
import cv2
from base_camera import BaseCamera

font = cv2.FONT_HERSHEY_SIMPLEX

class Camera(BaseCamera):
    video_source = 0

    def __init__(self):
        if os.environ.get('OPENCV_CAMERA_SOURCE'):
            Camera.set_video_source(int(os.environ['OPENCV_CAMERA_SOURCE']))
        super(Camera, self).__init__()

    @staticmethod
    def set_video_source(source):
        Camera.video_source = source

    @staticmethod
    def frames():
        camera = cv2.VideoCapture(Camera.video_source)
        if not camera.isOpened():
            raise RuntimeError('Could not start camera.')
        while True:
            # read current frame
            _, img = camera.read()  # Get the picture captured by the camera
            # (the HSV masking and contour detection that produce cnts are omitted here)
            if len(cnts) > 0:
                c = max(cnts, key=cv2.contourArea)
                ((box_x, box_y), radius) = cv2.minEnclosingCircle(c)
                M = cv2.moments(c)
                center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))
                X = int(box_x)
                Y = int(box_y)
                '''
                Get the center point coordinates of the target color object and output
                '''
                print('Target color object detected')
                print('X:%d'%X)
                print('Y:%d'%Y)
                print('-------')
                '''
                Write text on the screen: Target Detected
                '''
                cv2.putText(img,'Target Detected',(40,60), font, 0.5,(255,255,255),1,cv2.LINE_AA)
                '''
                Draw a frame around the target color object
                '''
                cv2.rectangle(img,(int(box_x-radius),int(box_y+radius)),
                              (int(box_x+radius),int(box_y-radius)),(255,255,255),1)
            else:
                cv2.putText(img,'Target Detecting',(40,60), font, 0.5,(255,255,255),1,cv2.LINE_AA)
                print('No target color object detected')
HSV\color   Black   Grey   White   Red   Orange   Yellow   Green   Cyan   Blue   Purple
S_min           0      0       0    43       43       43      43     43     43       43
V_min           0     46     221    46       46       46      46     46     46       46
V_max          46    220     255   255      255      255     255    255    255      255
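As a rough sketch of how such bounds are applied, cv2.inRange keeps a pixel only when every HSV channel lies between the lower and upper limits. The NumPy re-implementation below mimics that behavior using the yellow column of the table; the hue range 26-34 is an assumed value, since the H rows are not shown here:

```python
import numpy as np

def in_range(hsv, lower, upper):
    '''Pure-NumPy equivalent of cv2.inRange: 255 where every channel of a
    pixel lies inside [lower, upper], else 0.'''
    lower, upper = np.array(lower), np.array(upper)
    mask = np.all((hsv >= lower) & (hsv <= upper), axis=-1)
    return mask.astype(np.uint8) * 255

# Yellow bounds: S_min/V_min/V_max from the table; the H range 26-34 is assumed
lower, upper = (26, 43, 46), (34, 255, 255)
pixels = np.array([[[30, 200, 200],     # yellow: inside every channel range
                    [30,  10, 200],     # too unsaturated (S < 43)
                    [90, 200, 200]]])   # hue outside 26-34
print(in_range(pixels, lower, upper)[0].tolist())   # → [255, 0, 0]
```

Only the first pixel passes all three channel tests, so only it appears white in the resulting mask, exactly as the target color does in the detection routine.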
●For the development preparation and operation of the OpenCV function, please refer to 18.
●Create camera_opencv.py in the same folder as the app.py and base_camera.py from 18; the code for the OpenCV visual line-tracking function introduced in this chapter is written in camera_opencv.py.
●For safety reasons, this routine does not control the motor or servo motion, and only outputs OpenCV calculation results.
# Imports restored from the listings above; the pages defining them are missing here
import os
import cv2
import numpy as np
from base_camera import BaseCamera

'''
Set the color of the line: 255 is a white line, 0 is a black line
'''
lineColorSet = 255
'''
Set the vertical position of the reference row; the larger the value, the lower the
row, but it must not be greater than the vertical resolution of the video (480 by default)
'''
linePos = 380

class Camera(BaseCamera):
    video_source = 0

    def __init__(self):
        if os.environ.get('OPENCV_CAMERA_SOURCE'):
            Camera.set_video_source(int(os.environ['OPENCV_CAMERA_SOURCE']))
        super(Camera, self).__init__()

    @staticmethod
    def set_video_source(source):
        Camera.video_source = source

    @staticmethod
    def frames():
        camera = cv2.VideoCapture(Camera.video_source)
        if not camera.isOpened():
            raise RuntimeError('Could not start camera.')
        while True:
            _, img = camera.read()  # Get the picture captured by the camera
            '''
            Convert the picture to grayscale, then binarize it (every pixel value
            becomes either 0 or 255)
            '''
            img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            retval, img = cv2.threshold(img, 0, 255, cv2.THRESH_OTSU)
            img = cv2.erode(img, None, iterations=6)  # Use erosion to remove noise
            colorPos = img[linePos]  # Get the array of pixel values in row linePos
            try:
                lineColorCount_Pos = np.sum(colorPos == lineColorSet)  # Number of line-colored pixels (line width)
                lineIndex_Pos = np.where(colorPos == lineColorSet)  # Positions of the line's pixels in row linePos
                '''
                Use the end-point positions and the line width to calculate the
                position of the center point of the line
                '''
                left_Pos = lineIndex_Pos[0][lineColorCount_Pos - 1]
                right_Pos = lineIndex_Pos[0][0]
                center_Pos = int((left_Pos + right_Pos) / 2)
                '''
                Draw a horizontal reference line
                '''
                cv2.line(img, (0, linePos), (640, linePos), (255, 255, 64), 1)
                if center_Pos:
                    '''
                    If a line is detected, draw the center point of the line
                    '''
                    cv2.line(img, (center_Pos, linePos + 300), (center_Pos, linePos - 300), (255, 255, 64), 1)
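The center-point calculation used in the routine above can be tried on its own with a synthetic scan row; the 81-pixel-wide line placed at columns 200-280 is just example data:

```python
import numpy as np

lineColorSet = 255                    # tracking a white line
linePos = 380                         # reference row, as in the routine

img = np.zeros((480, 640), dtype=np.uint8)   # fake binarized frame
img[:, 200:281] = 255                 # an 81-pixel-wide vertical "line"

colorPos = img[linePos]               # pixel values of row linePos
lineColorCount_Pos = np.sum(colorPos == lineColorSet)   # line width
lineIndex_Pos = np.where(colorPos == lineColorSet)      # indices of line pixels
left_Pos = lineIndex_Pos[0][lineColorCount_Pos - 1]     # one end point (280)
right_Pos = lineIndex_Pos[0][0]                         # the other end point (200)
center_Pos = int((left_Pos + right_Pos) / 2)
print(lineColorCount_Pos, center_Pos)   # → 81 240
```

Averaging the two end points gives the line's center column; comparing that value against the frame's midline (320 for a 640-pixel-wide frame) is what tells a line-following robot to steer left or right.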
the standard program. If you are interested in this, you can try to expand it further. We will offer the installation and application methods of other functions in follow-up tutorials. Please subscribe to our YouTube channel for more.
●Change LED Color: You can control the colors of the LEDs on the robot in real time by dragging these three sliders. The three sliders correspond to the brightness of the R, G, and B channels. In theory, you can create 16,777,216 (256^3) colors with these three sliders.
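The 256^3 figure follows from how a WS2812 color is stored: rpi_ws281x's Color() helper packs the three 8-bit channels into one 24-bit integer. A minimal sketch of that packing:

```python
def color(red, green, blue):
    '''Pack three 8-bit channels into one 24-bit value, the same layout
    that rpi_ws281x's Color() helper produces for the WS2812 LEDs.'''
    return (red << 16) | (green << 8) | blue

print(hex(color(255, 0, 255)))   # the magenta used in the 'on' branch → 0xff00ff
print(256 ** 3)                  # one value per slider combination → 16777216
```

Each channel occupies its own byte, so every combination of the three 0-255 sliders maps to a distinct 24-bit value.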
'''
These two libraries are used to control the WS2812 LED lights
'''
from rpi_ws281x import *
import argparse
'''
Import the socket library to be used for TCP communication
'''
import socket
'''
Some settings related to the LED lights come from the WS281X routine
Source code: https://github.com/rpi-ws281x/rpi-ws281x-python/
'''
LED_COUNT = 24
LED_PIN = 18
LED_FREQ_HZ = 800000
LED_DMA = 10
LED_BRIGHTNESS = 255
LED_INVERT = False
LED_CHANNEL = 0
'''
Process arguments
'''
parser = argparse.ArgumentParser()
parser.add_argument('-c', '--clear', action='store_true', help='clear the display on exit')
args = parser.parse_args()
'''
Create NeoPixel object with appropriate configuration.
'''
strip = Adafruit_NeoPixel(LED_COUNT, LED_PIN, LED_FREQ_HZ, LED_DMA, LED_INVERT,
                          LED_BRIGHTNESS, LED_CHANNEL)
'''
Initialize the library
'''
strip.begin()
'''
Next is the configuration related to TCP communication. PORT is the defined port
number; you can choose freely from 0-65535 (a number above 1023 is recommended), and
it must be consistent with the port number defined by the client on the PC
'''
HOST = ''
PORT = 10223
BUFSIZ = 1024
ADDR = (HOST, PORT)
tcpSerSock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # create the server socket (missing in the original listing)
tcpSerSock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
tcpSerSock.bind(ADDR)
tcpSerSock.listen(5)
'''
Start listening for a client connection; after the client connects successfully,
start receiving the information it sends
'''
tcpCliSock, addr = tcpSerSock.accept()
while True:
    data = ''
    '''
    Receive information from the client
    '''
    data = str(tcpCliSock.recv(BUFSIZ).decode())
    if not data:
        continue
    # Turn on the lights if the message is 'on';
    # if the message is 'off', turn off the lights
    elif 'on' == data:
        for i in range(strip.numPixels()):
            strip.setPixelColor(i, Color(255, 0, 255))
            strip.show()
    elif 'off' == data:
        for i in range(strip.numPixels()):
            strip.setPixelColor(i, Color(0, 0, 0))
            strip.show()
    '''
    Finally print out the received data and continue listening for the next message
    from the client
    '''
    print(data)
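The server and client above can be exercised on a single machine. The sketch below mirrors the bind/listen/accept sequence and the client's send, using localhost instead of two separate devices; the LED calls are replaced by a list append so it runs anywhere:

```python
import socket
import threading

HOST, PORT = '127.0.0.1', 10223       # port must match on both ends

# Server side: create, bind, and listen exactly as the Raspberry Pi script does
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind((HOST, PORT))
srv.listen(5)

received = []
def serve_once():
    conn, addr = srv.accept()          # like tcpSerSock.accept()
    received.append(conn.recv(1024).decode())  # 'on' would light the LEDs here
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

# Client side: connect and send, same as the Tkinter program's lights_on()
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect((HOST, PORT))
cli.send('on'.encode())
cli.close()
t.join()
srv.close()
print(received)   # → ['on']
```

Binding and listening before the client connects avoids the connection-refused race; on the robot the two ends simply run on different machines with the Pi's IP address in place of 127.0.0.1.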
'''
Python uses Tkinter to quickly create GUI applications
'''
import tkinter as tk
'''
Import the socket library to be used for TCP communication (missing in the original listing)
'''
from socket import socket, AF_INET, SOCK_STREAM

def lights_on():
    '''
    Call this method to send the light-on command 'on'
    '''
    tcpClicSock.send(('on').encode())

def lights_off():
    '''
    Call this method to send the light-off command 'off'
    '''
    tcpClicSock.send(('off').encode())

'''
Enter the IP address of the Raspberry Pi here
'''
SERVER_IP = '192.168.3.35'
'''
Next is the configuration related to TCP communication. PORT is the defined port
number; you can choose freely from 0-65535 (a number above 1023 is recommended), and
it must be consistent with the port number defined by the server on the Raspberry Pi
'''
SERVER_PORT = 10223
BUFSIZ = 1024
ADDR = (SERVER_IP, SERVER_PORT)
tcpClicSock = socket(AF_INET, SOCK_STREAM)
tcpClicSock.connect(ADDR)
'''
The following is the GUI part
'''
root = tk.Tk()  # Define a window
root.title('Lights')  # Window title
root.geometry('175x55')  # The size of the window; the middle x is the letter x
root.config(bg='#000000')  # Define the background color of the window
'''
Use Tkinter's Button method to define a button on the root window. The name of the
button is 'ON', its text color is #E1F5FE, and its background color is #0277BD. When
the button is pressed, the lights_on() function is called
'''
btn_on = tk.Button(root, width=8, text='ON', fg='#E1F5FE', bg='#0277BD', command=lights_on)
'''
Choose a location to place this button
'''
btn_on.place(x=15, y=15)
'''
Define another button in the same way; the difference is that its text is 'OFF' and
pressing it calls the lights_off() function
'''
btn_off = tk.Button(root, width=8, text='OFF', fg='#E1F5FE', bg='#0277BD', command=lights_off)
btn_off.place(x=95, y=15)
'''
Finally, start the message loop
'''
root.mainloop()
●First run the program on the Raspberry Pi, then open the program on the PC (run the server first, then the client).
● Click 'ON': the light turns on and 'on' is printed in the Raspberry Pi's terminal, indicating that the program runs successfully.
'''
First import the required libraries (these imports are missing in the original
listing; they follow directly from the calls below)
'''
import cv2
import zmq
import base64
import picamera
from picamera.array import PiRGBArray
'''
Here we need to fill in the IP address of the video receiver (the IP address of the PC)
'''
IP = '192.168.3.11'
'''
Then initialize the camera; you can change these parameters according to your needs
'''
camera = picamera.PiCamera()
camera.resolution = (640, 480)
camera.framerate = 20
rawCapture = PiRGBArray(camera, size=(640, 480))
'''
Here we instantiate the zmq object used to send the frames, using the TCP protocol;
5555 is the port number. The port number can be customized, as long as the sending
end and the receiving end use the same one
'''
context = zmq.Context()
footage_socket = context.socket(zmq.PAIR)
footage_socket.connect('tcp://%s:5555'%IP)
print(IP)
'''
Next, collect images from the camera in a loop; since we are using a Raspberry Pi
camera, use_video_port is True
'''
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    '''
    Since the imencode() function needs a numpy array or scalar to encode the image,
    here we convert the collected frame to a numpy array
    '''
    frame_image = frame.array
    '''
    We encode the frame into stream data and save it in the memory buffer
    '''
    encoded, buffer = cv2.imencode('.jpg', frame_image)
    jpg_as_text = base64.b64encode(buffer)
    '''
    Here we send the base64-encoded stream data in the buffer to the video receiver
    '''
    footage_socket.send(jpg_as_text)
    '''
    Clear the stream in preparation for the next frame
    '''
    rawCapture.truncate(0)
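The encode-and-ship path in RPiCam.py and the decode path in PC.py are mutual inverses, which can be verified without a camera or a network; the 12-byte array below stands in for an encoded JPEG buffer:

```python
import base64
import numpy as np

frame = np.arange(12, dtype=np.uint8)        # stand-in for an encoded JPEG buffer
jpg_as_text = base64.b64encode(frame)        # sender: what footage_socket.send() ships
img = base64.b64decode(jpg_as_text)          # receiver: base64 text back to raw bytes
npimg = np.frombuffer(img, dtype=np.uint8)   # receiver: raw bytes -> 1-D array
print(np.array_equal(npimg, frame))          # → True
```

Base64 inflates the payload by about a third, but it keeps the frame as plain text, which is convenient to pass through recv_string() on the receiving end.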
● In the following, we explain the program on the receiving end. Since the libraries used here are
cross-platform, PC.py can be run on a Windows computer or another Linux computer.
●PC.py :
'''
First import the required libraries
'''
import cv2
import zmq
import base64
import numpy as np
'''
Here we instantiate the zmq object used to receive the frames
Note that the port number must be consistent with the sender's
'''
context = zmq.Context()
footage_socket = context.socket(zmq.PAIR)
footage_socket.bind('tcp://*:5555')
while True:
    '''
    Receive video frame data
    '''
    frame = footage_socket.recv_string()
    '''
    Decode it and save it to the cache
    '''
    img = base64.b64decode(frame)
    '''
    Interpret the buffer as a 1-dimensional array
    '''
    npimg = np.frombuffer(img, dtype=np.uint8)
    '''
    Decode the one-dimensional array into an image
    '''
    source = cv2.imdecode(npimg, 1)
    '''
    Display the image
    '''
    cv2.imshow("Stream", source)
    '''
    Generally, waitKey() should be used after imshow() to leave time for image
    drawing, otherwise the window will appear unresponsive and the image cannot
    be displayed
    '''
    cv2.waitKey(1)
●When running the program, first run RPiCam.py on the Raspberry Pi, then run PC.py on the PC to see the Raspberry Pi's real-time picture on the PC.
● After source = cv2.imdecode(npimg, 1), you can use OpenCV to process the source. Below is a routine that binarizes the real-time video from the Raspberry Pi on the host computer:
'''
First import the required libraries
'''
import cv2
import zmq
import base64
import numpy as np
'''
Here we instantiate the zmq object used to receive the frames
Note that the port number must be consistent with the sender's
'''
context = zmq.Context()
footage_socket = context.socket(zmq.PAIR)
footage_socket.bind('tcp://*:5555')
while True:
    '''
    Receive video frame data
    '''
    frame = footage_socket.recv_string()
    '''
    Decode it and save it to the cache
    '''
    img = base64.b64decode(frame)
    '''
    Interpret the buffer as a 1-dimensional array
    '''
    npimg = np.frombuffer(img, dtype=np.uint8)
    '''
    Decode the one-dimensional array into an image
    '''
    source = cv2.imdecode(npimg, 1)
    '''
    Convert the image to grayscale
    '''
    source = cv2.cvtColor(source, cv2.COLOR_BGR2GRAY)
    '''
    Binarize the image
    '''
    retval, source = cv2.threshold(source, 0, 255, cv2.THRESH_OTSU)
    '''
    Remove small noise in the image
    '''
    source = cv2.erode(source, None, iterations=6)
    '''
    Display the image
    '''
    cv2.imshow("Stream", source)
    '''
    Generally, waitKey() should be used after imshow() to leave time for image
    drawing, otherwise the window will appear unresponsive and the image cannot
    be displayed
    '''
    cv2.waitKey(1)
27 Enable UART
● UART is a commonly used communication protocol between devices. Using UART, you can let MCUs such as Arduino, STM32, or ESP32 communicate with the Raspberry Pi, which can make your robot more powerful.
●However, on some Raspberry Pis the UART that is enabled by default is not a full-featured UART, so you need to follow the steps below to enable the full-featured UART. The following parts are from the official Raspberry Pi documentation, The Raspberry Pi UARTs.
●The SoCs used on the Raspberry Pis have two built-in UARTs, a PL011 and a mini UART. They are implemented
using different hardware blocks, so they have slightly different characteristics. However, both are 3.3V devices, which
means extra care must be taken when connecting up to an RS232 or other system that utilises different voltage levels.
An adapter must be used to convert the voltage levels between the two protocols. Alternatively, 3.3V USB UART
adapters can be purchased for very low prices.
● By default, on Raspberry Pis equipped with the wireless/Bluetooth module (Raspberry Pi 3 and Raspberry Pi
Zero W), the PL011 UART is connected to the Bluetooth module, while the mini UART is used as the primary UART and
will have a Linux console on it. On all other models, the PL011 is used as the primary UART.
●In Linux device terms, by default, /dev/ttyS0 refers to the mini UART, and /dev/ttyAMA0 refers to the PL011. The
primary UART is the one assigned to the Linux console, which depends on the Raspberry Pi model as described above.
There are also symlinks:
/dev/serial0, which always refers to the primary UART (if enabled), and /dev/serial1, which similarly always refers
to the secondary UART (if enabled).
●Using the primary UART for other purposes requires this default behaviour to be changed. On startup, systemd checks the Linux kernel command line for any console entries and will use the console defined therein. To stop this behaviour, the serial console setting needs to be removed from the command line.
●This can be done by using the raspi-config utility, or manually.
sudo raspi-config
●Select option 5, Interfacing options, then option P6, Serial, and select No. Exit raspi-config.
●To manually change the settings, edit the kernel command line with sudo nano /boot/cmdline.txt. Find the console entry that refers to the serial0 device and remove it, including the baud rate setting; it will look something like console=serial0,115200. Make sure the rest of the line remains the same, as errors in this configuration can stop the Raspberry Pi from booting.
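The manual edit boils down to deleting the console=serial0,... token while leaving every other token untouched. Sketched as a string operation (the sample command line is an illustrative example, not taken from a real Pi):

```python
def remove_serial_console(cmdline: str) -> str:
    '''Drop any console=serial0,... entry and keep every other token intact.'''
    kept = [tok for tok in cmdline.split()
            if not tok.startswith('console=serial0')]
    return ' '.join(kept)

# An illustrative /boot/cmdline.txt; the values are examples only
line = ('dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 '
        'root=/dev/mmcblk0p2 rootfstype=ext4 rootwait')
print(remove_serial_console(line))
```

Note that the console=tty1 entry (the local display console) and all root filesystem options survive untouched, which is exactly the "rest of the line remains the same" requirement above.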
●Reboot the Raspberry Pi for the change to take effect.
●It should be noted that the port number is 5000 when using the WEB application, 10223 when using the GUI program, and 10123 when using the mobile app.
●The controller on the left can control the robot to move back and forth, left and right, and the controller
on the right can control other movements of the robot. You can change the specific operation by editing
appserver.py.
Conclusion
Through the above operations, you should have learned how to use Python programming on the Raspberry Pi to control the Adeept AWR, and also how to assemble the AWR.
If you have any questions about this product, please contact us via email or forum, we will reply to your
questions within one working day:
[email protected]
https://www.adeept.com/forum/
If you want to try our other products, you can visit our website:
www.adeept.com
For more product information, please visit:
https://www.adeept.com/learn/
For the latest product video updates, please visit:
https://www.adeept.com/video/
Thank you for using Adeept products.