AI Voice Assistant

The document discusses building an AI voice assistant using a Raspberry Pi with a ReSpeaker 2-Mic PiHAT. It provides steps to set up the hardware, install required software packages, configure audio settings, and run a Python script to activate the voice assistant.

link:-

AI voice Assistant using Raspberry Pi


Voice recognition using raspberry Pi
Make Your Raspberry Pi Speak
Materials required:-
26 Sep
Mycroft: NLP
Chatbot using the ChatGPT API
How to Resolve the 'npm' is not recognized Error in Windows 10
Check Current Version of Node.js
Adding the Node.js Installation Directory to your System Path
Step 1: Find the path to your Node.js installation
Step 2: Open the Environment Variables settings
Step 3: Edit the System Path variable
Step 4: Save the changes and test your installation
How to Build Your Own AI Chatbot With ChatGPT API
1/12/2023
https://youtu.be/9T3MOY9GivQ - YouTube tutorial
https://techiesms.com/google-assistant-on-raspberry-pi/ - code
Pycroft, a new open-source AI voice assistant:
https://youtu.be/eg_V56BUmh0
Spare:
https://youtu.be/4luTnRpfgbs
Introduction:-
An AI voice assistant is a software agent that can perform a range of tasks or services for a user based on user input such as commands or questions, including verbal ones. Such technologies often incorporate chatbot capabilities to simulate human conversation, such as via online chat, to facilitate interaction with their users. The interaction may be via text, graphical interface, or voice, as some virtual assistants are able to interpret human speech and respond via synthesized voices. In many cases users can ask their virtual assistants questions, control home automation devices and media playback, and manage other basic tasks such as email, to-do lists, and calendars, all with verbal commands. In recent years, prominent virtual assistants for direct consumer use have included Amazon's Alexa, Apple's Siri, Microsoft's Cortana, and Google Assistant. Companies in various industries also often incorporate some kind of virtual assistant technology into their customer service or support.
Recently, the emergence of artificial-intelligence-based chatbots such as ChatGPT has brought increased capability and interest to the field of virtual assistant products and services.

Input:-
Host device:-


Link:-
https://youtu.be/Q-CaEJ2qvl4

Finally, I was able to build my AI voice assistant on a Raspberry Pi. I used a Raspberry Pi 4 (2 GB) along with a ReSpeaker 2-Mic PiHAT and an external speaker. You can find the source code and the step-by-step process on my GitHub page.

This is an open-source project in its early stages; you can contribute your ideas to help it grow.

Source Code:
https://github.com/avseng/ai_voice_as…

ReSpeaker 2-Mic PiHAT Wiki:
https://wiki.seeedstudio.com/ReSpeake…

Code
1)Readme.md

1. Create a RAM disk on the Raspberry Pi. This helps extend the
longevity of the micro SD card. We will use this ramdisk to store
temporary files created by the voice assistant. These files are
removed after a reboot.

a. sudo mkdir -p /mnt/ramdisk
b. sudo chown -R pi:pi /mnt/ramdisk
c. sudo mount -o size=200M -t tmpfs tmpfs /mnt/ramdisk
d. sudo nano /etc/fstab
e. Add this line: tmpfs /mnt/ramdisk tmpfs defaults,noatime,mode=755,uid=pi,gid=pi,size=200m 0 0
f. sudo mount -a
g. df -h

2. Install git

sudo apt-get install git

3. Clone the git repository

git clone https://github.com/avseng/ai_voice_assistant.git

4. Run setup.sh to install all the prerequisite packages and the
ReSpeaker sound card driver.

cd ai_voice_assistant/
sudo chmod u+x setup.sh
sudo bash setup.sh

5. Check the sound card

a. Run the command below to get the attached microphone details.

arecord -l
**** List of CAPTURE Hardware Devices ****
card 1: seeed2micvoicec [seeed-2mic-voicecard], device 0: bcm2835-i2s-wm8960-hifi wm8960-hifi-0 [bcm2835-i2s-wm8960-hifi wm8960-hifi-0]
  Subdevices: 0/1
  Subdevice #0: subdevice #0

b. Run the command below to get the attached speaker details.

aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: ALSA [bcm2835 ALSA], device 0: bcm2835 ALSA [bcm2835 ALSA]
  Subdevices: 6/7
  Subdevice #0: subdevice #0
  Subdevice #1: subdevice #1
  Subdevice #2: subdevice #2
  Subdevice #3: subdevice #3
  Subdevice #4: subdevice #4
  Subdevice #5: subdevice #5
  Subdevice #6: subdevice #6
card 0: ALSA [bcm2835 ALSA], device 1: bcm2835 IEC958/HDMI [bcm2835 IEC958/HDMI]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 0: ALSA [bcm2835 ALSA], device 2: bcm2835 IEC958/HDMI1 [bcm2835 IEC958/HDMI1]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: seeed2micvoicec [seeed-2mic-voicecard], device 0: bcm2835-i2s-wm8960-hifi wm8960-hifi-0 [bcm2835-i2s-wm8960-hifi wm8960-hifi-0]
  Subdevices: 1/1
  Subdevice #0: subdevice #0

c. Create a file in the /home/pi directory (setting the record and
playback devices as the default PCM devices).

sudo nano .asoundrc

e. Add the lines below to the file and save it.

pcm.!default {
    type asym
    capture.pcm "mic"
    playback.pcm "speaker"
}
pcm.mic {
    type plug
    slave {
        pcm "hw:1,0"
    }
}
pcm.speaker {
    type plug
    slave {
        pcm "hw:0,0"
    }
}

f. Use the alsamixer command to control the mic sensitivity and
speaker volume.

alsamixer

g. Run the command below to record your voice.

arecord --format=S16_LE --duration=5 --rate=16000 --file-type=raw out.raw

h. Run the command below to play the recording back and check whether
you can hear it.

aplay --format=S16_LE --rate=16000 out.raw

i. Store the alsamixer settings in the /etc/asound.state file with the
command below.

sudo alsactl store -f /etc/asound.state
(The file name must be exactly as mentioned.)
6. Update the microphone index number in main.py

cd ai_voice_assistant/
python3 check_device_id.py

Note down the input device index number for your sound card.
This script can be used to check the index number of any input device.

sudo nano main.py

Update the RESPEAKER_INDEX value with the exact input device index number.

7. Add the launcher.sh file to crontab

Type the command below:

crontab -e

Add the line below at the end of the file and save:

@reboot nohup bash /home/pi/ai_voice_assistant/launcher.sh &

8. Reboot the Raspberry Pi.


2)Action.py
import weathercom
import json
from bs4 import BeautifulSoup
import lxml
import requests
from googletrans import Translator
import wikipedia
import urllib.request
import subprocess
import logging

logger = logging.getLogger('voice assistant')

"""
Getting weather report from weather.com
"""
def weatherReport(city):
    weatherDetails = weathercom.getCityWeatherDetails(city)
    humidity = json.loads(weatherDetails)["vt1observation"]["humidity"]
    temp = json.loads(weatherDetails)["vt1observation"]["temperature"]
    phrase = json.loads(weatherDetails)["vt1observation"]["phrase"]
    return humidity, temp, phrase

"""
Perform search operation
if the content reffer any person or group, the it will check in
wikipedia
otherwise it will search in google.
"""
def google_search(search_text):
    translator = Translator()
    result = ''
    search_data = search_text
    logger.info("google_search : " + search_data)
    if "who is" in search_data or "who are" in search_data:
        search_data = search_data.split(" ")[2:]
        search_data = " ".join(search_data)
        try:
            result = wikipedia.summary(search_data, sentences=2)
        except Exception as e:
            pass
    else:
        url = "https://www.google.co.in/search?q=" + search_data
        logger.info("google_search : URL : " + url)
        try:
            search_result = requests.get(url).text
            soup = BeautifulSoup(search_result, 'html.parser')
            result_div = soup.find_all('div', class_='BNeawe s3v9rd AP7Wnd')

            if "born" in search_data:
                for i in result_div:
                    s = translator.translate(dest='en', text=i.text)
                    a = str(s).split("=")[3].split(",")
                    b = a[:len(a) - 1]
                    b = " ".join(b)

                    if "Born" in b:
                        result = b.split(":")[1:].__str__().replace("['", "").replace("']", "")
                        break
            else:
                for i in result_div:
                    s = translator.translate(dest='en', text=i.text)
                    a = str(s).split("=")[3].split(",")
                    b = a[:len(a) - 1]
                    result = " ".join(b)
                    break
        except Exception as e:
            pass
    logger.info("google_search : Search Result ::" + result)
    return result

"""
get the current date and time.
"""
def current_datetime(type):
    returndata = ''
    timeData = urllib.request.urlopen("http://worldtimeapi.org/api/ip").read()
    datetime = json.loads(timeData)["datetime"]
    date = datetime.split("T")[0]
    time = datetime.split("T")[1]

    if type == "time":
        time = time.split(".")[0]
        hr = int(time.split(":")[0])
        min = time.split(":")[1]
        # Hours 12-23 are PM; convert 13-23 to 1-11 and hour 0 to 12.
        if hr >= 12:
            suffix = "PM"
        else:
            suffix = "AM"
        if hr > 12:
            hr = hr - 12
        if hr == 0:
            hr = 12

        final_time = str(hr) + ":" + min + " " + suffix
        logger.info("current_datetime : current time : " + final_time)
        returndata = final_time

    if type == "date":
        year = date.split("-")[0]
        month_int = int(date.split("-")[1])
        day = date.split("-")[2]
        months = ['January', 'February', 'March', 'April', 'May', 'June',
                  'July', 'August', 'September', 'October', 'November',
                  'December']
        month = months[month_int - 1]

        logger.info("current_datetime : today's date : " + month + " " + day + ", " + year)
        returndata = month + " " + day + ", " + year
    return returndata

"""
Reboot raspberry pi.
"""
def reboot_server():
    command = "/usr/bin/sudo /sbin/shutdown -r now"
    process = subprocess.Popen(command.split(), stdout=subprocess.PIPE)
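The month lookup and manual string slicing in current_datetime() above can be written more compactly with the standard library. This is an illustrative sketch only, not part of the project's code; the helper name format_datetime is made up for the example:

```python
# Illustrative alternative to current_datetime()'s manual splitting:
# parse the worldtimeapi "datetime" string with the standard library.
from datetime import datetime

def format_datetime(iso_string, kind):
    # worldtimeapi returns e.g. "2023-01-12T14:05:09.123456+05:30";
    # keep only "YYYY-MM-DDTHH:MM:SS" before parsing.
    dt = datetime.strptime(iso_string[:19], "%Y-%m-%dT%H:%M:%S")
    if kind == "time":
        return dt.strftime("%I:%M %p").lstrip("0")  # e.g. "2:05 PM"
    if kind == "date":
        return dt.strftime("%B %d, %Y")             # e.g. "January 12, 2023"
    return ""

print(format_datetime("2023-01-12T14:05:09.123456+05:30", "time"))
```

strptime also validates the string, so a malformed API response raises ValueError instead of silently producing a wrong month name.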

3)Apa102.py
"""
The code is based on https://github.com/tinue/APA102_Pi
This is the main driver module for APA102 LEDs

License: GPL V2
"""

import spidev
from math import ceil

RGB_MAP = { 'rgb': [3, 2, 1], 'rbg': [3, 1, 2], 'grb': [2, 3, 1],
'gbr': [2, 1, 3], 'brg': [1, 3, 2], 'bgr': [1, 2, 3] }

class APA102:
    """
    Driver for APA102 LEDs (aka "DotStar").
    (c) Martin Erzberger 2016-2017

    My very first Python code, so I am sure there is a lot to be optimized ;)

    Public methods are:
    - set_pixel
    - set_pixel_rgb
    - show
    - clear_strip
    - cleanup

    Helper methods for color manipulation are:
    - combine_color
    - wheel

    The rest of the methods are used internally and should not be used by
    the user of the library.

    Very brief overview of APA102: An APA102 LED is addressed with SPI.
    The bits are shifted in one by one, starting with the least
    significant bit.

    An LED usually just forwards everything that is sent to its data-in
    to data-out. While doing this, it remembers its own color and keeps
    glowing with that color as long as there is power.
    An LED can be switched to not forward the data, but instead use the
    data to change its own color. This is done by sending (at least) 32
    bits of zeroes to data-in. The LED then accepts the next correct
    32-bit LED frame (with color information) as its new color setting.

    After having received the 32-bit color frame, the LED changes color,
    and then resumes to just copying data-in to data-out.

    The really clever bit is this: While receiving the 32-bit LED frame,
    the LED sends zeroes on its data-out line. Because a color frame is
    32 bits, the LED sends 32 bits of zeroes to the next LED.
    As we have seen above, this means that the next LED is now ready
    to accept a color frame and update its color.

    So that's really the entire protocol:
    - Start by sending 32 bits of zeroes. This prepares LED 1 to update
      its color.
    - Send color information one by one, starting with the color for
      LED 1, then LED 2 etc.
    - Finish off by cycling the clock line a few times to get all data
      to the very last LED on the strip.

    The last step is necessary, because each LED delays forwarding the
    data a bit. Imagine ten people in a row. When you yell the last
    color information, i.e. the one for person ten, to the first person
    in the line, then you are not finished yet. Person one has to turn
    around and yell it to person 2, and so on. So it takes ten
    additional "dummy" cycles until person ten knows the color. When you
    look closer, you will see that not even person 9 knows its own color
    yet. This information is still with person 2. Essentially the driver
    sends additional zeroes to LED 1 as long as it takes for the last
    color frame to make it down the line to the last LED.
    """
    # Constants
    MAX_BRIGHTNESS = 0b11111  # Safeguard: set to a value appropriate for your setup
    LED_START = 0b11100000  # Three "1" bits, followed by 5 brightness bits

    def __init__(self, num_led, global_brightness=MAX_BRIGHTNESS,
                 order='rgb', bus=0, device=1, max_speed_hz=8000000):
        self.num_led = num_led  # The number of LEDs in the strip
        order = order.lower()
        self.rgb = RGB_MAP.get(order, RGB_MAP['rgb'])
        # Limit the brightness to the maximum if it's set higher
        if global_brightness > self.MAX_BRIGHTNESS:
            self.global_brightness = self.MAX_BRIGHTNESS
        else:
            self.global_brightness = global_brightness

        self.leds = [self.LED_START, 0, 0, 0] * self.num_led  # Pixel buffer
        self.spi = spidev.SpiDev()  # Init the SPI device
        self.spi.open(bus, device)  # Open SPI port 0, slave device (CS) 1
        # Up the speed a bit, so that the LEDs are painted faster
        if max_speed_hz:
            self.spi.max_speed_hz = max_speed_hz

    def clock_start_frame(self):
        """Sends a start frame to the LED strip.

        This method clocks out a start frame, telling the receiving LED
        that it must update its own color now.
        """
        self.spi.xfer2([0] * 4)  # Start frame, 32 zero bits

    def clock_end_frame(self):
        """Sends an end frame to the LED strip.

        As explained above, dummy data must be sent after the last real
        colour information so that all of the data can reach its
        destination down the line.
        The delay is not as bad as with the human example above.
        It is only 1/2 bit per LED. This is because the SPI clock line
        needs to be inverted.

        Say a bit is ready on the SPI data line. The sender communicates
        this by toggling the clock line. The bit is read by the LED and
        immediately forwarded to the output data line. When the clock
        goes down again on the input side, the LED will toggle the clock
        up on the output to tell the next LED that the bit is ready.

        After one LED the clock is inverted, and after two LEDs it is in
        sync again, but one cycle behind. Therefore, for every two LEDs,
        one bit of delay gets accumulated. For 300 LEDs, 150 additional
        bits must be fed to the input of LED one so that the data can
        reach the last LED.

        Ultimately, we need to send additional numLEDs/2 arbitrary data
        bits, in order to trigger numLEDs/2 additional clock changes.
        This driver sends zeroes, which has the benefit of getting LED
        one partially or fully ready for the next update to the strip.
        An optimized version of the driver could omit the
        "clockStartFrame" method if enough zeroes have been sent as part
        of "clockEndFrame".
        """
        self.spi.xfer2([0xFF] * 4)

        # Round up num_led/2 bits (or num_led/16 bytes)
        # for _ in range((self.num_led + 15) // 16):
        #     self.spi.xfer2([0x00])

    def clear_strip(self):
        """Turns off the strip and shows the result right away."""
        for led in range(self.num_led):
            self.set_pixel(led, 0, 0, 0)
        self.show()

    def set_pixel(self, led_num, red, green, blue, bright_percent=100):
        """Sets the color of one pixel in the LED stripe.

        The changed pixel is not shown yet on the stripe, it is only
        written to the pixel buffer. Colors are passed individually.
        If brightness is not set the global brightness setting is used.
        """
        if led_num < 0:
            return  # Pixel is invisible, so ignore
        if led_num >= self.num_led:
            return  # again, invisible

        # Calculate pixel brightness as a percentage of the
        # defined global_brightness. Round up to nearest integer
        # as we expect some brightness unless set to 0
        brightness = int(ceil(bright_percent * self.global_brightness / 100.0))

        # LED start frame is three "1" bits, followed by 5 brightness bits
        ledstart = (brightness & 0b00011111) | self.LED_START

        start_index = 4 * led_num
        self.leds[start_index] = ledstart
        self.leds[start_index + self.rgb[0]] = red
        self.leds[start_index + self.rgb[1]] = green
        self.leds[start_index + self.rgb[2]] = blue

    def set_pixel_rgb(self, led_num, rgb_color, bright_percent=100):
        """Sets the color of one pixel in the LED stripe.

        The changed pixel is not shown yet on the stripe, it is only
        written to the pixel buffer.
        Colors are passed combined (3 bytes concatenated).
        If brightness is not set the global brightness setting is used.
        """
        self.set_pixel(led_num, (rgb_color & 0xFF0000) >> 16,
                       (rgb_color & 0x00FF00) >> 8, rgb_color & 0x0000FF,
                       bright_percent)

    def rotate(self, positions=1):
        """Rotate the LEDs by the specified number of positions.

        Treating the internal LED array as a circular buffer, rotate it
        by the specified number of positions. The number could be
        negative, which means rotating in the opposite direction.
        """
        cutoff = 4 * (positions % self.num_led)
        self.leds = self.leds[cutoff:] + self.leds[:cutoff]

    def show(self):
        """Sends the content of the pixel buffer to the strip.

        Todo: More than 1024 LEDs requires more than one xfer operation.
        """
        self.clock_start_frame()
        # xfer2 kills the list, unfortunately. So it must be copied first.
        # SPI takes up to 4096 integers, so we are fine for up to 1024 LEDs.
        data = list(self.leds)
        while data:
            self.spi.xfer2(data[:32])
            data = data[32:]
        self.clock_end_frame()

    def cleanup(self):
        """Release the SPI device; call this method at the end."""
        self.spi.close()  # Close SPI port

    @staticmethod
    def combine_color(red, green, blue):
        """Make one 3*8 byte color value."""
        return (red << 16) + (green << 8) + blue

    def wheel(self, wheel_pos):
        """Get a color from a color wheel; Green -> Red -> Blue -> Green."""
        if wheel_pos > 255:
            wheel_pos = 255  # Safeguard
        if wheel_pos < 85:  # Green -> Red
            return self.combine_color(wheel_pos * 3, 255 - wheel_pos * 3, 0)
        if wheel_pos < 170:  # Red -> Blue
            wheel_pos -= 85
            return self.combine_color(255 - wheel_pos * 3, 0, wheel_pos * 3)
        # Blue -> Green
        wheel_pos -= 170
        return self.combine_color(0, wheel_pos * 3, 255 - wheel_pos * 3)

    def dump_array(self):
        """For debug purposes: dump the LED array onto the console."""
        print(self.leds)
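The wire format described in the docstrings above (a 32-bit zero start frame, one 4-byte frame per LED, then extra clocking bytes) can be sketched without any SPI hardware. build_frame is a hypothetical helper for illustration only; the real driver clocks equivalent bytes out via spidev:

```python
# Sketch of the APA102 byte stream described in the driver's docstrings.
def build_frame(pixels, brightness=31):
    data = [0x00] * 4  # start frame: 32 zero bits
    for red, green, blue in pixels:
        # LED start byte: three "1" bits followed by 5 brightness bits
        data.append(0b11100000 | (brightness & 0b00011111))
        data += [blue, green, red]  # assuming 'bgr' wire order here
    # End frame: extra clock cycles so the data reaches the last LED;
    # roughly one bit per two LEDs, i.e. ceil(num_led/16) bytes of zeroes.
    data += [0x00] * -(-len(pixels) // 16)
    return data

frame = build_frame([(255, 0, 0), (0, 255, 0)])
```

Note that the driver above actually sends four 0xFF bytes as its end frame; the exact end-frame bytes are arbitrary, since only the clock edges matter.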

4)check_device_id.py
import pyaudio

p = pyaudio.PyAudio()
info = p.get_host_api_info_by_index(0)
numdevices = info.get('deviceCount')
for i in range(0, numdevices):
    if (p.get_device_info_by_host_api_device_index(0, i).get('maxInputChannels')) > 0:
        print("Input Device id: " + str(i) + " - " +
              str(p.get_device_info_by_host_api_device_index(0, i).get('name')))

5)launcher.sh
#!/bin/sh
sleep 3
sudo alsactl restore -f /etc/asound.state
sleep 10
python3 /home/pi/ai_voice_assistant/main.py > /mnt/ramdisk/voice.txt 2>&1

6)main.py
import speech_recognition as sr
import pyaudio
import time
import wave
import threading
import os
from pixels import Pixels
import valib
import response
import glob
import logging

r = sr.Recognizer()

RESPEAKER_RATE = 44100  # Sample rate of the mic.
RESPEAKER_CHANNELS = 1  # Number of channels of the input device.
RESPEAKER_WIDTH = 2
RESPEAKER_INDEX = 0  # Run check_device_id.py to get the mic index.
CHUNK = 1024  # Number of frames per buffer.
WAVE_OUTPUT_FILEPATH = "/mnt/ramdisk/"  # Directory location of all the output files.
recognized_text = ''  # Global variable for storing the audio-converted text.

class voice:
    """
    __init__ method will create a pyaudio stream object
    for the entire session. This stream will be used
    every time for voice detection from the microphone.
    """
    def __init__(self):
        self.p = pyaudio.PyAudio()
        self.stream = self.p.open(
            rate=RESPEAKER_RATE,
            format=pyaudio.paInt16,
            input_device_index=RESPEAKER_INDEX,
            channels=RESPEAKER_CHANNELS,
            input=True,
            frames_per_buffer=CHUNK)

"""
process() method reads data from pyaudio stream for given
duration.
After read, it creates audio frame and save it to .wav file.
it generates new WAV file every time it gets called.
"""
def process(self, RECORD_SECONDS):
frames = []
for i in range(0, int(RESPEAKER_RATE / CHUNK *
RECORD_SECONDS)):
data = self.stream.read(CHUNK,
exception_on_overflow=False)
frames.append(data)

out_filename = WAVE_OUTPUT_FILEPATH +
str(time.time()) + ".wav"
wf = wave.open(out_filename, 'wb')
wf.setnchannels(RESPEAKER_CHANNELS)

wf.setsampwidth(self.p.get_sample_size(self.p.get_format_from_
width(RESPEAKER_WIDTH)))
wf.setframerate(RESPEAKER_RATE)
wf.writeframes(b''.join(frames))
wf.close()
return out_filename

"""
voice_command_processor() method reads data from .wav file
and convert into text.
it is using speech_recognition library and recognize_google
option to convert speech
into text.
"""
def voice_command_processor(self, filename):
global recognized_text
with sr.AudioFile(filename) as source:
#r.adjust_for_ambient_noise(source=source,
duration=0.5)
wait_time = 3
while True:
audio = r.record(source, duration=3)
if audio:
break
time.sleep(1)
wait_time = wait_time - 1
if wait_time == 0:
break

try:
recognized_text = r.recognize_google(audio)
except sr.UnknownValueError as e:
pass
except sr.RequestError as e:
logger.error("service is down")
pass
os.remove(filename)
return recognized_text

px = Pixels()  # Initializing the Pixels class for the ReSpeaker PiHAT LED.
px.wakeup()
time.sleep(2)
px.off()

a = voice() # Initializing the voice class.

"""
Infinite loop:
1. Reading microphone for 3 sec and generation .wav file.
2. Creating thread with voice_command_processor() method
for converting speech to text.
3. IF wake word is detected (in my case Gideon):

a. Clearing recognized_text global variable.


b. Turing on the LED.
c. Audio reply with "how can i help you"
d. Start reading from pyaudio stream for next 5 sec for
question.
e. Convert the audio to text using
voice_command_processor().
f. Process the text using process_text() method from
response.py.
g. once the processing done, it will remove all the files from
the output directory.
f. turn off the LED.
"""
if __name__ == '__main__':
    logger = logging.getLogger('voice assistant')
    logger.setLevel(logging.DEBUG)
    fh = logging.FileHandler("/mnt/ramdisk/voice.log")
    fh.setLevel(logging.DEBUG)
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    logger.addHandler(fh)
    while True:
        file_name = a.process(3)
        logger.info("wake_word said :: " + recognized_text)
        # print("wake_word said :: " + recognized_text)
        if "Gideon" in recognized_text:
            logger.info("wake word detected...")
            recognized_text = ''
            px.wakeup()
            valib.audio_playback('how can i help you')
            time.sleep(0.5)
            command_file_name = a.process(5)
            a.voice_command_processor(command_file_name)
            logger.info("you said :: " + recognized_text)
            px.think()
            status = response.process_text(recognized_text, a)
            while status != 'done':
                pass

            files = glob.glob(os.path.join(WAVE_OUTPUT_FILEPATH + '*.wav'))
            for file in files:
                os.remove(file)
            recognized_text = ''
            px.off()
        else:
            t1 = threading.Thread(target=a.voice_command_processor, args=(file_name,))
            t1.start()
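The wake-word flow in the loop above can be reduced to pure logic for testing without a microphone. run_session is a hypothetical illustration: each transcript stands for one 3-second recognition pass, and the transcript right after the wake word plays the role of the 5-second command capture:

```python
# Pure-logic sketch of main.py's wake-word loop (no audio hardware).
def run_session(transcripts, wake_word="Gideon"):
    commands = []
    it = iter(transcripts)
    for text in it:
        if wake_word in text:
            # in main.py this corresponds to the follow-up 5-second capture
            commands.append(next(it, ""))
    return commands

print(run_session(["hello there", "hey Gideon", "what time is it"]))
```

Separating the detection logic from the audio I/O like this makes it easy to unit-test the loop's behavior before deploying to the Pi.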
7)pixels.py
"""
LED light pattern like Google Home
"""

import apa102
import time
import threading
try:
    import queue as Queue
except ImportError:
    import Queue as Queue

class Pixels:
    PIXELS_N = 3

    def __init__(self):
        self.basis = [0] * 3 * self.PIXELS_N
        self.basis[0] = 2
        self.basis[3] = 1
        self.basis[4] = 1
        self.basis[7] = 2

        self.colors = [0] * 3 * self.PIXELS_N
        self.dev = apa102.APA102(num_led=self.PIXELS_N)

        self.next = threading.Event()
        self.queue = Queue.Queue()
        self.thread = threading.Thread(target=self._run)
        self.thread.daemon = True
        self.thread.start()

    def wakeup(self, direction=0):
        def f():
            self._wakeup(direction)

        self.next.set()
        self.queue.put(f)

    def listen(self):
        self.next.set()
        self.queue.put(self._listen)

    def think(self):
        self.next.set()
        self.queue.put(self._think)

    def speak(self):
        self.next.set()
        self.queue.put(self._speak)

    def off(self):
        self.next.set()
        self.queue.put(self._off)

    def _run(self):
        while True:
            func = self.queue.get()
            func()
    def _wakeup(self, direction=0):
        for i in range(1, 25):
            colors = [i * v for v in self.basis]
            self.write(colors)
            time.sleep(0.01)

        self.colors = colors

    def _listen(self):
        for i in range(1, 25):
            colors = [i * v for v in self.basis]
            self.write(colors)
            time.sleep(0.01)

        self.colors = colors

    def _think(self):
        colors = self.colors

        self.next.clear()
        while not self.next.is_set():
            colors = colors[3:] + colors[:3]
            self.write(colors)
            time.sleep(0.2)

        t = 0.1
        for i in range(0, 5):
            colors = colors[3:] + colors[:3]
            self.write([(v * (4 - i) / 4) for v in colors])
            time.sleep(t)
            t /= 2

        # time.sleep(0.5)

        self.colors = colors

    def _speak(self):
        colors = self.colors
        gradient = -1
        position = 24

        self.next.clear()
        while not self.next.is_set():
            position += gradient
            self.write([(v * position / 24) for v in colors])

            if position == 24 or position == 4:
                gradient = -gradient
                time.sleep(0.2)
            else:
                time.sleep(0.01)

        while position > 0:
            position -= 1
            self.write([(v * position / 24) for v in colors])
            time.sleep(0.01)

    # self._off()
    def _off(self):
        self.write([0] * 3 * self.PIXELS_N)

    def write(self, colors):
        for i in range(self.PIXELS_N):
            self.dev.set_pixel(i, int(colors[3 * i]), int(colors[3 * i + 1]),
                               int(colors[3 * i + 2]))

        self.dev.show()
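The _speak() animation above "breathes" the LEDs by walking a brightness position between 4 and 24 and reversing direction at either end. A pure sketch of that bounce (bounce_positions is a hypothetical helper for illustration; no LED hardware involved):

```python
# Pure sketch of the brightness "bounce" used by Pixels._speak().
def bounce_positions(steps, start=24, low=4, high=24):
    positions, pos, grad = [], start, -1
    for _ in range(steps):
        pos += grad
        positions.append(pos)
        if pos == high or pos == low:
            grad = -grad  # reverse direction at either end, like _speak()
    return positions

seq = bounce_positions(40)
```

In the real method, each position is multiplied into the current color values and written to the strip, with a longer pause at the turning points.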

8)response.py
"""
This file process converted text and perform actions accordingly.
This file can be extended with more action.
"""
import valib as va
import action as a
import time
import logging

logger = logging.getLogger('voice assistant')

def process_text(text, pa):
    """
    Asking "who are you?"
    """
    if "who are you" in text:
        va.audio_playback("i am a i voice assistant system")

    """
    Asking about weather information.
    """
    if "weather" in text:
        va.audio_playback("which city")
        time.sleep(0.5)
        file_name = pa.process(3)
        city = pa.voice_command_processor(file_name)
        logger.info("process_text : City :: " + city)
        try:
            humidity, temp, phrase = a.weatherReport(city)
            va.audio_playback(
                "currently in " + city + " temperature is " + str(temp) +
                " degree celsius, humidity is " + str(humidity) +
                " percent and sky is " + phrase)
            logger.info("currently in " + city + " temperature is " +
                        str(temp) + " degree celsius, humidity is " +
                        str(humidity) + " percent and sky is " + phrase)
        except KeyError as e:
            va.audio_playback("sorry, i couldn't get the location")

    """
    Asking to search for something, like:
    "what is raspberry pi", "who is isaac newton", etc.
    """
    if "search" in text or "Search" in text:
        va.audio_playback("tell me what to search")
        time.sleep(0.5)
        file_name = pa.process(5)
        search_data = pa.voice_command_processor(file_name)
        try:
            result = a.google_search(search_data)
            if result:
                va.audio_playback(result)
            else:
                va.audio_playback("sorry, i couldn't find any result for " + search_data)
        except KeyError as e:
            va.audio_playback("sorry, i couldn't find any result for " + search_data)

    """
    Asking about the current time.
    """
    if "time" in text or "Time" in text:
        current_time = a.current_datetime("time")
        va.audio_playback("right now it is " + current_time)

    """
    Asking about today's date.
    """
    if "date" in text or "Date" in text:
        date = a.current_datetime("date")
        va.audio_playback("today it is " + date)

    """
    Asking to reboot the voice assistant system.
    """
    if "reboot" in text or "Reboot" in text:
        va.audio_playback("ok.. rebooting the server")
        a.reboot_server()

    return "done"
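The comment at the top of this file says it can be extended with more actions. A minimal sketch of what adding one more intent looks like; the "hello" intent and the speak() stub are made up for the example, and in the real file the reply would go through va.audio_playback() instead:

```python
# Sketch of extending process_text() with one additional intent.
spoken = []

def speak(text):
    spoken.append(text)  # stand-in for va.audio_playback(text)

def process_text(text):
    # the existing intents (weather, search, time, ...) would sit here;
    # a new intent is just one more keyword check
    if "hello" in text or "Hello" in text:
        speak("hello, nice to meet you")
    return "done"

status = process_text("hello assistant")
```

Each intent follows the same pattern as the real file: match a keyword in the recognized text, optionally capture a follow-up utterance, then speak the result.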

9)setup.sh
#!/bin/bash

sudo apt-get update
yes | sudo apt-get upgrade

echo "Installing packages ............................................................"


echo "Installing package: python3-pip ......................"
yes | sudo apt-get install python3-pip
echo "Installing package: speechrecognition ......................"
yes | sudo pip3 install speechrecognition
echo "Installing package: weathercom ......................"
yes | sudo pip3 install weathercom
echo "Installing package: portaudio19-dev ......................"
yes | sudo apt-get install portaudio19-dev
echo "Installing package: gTTS ......................"
yes | sudo pip3 install gTTS
echo "Installing package: pyaudio ......................"
yes | sudo pip3 install pyaudio
echo "Installing package: libsdl-ttf2.0-0 ......................"
yes | sudo apt-get install libsdl-ttf2.0-0
echo "Installing package: libsdl-mixer1.2 ......................"
yes | sudo apt-get install libsdl-mixer1.2
echo "Installing package: flac ......................"
yes | sudo apt-get install flac
echo "Installing package: pygame ......................"
yes | sudo pip3 install pygame
echo "Installing package: beautifulsoup4 ......................"
yes | sudo pip3 install beautifulsoup4
echo "Installing package: lxml ......................"
yes | sudo pip3 install lxml
echo "Installing package: googletrans ......................"
yes | sudo pip3 install googletrans
echo "Installing package: wikipedia ......................"
yes | sudo pip3 install wikipedia
echo "Installing package: spidev ......................"
yes | sudo pip3 install spidev

echo "Installing sound card: seeed ....................."
git clone https://github.com/respeaker/seeed-voicecard.git
cd seeed-voicecard
sudo ./install.sh --compat-kernel

10)valib.py
from gtts import gTTS
from pygame import mixer
import os

AUDIO_PLAYBACK_FILENAME = "/mnt/ramdisk/audio_play_back.mp3"

def audio_playback(text):
    tts = gTTS(text=text, lang='en-us')
    tts.save(AUDIO_PLAYBACK_FILENAME)
    mixer.init()
    mixer.music.load(AUDIO_PLAYBACK_FILENAME)
    mixer.music.play()
    while mixer.music.get_busy():
        pass
    os.remove(AUDIO_PLAYBACK_FILENAME)
Voice recognition using raspberry Pi
https://www.instructables.com/Voice-Recognition-Raspberry-Pi/

16GB Micro SD – https://goo.gl/FDqZal
Voice Recognition Kit: https://goo.gl/TgtFk5
(ALT) Voice Recognition Kit: https://goo.gl/HdgVA2
RPI 3 – https://goo.gl/CdVNoH
4 Amp Power Adapter – https://goo.gl/js4Uc7

Make Your Raspberry Pi Speak
https://www.instructables.com/Make-your-Raspberry-Pi-speak/

Raspberry Pi 4 Animatronic BD-1 Companion Robot #StarWars
Video link:- https://youtu.be/I2XKRSir-Oo
Website link:- https://bit.ly/2WduZ00

Raspberry PI Amazon Alexa Voice Service | Artificial Intelligence
https://youtu.be/htVdRZN5JjY
Hey friends, in this video we will make Amazon Alexa with a Raspberry Pi. Visit my website for the detailed procedure and commands:- https://www.viralsciencecreativity.co...

Materials required:-
Raspberry Pi 3B+ :- https://amzn.to/2EvusgE
USB Microphone:- https://amzn.to/2VZzZH3
Speakers:- https://amzn.to/2EIvwOr
SD Card 32GB:- https://amzn.to/2HBTCfh
Card Reader:- https://amzn.to/2JD1f7C
HDMI Cable:- https://amzn.to/2M8VZdO
5v Power Adapter:- https://amzn.to/2ExV7JF

how to create a raspberry pi voice chat bot using chat gpt - YouTube : Search Link

How to Use ChatGPT Voice Assistant raspberry pi 4 | install ChatGPT on raspberry pi 4 - YouTube

| SL No | Components                    | Quantity | Unit Price | Total    | Link    | Use/Comment   |
|-------|-------------------------------|----------|------------|----------|---------|---------------|
| 1     | Raspberry Pi 4 4GB            | 1        | 4,617.93   | 4,617.93 | Buy Now | Main computer |
| 2     | Micro SD card 64GB class 10   | 1        | 439        | 439      | Buy Now |               |
| 3     | Power Bank 10000mAh           | 1        | 1,249      | 1,249    | Buy Now |               |
| 4     | USB Microphone compact        | 1        | 1,621      | 1,621    | Buy Now |               |
| 5     | Compact Speakers for Rasp Pi  | 1        | 349        | 349      | Buy Now |               |
| 6     | Card reader                   | 1        | 399        | 399      | Buy Now |               |
| 7     | USB C Cable                   | 1        | 529        | 529      | Buy Now |               |
| 8     | Optional humanoid face        | 0        | 0          | 0        |         |               |
Mycroft link
Introduction video: https://www.youtube.com/watch?v=tcI8ibjUOzg
26 Sep

Mycroft: NLP
Mycroft is more than a stand-alone device. It is an open platform that allows developers to add
natural language processing to almost anything. Here we show what Mycroft will be capable of in 2018.
Mycroft will span all of your devices and provide seamless interaction on your desktop, mobile
device, embedded speaker or automobile. It is more than a voice interface or a simple voice-control
system; Mycroft is an AI for everyone.

Simplified 3D Printed Animatronic Dual Eye Mechanism


http://www.nilheim.co.uk/latest-projects-and-blog/simplified-3d-printed-animatronic-dual-eye-
mechanism

Servo Smoothing
https://github.com/XRobots/ServoSmoothing
Chat bot using Chat GPT api
https://youtu.be/4qNwoAAfnk4 : Madhusudhan watch this
● $18 of free API credit

How to Resolve the 'npm' is not recognized Error in Windows 10

First, download Node.js for Windows:

https://nodejs.org/en/download/current

Check Current Version of Node.js

Important: If you still have a command prompt (or any type of terminal) open that was
opened before you installed Node.js, close and re-open your terminal.

After you have successfully installed Node.js, run the following command in your
terminal to check that Node.js is installed properly:

node -v

You should get an output of the current installation version of Node.js:

v16.13.2

Note: If you get the error The term 'node' is not recognized, ensure that you
have closed and re-opened all terminals and text editors. A system restart may
help if the error persists.

Adding the Node.js Installation Directory to your System Path

If the issue still isn't resolved, you will need to add Node.js to your system path.

Here's how to add Node.js to your system path:

Step 1: Find the path to your Node.js installation

The first step is to find the path to your Node.js installation. Depending on how you
installed Node.js, the location of the installation files may vary. If you used the official
Node.js installer, the default installation directory should be C:\Program Files\nodejs.

If you installed Node.js in a different location, you'll need to locate the installation
directory manually. You can do this by searching your computer for the node.exe file.

Step 2: Open the Environment Variables settings

Next, you need to open the Environment Variables settings on your computer. To do
this in Windows 10, follow these steps:

1. Click on the Start menu and search for "Environment Variables".


2. Click on the "Edit the system environment variables" option.
3. In the System Properties window that appears, click on the "Environment
Variables" button.

Step 3: Edit the System Path variable

In the Environment Variables window, locate the "System Variables" section and scroll
down until you find the "Path" variable. Select the "Path" variable and click on the
"Edit" button.

In the Edit Environment Variable window that appears, click on the "New" button and
enter the path to your Node.js installation directory. Make sure to separate the path
from any existing paths in the variable with a semicolon (;).
For example, if your Node.js installation directory is C:\Program Files\nodejs, you
would add the following path to the System Path variable:
C:\Program Files\nodejs;
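The semicolon rule above can be illustrated with a short, hedged Python sketch (the helper append_to_path is my own illustration, not a Windows API; on Windows the separator is ";", and os.pathsep gives the platform's separator):

```python
# PATH-style variables are one string of directories joined by a separator:
# ";" on Windows, ":" on POSIX.  This sketch appends a directory the same
# way the Environment Variables dialog does, skipping duplicates.
def append_to_path(path_value, new_dir, sep=";"):
    entries = [e for e in path_value.split(sep) if e]
    if new_dir not in entries:
        entries.append(new_dir)
    return sep.join(entries)

demo = append_to_path(r"C:\Windows\system32", r"C:\Program Files\nodejs")
print(demo)  # C:\Windows\system32;C:\Program Files\nodejs
```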

Step 4: Save the changes and test your installation

Once you've added the Node.js installation directory to your system path, click on the
"OK" button to save the changes. You may need to restart your command prompt or
terminal for the changes to take effect.

To test that Node.js is properly added to your system path, open a new command
prompt or terminal and run the following command:
node -v

This should output the current version of Node.js that is installed on your computer.

And that's it! You've successfully added Node.js to your system path. Now you can
start using Node.js commands in the terminal or command prompt without having to
specify the full path to the node.exe file every time.
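If you prefer to verify the PATH fix from Python rather than by running node -v, a small sketch using the standard library's shutil.which (the function name node_on_path is my own) will report where, if anywhere, the executable was found:

```python
import shutil

def node_on_path():
    """Return the full path to the node executable if it is on PATH, else None."""
    return shutil.which("node")

location = node_on_path()
if location:
    print(f"node found at: {location}")
else:
    print("node is not on PATH; re-check the Environment Variables steps above")
```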

How to Build Your Own AI Chatbot With ChatGPT API

https://beebom.com/how-build-own-ai-chatbot-with-chatgpt-api/ sir check this website

Python PermissionError: [WinError 5] Access Is Denied


● Python PermissionError: [WinError 5] Access Is Denied | Delft Stack
● Run cmd as admin

key : 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
C:\Users\User\Desktop>python open_ai.py
Traceback (most recent call last):
File "C:\Users\User\Desktop\open_ai.py", line 20, in <module>
inputs = gr.inputs.Textbox(lines=7, label="Chat with AI")
^^^^^^^^^
AttributeError: module 'gradio' has no attribute 'inputs'

import openai
import gradio as gr

# Note: openai.ChatCompletion is the pre-1.0 interface of the openai package.
openai.api_key = "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"

messages = [
    {"role": "system", "content": "You are a helpful and kind AI Assistant."},
]

def chatbot(input):
    if input:
        messages.append({"role": "user", "content": input})
        chat = openai.ChatCompletion.create(
            model="gpt-3.5-turbo", messages=messages
        )
        reply = chat.choices[0].message['content']
        messages.append({"role": "assistant", "content": reply})
        return reply

# Check that you have a current version of Gradio installed
# (gr.inputs.Textbox was removed; use gr.Textbox directly):
# pip install -U gradio
inputs = gr.Textbox(lines=7, label="Chat with AI")
outputs = gr.Textbox(label="Reply")

gr.Interface(fn=chatbot, inputs=inputs, outputs=outputs, title="AI Chatbot",
             description="Ask anything you want",
             theme="compact").launch(share=True)

C:\Users\User\Desktop>python open_ai2.py
C:\Python312\Lib\site-packages\gradio\blocks.py:528: UserWarning: Cannot load compact.
Caught Exception: The space compact does not exist
warnings.warn(f"Cannot load {theme}. Caught Exception: {str(e)}")
Running on local URL: http://127.0.0.1:7860
Running on public URL: https://0be27924034504f356.gradio.live

This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run
`gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
Additional resources for gradio
Gradio tutorial (Build machine learning applications) (machinelearningnuggets.com)
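One limitation of the chatbot script above: it appends every exchange to the messages list forever, which will eventually exceed the model's context window. A hedged sketch of one way to cap the history (trim_history and the 20-turn limit are my own illustration, not from the original code):

```python
def trim_history(messages, max_turns=20):
    """Keep the system prompt plus only the most recent messages.

    `messages` is a list of {"role": ..., "content": ...} dicts whose first
    entry is the system prompt; `max_turns` caps the non-system tail.
    """
    system = messages[:1]             # always preserve the system prompt
    tail = messages[1:][-max_turns:]  # most recent user/assistant turns
    return system + tail

# Build a fake 30-turn history to show the trim.
history = [{"role": "system", "content": "You are helpful."}]
history += [{"role": "user", "content": f"msg {i}"} for i in range(30)]
print(len(trim_history(history)))  # 21: system prompt + 20 recent turns
```

Calling trim_history on messages before each ChatCompletion request would keep the payload bounded.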

1/12/2023
RASPBERRY PI GOOGLE ASSISTANT

https://youtu.be/9T3MOY9GivQ youtube tutorial

https://techiesms.com/google-assistant-on-raspberry-pi/ code

(env) robot@raspberrypi:~ $ python -m pip install google-assistant-library

Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
ERROR: Could not find a version that satisfies the requirement google-assistant-library (from versions: none)
ERROR: No matching distribution found for google-assistant-library

python -m pip install --upgrade google-assistant-library

Note: google-assistant-library was deprecated by Google and only ever shipped wheels for older Python versions, so the --upgrade command fails the same way; the error is not a local misconfiguration.

Sir, this is the new Mycroft open-source AI software:

https://youtu.be/eg_V56BUmh0

Spare

https://youtu.be/4luTnRpfgbs
