Frigate and HA Into Proxmox v2.00
Preamble
This is my best recollection of how I completed the installation of Frigate and Home Assistant on
Proxmox. Unfortunately, there are many steps to the install, so please take care to follow them carefully,
and be aware that I may have documented some steps poorly.
Some of the following material I have created, but most of it is copied, pasted, and edited from a
variety of sources that I believe to be reliable. Use at your own risk; I have tried to be as accurate and
clear as possible for the modestly familiar user.
Before you start, here are some references that might be helpful:
• Running Frigate in Docker in Proxmox LXC with remote nas Share (cifs) · Discussion #1111 ·
blakeblackshear/frigate (github.com)
• Get started with the M.2 or Mini PCIe Accelerator | Coral
• https://github.com/blakeblackshear/frigate/discussions/1111#discussioncomment-2085971
• Google Coral USB + Frigate + PROXMOX - Third party integrations - Home Assistant Community
(home-assistant.io)
c. See: M2 EdgeTPU (Coral AI) Problem | Proxmox Support Forum
d. Reboot!
13. Verify the Coral drivers are working with:
a. ls /dev/apex*
You should get something like /dev/apex_0. If not, stop here and figure
out what is not working. I had to repeat the dkms install for some reason before I
could get it to work. Also, don’t forget to reboot!
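For reference, the dkms repeat that fixed it for me boils down to reinstalling the Coral driver packages so the kernel module gets rebuilt. The package names below come from the Coral “Get started” guide linked in the references; treat this as a sketch and adjust if your original install used different packages:

```shell
# Rebuild/reinstall the Coral PCIe (gasket) driver via dkms.
# Assumes the google-coral apt repo is still configured from the original install.
apt reinstall gasket-dkms libedgetpu1-std
dkms status   # should list a gasket module built for the running kernel
reboot
# After the reboot, /dev/apex_0 (and /dev/apex_1 for a dual TPU) should exist.
```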
apt update
apt upgrade
o Run the commands below, referring to this: Install Docker Engine on Debian | Docker
Documentation
(Remove “sudo” from the commands since you are already logged in as root)
mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
apt-get update
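My notes stop at apt-get update; per the Docker Engine install page referenced above, the engine itself is then installed from the repo that was just added (package list per Docker’s Debian docs at the time of writing):

```shell
# Install Docker Engine, the CLI, containerd, and the compose plugin,
# then confirm the daemon came up.
apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
systemctl status docker
```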
Identify the timezone you want to configure and create the symbolic link for it;
for instance, to set the America/Chicago timezone (-f replaces any existing /etc/localtime):
ln -sf /usr/share/zoneinfo/America/Chicago /etc/localtime
date
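If you want to rehearse the symlink step without touching the real /etc/localtime, the same command works against a scratch path (the /tmp/tzdemo path here is purely for illustration; -f lets the link be replaced if it already exists):

```shell
# Rehearse the timezone symlink in a scratch directory, then read it back.
mkdir -p /tmp/tzdemo
ln -sf /usr/share/zoneinfo/America/Chicago /tmp/tzdemo/localtime
readlink /tmp/tzdemo/localtime
```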
3. Confirm container’s IP
ip addr
4. Wait a few moments for Portainer to start, then open your browser and
log in to Portainer:
http://<container ip>:9000
5. Log in as “admin” and set the password
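My notes don’t show the original Portainer deployment command; per Portainer’s “Install with Docker” documentation, the first-time setup is along these lines (the volume name and ports match the upgrade commands later in this document):

```shell
# Create the persistent data volume, then run Portainer CE on the
# standard ports (9000 HTTP UI, 9443 HTTPS UI, 8000 edge agent).
docker volume create portainer_data
docker run -d -p 8000:8000 -p 9443:9443 -p 9000:9000 --name=portainer \
  --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```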
▪ For some reason, adding the share to the container from the GUI does not
work, so I edited the container’s config file directly:
Via the Proxmox host shell, go to /etc/pve/lxc and edit the container file via nano 10x.conf (choose the
right number for your LXC container). Adjust as needed to match your mount point, Coral devices, memory,
etc.
(I have little knowledge of how this file is interpreted, but for me it works.)
arch: amd64
cores: 4
features: mount=nfs;cifs,nesting=1
hostname: docker
memory: 8196
mp0: hubfiles:100/vm-100-disk-0.raw,mp=/media/hubfiles,backup=1,size=1000G
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=0A:87:64:D6:C1:80,ip=dhcp,type=veth
ostype: debian
rootfs: local-lvm:vm-100-disk-0,size=128G
swap: 2048
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.cgroup2.devices.allow: c 29:0 rwm
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.cgroup2.devices.allow: c 120:* rwm
lxc.cgroup2.devices.allow: a
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file 0 0
lxc.mount.entry: /dev/bus/usb/002/ dev/bus/usb/002/ none bind,optional,create=dir 0 0
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/fb0 dev/fb0 none bind,optional,create=file
lxc.mount.entry: /dev/apex_0 dev/apex_0 none bind,optional,create=file 0 0
lxc.mount.entry: /dev/apex_1 dev/apex_1 none bind,optional,create=file 0 0
lxc.apparmor.profile: unconfined
lxc.cap.drop:
lxc.mount.auto: cgroup:rw
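The lxc.cgroup2.devices.allow lines above are keyed to device major numbers (226 = DRM/GPU, 29 = framebuffer, 189 = USB; 120 covers the apex devices on my host). If your numbers differ, you can check them on the Proxmox host before editing the file; this is just an inspection sketch:

```shell
# The two comma-separated numbers before the date are major, minor; the major
# must match the "c <major>:<minor>" values in the devices.allow lines.
ls -l /dev/dri/renderD128 /dev/fb0 /dev/apex_* 2>/dev/null
# e.g. "crw-rw---- 1 root ... 226, 128 ... /dev/dri/renderD128" -> c 226:128
```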
services:
  frigate:
    container_name: frigate
    privileged: true # this may not be necessary for all setups
    restart: unless-stopped
    image: blakeblackshear/frigate:stable
    devices:
      - /dev/bus/usb:/dev/bus/usb # passes the USB Coral, needs to be modified for other versions
      - /dev/apex_0:/dev/apex_0 # passes a PCIe Coral
      - /dev/apex_1:/dev/apex_1 # passes a PCIe Coral
      - /dev/dri/renderD128 # for intel hwaccel, needs to be updated for your hardware
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /media/hubfiles/frigate/frigate.yml:/config/config.yml:ro # modify to match your setup
      - /media/hubfiles/frigate:/media/frigate # modify to match your setup
      - /root/frigate:/db # modify to match your setup
      - type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
        target: /tmp/cache
        tmpfs:
          size: 1000000000
    ports:
      - "5000:5000"
    environment:
      FRIGATE_RTSP_PASSWORD: "password" # modify to whatever if using rtsp
      TZ: America/Chicago
Note: until Home Assistant, and thus MQTT, are running, Frigate will not run
xz -d -v (yourfilename).qcow2.xz
5. Import the file into the VM, e.g. 102, (From the same proxmox host command line):
qm importdisk 102 ./haos_ova-9.3.qcow2 local-lvm --format qcow2
6. From the Proxmox UI, go back into the VM to attach this new disk to your VM
Double click on the unused disk
Check SSD emulation, and add the disk
7. For the new vm, select “options” and update VM boot options
Set SCSI drive to first and enable it, disable other boot options
8. Start VM
9. Open the VM console – you’ll note that the boot might fail because Secure Boot is
turned on in the BIOS.
a. If so, then in the VM’s console, type “exit”. This should bring you to the BIOS
config page. Navigate to the Secure Boot setting and turn it off.
10. Let the startup process proceed. Note the IP address for Home Assistant in the VM’s
console. Open a browser at http://<that ip>:8123 and you should be good to go.
Refer to this site: How to: Update/Upgrade Proxmox VE 7.x (PVE 7.x) > Blog-D without Nonsense
(dannyda.com)
• Log in to the Proxmox VE web GUI, navigate to Datacenter -> node/cluster name -> Updates, click on
“Refresh”, then click on “>_ Upgrade”
• If the kernel changes, then follow these steps to reinstall the coral drivers:
1. First see if the coral drivers are installed with “ls /dev/apex*”. If apex_0 and/or apex_1
exist then you are done - there is no need to reinstall the coral drivers.
2. The pve-headers must be installed on the proxmox host before installing the coral
drivers:
a. open: nano /etc/apt/sources.list
b. add this line to the file (refer to Package Repositories - Proxmox VE):
c. If there are problems, see: M2 EdgeTPU (Coral AI) Problem | Proxmox Support Forum
d. Reboot!
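The repository line itself didn’t survive in my notes; per the Proxmox “Package Repositories” page, the no-subscription repo for PVE 7.x on Debian Bullseye is the line shown below, and the header install that step 2 calls for would then follow. Treat both as a sketch and check them against the Proxmox page for your version:

```shell
# Line to add to /etc/apt/sources.list (PVE 7.x / Bullseye no-subscription repo):
#   deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription
# Then install the headers matching the running kernel:
apt update
apt install pve-headers-$(uname -r)
```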
1. Log into Docker using the console on Proxmox for the Docker instance, user=root.
2. Referring to: Upgrading on Docker Standalone - Portainer Documentation follow these steps:
a. docker stop portainer
b. docker rm portainer
c. docker pull portainer/portainer-ce:latest
d. docker run -d -p 8000:8000 -p 9443:9443 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest
3. At this point, the newest version of Portainer should be running on your system, using the
persistent data from the previous version; it will also upgrade the Portainer database to the
new version.
4. Done
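A quick sanity check after the upgrade, before calling it done:

```shell
# Confirm the new Portainer container is up and note the image it runs.
docker ps --filter name=portainer --format '{{.Image}} {{.Status}}'
```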
Docker
Updating Docker is as easy as updating Portainer; follow these steps:
1. Log into Docker using the console on Proxmox for the Docker instance, user=root.
2. Enter these commands:
a. apt-get update
b. apt upgrade
c. If a text input screen pops up during the update process, select “no configuration
change”
3. check that docker is running with “systemctl status docker”
Individual containers
To update a container with a new “latest” build (e.g., Frigate, wyze-bridge), simply go to the container’s
“stack”, select the “editor” tab at the top of the page, then select “Update the stack” at the
bottom. In the resulting pop-up, select “re-pull image and redeploy”.
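If you ever need the same re-pull/redeploy without the Portainer GUI, the standard docker compose equivalent (run from the directory holding the stack’s compose file) is:

```shell
# Re-pull the images referenced by the compose file, then recreate any
# containers whose image has changed.
docker compose pull
docker compose up -d
```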
Note, however, that for Frigate 12.x, the compose file needs a modification to pull the image from a
new, non-Docker Hub location: ghcr.io/blakeblackshear/frigate:stable
The docker compose file I use to create the stack for Frigate 12.x looks like this:
services:
  frigate:
    container_name: frigate
    privileged: true # this may not be necessary for all setups
    restart: unless-stopped
    image: ghcr.io/blakeblackshear/frigate:stable
    devices:
      - /dev/bus/usb:/dev/bus/usb # passes the USB Coral, needs to be modified for other versions
      - /dev/apex_0:/dev/apex_0 # passes a PCIe Coral
      - /dev/apex_1:/dev/apex_1 # passes a PCIe Coral
      - /dev/dri/renderD128 # for intel hwaccel, needs to be updated for your hardware
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /media/hubfiles/frigate/frigate.yml:/config/config.yml # modify to match your setup
      - /media/hubfiles/frigate:/media/frigate # modify to match your setup
      - /root/frigate:/db # modify to match your setup
      - type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
        target: /tmp/cache
        tmpfs:
          size: 1000000000
    ports:
      - "5000:5000"
    environment:
      FRIGATE_RTSP_PASSWORD: "password" # modify to whatever if using rtsp
      TZ: America/Chicago