Kubeinstall

This document is a terminal transcript of preparing an Ubuntu 24.04 (noble) machine for a Kubernetes installation. It covers updating and upgrading packages with apt, disabling swap, adding Docker's apt repository and installing docker-ce, docker-ce-cli and containerd.io, and then attempting to add the apt.kubernetes.io repository to install kubelet, kubeadm and kubectl, which fails because that repository no longer publishes a Release file.


To run a command as administrator (user "root"), use "sudo <command>".

See "man sudo_root" for details.

admin-tccl@Admin:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 4K 1 loop /snap/bare/5
loop1 7:1 0 74.3M 1 loop /snap/core22/1564
loop2 7:2 0 269.8M 1 loop /snap/firefox/4793
loop3 7:3 0 10.7M 1 loop /snap/firmware-updater/127
loop4 7:4 0 505.1M 1 loop /snap/gnome-42-2204/176
loop5 7:5 0 91.7M 1 loop /snap/gtk-common-themes/1535
loop6 7:6 0 10.5M 1 loop /snap/snap-store/1173
loop7 7:7 0 500K 1 loop /snap/snapd-desktop-integration/178
loop8 7:8 0 38.8M 1 loop /snap/snapd/21759
nvme0n1 259:0 0 476.9G 0 disk
├─nvme0n1p1 259:1 0 1G 0 part /boot/efi
└─nvme0n1p2 259:2 0 475.9G 0 part /
admin-tccl@Admin:~$
admin-tccl@Admin:~$
admin-tccl@Admin:~$ sudo apt update && sudo apt upgrade -y
[sudo] password for admin-tccl:
Hit:1 http://in.archive.ubuntu.com/ubuntu noble InRelease
Hit:2 http://security.ubuntu.com/ubuntu noble-security InRelease
Get:3 http://in.archive.ubuntu.com/ubuntu noble-updates InRelease [126 kB]
Get:4 http://in.archive.ubuntu.com/ubuntu noble-backports InRelease [126 kB]
Get:5 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 Packages [599
kB]
Get:6 http://in.archive.ubuntu.com/ubuntu noble-updates/main Translation-en [146
kB]
Get:7 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 Components [114
kB]
Get:8 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 c-n-f Metadata
[10.3 kB]
Get:9 http://in.archive.ubuntu.com/ubuntu noble-updates/restricted amd64 Components
[212 B]
Get:10 http://in.archive.ubuntu.com/ubuntu noble-updates/universe amd64 Packages
[707 kB]
Get:11 http://in.archive.ubuntu.com/ubuntu noble-updates/universe Translation-en
[210 kB]
Get:12 http://in.archive.ubuntu.com/ubuntu noble-updates/universe amd64 Components
[305 kB]
Get:13 http://in.archive.ubuntu.com/ubuntu noble-updates/multiverse amd64
Components [940 B]
Get:14 http://in.archive.ubuntu.com/ubuntu noble-backports/main amd64 Components
[208 B]
Get:15 http://in.archive.ubuntu.com/ubuntu noble-backports/restricted amd64
Components [216 B]
Get:16 http://in.archive.ubuntu.com/ubuntu noble-backports/universe amd64
Components [21.1 kB]
Get:17 http://in.archive.ubuntu.com/ubuntu noble-backports/multiverse amd64
Components [212 B]
Fetched 2,366 kB in 12s (192 kB/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
51 packages can be upgraded. Run 'apt list --upgradable' to see them.
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
The following upgrades have been deferred due to phasing:
python3-distupgrade ubuntu-release-upgrader-core ubuntu-release-upgrader-gtk
The following packages will be upgraded:
apparmor cloud-init distro-info-data dmsetup fwupd gir1.2-gtk-3.0 gir1.2-mutter-
14 gnome-initial-setup gnome-shell gnome-shell-common gnome-shell-extension-
appindicator
gnome-shell-extension-ubuntu-dock gtk-update-icon-cache initramfs-tools
initramfs-tools-bin initramfs-tools-core libapparmor1 libcryptsetup12
libdevmapper1.02.1 libegl-mesa0
libfwupd2 libgbm1 libgl1-mesa-dri libglapi-mesa libglx-mesa0 libgtk-3-0t64
libgtk-3-bin libgtk-3-common libmutter-14-0 libproc2-0 libxatracker2 login mesa-
vulkan-drivers
mutter-common mutter-common-bin openvpn passwd procps python3-update-manager
snapd systemd-hwe-hwdb ubuntu-drivers-common ubuntu-pro-client ubuntu-pro-client-
l10n ubuntu-settings
update-manager update-manager-core xdg-desktop-portal
48 upgraded, 0 newly installed, 0 to remove and 3 not upgraded.
Need to get 69.7 MB of archives.
After this operation, 2,310 kB of additional disk space will be used.
Get:1 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 login amd64
1:4.13+dfsg1-4ubuntu3.2 [202 kB]
Get:2 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 ubuntu-drivers-
common amd64 1:0.9.7.6ubuntu3.1 [64.5 kB]
Get:3 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 passwd amd64
1:4.13+dfsg1-4ubuntu3.2 [845 kB]
Get:4 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 libproc2-0 amd64
2:4.0.4-4ubuntu3.2 [59.5 kB]
Get:5 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 procps amd64
2:4.0.4-4ubuntu3.2 [707 kB]
Get:6 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 distro-info-data
all 0.60ubuntu0.2 [6,590 B]
Get:7 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64
libdevmapper1.02.1 amd64 2:1.02.185-3ubuntu3.1 [139 kB]
Get:8 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 dmsetup amd64
2:1.02.185-3ubuntu3.1 [79.2 kB]
Get:9 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 libapparmor1
amd64 4.0.1really4.0.1-0ubuntu0.24.04.3 [50.3 kB]
Get:10 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 libcryptsetup12
amd64 2:2.7.0-1ubuntu4.1 [266 kB]
Get:11 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 systemd-hwe-
hwdb all 255.1.4 [3,200 B]
Get:12 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 ubuntu-pro-
client-l10n amd64 34~24.04 [18.4 kB]
Get:13 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 ubuntu-pro-
client amd64 34~24.04 [241 kB]
Get:14 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 apparmor amd64
4.0.1really4.0.1-0ubuntu0.24.04.3 [641 kB]
Get:15 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 libgtk-3-common
all 3.24.41-4ubuntu1.2 [1,202 kB]
Get:16 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 libgtk-3-0t64
amd64 3.24.41-4ubuntu1.2 [2,901 kB]
Get:17 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 gir1.2-gtk-3.0
amd64 3.24.41-4ubuntu1.2 [245 kB]
Get:18 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 gir1.2-mutter-
14 amd64 46.2-1ubuntu0.24.04.2 [131 kB]
Get:19 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 mutter-common
all 46.2-1ubuntu0.24.04.2 [49.1 kB]
Get:20 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 libmutter-14-0
amd64 46.2-1ubuntu0.24.04.2 [1,392 kB]
Get:21 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 mutter-common-
bin amd64 46.2-1ubuntu0.24.04.2 [52.1 kB]
Get:22 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 libgl1-mesa-dri
amd64 24.0.9-0ubuntu0.2 [8,950 kB]
Get:23 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 libglx-mesa0
amd64 24.0.9-0ubuntu0.2 [154 kB]
Get:24 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 libegl-mesa0
amd64 24.0.9-0ubuntu0.2 [115 kB]
Get:25 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 libglapi-mesa
amd64 24.0.9-0ubuntu0.2 [42.3 kB]
Get:26 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 libgbm1 amd64
24.0.9-0ubuntu0.2 [42.7 kB]
Get:27 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 gnome-shell
amd64 46.0-0ubuntu6~24.04.5 [952 kB]
Get:28 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 gnome-shell-
common all 46.0-0ubuntu6~24.04.5 [248 kB]
Get:29 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 python3-update-
manager all 1:24.04.9 [43.1 kB]
Get:30 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 update-manager-
core all 1:24.04.9 [11.6 kB]
Get:31 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 update-manager
all 1:24.04.9 [554 kB]
Get:32 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 libfwupd2 amd64
1.9.24-1~24.04.1 [136 kB]
Get:33 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 fwupd amd64
1.9.24-1~24.04.1 [4,553 kB]
Get:34 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 gnome-initial-
setup amd64 46.3-1ubuntu3~24.04.1 [603 kB]
Get:35 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 gnome-shell-
extension-appindicator all 58-1ubuntu24.04.1 [49.7 kB]
Get:36 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 gnome-shell-
extension-ubuntu-dock all 90ubuntu2 [119 kB]
Get:37 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 gtk-update-
icon-cache amd64 3.24.41-4ubuntu1.2 [51.8 kB]
Get:38 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 initramfs-tools
all 0.142ubuntu25.4 [9,078 B]
Get:39 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 initramfs-
tools-core all 0.142ubuntu25.4 [50.3 kB]
Get:40 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 initramfs-
tools-bin amd64 0.142ubuntu25.4 [21.3 kB]
Get:41 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 libgtk-3-bin
amd64 3.24.41-4ubuntu1.2 [73.9 kB]
Get:42 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 libxatracker2
amd64 24.0.9-0ubuntu0.2 [2,165 kB]
Get:43 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 mesa-vulkan-
drivers amd64 24.0.9-0ubuntu0.2 [11.0 MB]
Get:44 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 openvpn amd64
2.6.12-0ubuntu0.24.04.1 [681 kB]
Get:45 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 snapd amd64
2.65.3+24.04 [28.8 MB]
Get:46 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 ubuntu-settings
all 24.04.5 [10.1 kB]
Get:47 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 xdg-desktop-
portal amd64 1.18.4-1ubuntu2.24.04.1 [298 kB]
Get:48 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 cloud-init all
24.3.1-0ubuntu0~24.04.2 [602 kB]
Fetched 69.7 MB in 22s (3,124 kB/s)
Extracting templates from packages: 100%
Preconfiguring packages ...
(Reading database ... 148072 files and directories currently installed.)
Preparing to unpack .../login_1%3a4.13+dfsg1-4ubuntu3.2_amd64.deb ...
Unpacking login (1:4.13+dfsg1-4ubuntu3.2) over (1:4.13+dfsg1-4ubuntu3) ...
Setting up login (1:4.13+dfsg1-4ubuntu3.2) ...
(Reading database ... 148072 files and directories currently installed.)
Preparing to unpack .../ubuntu-drivers-common_1%3a0.9.7.6ubuntu3.1_amd64.deb ...
Unpacking ubuntu-drivers-common (1:0.9.7.6ubuntu3.1) over (1:0.9.7.6ubuntu3) ...
Preparing to unpack .../passwd_1%3a4.13+dfsg1-4ubuntu3.2_amd64.deb ...
Unpacking passwd (1:4.13+dfsg1-4ubuntu3.2) over (1:4.13+dfsg1-4ubuntu3) ...
Setting up passwd (1:4.13+dfsg1-4ubuntu3.2) ...
(Reading database ... 148072 files and directories currently installed.)
Preparing to unpack .../00-libproc2-0_2%3a4.0.4-4ubuntu3.2_amd64.deb ...
Unpacking libproc2-0:amd64 (2:4.0.4-4ubuntu3.2) over (2:4.0.4-4ubuntu3) ...
Preparing to unpack .../01-procps_2%3a4.0.4-4ubuntu3.2_amd64.deb ...
Unpacking procps (2:4.0.4-4ubuntu3.2) over (2:4.0.4-4ubuntu3) ...
Preparing to unpack .../02-distro-info-data_0.60ubuntu0.2_all.deb ...
Unpacking distro-info-data (0.60ubuntu0.2) over (0.60ubuntu0.1) ...
Preparing to unpack .../03-libdevmapper1.02.1_2%3a1.02.185-3ubuntu3.1_amd64.deb ...
Unpacking libdevmapper1.02.1:amd64 (2:1.02.185-3ubuntu3.1) over (2:1.02.185-
3ubuntu3) ...
Preparing to unpack .../04-dmsetup_2%3a1.02.185-3ubuntu3.1_amd64.deb ...
Unpacking dmsetup (2:1.02.185-3ubuntu3.1) over (2:1.02.185-3ubuntu3) ...
Preparing to unpack .../05-libapparmor1_4.0.1really4.0.1-0ubuntu0.24.04.3_amd64.deb
...
Unpacking libapparmor1:amd64 (4.0.1really4.0.1-0ubuntu0.24.04.3) over
(4.0.1really4.0.0-beta3-0ubuntu0.1) ...
Preparing to unpack .../06-libcryptsetup12_2%3a2.7.0-1ubuntu4.1_amd64.deb ...
Unpacking libcryptsetup12:amd64 (2:2.7.0-1ubuntu4.1) over (2:2.7.0-1ubuntu4) ...
Preparing to unpack .../07-systemd-hwe-hwdb_255.1.4_all.deb ...
Unpacking systemd-hwe-hwdb (255.1.4) over (255.1.3) ...
Preparing to unpack .../08-ubuntu-pro-client-l10n_34~24.04_amd64.deb ...
Unpacking ubuntu-pro-client-l10n (34~24.04) over (32.3.1~24.04) ...
Preparing to unpack .../09-ubuntu-pro-client_34~24.04_amd64.deb ...
Unpacking ubuntu-pro-client (34~24.04) over (32.3.1~24.04) ...
Preparing to unpack .../10-apparmor_4.0.1really4.0.1-0ubuntu0.24.04.3_amd64.deb ...
Unpacking apparmor (4.0.1really4.0.1-0ubuntu0.24.04.3) over (4.0.1really4.0.0-
beta3-0ubuntu0.1) ...
Preparing to unpack .../11-libgtk-3-common_3.24.41-4ubuntu1.2_all.deb ...
Unpacking libgtk-3-common (3.24.41-4ubuntu1.2) over (3.24.41-4ubuntu1.1) ...
Preparing to unpack .../12-libgtk-3-0t64_3.24.41-4ubuntu1.2_amd64.deb ...
Unpacking libgtk-3-0t64:amd64 (3.24.41-4ubuntu1.2) over (3.24.41-4ubuntu1.1) ...
Preparing to unpack .../13-gir1.2-gtk-3.0_3.24.41-4ubuntu1.2_amd64.deb ...
Unpacking gir1.2-gtk-3.0:amd64 (3.24.41-4ubuntu1.2) over (3.24.41-4ubuntu1.1) ...
Preparing to unpack .../14-gir1.2-mutter-14_46.2-1ubuntu0.24.04.2_amd64.deb ...
Unpacking gir1.2-mutter-14:amd64 (46.2-1ubuntu0.24.04.2) over (46.2-
1ubuntu0.24.04.1) ...
Preparing to unpack .../15-mutter-common_46.2-1ubuntu0.24.04.2_all.deb ...
Unpacking mutter-common (46.2-1ubuntu0.24.04.2) over (46.2-1ubuntu0.24.04.1) ...
Preparing to unpack .../16-libmutter-14-0_46.2-1ubuntu0.24.04.2_amd64.deb ...
Unpacking libmutter-14-0:amd64 (46.2-1ubuntu0.24.04.2) over (46.2-1ubuntu0.24.04.1)
...
Preparing to unpack .../17-mutter-common-bin_46.2-1ubuntu0.24.04.2_amd64.deb ...
Unpacking mutter-common-bin (46.2-1ubuntu0.24.04.2) over (46.2-
1ubuntu0.24.04.1) ...
Preparing to unpack .../18-libgl1-mesa-dri_24.0.9-0ubuntu0.2_amd64.deb ...
Unpacking libgl1-mesa-dri:amd64 (24.0.9-0ubuntu0.2) over (24.0.9-0ubuntu0.1) ...
Preparing to unpack .../19-libglx-mesa0_24.0.9-0ubuntu0.2_amd64.deb ...
Unpacking libglx-mesa0:amd64 (24.0.9-0ubuntu0.2) over (24.0.9-0ubuntu0.1) ...
Preparing to unpack .../20-libegl-mesa0_24.0.9-0ubuntu0.2_amd64.deb ...
Unpacking libegl-mesa0:amd64 (24.0.9-0ubuntu0.2) over (24.0.9-0ubuntu0.1) ...
Preparing to unpack .../21-libglapi-mesa_24.0.9-0ubuntu0.2_amd64.deb ...
Unpacking libglapi-mesa:amd64 (24.0.9-0ubuntu0.2) over (24.0.9-0ubuntu0.1) ...
Preparing to unpack .../22-libgbm1_24.0.9-0ubuntu0.2_amd64.deb ...
Unpacking libgbm1:amd64 (24.0.9-0ubuntu0.2) over (24.0.9-0ubuntu0.1) ...
Preparing to unpack .../23-gnome-shell_46.0-0ubuntu6~24.04.5_amd64.deb ...
Unpacking gnome-shell (46.0-0ubuntu6~24.04.5) over (46.0-0ubuntu6~24.04.4) ...
Preparing to unpack .../24-gnome-shell-common_46.0-0ubuntu6~24.04.5_all.deb ...
Unpacking gnome-shell-common (46.0-0ubuntu6~24.04.5) over (46.0-
0ubuntu6~24.04.4) ...
Preparing to unpack .../25-python3-update-manager_1%3a24.04.9_all.deb ...
Unpacking python3-update-manager (1:24.04.9) over (1:24.04.6) ...
Preparing to unpack .../26-update-manager-core_1%3a24.04.9_all.deb ...
Unpacking update-manager-core (1:24.04.9) over (1:24.04.6) ...
Preparing to unpack .../27-update-manager_1%3a24.04.9_all.deb ...
Unpacking update-manager (1:24.04.9) over (1:24.04.6) ...
Preparing to unpack .../28-libfwupd2_1.9.24-1~24.04.1_amd64.deb ...
Unpacking libfwupd2:amd64 (1.9.24-1~24.04.1) over (1.9.16-1) ...
Preparing to unpack .../29-fwupd_1.9.24-1~24.04.1_amd64.deb ...
Unpacking fwupd (1.9.24-1~24.04.1) over (1.9.16-1) ...
Preparing to unpack .../30-gnome-initial-setup_46.3-1ubuntu3~24.04.1_amd64.deb ...
Unpacking gnome-initial-setup (46.3-1ubuntu3~24.04.1) over (46.2-
1ubuntu0.24.04.1) ...
Preparing to unpack .../31-gnome-shell-extension-appindicator_58-
1ubuntu24.04.1_all.deb ...
Unpacking gnome-shell-extension-appindicator (58-1ubuntu24.04.1) over (58-1) ...
Preparing to unpack .../32-gnome-shell-extension-ubuntu-dock_90ubuntu2_all.deb ...
Unpacking gnome-shell-extension-ubuntu-dock (90ubuntu2) over (90ubuntu1) ...
Preparing to unpack .../33-gtk-update-icon-cache_3.24.41-4ubuntu1.2_amd64.deb ...
Unpacking gtk-update-icon-cache (3.24.41-4ubuntu1.2) over (3.24.41-4ubuntu1.1) ...
Preparing to unpack .../34-initramfs-tools_0.142ubuntu25.4_all.deb ...
Unpacking initramfs-tools (0.142ubuntu25.4) over (0.142ubuntu25.1) ...
Preparing to unpack .../35-initramfs-tools-core_0.142ubuntu25.4_all.deb ...
Unpacking initramfs-tools-core (0.142ubuntu25.4) over (0.142ubuntu25.1) ...
Preparing to unpack .../36-initramfs-tools-bin_0.142ubuntu25.4_amd64.deb ...
Unpacking initramfs-tools-bin (0.142ubuntu25.4) over (0.142ubuntu25.1) ...
Preparing to unpack .../37-libgtk-3-bin_3.24.41-4ubuntu1.2_amd64.deb ...
Unpacking libgtk-3-bin (3.24.41-4ubuntu1.2) over (3.24.41-4ubuntu1.1) ...
Preparing to unpack .../38-libxatracker2_24.0.9-0ubuntu0.2_amd64.deb ...
Unpacking libxatracker2:amd64 (24.0.9-0ubuntu0.2) over (24.0.9-0ubuntu0.1) ...
Preparing to unpack .../39-mesa-vulkan-drivers_24.0.9-0ubuntu0.2_amd64.deb ...
Unpacking mesa-vulkan-drivers:amd64 (24.0.9-0ubuntu0.2) over (24.0.9-
0ubuntu0.1) ...
Preparing to unpack .../40-openvpn_2.6.12-0ubuntu0.24.04.1_amd64.deb ...
Unpacking openvpn (2.6.12-0ubuntu0.24.04.1) over (2.6.9-1ubuntu4.1) ...
Preparing to unpack .../41-snapd_2.65.3+24.04_amd64.deb ...
Unpacking snapd (2.65.3+24.04) over (2.63.1+24.04) ...
Preparing to unpack .../42-ubuntu-settings_24.04.5_all.deb ...
Unpacking ubuntu-settings (24.04.5) over (24.04.4) ...
Preparing to unpack .../43-xdg-desktop-portal_1.18.4-1ubuntu2.24.04.1_amd64.deb ...
Unpacking xdg-desktop-portal (1.18.4-1ubuntu2.24.04.1) over (1.18.4-1ubuntu2) ...
Preparing to unpack .../44-cloud-init_24.3.1-0ubuntu0~24.04.2_all.deb ...
Unpacking cloud-init (24.3.1-0ubuntu0~24.04.2) over (24.1.3-0ubuntu3.3) ...
dpkg: warning: unable to delete old directory '/etc/systemd/system/sshd-
[email protected]': Directory not empty
Setting up mesa-vulkan-drivers:amd64 (24.0.9-0ubuntu0.2) ...
Setting up gtk-update-icon-cache (3.24.41-4ubuntu1.2) ...
Setting up libapparmor1:amd64 (4.0.1really4.0.1-0ubuntu0.24.04.3) ...
Setting up xdg-desktop-portal (1.18.4-1ubuntu2.24.04.1) ...
Setting up openvpn (2.6.12-0ubuntu0.24.04.1) ...
Setting up ubuntu-drivers-common (1:0.9.7.6ubuntu3.1) ...
Setting up libgbm1:amd64 (24.0.9-0ubuntu0.2) ...
Setting up distro-info-data (0.60ubuntu0.2) ...
Setting up libfwupd2:amd64 (1.9.24-1~24.04.1) ...
Setting up gnome-shell-common (46.0-0ubuntu6~24.04.5) ...
Setting up ubuntu-settings (24.04.5) ...
Setting up libxatracker2:amd64 (24.0.9-0ubuntu0.2) ...
Setting up apparmor (4.0.1really4.0.1-0ubuntu0.24.04.3) ...
Installing new version of config file
/etc/apparmor.d/abstractions/authentication ...
Installing new version of config file /etc/apparmor.d/abstractions/samba ...
Installing new version of config file /etc/apparmor.d/firefox ...
Reloading AppArmor profiles
Warning: found usr.sbin.sssd in /etc/apparmor.d/force-complain, forcing complain
mode
Warning from /etc/apparmor.d (/etc/apparmor.d/usr.sbin.sssd line 63): Caching
disabled for: 'usr.sbin.sssd' due to force complain
Setting up mutter-common (46.2-1ubuntu0.24.04.2) ...
Setting up libproc2-0:amd64 (2:4.0.4-4ubuntu3.2) ...
Setting up mutter-common-bin (46.2-1ubuntu0.24.04.2) ...
Setting up libglapi-mesa:amd64 (24.0.9-0ubuntu0.2) ...
Setting up systemd-hwe-hwdb (255.1.4) ...
Setting up libdevmapper1.02.1:amd64 (2:1.02.185-3ubuntu3.1) ...
Setting up dmsetup (2:1.02.185-3ubuntu3.1) ...
Setting up gnome-initial-setup (46.3-1ubuntu3~24.04.1) ...
Setting up python3-update-manager (1:24.04.9) ...
Setting up procps (2:4.0.4-4ubuntu3.2) ...
Setting up libcryptsetup12:amd64 (2:2.7.0-1ubuntu4.1) ...
Setting up ubuntu-pro-client (34~24.04) ...
Installing new version of config file /etc/apparmor.d/ubuntu_pro_apt_news ...
Installing new version of config file /etc/apt/apt.conf.d/20apt-esm-hook.conf ...
Setting up fwupd (1.9.24-1~24.04.1) ...
fwupd-offline-update.service is a disabled or a static unit not running, not
starting it.
fwupd-refresh.service is a disabled or a static unit not running, not starting it.
fwupd.service is a disabled or a static unit not running, not starting it.
Setting up libgtk-3-common (3.24.41-4ubuntu1.2) ...
Setting up initramfs-tools-bin (0.142ubuntu25.4) ...
Setting up ubuntu-pro-client-l10n (34~24.04) ...
Setting up snapd (2.65.3+24.04) ...
Installing new version of config file /etc/apparmor.d/usr.lib.snapd.snap-
confine.real ...
snapd.failure.service is a disabled or a static unit not running, not starting it.
snapd.snap-repair.service is a disabled or a static unit not running, not starting
it.
Setting up cloud-init (24.3.1-0ubuntu0~24.04.2) ...
Installing new version of config file /etc/cloud/cloud.cfg ...
Installing new version of config file
/etc/cloud/templates/sources.list.ubuntu.deb822.tmpl ...
Setting up libgl1-mesa-dri:amd64 (24.0.9-0ubuntu0.2) ...
Setting up libegl-mesa0:amd64 (24.0.9-0ubuntu0.2) ...
Setting up update-manager-core (1:24.04.9) ...
Setting up initramfs-tools-core (0.142ubuntu25.4) ...
Setting up libglx-mesa0:amd64 (24.0.9-0ubuntu0.2) ...
Setting up initramfs-tools (0.142ubuntu25.4) ...
update-initramfs: deferring update (trigger activated)
Processing triggers for man-db (2.12.0-4build2) ...
Processing triggers for libglib2.0-0t64:amd64 (2.80.0-6ubuntu3.1) ...
Setting up libmutter-14-0:amd64 (46.2-1ubuntu0.24.04.2) ...
Processing triggers for dbus (1.14.10-4ubuntu4.1) ...
Setting up libgtk-3-0t64:amd64 (3.24.41-4ubuntu1.2) ...
Processing triggers for udev (255.4-1ubuntu8.4) ...
Setting up gir1.2-mutter-14:amd64 (46.2-1ubuntu0.24.04.2) ...
Processing triggers for desktop-file-utils (0.27-2build1) ...
Processing triggers for hicolor-icon-theme (0.17-2) ...
Processing triggers for gnome-menus (3.36.0-1.1ubuntu3) ...
Processing triggers for libc-bin (2.39-0ubuntu8.3) ...
Processing triggers for rsyslog (8.2312.0-3ubuntu9) ...
Warning: The unit file, source configuration file or drop-ins of rsyslog.service
changed on disk. Run 'systemctl daemon-reload' to reload units.
Setting up gir1.2-gtk-3.0:amd64 (3.24.41-4ubuntu1.2) ...
Setting up libgtk-3-bin (3.24.41-4ubuntu1.2) ...
Setting up gnome-shell (46.0-0ubuntu6~24.04.5) ...
Setting up gnome-shell-extension-appindicator (58-1ubuntu24.04.1) ...
Setting up update-manager (1:24.04.9) ...
Setting up gnome-shell-extension-ubuntu-dock (90ubuntu2) ...
Processing triggers for initramfs-tools (0.142ubuntu25.4) ...
update-initramfs: Generating /boot/initrd.img-6.8.0-47-generic
admin-tccl@Admin:~$
admin-tccl@Admin:~$
admin-tccl@Admin:~$ sudo swapoff -a
admin-tccl@Admin:~$ sudo sed -i '/ swap / s/^/#/' /etc/fstab
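
Kubernetes requires swap to be off on every node (by default the kubelet refuses to start while swap is active). The two commands above disable swap for the running session and comment out the swap entry in /etc/fstab so it stays off after a reboot. A quick way to confirm, as a sketch:

swapon --show              # prints nothing when no swap device is active
free -h | grep -i swap     # the Swap line should read 0B
grep ' swap ' /etc/fstab   # any swap entry should now start with '#'
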
admin-tccl@Admin:~$ sudo apt-get install -y ca-certificates curl gnupg
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package ca-certificates
admin-tccl@Admin:~$ sudo apt-get install -y ca-certificates curl gnupg
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
ca-certificates is already the newest version (20240203).
ca-certificates set to manually installed.
gnupg is already the newest version (2.4.4-2ubuntu17).
gnupg set to manually installed.
The following NEW packages will be installed:
curl
0 upgraded, 1 newly installed, 0 to remove and 3 not upgraded.
Need to get 227 kB of archives.
After this operation, 534 kB of additional disk space will be used.
Get:1 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 curl amd64
8.5.0-2ubuntu10.4 [227 kB]
Fetched 227 kB in 1s (185 kB/s)
Selecting previously unselected package curl.
(Reading database ... 148099 files and directories currently installed.)
Preparing to unpack .../curl_8.5.0-2ubuntu10.4_amd64.deb ...
Unpacking curl (8.5.0-2ubuntu10.4) ...
Setting up curl (8.5.0-2ubuntu10.4) ...
Processing triggers for man-db (2.12.0-4build2) ...
admin-tccl@Admin:~$ sudo install -m 0755 -d /etc/apt/keyrings
admin-tccl@Admin:~$
admin-tccl@Admin:~$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo tee /etc/apt/keyrings/docker.asc
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBFit2ioBEADhWpZ8/wvZ6hUTiXOwQHXMAlaFHcPH9hAtr4F1y2+OYdbtMuth
lqqwp028AqyY+PRfVMtSYMbjuQuu5byyKR01BbqYhuS3jtqQmljZ/bJvXqnmiVXh
38UuLa+z077PxyxQhu5BbqntTPQMfiyqEiU+BKbq2WmANUKQf+1AmZY/IruOXbnq
L4C1+gJ8vfmXQt99npCaxEjaNRVYfOS8QcixNzHUYnb6emjlANyEVlZzeqo7XKl7
UrwV5inawTSzWNvtjEjj4nJL8NsLwscpLPQUhTQ+7BbQXAwAmeHCUTQIvvWXqw0N
cmhh4HgeQscQHYgOJjjDVfoY5MucvglbIgCqfzAHW9jxmRL4qbMZj+b1XoePEtht
ku4bIQN1X5P07fNWzlgaRL5Z4POXDDZTlIQ/El58j9kp4bnWRCJW0lya+f8ocodo
vZZ+Doi+fy4D5ZGrL4XEcIQP/Lv5uFyf+kQtl/94VFYVJOleAv8W92KdgDkhTcTD
G7c0tIkVEKNUq48b3aQ64NOZQW7fVjfoKwEZdOqPE72Pa45jrZzvUFxSpdiNk2tZ
XYukHjlxxEgBdC/J3cMMNRE1F4NCA3ApfV1Y7/hTeOnmDuDYwr9/obA8t016Yljj
q5rdkywPf4JF8mXUW5eCN1vAFHxeg9ZWemhBtQmGxXnw9M+z6hWwc6ahmwARAQAB
tCtEb2NrZXIgUmVsZWFzZSAoQ0UgZGViKSA8ZG9ja2VyQGRvY2tlci5jb20+iQI3
BBMBCgAhBQJYrefAAhsvBQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJEI2BgDwO
v82IsskP/iQZo68flDQmNvn8X5XTd6RRaUH33kXYXquT6NkHJciS7E2gTJmqvMqd
tI4mNYHCSEYxI5qrcYV5YqX9P6+Ko+vozo4nseUQLPH/ATQ4qL0Zok+1jkag3Lgk
jonyUf9bwtWxFp05HC3GMHPhhcUSexCxQLQvnFWXD2sWLKivHp2fT8QbRGeZ+d3m
6fqcd5Fu7pxsqm0EUDK5NL+nPIgYhN+auTrhgzhK1CShfGccM/wfRlei9Utz6p9P
XRKIlWnXtT4qNGZNTN0tR+NLG/6Bqd8OYBaFAUcue/w1VW6JQ2VGYZHnZu9S8LMc
FYBa5Ig9PxwGQOgq6RDKDbV+PqTQT5EFMeR1mrjckk4DQJjbxeMZbiNMG5kGECA8
g383P3elhn03WGbEEa4MNc3Z4+7c236QI3xWJfNPdUbXRaAwhy/6rTSFbzwKB0Jm
ebwzQfwjQY6f55MiI/RqDCyuPj3r3jyVRkK86pQKBAJwFHyqj9KaKXMZjfVnowLh
9svIGfNbGHpucATqREvUHuQbNnqkCx8VVhtYkhDb9fEP2xBu5VvHbR+3nfVhMut5
G34Ct5RS7Jt6LIfFdtcn8CaSas/l1HbiGeRgc70X/9aYx/V/CEJv0lIe8gP6uDoW
FPIZ7d6vH+Vro6xuWEGiuMaiznap2KhZmpkgfupyFmplh0s6knymuQINBFit2ioB
EADneL9S9m4vhU3blaRjVUUyJ7b/qTjcSylvCH5XUE6R2k+ckEZjfAMZPLpO+/tF
M2JIJMD4SifKuS3xck9KtZGCufGmcwiLQRzeHF7vJUKrLD5RTkNi23ydvWZgPjtx
Q+DTT1Zcn7BrQFY6FgnRoUVIxwtdw1bMY/89rsFgS5wwuMESd3Q2RYgb7EOFOpnu
w6da7WakWf4IhnF5nsNYGDVaIHzpiqCl+uTbf1epCjrOlIzkZ3Z3Yk5CM/TiFzPk
z2lLz89cpD8U+NtCsfagWWfjd2U3jDapgH+7nQnCEWpROtzaKHG6lA3pXdix5zG8
eRc6/0IbUSWvfjKxLLPfNeCS2pCL3IeEI5nothEEYdQH6szpLog79xB9dVnJyKJb
VfxXnseoYqVrRz2VVbUI5Blwm6B40E3eGVfUQWiux54DspyVMMk41Mx7QJ3iynIa
1N4ZAqVMAEruyXTRTxc9XW0tYhDMA/1GYvz0EmFpm8LzTHA6sFVtPm/ZlNCX6P1X
zJwrv7DSQKD6GGlBQUX+OeEJ8tTkkf8QTJSPUdh8P8YxDFS5EOGAvhhpMBYD42kQ
pqXjEC+XcycTvGI7impgv9PDY1RCC1zkBjKPa120rNhv/hkVk/YhuGoajoHyy4h7
ZQopdcMtpN2dgmhEegny9JCSwxfQmQ0zK0g7m6SHiKMwjwARAQABiQQ+BBgBCAAJ
BQJYrdoqAhsCAikJEI2BgDwOv82IwV0gBBkBCAAGBQJYrdoqAAoJEH6gqcPyc/zY
1WAP/2wJ+R0gE6qsce3rjaIz58PJmc8goKrir5hnElWhPgbq7cYIsW5qiFyLhkdp
YcMmhD9mRiPpQn6Ya2w3e3B8zfIVKipbMBnke/ytZ9M7qHmDCcjoiSmwEXN3wKYI
mD9VHONsl/CG1rU9Isw1jtB5g1YxuBA7M/m36XN6x2u+NtNMDB9P56yc4gfsZVES
KA9v+yY2/l45L8d/WUkUi0YXomn6hyBGI7JrBLq0CX37GEYP6O9rrKipfz73XfO7
JIGzOKZlljb/D9RX/g7nRbCn+3EtH7xnk+TK/50euEKw8SMUg147sJTcpQmv6UzZ
cM4JgL0HbHVCojV4C/plELwMddALOFeYQzTif6sMRPf+3DSj8frbInjChC3yOLy0
6br92KFom17EIj2CAcoeq7UPhi2oouYBwPxh5ytdehJkoo+sN7RIWua6P2WSmon5
U888cSylXC0+ADFdgLX9K2zrDVYUG1vo8CX0vzxFBaHwN6Px26fhIT1/hYUHQR1z
VfNDcyQmXqkOnZvvoMfz/Q0s9BhFJ/zU6AgQbIZE/hm1spsfgvtsD1frZfygXJ9f
irP+MSAI80xHSf91qSRZOj4Pl3ZJNbq4yYxv0b1pkMqeGdjdCYhLU+LZ4wbQmpCk
SVe2prlLureigXtmZfkqevRz7FrIZiu9ky8wnCAPwC7/zmS18rgP/17bOtL4/iIz
QhxAAoAMWVrGyJivSkjhSGx1uCojsWfsTAm11P7jsruIL61ZzMUVE2aM3Pmj5G+W
9AcZ58Em+1WsVnAXdUR//bMmhyr8wL/G1YO1V3JEJTRdxsSxdYa4deGBBY/Adpsw
24jxhOJR+lsJpqIUeb999+R8euDhRHG9eFO7DRu6weatUJ6suupoDTRWtr/4yGqe
dKxV3qQhNLSnaAzqW/1nA3iUB4k7kCaKZxhdhDbClf9P37qaRW467BLCVO/coL3y
Vm50dwdrNtKpMBh3ZpbB1uJvgi9mXtyBOMJ3v8RZeDzFiG8HdCtg9RvIt/AIFoHR
H3S+U79NT6i0KPzLImDfs8T7RlpyuMc4Ufs8ggyg9v3Ae6cN3eQyxcK3w0cbBwsh
/nQNfsA6uu+9H7NhbehBMhYnpNZyrHzCmzyXkauwRAqoCbGCNykTRwsur9gS41TQ
M8ssD1jFheOJf3hODnkKU+HKjvMROl1DK7zdmLdNzA1cvtZH/nCC9KPj1z8QC47S
xx+dTZSx4ONAhwbS/LN3PoKtn8LPjY9NP9uDWI+TWYquS2U+KHDrBDlsgozDbs/O
jCxcpDzNmXpWQHEtHU7649OXHP7UeNST1mCUCH5qdank0V1iejF6/CfTFU4MfcrG
YT90qFF93M3v01BbxP+EIY2/9tiIPbrd
=0YYh
-----END PGP PUBLIC KEY BLOCK-----
bash: /etc/apt/keyrings/docker.asc: No such file or directory
admin-tccl@Admin:~$ sudo chmod a+r /etc/ap
apg.conf apm/ apparmor/ apparmor.d/ apport/ apt/
admin-tccl@Admin:~$ sudo chmod a+r /etc/apt/keyrings/docker.asc
chmod: cannot access '/etc/apt/keyrings/docker.asc': No such file or directory
admin-tccl@Admin:~$ a
a: command not found
admin-tccl@Admin:~$
admin-tccl@Admin:~$ sudo install -m 0755 -d /etc/apt/keyrings
admin-tccl@Admin:~$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg| sudo tee /etc/apt/keyrings/docker.asc
-----BEGIN PGP PUBLIC KEY BLOCK-----

mQINBFit2ioBEADhWpZ8/wvZ6hUTiXOwQHXMAlaFHcPH9hAtr4F1y2+OYdbtMuth
lqqwp028AqyY+PRfVMtSYMbjuQuu5byyKR01BbqYhuS3jtqQmljZ/bJvXqnmiVXh
38UuLa+z077PxyxQhu5BbqntTPQMfiyqEiU+BKbq2WmANUKQf+1AmZY/IruOXbnq
L4C1+gJ8vfmXQt99npCaxEjaNRVYfOS8QcixNzHUYnb6emjlANyEVlZzeqo7XKl7
UrwV5inawTSzWNvtjEjj4nJL8NsLwscpLPQUhTQ+7BbQXAwAmeHCUTQIvvWXqw0N
cmhh4HgeQscQHYgOJjjDVfoY5MucvglbIgCqfzAHW9jxmRL4qbMZj+b1XoePEtht
ku4bIQN1X5P07fNWzlgaRL5Z4POXDDZTlIQ/El58j9kp4bnWRCJW0lya+f8ocodo
vZZ+Doi+fy4D5ZGrL4XEcIQP/Lv5uFyf+kQtl/94VFYVJOleAv8W92KdgDkhTcTD
G7c0tIkVEKNUq48b3aQ64NOZQW7fVjfoKwEZdOqPE72Pa45jrZzvUFxSpdiNk2tZ
XYukHjlxxEgBdC/J3cMMNRE1F4NCA3ApfV1Y7/hTeOnmDuDYwr9/obA8t016Yljj
q5rdkywPf4JF8mXUW5eCN1vAFHxeg9ZWemhBtQmGxXnw9M+z6hWwc6ahmwARAQAB
tCtEb2NrZXIgUmVsZWFzZSAoQ0UgZGViKSA8ZG9ja2VyQGRvY2tlci5jb20+iQI3
BBMBCgAhBQJYrefAAhsvBQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJEI2BgDwO
v82IsskP/iQZo68flDQmNvn8X5XTd6RRaUH33kXYXquT6NkHJciS7E2gTJmqvMqd
tI4mNYHCSEYxI5qrcYV5YqX9P6+Ko+vozo4nseUQLPH/ATQ4qL0Zok+1jkag3Lgk
jonyUf9bwtWxFp05HC3GMHPhhcUSexCxQLQvnFWXD2sWLKivHp2fT8QbRGeZ+d3m
6fqcd5Fu7pxsqm0EUDK5NL+nPIgYhN+auTrhgzhK1CShfGccM/wfRlei9Utz6p9P
XRKIlWnXtT4qNGZNTN0tR+NLG/6Bqd8OYBaFAUcue/w1VW6JQ2VGYZHnZu9S8LMc
FYBa5Ig9PxwGQOgq6RDKDbV+PqTQT5EFMeR1mrjckk4DQJjbxeMZbiNMG5kGECA8
g383P3elhn03WGbEEa4MNc3Z4+7c236QI3xWJfNPdUbXRaAwhy/6rTSFbzwKB0Jm
ebwzQfwjQY6f55MiI/RqDCyuPj3r3jyVRkK86pQKBAJwFHyqj9KaKXMZjfVnowLh
9svIGfNbGHpucATqREvUHuQbNnqkCx8VVhtYkhDb9fEP2xBu5VvHbR+3nfVhMut5
G34Ct5RS7Jt6LIfFdtcn8CaSas/l1HbiGeRgc70X/9aYx/V/CEJv0lIe8gP6uDoW
FPIZ7d6vH+Vro6xuWEGiuMaiznap2KhZmpkgfupyFmplh0s6knymuQINBFit2ioB
EADneL9S9m4vhU3blaRjVUUyJ7b/qTjcSylvCH5XUE6R2k+ckEZjfAMZPLpO+/tF
M2JIJMD4SifKuS3xck9KtZGCufGmcwiLQRzeHF7vJUKrLD5RTkNi23ydvWZgPjtx
Q+DTT1Zcn7BrQFY6FgnRoUVIxwtdw1bMY/89rsFgS5wwuMESd3Q2RYgb7EOFOpnu
w6da7WakWf4IhnF5nsNYGDVaIHzpiqCl+uTbf1epCjrOlIzkZ3Z3Yk5CM/TiFzPk
z2lLz89cpD8U+NtCsfagWWfjd2U3jDapgH+7nQnCEWpROtzaKHG6lA3pXdix5zG8
eRc6/0IbUSWvfjKxLLPfNeCS2pCL3IeEI5nothEEYdQH6szpLog79xB9dVnJyKJb
VfxXnseoYqVrRz2VVbUI5Blwm6B40E3eGVfUQWiux54DspyVMMk41Mx7QJ3iynIa
1N4ZAqVMAEruyXTRTxc9XW0tYhDMA/1GYvz0EmFpm8LzTHA6sFVtPm/ZlNCX6P1X
zJwrv7DSQKD6GGlBQUX+OeEJ8tTkkf8QTJSPUdh8P8YxDFS5EOGAvhhpMBYD42kQ
pqXjEC+XcycTvGI7impgv9PDY1RCC1zkBjKPa120rNhv/hkVk/YhuGoajoHyy4h7
ZQopdcMtpN2dgmhEegny9JCSwxfQmQ0zK0g7m6SHiKMwjwARAQABiQQ+BBgBCAAJ
BQJYrdoqAhsCAikJEI2BgDwOv82IwV0gBBkBCAAGBQJYrdoqAAoJEH6gqcPyc/zY
1WAP/2wJ+R0gE6qsce3rjaIz58PJmc8goKrir5hnElWhPgbq7cYIsW5qiFyLhkdp
YcMmhD9mRiPpQn6Ya2w3e3B8zfIVKipbMBnke/ytZ9M7qHmDCcjoiSmwEXN3wKYI
mD9VHONsl/CG1rU9Isw1jtB5g1YxuBA7M/m36XN6x2u+NtNMDB9P56yc4gfsZVES
KA9v+yY2/l45L8d/WUkUi0YXomn6hyBGI7JrBLq0CX37GEYP6O9rrKipfz73XfO7
JIGzOKZlljb/D9RX/g7nRbCn+3EtH7xnk+TK/50euEKw8SMUg147sJTcpQmv6UzZ
cM4JgL0HbHVCojV4C/plELwMddALOFeYQzTif6sMRPf+3DSj8frbInjChC3yOLy0
6br92KFom17EIj2CAcoeq7UPhi2oouYBwPxh5ytdehJkoo+sN7RIWua6P2WSmon5
U888cSylXC0+ADFdgLX9K2zrDVYUG1vo8CX0vzxFBaHwN6Px26fhIT1/hYUHQR1z
VfNDcyQmXqkOnZvvoMfz/Q0s9BhFJ/zU6AgQbIZE/hm1spsfgvtsD1frZfygXJ9f
irP+MSAI80xHSf91qSRZOj4Pl3ZJNbq4yYxv0b1pkMqeGdjdCYhLU+LZ4wbQmpCk
SVe2prlLureigXtmZfkqevRz7FrIZiu9ky8wnCAPwC7/zmS18rgP/17bOtL4/iIz
QhxAAoAMWVrGyJivSkjhSGx1uCojsWfsTAm11P7jsruIL61ZzMUVE2aM3Pmj5G+W
9AcZ58Em+1WsVnAXdUR//bMmhyr8wL/G1YO1V3JEJTRdxsSxdYa4deGBBY/Adpsw
24jxhOJR+lsJpqIUeb999+R8euDhRHG9eFO7DRu6weatUJ6suupoDTRWtr/4yGqe
dKxV3qQhNLSnaAzqW/1nA3iUB4k7kCaKZxhdhDbClf9P37qaRW467BLCVO/coL3y
Vm50dwdrNtKpMBh3ZpbB1uJvgi9mXtyBOMJ3v8RZeDzFiG8HdCtg9RvIt/AIFoHR
H3S+U79NT6i0KPzLImDfs8T7RlpyuMc4Ufs8ggyg9v3Ae6cN3eQyxcK3w0cbBwsh
/nQNfsA6uu+9H7NhbehBMhYnpNZyrHzCmzyXkauwRAqoCbGCNykTRwsur9gS41TQ
M8ssD1jFheOJf3hODnkKU+HKjvMROl1DK7zdmLdNzA1cvtZH/nCC9KPj1z8QC47S
xx+dTZSx4ONAhwbS/LN3PoKtn8LPjY9NP9uDWI+TWYquS2U+KHDrBDlsgozDbs/O
jCxcpDzNmXpWQHEtHU7649OXHP7UeNST1mCUCH5qdank0V1iejF6/CfTFU4MfcrG
YT90qFF93M3v01BbxP+EIY2/9tiIPbrd
=0YYh
-----END PGP PUBLIC KEY BLOCK-----
admin-tccl@Admin:~$
admin-tccl@Admin:~$
admin-tccl@Admin:~$ sudo chmod a+r /etc/apt/keyrings/docker,asc
chmod: cannot access '/etc/apt/keyrings/docker,asc': No such file or directory
admin-tccl@Admin:~$ sudo chmod a+r /etc/apt/keyrings/docker.asc
admin-tccl@Admin:~$ sudo echo \
"deb [arch=\"$(dpkg --print-architecture)\" signed-by=/etc/apt/keyrings/docker.asc]
https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo \"$VERSION_CODENAME\") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
admin-tccl@Admin:~$ cat /etc/apt/sources.list.d/docker.list
deb [arch="amd64" signed-by=/etc/apt/keyrings/docker.asc]
https://download.docker.com/linux/ubuntu "noble" stable
admin-tccl@Admin:~$ sudo apt-get update
E: Malformed entry 1 in list file /etc/apt/sources.list.d/docker.list (URI)
E: The list of sources could not be read.
admin-tccl@Admin:~$ cat /etc/apt/sources.list.d/docker.list
deb [arch="amd64" signed-by=/etc/apt/keyrings/docker.asc]
https://download.docker.com/linux/ubuntu "noble" stable
admin-tccl@Admin:~$ . /etc/os-release && echo "$VERSION_CODENAME"
noble
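
The "Malformed entry" error comes from the escaped quotes in the echo command above: the backslash-escaped quotes survive into docker.list as literal characters, so apt sees arch="amd64" and the suite "noble" wrapped in quote marks (and possibly a stray line break between the bracketed options and the repository URL), which its parser rejects. The nano edit below removes them. Docker's install documentation writes the command without escaping the inner quotes, so the shell strips them before the text reaches tee; roughly:

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

On this machine that should produce a single entry equivalent to:
deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu noble stable
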
admin-tccl@Admin:~$ sudo nano /etc/apt/sources.list.d/docker.list
admin-tccl@Admin:~$ sudo apt-get update
Get:1 https://download.docker.com/linux/ubuntu noble InRelease [48.8 kB]
Get:2 https://download.docker.com/linux/ubuntu noble/stable amd64 Packages [15.3
kB]
Get:3 http://security.ubuntu.com/ubuntu noble-security InRelease [126 kB]
Get:4 http://security.ubuntu.com/ubuntu noble-security/main amd64 Components [7,184
B]
Get:5 http://security.ubuntu.com/ubuntu noble-security/restricted amd64 Components
[212 B]
Get:6 http://security.ubuntu.com/ubuntu noble-security/universe amd64 Components
[51.9 kB]
Get:7 http://security.ubuntu.com/ubuntu noble-security/multiverse amd64 Components
[212 B]
Hit:8 http://in.archive.ubuntu.com/ubuntu noble InRelease
Get:9 http://in.archive.ubuntu.com/ubuntu noble-updates InRelease [126 kB]
Hit:10 http://in.archive.ubuntu.com/ubuntu noble-backports InRelease
Get:11 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 Components [114
kB]
Get:12 http://in.archive.ubuntu.com/ubuntu noble-updates/restricted amd64
Components [212 B]
Get:13 http://in.archive.ubuntu.com/ubuntu noble-updates/universe amd64 Components
[306 kB]
Get:14 http://in.archive.ubuntu.com/ubuntu noble-updates/multiverse amd64
Components [940 B]
Fetched 797 kB in 5s (147 kB/s)
Reading package lists... Done
admin-tccl@Admin:~$ ^C
admin-tccl@Admin:~$ apt-cache madison docker-ce
docker-ce | 5:27.3.1-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce | 5:27.3.0-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce | 5:27.2.1-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce | 5:27.2.0-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce | 5:27.1.2-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce | 5:27.1.1-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce | 5:27.1.0-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce | 5:27.0.3-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce | 5:27.0.2-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce | 5:27.0.1-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce | 5:26.1.4-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce | 5:26.1.3-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce | 5:26.1.2-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce | 5:26.1.1-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce | 5:26.1.0-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce | 5:26.0.2-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce | 5:26.0.1-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce | 5:26.0.0-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
admin-tccl@Admin:~$ apt-cache madison docker-ce-cli
docker-ce-cli | 5:27.3.1-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce-cli | 5:27.3.0-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce-cli | 5:27.2.1-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce-cli | 5:27.2.0-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce-cli | 5:27.1.2-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce-cli | 5:27.1.1-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce-cli | 5:27.1.0-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce-cli | 5:27.0.3-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce-cli | 5:27.0.2-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce-cli | 5:27.0.1-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce-cli | 5:26.1.4-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce-cli | 5:26.1.3-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce-cli | 5:26.1.2-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce-cli | 5:26.1.1-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce-cli | 5:26.1.0-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce-cli | 5:26.0.2-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce-cli | 5:26.0.1-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce-cli | 5:26.0.0-1~ubuntu.24.04~noble |
https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
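
The apt-cache madison listings above are only needed when a specific Docker release has to be pinned; the install that follows simply takes the newest candidate (5:27.3.1). To pin one of the listed versions instead, apt uses package=version, along the lines of the sketch below (VERSION_STRING here is just the top entry from the madison output):

VERSION_STRING=5:27.3.1-1~ubuntu.24.04~noble
sudo apt-get install -y docker-ce=$VERSION_STRING docker-ce-cli=$VERSION_STRING \
  containerd.io docker-buildx-plugin docker-compose-plugin
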
admin-tccl@Admin:~$ sudo apt-get install -y docker-ce docker-ce-cli containerd.io
[sudo] password for admin-tccl:
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
docker-buildx-plugin docker-ce-rootless-extras docker-compose-plugin git git-man
liberror-perl libslirp0 pigz slirp4netns
Suggested packages:
aufs-tools cgroupfs-mount | cgroup-lite git-daemon-run | git-daemon-sysvinit git-
doc git-email git-gui gitk gitweb git-cvs git-mediawiki git-svn
The following NEW packages will be installed:
containerd.io docker-buildx-plugin docker-ce docker-ce-cli docker-ce-rootless-
extras docker-compose-plugin git git-man liberror-perl libslirp0 pigz
slirp4netns
0 upgraded, 12 newly installed, 0 to remove and 3 not upgraded.
Need to get 128 MB of archives.
After this operation, 466 MB of additional disk space will be used.
Get:1 https://download.docker.com/linux/ubuntu noble/stable amd64 containerd.io
amd64 1.7.22-1 [29.5 MB]
Get:2 http://in.archive.ubuntu.com/ubuntu noble/universe amd64 pigz amd64 2.8-1
[65.6 kB]
Get:3 http://in.archive.ubuntu.com/ubuntu noble/main amd64 liberror-perl all
0.17029-2 [25.6 kB]
Get:4 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 git-man all
1:2.43.0-1ubuntu7.1 [1,100 kB]
Get:5 http://in.archive.ubuntu.com/ubuntu noble-updates/main amd64 git amd64
1:2.43.0-1ubuntu7.1 [3,679 kB]
Get:6 http://in.archive.ubuntu.com/ubuntu noble/main amd64 libslirp0 amd64 4.7.0-
1ubuntu3 [63.8 kB]
Get:7 http://in.archive.ubuntu.com/ubuntu noble/universe amd64 slirp4netns amd64
1.2.1-1build2 [34.9 kB]
Get:8 https://download.docker.com/linux/ubuntu noble/stable amd64 docker-buildx-
plugin amd64 0.17.1-1~ubuntu.24.04~noble [30.3 MB]
Get:9 https://download.docker.com/linux/ubuntu noble/stable amd64 docker-ce-cli
amd64 5:27.3.1-1~ubuntu.24.04~noble [15.0 MB]
Get:10 https://download.docker.com/linux/ubuntu noble/stable amd64 docker-ce amd64
5:27.3.1-1~ubuntu.24.04~noble [25.6 MB]
Get:11 https://download.docker.com/linux/ubuntu noble/stable amd64 docker-ce-
rootless-extras amd64 5:27.3.1-1~ubuntu.24.04~noble [9,588 kB]
Get:12 https://download.docker.com/linux/ubuntu noble/stable amd64 docker-compose-
plugin amd64 2.29.7-1~ubuntu.24.04~noble [12.7 MB]
Fetched 128 MB in 16s (8,198 kB/s)
Selecting previously unselected package pigz.
(Reading database ... 148106 files and directories currently installed.)
Preparing to unpack .../00-pigz_2.8-1_amd64.deb ...
Unpacking pigz (2.8-1) ...
Selecting previously unselected package containerd.io.
Preparing to unpack .../01-containerd.io_1.7.22-1_amd64.deb ...
Unpacking containerd.io (1.7.22-1) ...
Selecting previously unselected package docker-buildx-plugin.
Preparing to unpack .../02-docker-buildx-plugin_0.17.1-
1~ubuntu.24.04~noble_amd64.deb ...
Unpacking docker-buildx-plugin (0.17.1-1~ubuntu.24.04~noble) ...
Selecting previously unselected package docker-ce-cli.
Preparing to unpack .../03-docker-ce-cli_5%3a27.3.1-
1~ubuntu.24.04~noble_amd64.deb ...
Unpacking docker-ce-cli (5:27.3.1-1~ubuntu.24.04~noble) ...
Selecting previously unselected package docker-ce.
Preparing to unpack .../04-docker-ce_5%3a27.3.1-1~ubuntu.24.04~noble_amd64.deb ...
Unpacking docker-ce (5:27.3.1-1~ubuntu.24.04~noble) ...
Selecting previously unselected package docker-ce-rootless-extras.
Preparing to unpack .../05-docker-ce-rootless-extras_5%3a27.3.1-
1~ubuntu.24.04~noble_amd64.deb ...
Unpacking docker-ce-rootless-extras (5:27.3.1-1~ubuntu.24.04~noble) ...
Selecting previously unselected package docker-compose-plugin.
Preparing to unpack .../06-docker-compose-plugin_2.29.7-
1~ubuntu.24.04~noble_amd64.deb ...
Unpacking docker-compose-plugin (2.29.7-1~ubuntu.24.04~noble) ...
Selecting previously unselected package liberror-perl.
Preparing to unpack .../07-liberror-perl_0.17029-2_all.deb ...
Unpacking liberror-perl (0.17029-2) ...
Selecting previously unselected package git-man.
Preparing to unpack .../08-git-man_1%3a2.43.0-1ubuntu7.1_all.deb ...
Unpacking git-man (1:2.43.0-1ubuntu7.1) ...
Selecting previously unselected package git.
Preparing to unpack .../09-git_1%3a2.43.0-1ubuntu7.1_amd64.deb ...
Unpacking git (1:2.43.0-1ubuntu7.1) ...
Selecting previously unselected package libslirp0:amd64.
Preparing to unpack .../10-libslirp0_4.7.0-1ubuntu3_amd64.deb ...
Unpacking libslirp0:amd64 (4.7.0-1ubuntu3) ...
Selecting previously unselected package slirp4netns.
Preparing to unpack .../11-slirp4netns_1.2.1-1build2_amd64.deb ...
Unpacking slirp4netns (1.2.1-1build2) ...
Setting up liberror-perl (0.17029-2) ...
Setting up docker-buildx-plugin (0.17.1-1~ubuntu.24.04~noble) ...
Setting up containerd.io (1.7.22-1) ...
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service →
/usr/lib/systemd/system/containerd.service.
Setting up docker-compose-plugin (2.29.7-1~ubuntu.24.04~noble) ...
Setting up docker-ce-cli (5:27.3.1-1~ubuntu.24.04~noble) ...
Setting up libslirp0:amd64 (4.7.0-1ubuntu3) ...
Setting up pigz (2.8-1) ...
Setting up git-man (1:2.43.0-1ubuntu7.1) ...
Setting up docker-ce-rootless-extras (5:27.3.1-1~ubuntu.24.04~noble) ...
Setting up slirp4netns (1.2.1-1build2) ...
Setting up docker-ce (5:27.3.1-1~ubuntu.24.04~noble) ...
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service →
/usr/lib/systemd/system/docker.service.
Created symlink /etc/systemd/system/sockets.target.wants/docker.socket →
/usr/lib/systemd/system/docker.socket.
Setting up git (1:2.43.0-1ubuntu7.1) ...
Processing triggers for man-db (2.12.0-4build2) ...
Processing triggers for libc-bin (2.39-0ubuntu8.3) ...
admin-tccl@Admin:~$ sudo systemctl enable docker
Synchronizing state of docker.service with SysV service script with
/usr/lib/systemd/systemd-sysv-install.
Executing: /usr/lib/systemd/systemd-sysv-install enable docker
admin-tccl@Admin:~$ sudo systemctl enable containerd
admin-tccl@Admin:~$ sudo systemctl start docker
admin-tccl@Admin:~$ sudo systemctl start containerd
admin-tccl@Admin:~$
admin-tccl@Admin:~$ ^C
admin-tccl@Admin:~$ sudo systemctl status containerd
● containerd.service - containerd container runtime
Loaded: loaded (/usr/lib/systemd/system/containerd.service; enabled; preset:
enabled)
Active: active (running) since Wed 2024-10-23 22:46:08 IST; 2min 42s ago
Docs: https://containerd.io
Main PID: 27491 (containerd)
Tasks: 16
Memory: 14.9M (peak: 16.6M)
CPU: 264ms
CGroup: /system.slice/containerd.service
└─27491 /usr/bin/containerd

Oct 23 22:46:08 Admin containerd[27491]: time="2024-10-23T22:46:08.517661463+05:30"
level=info msg="skip loading plugin \"io.containerd.tracing.processo>
Oct 23 22:46:08 Admin containerd[27491]: time="2024-10-23T22:46:08.517681019+05:30"
level=info msg="loading plugin \"io.containerd.internal.v1.tracing\">
Oct 23 22:46:08 Admin containerd[27491]: time="2024-10-23T22:46:08.517712936+05:30"
level=info msg="skip loading plugin \"io.containerd.internal.v1.trac>
Oct 23 22:46:08 Admin containerd[27491]: time="2024-10-23T22:46:08.517727812+05:30"
level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\">
Oct 23 22:46:08 Admin containerd[27491]: time="2024-10-23T22:46:08.517746319+05:30"
level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type>
Oct 23 22:46:08 Admin containerd[27491]: time="2024-10-23T22:46:08.517766922+05:30"
level=info msg="NRI interface is disabled by configuration."
Oct 23 22:46:08 Admin containerd[27491]: time="2024-10-23T22:46:08.518086512+05:30"
level=info msg=serving... address=/run/containerd/containerd.sock.tt>
Oct 23 22:46:08 Admin containerd[27491]: time="2024-10-23T22:46:08.518170949+05:30"
level=info msg=serving... address=/run/containerd/containerd.sock
Oct 23 22:46:08 Admin containerd[27491]: time="2024-10-23T22:46:08.518264395+05:30"
level=info msg="containerd successfully booted in 0.036133s"
Oct 23 22:46:08 Admin systemd[1]: Started containerd.service - containerd container
runtime.
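
containerd is now running with the stock configuration shipped by the containerd.io package. For a kubeadm cluster a common follow-up step (not shown in this transcript) is to generate a full default config, which re-enables the CRI plugin that the packaged config disables, and to switch the runc cgroup driver to systemd, which the kubelet expects on a cgroup v2 host such as Ubuntu 24.04. A sketch:

containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd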

admin-tccl@Admin:~$ sudo systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; preset:
enabled)
Active: active (running) since Wed 2024-10-23 22:46:10 IST; 3min 7s ago
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Main PID: 27837 (dockerd)
Tasks: 18
Memory: 22.5M (peak: 24.4M)
CPU: 568ms
CGroup: /system.slice/docker.service
└─27837 /usr/bin/dockerd -H fd://
--containerd=/run/containerd/containerd.sock

Oct 23 22:46:10 Admin dockerd[27837]: time="2024-10-23T22:46:10.179999351+05:30"
level=info msg="Starting up"
Oct 23 22:46:10 Admin dockerd[27837]: time="2024-10-23T22:46:10.181334768+05:30"
level=info msg="detected 127.0.0.53 nameserver, assuming systemd-resolv>
Oct 23 22:46:10 Admin dockerd[27837]: time="2024-10-23T22:46:10.283387367+05:30"
level=info msg="Loading containers: start."
Oct 23 22:46:10 Admin dockerd[27837]: time="2024-10-23T22:46:10.852740530+05:30"
level=info msg="Loading containers: done."
Oct 23 22:46:10 Admin dockerd[27837]: time="2024-10-23T22:46:10.886514343+05:30"
level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
Oct 23 22:46:10 Admin dockerd[27837]: time="2024-10-23T22:46:10.886551498+05:30"
level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
Oct 23 22:46:10 Admin dockerd[27837]: time="2024-10-23T22:46:10.886579155+05:30"
level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=fa>
Oct 23 22:46:10 Admin dockerd[27837]: time="2024-10-23T22:46:10.886768632+05:30"
level=info msg="Daemon has completed initialization"
Oct 23 22:46:10 Admin dockerd[27837]: time="2024-10-23T22:46:10.941479044+05:30"
level=info msg="API listen on /run/docker.sock"
Oct 23 22:46:10 Admin systemd[1]: Started docker.service - Docker Application
Container Engine.
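
The two WARNING lines about bridge-nf-call-iptables being disabled matter for Kubernetes: kubeadm expects the br_netfilter module to be loaded, bridged traffic to be visible to iptables, and IPv4 forwarding to be on. The usual prerequisite step, taken from the standard kubeadm setup and not shown in this transcript, is roughly:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system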

admin-tccl@Admin:~$ sudo apt-get update
Hit:1 https://download.docker.com/linux/ubuntu noble InRelease
Hit:2 http://security.ubuntu.com/ubuntu noble-security InRelease
Hit:3 http://in.archive.ubuntu.com/ubuntu noble InRelease
Hit:4 http://in.archive.ubuntu.com/ubuntu noble-updates InRelease
Hit:5 http://in.archive.ubuntu.com/ubuntu noble-backports InRelease
Reading package lists... Done
admin-tccl@Admin:~$ sudo apt-get install -y apt-transport-https ca-certificates curl
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package ca-certificates
admin-tccl@Admin:~$ sudo apt-get install -y apt-transport-https ca-certificates curl
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
ca-certificates is already the newest version (20240203).
curl is already the newest version (8.5.0-2ubuntu10.4).
The following NEW packages will be installed:
apt-transport-https
0 upgraded, 1 newly installed, 0 to remove and 3 not upgraded.
Need to get 3,974 B of archives.
After this operation, 35.8 kB of additional disk space will be used.
Get:1 http://in.archive.ubuntu.com/ubuntu noble/universe amd64 apt-transport-https
all 2.7.14build2 [3,974 B]
Fetched 3,974 B in 1s (6,836 B/s)
Selecting previously unselected package apt-transport-https.
(Reading database ... 149449 files and directories currently installed.)
Preparing to unpack .../apt-transport-https_2.7.14build2_all.deb ...
Unpacking apt-transport-https (2.7.14build2) ...
Setting up apt-transport-https (2.7.14build2) ...
admin-tccl@Admin:~$ sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
admin-tccl@Admin:~$ ^C
admin-tccl@Admin:~$ echo $?
130
admin-tccl@Admin:~$ sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
admin-tccl@Admin:~$ echo $?
0
admin-tccl@Admin:~$ ls -l /usr/share/keyrings/kubernetes-archive-keyring.gpg
-rw-r--r-- 1 root root 1022 Oct 23 22:56 /usr/share/keyrings/kubernetes-archive-keyring.gpg
admin-tccl@Admin:~$ echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-
keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" |sudo tee
/etc/apt/sources.list.d/kubernetes.list
deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg]
https://apt.kubernetes.io/ kubernetes-xenial main
admin-tccl@Admin:~$ echo $?
0
admin-tccl@Admin:~$ sudo apt-get update
Hit:1 https://download.docker.com/linux/ubuntu noble InRelease
Hit:2 http://in.archive.ubuntu.com/ubuntu noble InRelease
Hit:4 http://in.archive.ubuntu.com/ubuntu noble-updates InRelease
Hit:5 http://in.archive.ubuntu.com/ubuntu noble-backports InRelease
Ign:3 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Err:6 https://packages.cloud.google.com/apt kubernetes-xenial Release
404 Not Found [IP: 2404:6800:4007:814::200e 443]
Hit:7 http://security.ubuntu.com/ubuntu noble-security InRelease
Reading package lists... Done
E: The repository 'https://apt.kubernetes.io kubernetes-xenial Release' does not
have a Release file.
N: Updating from such a repository can't be done securely, and is therefore
disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration
details.
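
The 404 here is expected: the Google-hosted apt.kubernetes.io / packages.cloud.google.com repositories were deprecated and later switched off, so the kubernetes-xenial suite no longer publishes a Release file and apt refuses it. Re-downloading the old key and re-adding the same list, as happens below, cannot fix this. The replacement is the community-owned pkgs.k8s.io repository, which is versioned per minor release; at the time of writing the upstream instructions for v1.31 look roughly like:

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | \
  sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | \
  sudo tee /etc/apt/sources.list.d/kubernetes.list
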
admin-tccl@Admin:~$ sudo apt-get install -y kubelet kubeadm kubectl
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package kubelet
E: Unable to locate package kubeadm
E: Unable to locate package kubectl
admin-tccl@Admin:~$ ls -l /etc/apt/sources.list.d/kubernetes.list
-rw-r--r-- 1 root root 117 Oct 23 23:01 /etc/apt/sources.list.d/kubernetes.list
admin-tccl@Admin:~$ cat etc/apt/sources.list.d/kubernetes.list
cat: etc/apt/sources.list.d/kubernetes.list: No such file or directory
admin-tccl@Admin:~$ cat /etc/apt/sources.list.d/kubernetes.list
deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main
admin-tccl@Admin:~$ sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
admin-tccl@Admin:~$ echo $?
0
admin-tccl@Admin:~$ sudo apt-get update
Hit:1 https://download.docker.com/linux/ubuntu noble InRelease
Hit:2 http://security.ubuntu.com/ubuntu noble-security InRelease
Hit:4 http://in.archive.ubuntu.com/ubuntu noble InRelease
Get:5 http://in.archive.ubuntu.com/ubuntu noble-updates InRelease [126 kB]
Ign:3 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Err:6 https://packages.cloud.google.com/apt kubernetes-xenial Release
404 Not Found [IP: 2404:6800:4007:827::200e 443]
Hit:7 http://in.archive.ubuntu.com/ubuntu noble-backports InRelease
Reading package lists... Done
E: The repository 'https://apt.kubernetes.io kubernetes-xenial Release' does not
have a Release file.
N: Updating from such a repository can't be done securely, and is therefore
disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration
details.
admin-tccl@Admin:~$ sudo apt-get install -y kubelet kubeadm kubectl
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package kubelet
E: Unable to locate package kubeadm
E: Unable to locate package kubectl
admin-tccl@Admin:~$ can i^C
admin-tccl@Admin:~$ sudo apt-get install -y kubelet-1.31.1 kubeadm-1.31.1 kubectl-
1.31.1
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package kubelet-1.31.1
E: Couldn't find any package by glob 'kubelet-1.31.1'
E: Couldn't find any package by regex 'kubelet-1.31.1'
E: Unable to locate package kubeadm-1.31.1
E: Couldn't find any package by glob 'kubeadm-1.31.1'
E: Couldn't find any package by regex 'kubeadm-1.31.1'
E: Unable to locate package kubectl-1.31.1
E: Couldn't find any package by glob 'kubectl-1.31.1'
E: Couldn't find any package by regex 'kubectl-1.31.1'
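Two things are going wrong in the install attempts above: the repository itself is still returning 404 (so no kube* packages are visible to apt at all), and apt pins versions with an equals sign rather than a hyphen. Once a working repository is configured (as happens further down), a specific version could be requested roughly like this, where the exact version string has to match what the repository actually publishes:

    apt-cache madison kubeadm                 # list the versions the repo offers
    sudo apt-get install -y kubelet=1.30.6-1.1 kubeadm=1.30.6-1.1 kubectl=1.30.6-1.1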
admin-tccl@Admin:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 24.04.1 LTS
Release: 24.04
Codename: noble
admin-tccl@Admin:~$ ^C
admin-tccl@Admin:~$ sudo apt-get update
Hit:1 https://download.docker.com/linux/ubuntu noble InRelease
Hit:2 http://in.archive.ubuntu.com/ubuntu noble InRelease
Hit:3 http://security.ubuntu.com/ubuntu noble-security InRelease
Hit:4 http://in.archive.ubuntu.com/ubuntu noble-updates InRelease
Hit:5 http://in.archive.ubuntu.com/ubuntu noble-backports InRelease
Ign:6 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Err:7 https://packages.cloud.google.com/apt kubernetes-xenial Release
404 Not Found [IP: 2404:6800:4002:82e::200e 443]
Reading package lists... Done
E: The repository 'https://apt.kubernetes.io kubernetes-xenial Release' does not
have a Release file.
N: Updating from such a repository can't be done securely, and is therefore
disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration
details.
admin-tccl@Admin:~$ sudo apt-get install lsb-release
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
lsb-release is already the newest version (12.0-2).
lsb-release set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 3 not upgraded.
admin-tccl@Admin:~$ lsb-release -a
Command 'lsb-release' not found, did you mean:
command 'lsb_release' from deb lsb-release (12.0-2)
Try: sudo apt install <deb name>
admin-tccl@Admin:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 24.04.1 LTS
Release: 24.04
Codename: noble
admin-tccl@Admin:~$ sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-
keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
admin-tccl@Admin:~$ echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-
keyring.gpg] https://apt.kubernetes.io/ kubernetes-noble main" |sudo tee
/etc/apt/sources.list.d/kubernetes.list
deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg]
https://apt.kubernetes.io/ kubernetes-noble main
admin-tccl@Admin:~$ sudo apt-get update
Hit:1 https://download.docker.com/linux/ubuntu noble InRelease
Hit:3 http://security.ubuntu.com/ubuntu noble-security InRelease
Ign:2 https://packages.cloud.google.com/apt kubernetes-noble InRelease
Err:4 https://packages.cloud.google.com/apt kubernetes-noble Release
404 Not Found [IP: 2404:6800:4007:805::200e 443]
Hit:5 http://in.archive.ubuntu.com/ubuntu noble InRelease
Hit:6 http://in.archive.ubuntu.com/ubuntu noble-updates InRelease
Hit:7 http://in.archive.ubuntu.com/ubuntu noble-backports InRelease
Reading package lists... Done
E: The repository 'https://apt.kubernetes.io kubernetes-noble Release' does not
have a Release file.
N: Updating from such a repository can't be done securely, and is therefore
disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration
details.
admin-tccl@Admin:~$ sudo apt-get install -y kubelet kubeadm kubectl
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package kubelet
E: Unable to locate package kubeadm
E: Unable to locate package kubectl
admin-tccl@Admin:~$ ^C
admin-tccl@Admin:~$ sudo rm /etc/apt/sources.list.d/kubernetes.list
admin-tccl@Admin:~$ sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-
keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
admin-tccl@Admin:~$ echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-
keyring.gpg] https://apt.kubernetes.io/ kubernetes-noble main" | sudo tee
/etc/apt/sources.list.d/kubernetes.list
deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg]
https://apt.kubernetes.io/ kubernetes-noble main
admin-tccl@Admin:~$ sudo apt-get update
Hit:1 https://download.docker.com/linux/ubuntu noble InRelease
Hit:2 http://security.ubuntu.com/ubuntu noble-security InRelease
Hit:3 http://in.archive.ubuntu.com/ubuntu noble InRelease
Hit:4 http://in.archive.ubuntu.com/ubuntu noble-updates InRelease
Hit:5 http://in.archive.ubuntu.com/ubuntu noble-backports InRelease
Ign:6 https://packages.cloud.google.com/apt kubernetes-noble InRelease
Err:7 https://packages.cloud.google.com/apt kubernetes-noble Release
404 Not Found [IP: 2404:6800:4007:827::200e 443]
Reading package lists... Done
E: The repository 'https://apt.kubernetes.io kubernetes-noble Release' does not
have a Release file.
N: Updating from such a repository can't be done securely, and is therefore
disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration
details.
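Every attempt against apt.kubernetes.io / packages.cloud.google.com fails with 404 regardless of the codename (kubernetes-xenial, kubernetes-noble) because that legacy repository was deprecated and later removed; the replacement is the community-owned pkgs.k8s.io repository, which is what finally works below. A minimal sketch of the working setup (the /etc/apt/keyrings directory may need to be created first, and the v1.30 path segment selects the minor-version stream):

    sudo mkdir -p /etc/apt/keyrings
    curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
    echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
    sudo apt-get update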
admin-tccl@Admin:~$ cat /etc/os-release
PRETTY_NAME="Ubuntu 24.04.1 LTS"
NAME="Ubuntu"
VERSION_ID="24.04"
VERSION="24.04.1 LTS (Noble Numbat)"
VERSION_CODENAME=noble
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=noble
LOGO=ubuntu-logo
admin-tccl@Admin:~$ curl -fsSL
https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o
/etc/apt/keyrings/kubernetes-apt-keyring.gpg~
admin-tccl@Admin:~$ sudo curl -fsSL
https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o
/etc/apt/keyrings/kubernetes-apt-keyring.gpg
admin-tccl@Admin:~$ echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-
keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee
/etc/apt/sources.list.d/kubernetes.list
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg]
https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /
admin-tccl@Admin:~$ sudo apt-get update
Hit:1 https://download.docker.com/linux/ubuntu noble InRelease
Hit:2 http://security.ubuntu.com/ubuntu noble-security InRelease
Hit:3 http://in.archive.ubuntu.com/ubuntu noble InRelease
Hit:4 http://in.archive.ubuntu.com/ubuntu noble-updates InRelease
Hit:5 http://in.archive.ubuntu.com/ubuntu noble-backports InRelease
Get:6 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/
stable:/v1.30/deb InRelease [1,189 B]
Get:7 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/
stable:/v1.30/deb Packages [10.5 kB]
Fetched 11.7 kB in 3s (3,777 B/s)
Reading package lists... Done
admin-tccl@Admin:~$ sudo apt-get install -y kubelet kubeadm kubectl
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
conntrack cri-tools kubernetes-cni
The following NEW packages will be installed:
conntrack cri-tools kubeadm kubectl kubelet kubernetes-cni
0 upgraded, 6 newly installed, 0 to remove and 3 not upgraded.
Need to get 93.5 MB of archives.
After this operation, 341 MB of additional disk space will be used.
Get:1 http://in.archive.ubuntu.com/ubuntu noble/main amd64 conntrack amd64 1:1.4.8-
1ubuntu1 [37.9 kB]
Get:2 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/
stable:/v1.30/deb cri-tools 1.30.1-1.1 [21.3 MB]
Get:3 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/
stable:/v1.30/deb kubeadm 1.30.6-1.1 [10.4 MB]
Get:4 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/
stable:/v1.30/deb kubectl 1.30.6-1.1 [10.8 MB]
Get:5 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/
stable:/v1.30/deb kubernetes-cni 1.4.0-1.1 [32.9 MB]
Get:6 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/
stable:/v1.30/deb kubelet 1.30.6-1.1 [18.1 MB]
Fetched 93.5 MB in 27s (3,491 kB/s)
Selecting previously unselected package conntrack.
(Reading database ... 149453 files and directories currently installed.)
Preparing to unpack .../0-conntrack_1%3a1.4.8-1ubuntu1_amd64.deb ...
Unpacking conntrack (1:1.4.8-1ubuntu1) ...
Selecting previously unselected package cri-tools.
Preparing to unpack .../1-cri-tools_1.30.1-1.1_amd64.deb ...
Unpacking cri-tools (1.30.1-1.1) ...
Selecting previously unselected package kubeadm.
Preparing to unpack .../2-kubeadm_1.30.6-1.1_amd64.deb ...
Unpacking kubeadm (1.30.6-1.1) ...
Selecting previously unselected package kubectl.
Preparing to unpack .../3-kubectl_1.30.6-1.1_amd64.deb ...
Unpacking kubectl (1.30.6-1.1) ...
Selecting previously unselected package kubernetes-cni.
Preparing to unpack .../4-kubernetes-cni_1.4.0-1.1_amd64.deb ...
Unpacking kubernetes-cni (1.4.0-1.1) ...
Selecting previously unselected package kubelet.
Preparing to unpack .../5-kubelet_1.30.6-1.1_amd64.deb ...
Unpacking kubelet (1.30.6-1.1) ...
Setting up conntrack (1:1.4.8-1ubuntu1) ...
Setting up kubectl (1.30.6-1.1) ...
Setting up cri-tools (1.30.1-1.1) ...
Setting up kubernetes-cni (1.4.0-1.1) ...
Setting up kubeadm (1.30.6-1.1) ...
Setting up kubelet (1.30.6-1.1) ...
Processing triggers for man-db (2.12.0-4build2) ...
admin-tccl@Admin:~$ sudo apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.
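Holding the packages keeps routine apt upgrades from bumping cluster components out from under the control plane. When a deliberate upgrade is planned later, the hold can be lifted and re-applied, for example:

    sudo apt-mark unhold kubelet kubeadm kubectl
    # ...upgrade kubeadm/kubelet/kubectl to the intended version...
    sudo apt-mark hold kubelet kubeadm kubectl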
admin-tccl@Admin:~$ sudo kubeadm init
I1023 23:37:44.448753 32658 version.go:256] remote version is much newer:
v1.31.2; falling back to: stable-1.30
[init] Using Kubernetes version: v1.30.6
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: time="2024-10-
23T23:37:45+05:30" level=fatal msg="validate service connection: validate CRI v1
runtime API for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error:
code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--
ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
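The "unknown service runtime.v1.RuntimeService" error means containerd answered on its socket but its CRI plugin is not loaded; the Docker-packaged default config dumped further below disables it (disabled_plugins = ["cri"]). The same symptom can be reproduced directly against the socket, for example:

    sudo crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock info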
admin-tccl@Admin:~$ sudo systemctl enable --now kubelet
admin-tccl@Admin:~$ sudo kubead, init --pod-network-cidr=192.168.0.0/16 --cri-
socket=/var/run/containerd/containerd.sock --v=5
sudo: kubead,: command not found
admin-tccl@Admin:~$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --cri-
socket=/var/run/containerd/containerd.sock --v=5
W1023 23:42:29.045439 33217 initconfiguration.go:125] Usage of CRI endpoints
without URL scheme is deprecated and can cause kubelet errors in the future.
Automatically prepending scheme "unix" to the "criSocket" with value
"/var/run/containerd/containerd.sock". Please update your configuration!
I1023 23:42:29.045855 33217 interface.go:432] Looking for default routes with
IPv4 addresses
I1023 23:42:29.045872 33217 interface.go:437] Default route transits interface
"wlp3s0"
I1023 23:42:29.046148 33217 interface.go:209] Interface wlp3s0 is up
I1023 23:42:29.046224 33217 interface.go:257] Interface "wlp3s0" has 4
addresses :[192.168.29.215/24 2405:201:e00e:8087:ea80:94fa:d083:c79d/64
2405:201:e00e:8087:40d3:89de:effd:1dcb/64 fe80::a64d:8461:c950:4e78/64].
I1023 23:42:29.046255 33217 interface.go:224] Checking addr 192.168.29.215/24.
I1023 23:42:29.046274 33217 interface.go:231] IP found 192.168.29.215
I1023 23:42:29.046309 33217 interface.go:263] Found valid IPv4 address
192.168.29.215 for interface "wlp3s0".
I1023 23:42:29.046330 33217 interface.go:443] Found active IP 192.168.29.215
I1023 23:42:29.046363 33217 kubelet.go:196] the value of
KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
I1023 23:42:29.055228 33217 version.go:187] fetching Kubernetes version from URL:
https://dl.k8s.io/release/stable-1.txt
I1023 23:42:29.392782 33217 version.go:256] remote version is much newer:
v1.31.2; falling back to: stable-1.30
I1023 23:42:29.392835 33217 version.go:187] fetching Kubernetes version from URL:
https://dl.k8s.io/release/stable-1.30.txt
[init] Using Kubernetes version: v1.30.6
[preflight] Running pre-flight checks
I1023 23:42:29.953113 33217 checks.go:561] validating Kubernetes and kubeadm
version
I1023 23:42:29.953153 33217 checks.go:166] validating if the firewall is enabled
and active
I1023 23:42:29.972176 33217 checks.go:201] validating availability of port 6443
I1023 23:42:29.972523 33217 checks.go:201] validating availability of port 10259
I1023 23:42:29.972579 33217 checks.go:201] validating availability of port 10257
I1023 23:42:29.972629 33217 checks.go:278] validating the existence of file
/etc/kubernetes/manifests/kube-apiserver.yaml
I1023 23:42:29.972682 33217 checks.go:278] validating the existence of file
/etc/kubernetes/manifests/kube-controller-manager.yaml
I1023 23:42:29.972717 33217 checks.go:278] validating the existence of file
/etc/kubernetes/manifests/kube-scheduler.yaml
I1023 23:42:29.972732 33217 checks.go:278] validating the existence of file
/etc/kubernetes/manifests/etcd.yaml
I1023 23:42:29.972757 33217 checks.go:428] validating if the connectivity type is
via proxy or direct
I1023 23:42:29.972809 33217 checks.go:467] validating http connectivity to first
IP address in the CIDR
I1023 23:42:29.972834 33217 checks.go:467] validating http connectivity to first
IP address in the CIDR
I1023 23:42:29.972851 33217 checks.go:102] validating the container runtime
I1023 23:42:30.004971 33217 checks.go:637] validating whether swap is enabled or
not
I1023 23:42:30.005092 33217 checks.go:368] validating the presence of executable
crictl
I1023 23:42:30.005138 33217 checks.go:368] validating the presence of executable
conntrack
I1023 23:42:30.005165 33217 checks.go:368] validating the presence of executable
ip
I1023 23:42:30.005196 33217 checks.go:368] validating the presence of executable
iptables
I1023 23:42:30.005235 33217 checks.go:368] validating the presence of executable
mount
I1023 23:42:30.005266 33217 checks.go:368] validating the presence of executable
nsenter
I1023 23:42:30.005329 33217 checks.go:368] validating the presence of executable
ethtool
I1023 23:42:30.005367 33217 checks.go:368] validating the presence of executable
tc
I1023 23:42:30.005392 33217 checks.go:368] validating the presence of executable
touch
I1023 23:42:30.005420 33217 checks.go:514] running all checks
I1023 23:42:30.021274 33217 checks.go:399] checking whether the given node name
is valid and reachable using net.LookupHost
I1023 23:42:30.021352 33217 checks.go:603] validating kubelet version
I1023 23:42:30.088947 33217 checks.go:128] validating if the "kubelet" service is
enabled and active
I1023 23:42:30.114639 33217 checks.go:201] validating availability of port 10250
I1023 23:42:30.114776 33217 checks.go:327] validating the contents of file
/proc/sys/net/ipv4/ip_forward
I1023 23:42:30.114844 33217 checks.go:201] validating availability of port 2379
I1023 23:42:30.114898 33217 checks.go:201] validating availability of port 2380
I1023 23:42:30.114948 33217 checks.go:241] validating the existence and emptiness
of directory /var/lib/etcd
[preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: time="2024-10-
23T23:42:30+05:30" level=fatal msg="validate service connection: validate CRI v1
runtime API for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error:
code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--
ignore-preflight-errors=...`
error execution phase preflight
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:260
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:128
github.com/spf13/cobra.(*Command).execute
github.com/spf13/[email protected]/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/[email protected]/command.go:1068
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/[email protected]/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:52
main.main
k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
runtime/proc.go:271
runtime.goexit
runtime/asm_amd64.s:1695
admin-tccl@Admin:~$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16
I1023 23:43:51.039741 33441 version.go:256] remote version is much newer:
v1.31.2; falling back to: stable-1.30
[init] Using Kubernetes version: v1.30.6
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: time="2024-10-
23T23:43:51+05:30" level=fatal msg="validate service connection: validate CRI v1
runtime API for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error:
code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--
ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
admin-tccl@Admin:~$ which containerd
/usr/bin/containerd
admin-tccl@Admin:~$ sudo systemctl status containerd
● containerd.service - containerd container runtime
Loaded: loaded (/usr/lib/systemd/system/containerd.service; enabled; preset:
enabled)
Active: active (running) since Wed 2024-10-23 22:46:08 IST; 1h 1min ago
Docs: https://containerd.io
Main PID: 27491 (containerd)
Tasks: 16
Memory: 13.9M (peak: 16.6M)
CPU: 3.479s
CGroup: /system.slice/containerd.service
└─27491 /usr/bin/containerd

Oct 23 22:46:08 Admin containerd[27491]: time="2024-10-23T22:46:08.517661463+05:30"
level=info msg="skip loading plugin \"io.containerd.tracing.processo>
Oct 23 22:46:08 Admin containerd[27491]: time="2024-10-23T22:46:08.517681019+05:30"
level=info msg="loading plugin \"io.containerd.internal.v1.tracing\">
Oct 23 22:46:08 Admin containerd[27491]: time="2024-10-23T22:46:08.517712936+05:30"
level=info msg="skip loading plugin \"io.containerd.internal.v1.trac>
Oct 23 22:46:08 Admin containerd[27491]: time="2024-10-23T22:46:08.517727812+05:30"
level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\">
Oct 23 22:46:08 Admin containerd[27491]: time="2024-10-23T22:46:08.517746319+05:30"
level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type>
Oct 23 22:46:08 Admin containerd[27491]: time="2024-10-23T22:46:08.517766922+05:30"
level=info msg="NRI interface is disabled by configuration."
Oct 23 22:46:08 Admin containerd[27491]: time="2024-10-23T22:46:08.518086512+05:30"
level=info msg=serving... address=/run/containerd/containerd.sock.tt>
Oct 23 22:46:08 Admin containerd[27491]: time="2024-10-23T22:46:08.518170949+05:30"
level=info msg=serving... address=/run/containerd/containerd.sock
Oct 23 22:46:08 Admin containerd[27491]: time="2024-10-23T22:46:08.518264395+05:30"
level=info msg="containerd successfully booted in 0.036133s"
Oct 23 22:46:08 Admin systemd[1]: Started containerd.service - containerd container
runtime.

admin-tccl@Admin:~$ sudo cat /etc/containerd/config.toml


# Copyright 2018-2022 Docker Inc.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

#     http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

disabled_plugins = ["cri"]

#root = "/var/lib/containerd"
#state = "/run/containerd"
#subreaper = true
#oom_score = 0

#[grpc]
# address = "/run/containerd/containerd.sock"
# uid = 0
# gid = 0

#[debug]
# address = "/run/containerd/debug.sock"
# uid = 0
# gid = 0
# level = "info"
admin-tccl@Admin:~$ sudo vi /etc/containerd/config.toml
admin-tccl@Admin:~$ sudo vi /etc/containerd/config.toml
admin-tccl@Admin:~$ sudo nano /etc/containerd/config.toml
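The edit itself is not shown in the transcript; the essential change is to stop disabling the CRI plugin, i.e. turn disabled_plugins = ["cri"] into disabled_plugins = []. An alternative approach often used at this point (an assumption about what the vi/nano sessions above did, not something visible in the log) is to regenerate a full default config and switch containerd to the systemd cgroup driver so it matches the kubelet, followed by the restart shown on the next line:

    containerd config default | sudo tee /etc/containerd/config.toml
    sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml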
admin-tccl@Admin:~$ sudo systemctl restart containerd
admin-tccl@Admin:~$ sudo systemctl status containerd
● containerd.service - containerd container runtime
Loaded: loaded (/usr/lib/systemd/system/containerd.service; enabled; preset:
enabled)
Active: active (running) since Wed 2024-10-23 23:55:45 IST; 15s ago
Docs: https://containerd.io
Process: 35061 ExecStartPre=/sbin/modprobe overlay (code=exited,
status=0/SUCCESS)
Main PID: 35063 (containerd)
Tasks: 16
Memory: 14.5M (peak: 16.2M)
CPU: 124ms
CGroup: /system.slice/containerd.service
└─35063 /usr/bin/containerd

Oct 23 23:55:45 Admin containerd[35063]: time="2024-10-23T23:55:45.117482087+05:30"
level=error msg="failed to load cni during init, please check CRI pl>
Oct 23 23:55:45 Admin containerd[35063]: time="2024-10-23T23:55:45.117665211+05:30"
level=info msg="Start subscribing containerd event"
Oct 23 23:55:45 Admin containerd[35063]: time="2024-10-23T23:55:45.117777168+05:30"
level=info msg="Start recovering state"
Oct 23 23:55:45 Admin containerd[35063]: time="2024-10-23T23:55:45.117889473+05:30"
level=info msg=serving... address=/run/containerd/containerd.sock.tt>
Oct 23 23:55:45 Admin containerd[35063]: time="2024-10-23T23:55:45.117902952+05:30"
level=info msg="Start event monitor"
Oct 23 23:55:45 Admin containerd[35063]: time="2024-10-23T23:55:45.117970210+05:30"
level=info msg="Start snapshots syncer"
Oct 23 23:55:45 Admin containerd[35063]: time="2024-10-23T23:55:45.117987740+05:30"
level=info msg="Start cni network conf syncer for default"
Oct 23 23:55:45 Admin containerd[35063]: time="2024-10-23T23:55:45.117997378+05:30"
level=info msg=serving... address=/run/containerd/containerd.sock
Oct 23 23:55:45 Admin containerd[35063]: time="2024-10-23T23:55:45.118010089+05:30"
level=info msg="Start streaming server"
Oct 23 23:55:45 Admin containerd[35063]: time="2024-10-23T23:55:45.118139506+05:30"
level=info msg="containerd successfully booted in 0.032528s"
admin-tccl@Admin:~$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16
I1023 23:56:31.305786 35175 version.go:256] remote version is much newer:
v1.31.2; falling back to: stable-1.30
[init] Using Kubernetes version: v1.30.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your
internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config
images pull'
W1023 23:56:32.081881 35175 checks.go:844] detected that the sandbox image
"registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used
by kubeadm.It is recommended to use "registry.k8s.io/pause:3.9" as the CRI sandbox
image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [admin kubernetes
kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and
IPs [10.96.0.1 192.168.29.215]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [admin localhost] and IPs
[192.168.29.215 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [admin localhost] and IPs
[192.168.29.215 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file
"/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file
"/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static
Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz.
This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.683454ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 6.002681209s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the
"kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the
configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node admin as control-plane by adding the labels:
[node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-
load-balancers]
[mark-control-plane] Marking the node admin as control-plane by adding the taints
[node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: xfdnmw.04h4suwu5vlpmmwq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs
in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller
automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node
client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public"
namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable
kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.


Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as
root:

kubeadm join 192.168.29.215:6443 --token xfdnmw.04h4suwu5vlpmmwq \
        --discovery-token-ca-cert-hash sha256:4aef5bfc1d6332a75b1e631c40510bec9c9eeef00f1702d0383692019782d168
admin-tccl@Admin:~$ mkdir -p $HOME/ .kube
admin-tccl@Admin:~$ sudo cp -i /etc/Kubernetes/admin.conf $HOME/ .kube/config
cp: target '.kube/config': No such file or directory
admin-tccl@Admin:~$ mkdir -p $HOME/.kube
admin-tccl@Admin:~$ sudo cp -i /etc/Kubernetes/admin.conf $HOME/.kube/config
cp: cannot stat '/etc/Kubernetes/admin.conf': No such file or directory
admin-tccl@Admin:~$ ls /etc/kubernetes/
admin.conf controller-manager.conf kubelet.conf manifests pki scheduler.conf
super-admin.conf
admin-tccl@Admin:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/ .kube/config
cp: target '.kube/config': No such file or directory
admin-tccl@Admin:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
admin-tccl@Admin:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
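The failed copies above came from a stray space in "$HOME/ .kube" and the wrong-case path /etc/Kubernetes. With the kubeconfig in place and owned by the regular user, a quick sanity check (run without sudo, so that ~/.kube/config is actually used) would be:

    kubectl get nodes
    kubectl cluster-info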
admin-tccl@Admin:~$ wget
https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/
calico.yaml
--2024-10-24 00:11:35--
https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/
calico.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)...
2606:50c0:8001::154, 2606:50c0:8002::154, 2606:50c0:8003::154, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|
2606:50c0:8001::154|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 253815 (248K) [text/plain]
Saving to: ‘calico.yaml’

calico.yaml 100%
[=========================================================================>]
247.87K --.-KB/s in 0.04s

2024-10-24 00:11:36 (6.87 MB/s) - ‘calico.yaml’ saved [253815/253815]

admin-tccl@Admin:~$ kubectl apply -f calico.yaml
error: error validating "calico.yaml": error validating data: failed to download
openapi: Get "https://192.168.29.215:6443/openapi/v2?timeout=32s": dial tcp
192.168.29.215:6443: connect: connection refused; if you choose to ignore these
errors, turn validation off with --validate=false
admin-tccl@Admin:~$ sudo kubectl apply -f calico.yaml --validate=false
E1024 00:14:04.717642 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.718815 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.719729 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.721815 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.722977 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.726058 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.729629 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.732147 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.734289 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.739862 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.741843 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.750880 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.759228 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.760726 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.762596 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.766294 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.767967 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.769444 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.770822 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.771739 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.775826 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.785462 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.786729 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.788057 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.789793 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.790856 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.791804 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.792708 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.794813 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.798169 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:14:04.799477 42114 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "calico.yaml": Get "http://localhost:8080/api?timeout=32s":
dial tcp 127.0.0.1:8080: connect: connection refused
admin-tccl@Admin:~$ ^C
admin-tccl@Admin:~$ curl -O https://docs.projectcalico.org/manifests/calico.yaml
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 83 100 83 0 0 15 0 0:00:05 0:00:05 --:--:-- 19
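83 bytes is far too small to be the Calico manifest; curl was run without -L, so it saved the redirect response body rather than following it, which is why the next download goes to raw.githubusercontent.com instead. A quick check would have shown it:

    wc -c calico.yaml
    head calico.yaml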
admin-tccl@Admin:~$ nano calico.yaml
admin-tccl@Admin:~$ ll
total 92
drwxr-x--- 18 admin-tccl admin-tccl 4096 Oct 24 00:17 ./
drwxr-xr-x 3 root root 4096 Oct 23 16:29 ../
drwxr-xr-x 2 admin-tccl admin-tccl 4096 Oct 23 21:32 .anydesk/
-rw-r--r-- 1 admin-tccl admin-tccl 220 Mar 31 2024 .bash_logout
-rw-r--r-- 1 admin-tccl admin-tccl 3771 Mar 31 2024 .bashrc
drwx------ 12 admin-tccl admin-tccl 4096 Oct 23 22:03 .cache/
-rw-rw-r-- 1 admin-tccl admin-tccl 83 Oct 24 00:16 calico.yaml
drwx------ 13 admin-tccl admin-tccl 4096 Oct 23 21:55 .config/
drwxr-xr-x 2 admin-tccl admin-tccl 4096 Oct 23 16:32 Desktop/
drwxr-xr-x 3 admin-tccl admin-tccl 4096 Oct 23 21:48 Documents/
drwxr-xr-x 2 admin-tccl admin-tccl 4096 Oct 23 21:32 Downloads/
drwx------ 2 admin-tccl admin-tccl 4096 Oct 23 22:27 .gnupg/
drwxrwxr-x 2 admin-tccl admin-tccl 4096 Oct 24 00:07 .kube/
drwx------ 4 admin-tccl admin-tccl 4096 Oct 23 16:32 .local/
drwxr-xr-x 2 admin-tccl admin-tccl 4096 Oct 23 16:32 Music/
drwxr-xr-x 2 admin-tccl admin-tccl 4096 Oct 23 16:32 Pictures/
-rw-r--r-- 1 admin-tccl admin-tccl 807 Mar 31 2024 .profile
drwxr-xr-x 2 admin-tccl admin-tccl 4096 Oct 23 16:32 Public/
drwx------ 6 admin-tccl admin-tccl 4096 Oct 23 18:00 snap/
drwx------ 2 admin-tccl admin-tccl 4096 Oct 23 16:29 .ssh/
-rw-r--r-- 1 admin-tccl admin-tccl 0 Oct 23 21:53 .sudo_as_admin_successful
drwxr-xr-x 2 admin-tccl admin-tccl 4096 Oct 23 16:32 Templates/
drwxr-xr-x 3 admin-tccl admin-tccl 4096 Oct 23 21:47 Videos/
-rw-rw-r-- 1 admin-tccl admin-tccl 180 Oct 24 00:11 .wget-hsts
admin-tccl@Admin:~$ curl -O
https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/
calico.yaml
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 247k 100 247k 0 0 567k 0 --:--:-- --:--:-- --:--:-- 568k
admin-tccl@Admin:~$ nano calico.yaml
admin-tccl@Admin:~$ kubectl apply -f calico.yaml
error: error validating "calico.yaml": error validating data: failed to download
openapi: Get "https://192.168.29.215:6443/openapi/v2?timeout=32s": dial tcp
192.168.29.215:6443: connect: connection refused; if you choose to ignore these
errors, turn validation off with --validate=false
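Two separate problems overlap here. Running kubectl under sudo (the --validate=false attempt above) does not see the regular user's ~/.kube/config and falls back to the anonymous default of http://localhost:8080, which explains all the 127.0.0.1:8080 errors. The connection refused on 192.168.29.215:6443, on the other hand, means the kube-apiserver itself is not serving at the moment, so even a correctly configured kubectl fails; that is diagnosed further below. The immediate fix for the first problem is simply to drop sudo:

    kubectl apply -f calico.yaml        # as the regular user, using ~/.kube/config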
admin-tccl@Admin:~$
admin-tccl@Admin:~$ cat $HOME/.kube/config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data:
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJYlJ3NFVFWXFaSW93RFF
ZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TkRFd01qTX
hPREl5TWpOYUZ3MHpOREV3TWpFeE9ESTNNak5hTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z
2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUR1RkVGSEdqdWN2THd2U2N2Rm1k
RWlsdTZEdUlSMU1GdzVnSGN4eGRJRlFQNVU4RGpPK2VtZGdHY2oKbCtWeVJYdlJ1aXAvTW5wNHNQYVZFNER
DOHYydHZzWWVqczJQSkVUSU8zb2U5WHV6NEN1VUR6RXJZdG12NFQ3Ugo0L09QZUwzamduS1E2ZFBkM1RRMm
dleEgvVUtkLy9tV3pWMkpWK2Zvd2FyVnpnenpiZGdkOGNmWFVHZGFqbnB2CnovRUxTb01DVHErak80dThRW
nBaUEJnK2FJc04ySXFBY3ZNOS9iZCtIN2hQcHhoYUJFYjc3WFZQMG9pK2pLMDAKemREUkIxWEpadmRhakhv
TlBaQjNaQ2pnamN5dlIyY2hkUnkvdVV0clZWcnpIY0QydlZ1WUl4cGI4Z3BJckgrMApwLzlvQjhwU0Y4NkZ
HN3FNcWlRQ014aEVKMiszQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQW
Y4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJTUGF2dnhsNWJFMjN1RFVTRlUxK3RseCsrd1FqQVYKQmdOVkhSR
UVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ3IvMkVBWnU1NwpMZEFO
aldETFFBK2FMSW1MVnViRlIxQ2JtdVN3bFE1TnQ3VUlHQTBUSkdyLzBoNmpmaTg2aUFXbGJxZll6czZxCmI
1b050ekF1TlcxYm9sM2c4Tiswd25oRW1NRWdmUUhLN2FmZkFBbGZIbmpuSTNlcVprQno4ZFBLK3JpaTRLcn
QKZHR2bm1HZVVZMklXalFReFJsb3pjK21Wa01heUw5MjZQdkdZM3dMaUtUWmNTcHBkNDJKUUhzTzVFSGoyV
nhwcgorRmszb3lBRWRvOFM2WjVaOWp1NVcwRWY0Q2ZhR0tnOTRibjhIMEFzWHhEYWNWSm9sQ1c0RXQwNnZM
NlpBdkIzClBCNjI5RnJkZ01HWEJxNndNNkJnTWFqLzdwMTcrSldadnlSU21aZWNsZnBhRVF0a1pVeE1tR1p
DTTB4VTBkbVAKZWVzTWh4ZmNIbllICi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
server: https://192.168.29.215:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data:
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURLVENDQWhHZ0F3SUJBZ0lJVDdNUEtBTkpkODh3RFF
ZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TkRFd01qTX
hPREl5TWpOYUZ3MHlOVEV3TWpNeE9ESTNNalZhTUR3eApIekFkQmdOVkJBb1RGbXQxWW1WaFpHMDZZMngxY
zNSbGNpMWhaRzFwYm5NeEdUQVhCZ05WQkFNVEVHdDFZbVZ5CmJtVjBaWE10WVdSdGFXNHdnZ0VpTUEwR0NT
cUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFDZzZQeFIKQkhITzB3RVh5SGphZzlLbTFFN1A2Ukd
lUGdsKyt6Rm5neUdtc2h4eUpycGQ4a0dUd2xiZHVzSXpzUWFaeG9LagpnbmxGcDFWKzdoQkpiRlVhRFZxd2
5ZY05La3BXM0xqU0xVcHhNWk0xTXg1Y3pDY3RQRXZIeVFJVzNCNTg2QXEzCm1rZ0sxTWpmSURrRlRXTFZhc
0RPU3V5dUdpMGRuYjhPdmNlaGJLUGtWNkJVZTVTK3o1YnEvZDhZeS85aEpzN1oKTHZENzgvNWd3clhFZlRy
TkNtdlpiZ2hCdURvNzFFT0tkeWtySXVGVDVFdTFYQW8zR2RwNjhISHlWQWtaQkV1TQpRZ284MmVIZlFlQnh
nOEFyYlBsRGpSM0pBUnMrMFcybFhrWnhzM3dXMDVjN2VJWWdwaVE3V2U5bUZLVnB0WGF4CkdKV3QyekFYdj
JZd3ZpZEpBZ01CQUFHalZqQlVNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUsKQmdnckJnR
UZCUWNEQWpBTUJnTlZIUk1CQWY4RUFqQUFNQjhHQTFVZEl3UVlNQmFBRkk5cSsvR1hsc1RiZTROUgpJVlRY
NjJYSDc3QkNNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUJBUURja2xPTzFRYkcwZkR0cUNiNWZjU054cVFRCmF
4dnBYZjZ3dThFVWM0NVlmM3NkVFFlekUrTGkwZEk4ZjB1bEMwYVM2cTV3ZWtUNVpUdm8xWTVSejlhWHNURX
MKdjBoUUdSV1R3aXlhZ3VpTXUrQzNZc1lsbUZOSno0YTJYUWw0NktKTThvUDBManNSb3hSdWtUcUFKRi9rW
FlCUgpKME1qRU1hb09HNzdUVGNXRTd2ZG8yY29wS3FIYjR1YlBwbUs3OWRsMzJFa1NUS29GeWQrSktRWHdN
UENTM3lKClFaUWlxZzYxSGhBeFUvSmM4VnE2ZlNGOWdEVEpVSFBSWE9tSnJMcEhleUhZSm5pMjk5L2c4TW4
2NndyNS9nR1kKNlM4WGc5eGxwcGZoQldDODdGSnd1aDBBeXpYbzhSVUUzUEg4SklzOEdCWjk5c3prbmVrRU
hSbTVQdDRECi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
client-key-data:
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBb09qOFVRUnh6dE1CRjh
oNDJvUFNwdFJPeitrUm5qNEpmdnN4WjRNaHBySWNjaWE2ClhmSkJrOEpXM2JyQ003RUdtY2FDbzRKNVJhZF
ZmdTRRU1d4VkdnMWFzSjJIRFNwS1Z0eTQwaTFLY1RHVE5UTWUKWE13bkxUeEx4OGtDRnR3ZWZPZ0t0NXBJQ
3RUSTN5QTVCVTFpMVdyQXprcnNyaG90SFoyL0RyM0hvV3lqNUZlZwpWSHVVdnMrVzZ2M2ZHTXYvWVNiTzJT
N3crL1ArWU1LMXhIMDZ6UXByMlc0SVFiZzZPOVJEaW5jcEt5TGhVK1JMCnRWd0tOeG5hZXZCeDhsUUpHUVJ
MakVJS1BObmgzMEhnY1lQQUsyejVRNDBkeVFFYlB0RnRwVjVHY2JOOEZ0T1gKTzNpR0lLWWtPMW52WmhTbG
FiVjJzUmlWcmRzd0Y3OW1NTDRuU1FJREFRQUJBb0lCQUNkMVdzSm5TNTFEUXdaWgpBOEhhQjZNZmR3QW5FR
W4wdnBGaitkWi9ZcFlsSVRLZzZweTFGbjJzYjI3S0tHdFNvdUs4dWpac2ZWNm1UU0htCk1ScEFOWkpBNmhk
YldjM1JyQThtNnkrbktaVWVhaEhtcWpCcFk4WUUvalJNeDNWaG54eFVMcVNkY2NNdU1OLysKWDkwNy85dUQ
1U254VjU3T0RuZ3Z3YlZVdG9xUDNRRFhQR2lLYm1GdWR5T2hXWTV6TVZQUXZuSmRTejg4VWZFeApwbDNaUn
lhZEJ6eFJFdzNuYnJaRjAyc05DbXg0bC9rbmJuQW4wSG5EOVB1eWpDK1Y5N1JPWU1ycEZOQUZtR2thCm5RZ
XRMNy80WDhjQ3p6UDBHWDFDL3R5Mm9pcGxkVmJTczVQSFFPMjhUN1RsUStQdnRidDN2TmVSa05WaWVHS3oK
OVUzdE1ZMENnWUVBeFlsaGU5R3dkUlVaUXJnUElWMVVIZm1DdVV6SElUSTFnYlR4N0Z3UnNNd29XUlBiSml
GbwpMZ3pCYkllNy8wc3Y4Yzg4SXNxK1dpQkZzNTE2aXcwVUluUDh1R29CeVVHQjhlNWlYajNKcEVVeFBWeH
dIT216ClJzY3I4TjFGVXA5STNBVkNUSkJnT0ZKYnFEV0kzdlIyZWc3cnJYdEczRHBDVlozeStQbXBmRThDZ
1lFQTBJaU0KSVFxNGlSSEQwKzlCenozbXlLNnpVd2lzbWNOcFB4U0tsK0F1ckZVMXhMSmJxcGpUZkltOEUr
MGp1MkdFaWRSWQpROC9OaTU0a0wwZ0ExdHBCaFdoaXFEQVFUNTZFVTB2OUJnTXo3ZmorLytJSjFrZjBiWFp
4cXRxVllOTms2UzRBCkwyZllGT1A3bklQbHlxdVROeHVPcXRnVnJDYS96ZU9NT3pzV1JPY0NnWUE2a1VQMC
tUUHZVdVVkY2dNU2FtQngKVHJRaWlwQVQySllpc2VwMG9NdWg5cllUeXg1VHpOM2RvV3lMNkNhbVI3MmNYV
XhBS0lxTm9EbnFTa3UyQkpldQpxMk1Icm01L0pFd0oxaHNXUkEyUUJlL1dlSnpKQmNWZ3U5YmNZRTZZYzUr
ZmxIT1d6Y3VwaDBtanN0TzAveGhOCmtqVHdSN2UzdmhKQzNrVFc2dmNFWXdLQmdCMlY0ZHVtTzd3bXF4UGN
kQWZGRG9NV1ZoYkh1a1V1ZGpZZTRmTGUKT1lEMXJlVTBNTkVwVVlmdnVxRlJHYXF5RVMzRTFLajZTSDB3ZU
kzRXQybkVHVnVtRGFreStIMXpUZTdMYnlCMQpQOTdaWHNSSyszNU5ReDVzbVgvVjl5OS9qbWVPd1RQNGxhM
lJFdGVIMXdoRUEyVGtJZitYSEt3SjYxaDRtaUtsCkpXbXRBb0dCQUtPRVhBREVZVXZqekxiZUdpTGdqRUFt
QkxVYXp5ZjZvTkFTaW12Tmk0T1pzQ0JDNUJNL2pZck0KQ2w1S2ZKRXRob1M3cmpEOVJmR1cvRGhrcGltR0R
PWFVhQ0Vub0ltbWNkclJGNk5uTTZBVk5hWEhmbERKRjBROQpJT0FoV1BnbTRiVllGc2pQam9JbG41aml1d1
hTRWI4S2dqOFJXTkx0MW5WM2ZaZGVRQTNCCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
admin-tccl@Admin:~$ ifconfig
Command 'ifconfig' not found, but can be installed with:
sudo apt install net-tools
admin-tccl@Admin:~$ ipconfig
Command 'ipconfig' not found, did you mean:
command 'hipconfig' from deb hipcc (5.2.3-12)
command 'iconfig' from deb ipmiutil (3.1.9-3)
command 'iwconfig' from deb wireless-tools (30~pre9-13.1ubuntu4)
command 'ifconfig' from deb net-tools (2.10-0.1ubuntu3)
Try: sudo apt install <deb name>
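On a stock Ubuntu 24.04 install the legacy ifconfig lives in the optional net-tools package; the iproute2 ip command used further down is the usual replacement:

    sudo apt-get install -y net-tools   # only if the old ifconfig output is specifically wanted
    ip addr show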
admin-tccl@Admin:~$ sudo systemctl status kublet
Unit kublet.service could not be found.
admin-tccl@Admin:~$ sudo systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; preset:
enabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Wed 2024-10-23 23:57:33 IST; 28min ago
Docs: https://kubernetes.io/docs/
Main PID: 36437 (kubelet)
Tasks: 24 (limit: 17681)
Memory: 37.1M (peak: 40.4M)
CPU: 45.912s
CGroup: /system.slice/kubelet.service
└─36437 /usr/bin/kubelet
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
--kubeconfig=/etc/kubernetes/kubelet.conf --config=/>

Oct 24 00:25:34 Admin kubelet[36437]: E1024 00:25:34.116445 36437
kubelet_node_status.go:544] "Error updating node status, will retry" err="error
gett>
Oct 24 00:25:34 Admin kubelet[36437]: E1024 00:25:34.116677 36437
kubelet_node_status.go:544] "Error updating node status, will retry" err="error
gett>
Oct 24 00:25:34 Admin kubelet[36437]: E1024 00:25:34.116897 36437
kubelet_node_status.go:544] "Error updating node status, will retry" err="error
gett>
Oct 24 00:25:34 Admin kubelet[36437]: E1024 00:25:34.117099 36437
kubelet_node_status.go:544] "Error updating node status, will retry" err="error
gett>
Oct 24 00:25:34 Admin kubelet[36437]: E1024 00:25:34.117119 36437
kubelet_node_status.go:531] "Unable to update node status" err="update node status
e>
Oct 24 00:25:34 Admin kubelet[36437]: E1024 00:25:34.650964 36437
kubelet.go:2920] "Container runtime network not ready"
networkReady="NetworkReady=fa>
Oct 24 00:25:34 Admin kubelet[36437]: E1024 00:25:34.711700 36437
controller.go:145] "Failed to ensure lease exists, will retry" err="Get
\"https://19>
Oct 24 00:25:34 Admin kubelet[36437]: I1024 00:25:34.940232 36437 scope.go:117]
"RemoveContainer" containerID="a000a62a64b3a7dba5ac74869cd67fc4f896348>
Oct 24 00:25:34 Admin kubelet[36437]: E1024 00:25:34.940664 36437
pod_workers.go:1298] "Error syncing pod, skipping" err="failed
to \"StartContainer\">
Oct 24 00:25:35 Admin kubelet[36437]: E1024 00:25:35.499959 36437 event.go:368]
"Unable to write event (may retry after sleeping)" err="Patch \"https:>
admin-tccl@Admin:~$ curl -k https://192.168.29.215:6443/version
curl: (7) Failed to connect to 192.168.29.215 port 6443 after 0 ms: Couldn't
connect to server
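Note: "Connection refused" on port 6443 means nothing is listening there, so this is not a routing or firewall problem; the kube-apiserver static pod itself has stopped. A quick way to confirm that on the node (a sketch, assuming containerd and the crictl CLI that a kubeadm install normally brings in via cri-tools):

sudo ss -tlnp | grep 6443                 # is anything bound to the API server port?
sudo crictl ps -a | grep kube-apiserver   # list apiserver containers, including exited ones
sudo crictl logs <container-id>           # read the last crash output; substitute a real ID from the previous command

If crictl complains about the runtime endpoint, pass --runtime-endpoint unix:///run/containerd/containerd.sock explicitly.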
admin-tccl@Admin:~$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp2s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN
group default qlen 1000
link/ether 84:a9:38:9c:95:e8 brd ff:ff:ff:ff:ff:ff
3: wlp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group
default qlen 1000
link/ether a8:93:4a:dc:1f:4d brd ff:ff:ff:ff:ff:ff
inet 192.168.29.215/24 brd 192.168.29.255 scope global dynamic noprefixroute
wlp3s0
valid_lft 75320sec preferred_lft 75320sec
inet6 2405:201:e00e:8087:ea80:94fa:d083:c79d/64 scope global temporary dynamic
valid_lft 7483sec preferred_lft 7483sec
inet6 2405:201:e00e:8087:40d3:89de:effd:1dcb/64 scope global dynamic mngtmpaddr
noprefixroute
valid_lft 7483sec preferred_lft 7483sec
inet6 fe80::a64d:8461:c950:4e78/64 scope link noprefixroute
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
group default
link/ether 02:42:d5:2c:97:49 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
admin-tccl@Admin:~$ curl -k https://192.168.29.215:6443/version
curl: (7) Failed to connect to 192.168.29.215 port 6443 after 0 ms: Couldn't
connect to server
admin-tccl@Admin:~$ hostname -I
192.168.29.215 172.17.0.1 2405:201:e00e:8087:ea80:94fa:d083:c79d
2405:201:e00e:8087:40d3:89de:effd:1dcb
admin-tccl@Admin:~$ curl ifconfig.me
2405:201:e00e:8087:ea80:94fa:d083:c79dadmin-tccl@Admin:~$
admin-tccl@Admin:~$
admin-tccl@Admin:~$
admin-tccl@Admin:~$
admin-tccl@Admin:~$
admin-tccl@Admin:~$
admin-tccl@Admin:~$ kubectl taint nodes --all node-role.kubernetes.io/control-
plane-
node/admin untainted
admin-tccl@Admin:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                            READY   STATUS             RESTARTS        AGE
kube-system   coredns-55cb58b774-7zsx7        0/1     Pending            0               34m
kube-system   coredns-55cb58b774-sjnrv        0/1     Pending            0               34m
kube-system   etcd-admin                      1/1     Running            9 (8m21s ago)   34m
kube-system   kube-apiserver-admin            1/1     Running            10 (6m2s ago)   34m
kube-system   kube-controller-manager-admin   0/1     CrashLoopBackOff   10 (61s ago)    34m
kube-system   kube-proxy-kv2q6                0/1     CrashLoopBackOff   9 (2m47s ago)   34m
kube-system   kube-scheduler-admin            0/1     Completed          11              34m
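In the listing above, CoreDNS is Pending because no CNI plugin is installed yet, while kube-controller-manager and kube-proxy are in CrashLoopBackOff and kube-scheduler keeps restarting; that pattern usually points at one node-level cause rather than separate per-pod problems. While the API server still answers, the previous container log is the fastest clue (a sketch, using the pod names shown above):

kubectl -n kube-system logs kube-controller-manager-admin --previous
kubectl -n kube-system describe pod kube-proxy-kv2q6

Once the API server stops answering, fall back to crictl on the node as shown further below.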
admin-tccl@Admin:~$ kubectl create ns test
The connection to the server 192.168.29.215:6443 was refused - did you specify the
right host or port?
admin-tccl@Admin:~$ curl -k https://192.168.29.215:6443/version
curl: (7) Failed to connect to 192.168.29.215 port 6443 after 0 ms: Couldn't
connect to server
admin-tccl@Admin:~$
admin-tccl@Admin:~$
admin-tccl@Admin:~$
admin-tccl@Admin:~$ sudo ufw status
Status: inactive
admin-tccl@Admin:~$ ping 192.168.29.215
PING 192.168.29.215 (192.168.29.215) 56(84) bytes of data.
64 bytes from 192.168.29.215: icmp_seq=1 ttl=64 time=0.060 ms
64 bytes from 192.168.29.215: icmp_seq=2 ttl=64 time=0.039 ms
64 bytes from 192.168.29.215: icmp_seq=3 ttl=64 time=0.045 ms
64 bytes from 192.168.29.215: icmp_seq=4 ttl=64 time=0.041 ms
64 bytes from 192.168.29.215: icmp_seq=5 ttl=64 time=0.039 ms
64 bytes from 192.168.29.215: icmp_seq=6 ttl=64 time=0.045 ms
64 bytes from 192.168.29.215: icmp_seq=7 ttl=64 time=0.034 ms
64 bytes from 192.168.29.215: icmp_seq=8 ttl=64 time=0.045 ms
64 bytes from 192.168.29.215: icmp_seq=9 ttl=64 time=0.078 ms
^C
--- 192.168.29.215 ping statistics ---
9 packets transmitted, 9 received, 0% packet loss, time 8216ms
rtt min/avg/max/mdev = 0.034/0.047/0.078/0.012 ms
admin-tccl@Admin:~$ sudo kubectl get pods -n kube-system
E1024 00:36:59.563621 46680 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:36:59.563889 46680 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:36:59.565206 46680 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:36:59.565387 46680 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
E1024 00:36:59.566656 46680 memcache.go:265] couldn't get current server API
group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right
host or port?
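The localhost:8080 error is a different issue from the refused connection on 6443: running kubectl under sudo switches to root's environment, which has no ~/.kube/config, so kubectl falls back to its unconfigured default of localhost:8080. Point root at the admin kubeconfig explicitly (a sketch; while the API server is down it will still be refused, but at least against the right endpoint):

sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get pods -n kube-system
# or equivalently
sudo KUBECONFIG=/etc/kubernetes/admin.conf kubectl get pods -n kube-system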
admin-tccl@Admin:~$ kubectl get pods --all-namespaces
The connection to the server 192.168.29.215:6443 was refused - did you specify the
right host or port?
admin-tccl@Admin:~$ cat /etc/kubernetes/manifests/kube-apiserver.yaml
cat: /etc/kubernetes/manifests/kube-apiserver.yaml: Permission denied
admin-tccl@Admin:~$ sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.29.215:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.29.215
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: registry.k8s.io/kube-apiserver:v1.30.6
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 192.168.29.215
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-apiserver
    readinessProbe:
      failureThreshold: 3
      httpGet:
        host: 192.168.29.215
        path: /readyz
        port: 6443
        scheme: HTTPS
      periodSeconds: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 250m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 192.168.29.215
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priority: 2000001000
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
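The manifest above is the stock kubeadm-generated static pod definition (advertise address 192.168.29.215, secure port 6443), so the file itself is unlikely to be the problem; the question is why the container it describes keeps exiting. Since kubectl cannot reach the API, ask the container runtime directly (a sketch, again assuming crictl; the container ID is a placeholder):

sudo crictl ps -a --name kube-apiserver        # restart history of the static pod's container
sudo crictl logs --tail 50 <apiserver-container-id>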
admin-tccl@Admin:~$ sudo journalctl -u kubelet -f
Oct 24 00:41:54 Admin kubelet[36437]: I1024 00:41:54.821149 36437
status_manager.go:853] "Failed to get status for pod"
podUID="c25fe24d89c9e417dae0c395ceceaf9a" pod="kube-system/kube-scheduler-admin"
err="Get \"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/kube-
scheduler-admin\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:41:54 Admin kubelet[36437]: I1024 00:41:54.821401 36437
status_manager.go:853] "Failed to get status for pod" podUID="b0dd047f-9adc-412c-
993e-dc03bdf242bf" pod="kube-system/kube-proxy-kv2q6" err="Get
\"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/kube-proxy-
kv2q6\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:41:54 Admin kubelet[36437]: E1024 00:41:54.896439 36437
kubelet.go:2920] "Container runtime network not ready"
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network
plugin returns error: cni plugin not initialized"
Oct 24 00:41:56 Admin kubelet[36437]: E1024 00:41:56.938086 36437
kubelet_node_status.go:544] "Error updating node status, will retry" err="error
getting node \"admin\": Get \"https://192.168.29.215:6443/api/v1/nodes/admin?
resourceVersion=0&timeout=10s\": dial tcp 192.168.29.215:6443: connect: connection
refused"
Oct 24 00:41:56 Admin kubelet[36437]: E1024 00:41:56.938357 36437
kubelet_node_status.go:544] "Error updating node status, will retry" err="error
getting node \"admin\": Get \"https://192.168.29.215:6443/api/v1/nodes/admin?
timeout=10s\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:41:56 Admin kubelet[36437]: E1024 00:41:56.938581 36437
kubelet_node_status.go:544] "Error updating node status, will retry" err="error
getting node \"admin\": Get \"https://192.168.29.215:6443/api/v1/nodes/admin?
timeout=10s\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:41:56 Admin kubelet[36437]: E1024 00:41:56.938832 36437
kubelet_node_status.go:544] "Error updating node status, will retry" err="error
getting node \"admin\": Get \"https://192.168.29.215:6443/api/v1/nodes/admin?
timeout=10s\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:41:56 Admin kubelet[36437]: E1024 00:41:56.939059 36437
kubelet_node_status.go:544] "Error updating node status, will retry" err="error
getting node \"admin\": Get \"https://192.168.29.215:6443/api/v1/nodes/admin?
timeout=10s\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:41:56 Admin kubelet[36437]: E1024 00:41:56.939077 36437
kubelet_node_status.go:531] "Unable to update node status" err="update node status
exceeds retry count"
Oct 24 00:41:57 Admin kubelet[36437]: E1024 00:41:57.331907 36437 event.go:368]
"Unable to write event (may retry after sleeping)" err="Patch
\"https://192.168.29.215:6443/api/v1/namespaces/kube-system/events/kube-controller-
manager-admin.1801285e3ddaeac7\": dial tcp 192.168.29.215:6443: connect: connection
refused" event="&Event{ObjectMeta:{kube-controller-manager-admin.1801285e3ddaeac7
kube-system 906 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-
controller-manager-
admin,UID:d2cb40a8011ee7539fc0bed974c2f75d,APIVersion:v1,ResourceVersion:,FieldPath
:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off
restarting failed container kube-controller-manager in pod kube-controller-manager-
admin_kube-
system(d2cb40a8011ee7539fc0bed974c2f75d),Source:EventSource{Component:kubelet,Host:
admin,},FirstTimestamp:2024-10-23 23:58:37 +0530 IST,LastTimestamp:2024-10-24
00:34:59.935591816 +0530 IST
m=+2246.118819862,Count:74,Type:Warning,EventTime:0001-01-01 00:00:00 +0000
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ad
min,}"
Oct 24 00:41:59 Admin kubelet[36437]: E1024 00:41:59.897869 36437
kubelet.go:2920] "Container runtime network not ready"
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network
plugin returns error: cni plugin not initialized"
Oct 24 00:41:59 Admin kubelet[36437]: I1024 00:41:59.935971 36437 scope.go:117]
"RemoveContainer"
containerID="1ed970f958a77082e8c3c93db0c029db744c42b59dc71f9607a37b7917f7a1b9"
Oct 24 00:41:59 Admin kubelet[36437]: E1024 00:41:59.936331 36437
pod_workers.go:1298] "Error syncing pod, skipping" err="failed
to \"StartContainer\" for \"kube-controller-manager\" with
CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-
manager pod=kube-controller-manager-admin_kube-
system(d2cb40a8011ee7539fc0bed974c2f75d)\"" pod="kube-system/kube-controller-
manager-admin" podUID="d2cb40a8011ee7539fc0bed974c2f75d"
Oct 24 00:42:00 Admin kubelet[36437]: E1024 00:42:00.399430 36437
controller.go:145] "Failed to ensure lease exists, will retry" err="Get
\"https://192.168.29.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-
lease/leases/admin?timeout=10s\": dial tcp 192.168.29.215:6443: connect: connection
refused" interval="7s"
Oct 24 00:42:02 Admin kubelet[36437]: I1024 00:42:02.960540 36437 scope.go:117]
"RemoveContainer"
containerID="369cb1bf3917360e690f6ae04dd408783c11e3934dadace15036b543a289f3b9"
Oct 24 00:42:02 Admin kubelet[36437]: E1024 00:42:02.960865 36437
pod_workers.go:1298] "Error syncing pod, skipping" err="failed
to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s
restarting failed container=kube-proxy pod=kube-proxy-kv2q6_kube-system(b0dd047f-
9adc-412c-993e-dc03bdf242bf)\"" pod="kube-system/kube-proxy-kv2q6"
podUID="b0dd047f-9adc-412c-993e-dc03bdf242bf"
Oct 24 00:42:03 Admin kubelet[36437]: I1024 00:42:03.935735 36437
status_manager.go:853] "Failed to get status for pod"
podUID="c25fe24d89c9e417dae0c395ceceaf9a" pod="kube-system/kube-scheduler-admin"
err="Get \"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/kube-
scheduler-admin\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:03 Admin kubelet[36437]: I1024 00:42:03.936013 36437
status_manager.go:853] "Failed to get status for pod" podUID="b0dd047f-9adc-412c-
993e-dc03bdf242bf" pod="kube-system/kube-proxy-kv2q6" err="Get
\"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/kube-proxy-
kv2q6\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:03 Admin kubelet[36437]: I1024 00:42:03.936281 36437
status_manager.go:853] "Failed to get status for pod"
podUID="09a53fedd47e2efa2bcba0eb678a818e" pod="kube-system/etcd-admin"
err="Get \"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/etcd-
admin\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:03 Admin kubelet[36437]: I1024 00:42:03.936551 36437
status_manager.go:853] "Failed to get status for pod"
podUID="d2cb40a8011ee7539fc0bed974c2f75d" pod="kube-system/kube-controller-manager-
admin" err="Get
\"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/kube-controller-
manager-admin\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:03 Admin kubelet[36437]: I1024 00:42:03.936777 36437
status_manager.go:853] "Failed to get status for pod"
podUID="febe75ef8c1a35399682a0a32c2a4dc0" pod="kube-system/kube-apiserver-admin"
err="Get \"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/kube-
apiserver-admin\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:04 Admin kubelet[36437]: I1024 00:42:04.092672 36437
status_manager.go:853] "Failed to get status for pod" podUID="b0dd047f-9adc-412c-
993e-dc03bdf242bf" pod="kube-system/kube-proxy-kv2q6" err="Get
\"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/kube-proxy-
kv2q6\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:04 Admin kubelet[36437]: I1024 00:42:04.092892 36437
status_manager.go:853] "Failed to get status for pod"
podUID="09a53fedd47e2efa2bcba0eb678a818e" pod="kube-system/etcd-admin"
err="Get \"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/etcd-
admin\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:04 Admin kubelet[36437]: I1024 00:42:04.093095 36437
status_manager.go:853] "Failed to get status for pod"
podUID="d2cb40a8011ee7539fc0bed974c2f75d" pod="kube-system/kube-controller-manager-
admin" err="Get
\"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/kube-controller-
manager-admin\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:04 Admin kubelet[36437]: I1024 00:42:04.093300 36437
status_manager.go:853] "Failed to get status for pod"
podUID="febe75ef8c1a35399682a0a32c2a4dc0" pod="kube-system/kube-apiserver-admin"
err="Get \"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/kube-
apiserver-admin\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:04 Admin kubelet[36437]: I1024 00:42:04.093511 36437
status_manager.go:853] "Failed to get status for pod"
podUID="c25fe24d89c9e417dae0c395ceceaf9a" pod="kube-system/kube-scheduler-admin"
err="Get \"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/kube-
scheduler-admin\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:04 Admin kubelet[36437]: I1024 00:42:04.837855 36437
pod_container_deletor.go:80] "Container not found in pod's containers"
containerID="15bcd081bd049a5adf742ea3226b3c06ba65e9b2bec652d5d6d7fc1c649082cf"
Oct 24 00:42:04 Admin kubelet[36437]: I1024 00:42:04.837890 36437 scope.go:117]
"RemoveContainer"
containerID="cea7cce838667af867f4779850e4c9080c3269763eac8d49a5a0c4d3b3ca0fca"
Oct 24 00:42:04 Admin kubelet[36437]: I1024 00:42:04.838325 36437
status_manager.go:853] "Failed to get status for pod"
podUID="09a53fedd47e2efa2bcba0eb678a818e" pod="kube-system/etcd-admin"
err="Get \"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/etcd-
admin\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:04 Admin kubelet[36437]: I1024 00:42:04.838646 36437
status_manager.go:853] "Failed to get status for pod"
podUID="d2cb40a8011ee7539fc0bed974c2f75d" pod="kube-system/kube-controller-manager-
admin" err="Get
\"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/kube-controller-
manager-admin\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:04 Admin kubelet[36437]: I1024 00:42:04.838948 36437
status_manager.go:853] "Failed to get status for pod"
podUID="febe75ef8c1a35399682a0a32c2a4dc0" pod="kube-system/kube-apiserver-admin"
err="Get \"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/kube-
apiserver-admin\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:04 Admin kubelet[36437]: I1024 00:42:04.839183 36437
status_manager.go:853] "Failed to get status for pod"
podUID="c25fe24d89c9e417dae0c395ceceaf9a" pod="kube-system/kube-scheduler-admin"
err="Get \"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/kube-
scheduler-admin\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:04 Admin kubelet[36437]: I1024 00:42:04.839411 36437
status_manager.go:853] "Failed to get status for pod" podUID="b0dd047f-9adc-412c-
993e-dc03bdf242bf" pod="kube-system/kube-proxy-kv2q6" err="Get
\"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/kube-proxy-
kv2q6\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:04 Admin kubelet[36437]: E1024 00:42:04.898671 36437
kubelet.go:2920] "Container runtime network not ready"
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network
plugin returns error: cni plugin not initialized"
Oct 24 00:42:04 Admin kubelet[36437]: E1024 00:42:04.910951 36437
pod_workers.go:1298] "Error syncing pod, skipping" err="failed
to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 5m0s
restarting failed container=kube-scheduler pod=kube-scheduler-admin_kube-
system(c25fe24d89c9e417dae0c395ceceaf9a)\"" pod="kube-system/kube-scheduler-admin"
podUID="c25fe24d89c9e417dae0c395ceceaf9a"
Oct 24 00:42:05 Admin kubelet[36437]: I1024 00:42:05.841259 36437 scope.go:117]
"RemoveContainer"
containerID="592b9298b6d587aba2ce1360ee6b17532d046304542fd8c8a89f495ddadeb39a"
Oct 24 00:42:05 Admin kubelet[36437]: E1024 00:42:05.841475 36437
pod_workers.go:1298] "Error syncing pod, skipping" err="failed
to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 5m0s
restarting failed container=kube-scheduler pod=kube-scheduler-admin_kube-
system(c25fe24d89c9e417dae0c395ceceaf9a)\"" pod="kube-system/kube-scheduler-admin"
podUID="c25fe24d89c9e417dae0c395ceceaf9a"
Oct 24 00:42:05 Admin kubelet[36437]: I1024 00:42:05.841507 36437
status_manager.go:853] "Failed to get status for pod"
podUID="09a53fedd47e2efa2bcba0eb678a818e" pod="kube-system/etcd-admin"
err="Get \"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/etcd-
admin\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:05 Admin kubelet[36437]: I1024 00:42:05.841783 36437
status_manager.go:853] "Failed to get status for pod"
podUID="d2cb40a8011ee7539fc0bed974c2f75d" pod="kube-system/kube-controller-manager-
admin" err="Get
\"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/kube-controller-
manager-admin\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:05 Admin kubelet[36437]: I1024 00:42:05.842019 36437
status_manager.go:853] "Failed to get status for pod"
podUID="febe75ef8c1a35399682a0a32c2a4dc0" pod="kube-system/kube-apiserver-admin"
err="Get \"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/kube-
apiserver-admin\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:05 Admin kubelet[36437]: I1024 00:42:05.842251 36437
status_manager.go:853] "Failed to get status for pod"
podUID="c25fe24d89c9e417dae0c395ceceaf9a" pod="kube-system/kube-scheduler-admin"
err="Get \"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/kube-
scheduler-admin\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:05 Admin kubelet[36437]: I1024 00:42:05.842490 36437
status_manager.go:853] "Failed to get status for pod" podUID="b0dd047f-9adc-412c-
993e-dc03bdf242bf" pod="kube-system/kube-proxy-kv2q6" err="Get
\"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/kube-proxy-
kv2q6\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:06 Admin kubelet[36437]: I1024 00:42:06.842746 36437 scope.go:117]
"RemoveContainer"
containerID="592b9298b6d587aba2ce1360ee6b17532d046304542fd8c8a89f495ddadeb39a"
Oct 24 00:42:06 Admin kubelet[36437]: E1024 00:42:06.843121 36437
pod_workers.go:1298] "Error syncing pod, skipping" err="failed
to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 5m0s
restarting failed container=kube-scheduler pod=kube-scheduler-admin_kube-
system(c25fe24d89c9e417dae0c395ceceaf9a)\"" pod="kube-system/kube-scheduler-admin"
podUID="c25fe24d89c9e417dae0c395ceceaf9a"
Oct 24 00:42:06 Admin kubelet[36437]: I1024 00:42:06.935837 36437 scope.go:117]
"RemoveContainer"
containerID="473e167f506722c1517e5e78380ac8e02e1d479e838b3f070828b0d13044aa0f"
Oct 24 00:42:06 Admin kubelet[36437]: E1024 00:42:06.936292 36437
pod_workers.go:1298] "Error syncing pod, skipping" err="failed
to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s
restarting failed container=kube-apiserver pod=kube-apiserver-admin_kube-
system(febe75ef8c1a35399682a0a32c2a4dc0)\"" pod="kube-system/kube-apiserver-admin"
podUID="febe75ef8c1a35399682a0a32c2a4dc0"
Oct 24 00:42:07 Admin kubelet[36437]: E1024 00:42:07.217128 36437
kubelet_node_status.go:544] "Error updating node status, will retry" err="error
getting node \"admin\": Get \"https://192.168.29.215:6443/api/v1/nodes/admin?
resourceVersion=0&timeout=10s\": dial tcp 192.168.29.215:6443: connect: connection
refused"
Oct 24 00:42:07 Admin kubelet[36437]: E1024 00:42:07.217408 36437
kubelet_node_status.go:544] "Error updating node status, will retry" err="error
getting node \"admin\": Get \"https://192.168.29.215:6443/api/v1/nodes/admin?
timeout=10s\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:07 Admin kubelet[36437]: E1024 00:42:07.217658 36437
kubelet_node_status.go:544] "Error updating node status, will retry" err="error
getting node \"admin\": Get \"https://192.168.29.215:6443/api/v1/nodes/admin?
timeout=10s\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:07 Admin kubelet[36437]: E1024 00:42:07.217845 36437
kubelet_node_status.go:544] "Error updating node status, will retry" err="error
getting node \"admin\": Get \"https://192.168.29.215:6443/api/v1/nodes/admin?
timeout=10s\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:07 Admin kubelet[36437]: E1024 00:42:07.218008 36437
kubelet_node_status.go:544] "Error updating node status, will retry" err="error
getting node \"admin\": Get \"https://192.168.29.215:6443/api/v1/nodes/admin?
timeout=10s\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:07 Admin kubelet[36437]: E1024 00:42:07.218022 36437
kubelet_node_status.go:531] "Unable to update node status" err="update node status
exceeds retry count"
Oct 24 00:42:07 Admin kubelet[36437]: E1024 00:42:07.332922 36437 event.go:368]
"Unable to write event (may retry after sleeping)" err="Patch
\"https://192.168.29.215:6443/api/v1/namespaces/kube-system/events/kube-controller-
manager-admin.1801285e3ddaeac7\": dial tcp 192.168.29.215:6443: connect: connection
refused" event="&Event{ObjectMeta:{kube-controller-manager-admin.1801285e3ddaeac7
kube-system 906 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-
controller-manager-
admin,UID:d2cb40a8011ee7539fc0bed974c2f75d,APIVersion:v1,ResourceVersion:,FieldPath
:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off
restarting failed container kube-controller-manager in pod kube-controller-manager-
admin_kube-
system(d2cb40a8011ee7539fc0bed974c2f75d),Source:EventSource{Component:kubelet,Host:
admin,},FirstTimestamp:2024-10-23 23:58:37 +0530 IST,LastTimestamp:2024-10-24
00:34:59.935591816 +0530 IST
m=+2246.118819862,Count:74,Type:Warning,EventTime:0001-01-01 00:00:00 +0000
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ad
min,}"
Oct 24 00:42:07 Admin kubelet[36437]: E1024 00:42:07.400759 36437
controller.go:145] "Failed to ensure lease exists, will retry" err="Get
\"https://192.168.29.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-
lease/leases/admin?timeout=10s\": dial tcp 192.168.29.215:6443: connect: connection
refused" interval="7s"
Oct 24 00:42:09 Admin kubelet[36437]: E1024 00:42:09.899368 36437
kubelet.go:2920] "Container runtime network not ready"
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network
plugin returns error: cni plugin not initialized"
Oct 24 00:42:10 Admin kubelet[36437]: I1024 00:42:10.935703 36437 scope.go:117]
"RemoveContainer"
containerID="1ed970f958a77082e8c3c93db0c029db744c42b59dc71f9607a37b7917f7a1b9"
Oct 24 00:42:10 Admin kubelet[36437]: E1024 00:42:10.936037 36437
pod_workers.go:1298] "Error syncing pod, skipping" err="failed
to \"StartContainer\" for \"kube-controller-manager\" with
CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-
manager pod=kube-controller-manager-admin_kube-
system(d2cb40a8011ee7539fc0bed974c2f75d)\"" pod="kube-system/kube-controller-
manager-admin" podUID="d2cb40a8011ee7539fc0bed974c2f75d"
Oct 24 00:42:13 Admin kubelet[36437]: I1024 00:42:13.935033 36437
status_manager.go:853] "Failed to get status for pod"
podUID="febe75ef8c1a35399682a0a32c2a4dc0" pod="kube-system/kube-apiserver-admin"
err="Get \"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/kube-
apiserver-admin\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:13 Admin kubelet[36437]: I1024 00:42:13.935361 36437
status_manager.go:853] "Failed to get status for pod"
podUID="c25fe24d89c9e417dae0c395ceceaf9a" pod="kube-system/kube-scheduler-admin"
err="Get \"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/kube-
scheduler-admin\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:13 Admin kubelet[36437]: I1024 00:42:13.935724 36437
status_manager.go:853] "Failed to get status for pod" podUID="b0dd047f-9adc-412c-
993e-dc03bdf242bf" pod="kube-system/kube-proxy-kv2q6" err="Get
\"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/kube-proxy-
kv2q6\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:13 Admin kubelet[36437]: I1024 00:42:13.935970 36437
status_manager.go:853] "Failed to get status for pod"
podUID="09a53fedd47e2efa2bcba0eb678a818e" pod="kube-system/etcd-admin"
err="Get \"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/etcd-
admin\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:13 Admin kubelet[36437]: I1024 00:42:13.936192 36437
status_manager.go:853] "Failed to get status for pod"
podUID="d2cb40a8011ee7539fc0bed974c2f75d" pod="kube-system/kube-controller-manager-
admin" err="Get
\"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/kube-controller-
manager-admin\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:13 Admin kubelet[36437]: I1024 00:42:13.959995 36437 scope.go:117]
"RemoveContainer"
containerID="369cb1bf3917360e690f6ae04dd408783c11e3934dadace15036b543a289f3b9"
Oct 24 00:42:13 Admin kubelet[36437]: E1024 00:42:13.960249 36437
pod_workers.go:1298] "Error syncing pod, skipping" err="failed
to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s
restarting failed container=kube-proxy pod=kube-proxy-kv2q6_kube-system(b0dd047f-
9adc-412c-993e-dc03bdf242bf)\"" pod="kube-system/kube-proxy-kv2q6"
podUID="b0dd047f-9adc-412c-993e-dc03bdf242bf"
Oct 24 00:42:14 Admin kubelet[36437]: I1024 00:42:14.109389 36437 scope.go:117]
"RemoveContainer"
containerID="592b9298b6d587aba2ce1360ee6b17532d046304542fd8c8a89f495ddadeb39a"
Oct 24 00:42:14 Admin kubelet[36437]: E1024 00:42:14.109641 36437
pod_workers.go:1298] "Error syncing pod, skipping" err="failed
to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 5m0s
restarting failed container=kube-scheduler pod=kube-scheduler-admin_kube-
system(c25fe24d89c9e417dae0c395ceceaf9a)\"" pod="kube-system/kube-scheduler-admin"
podUID="c25fe24d89c9e417dae0c395ceceaf9a"
Oct 24 00:42:14 Admin kubelet[36437]: E1024 00:42:14.401857 36437
controller.go:145] "Failed to ensure lease exists, will retry" err="Get
\"https://192.168.29.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-
lease/leases/admin?timeout=10s\": dial tcp 192.168.29.215:6443: connect: connection
refused" interval="7s"
Oct 24 00:42:14 Admin kubelet[36437]: I1024 00:42:14.855754 36437 scope.go:117]
"RemoveContainer"
containerID="592b9298b6d587aba2ce1360ee6b17532d046304542fd8c8a89f495ddadeb39a"
Oct 24 00:42:14 Admin kubelet[36437]: E1024 00:42:14.855969 36437
pod_workers.go:1298] "Error syncing pod, skipping" err="failed
to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 5m0s
restarting failed container=kube-scheduler pod=kube-scheduler-admin_kube-
system(c25fe24d89c9e417dae0c395ceceaf9a)\"" pod="kube-system/kube-scheduler-admin"
podUID="c25fe24d89c9e417dae0c395ceceaf9a"
Oct 24 00:42:14 Admin kubelet[36437]: E1024 00:42:14.900676 36437
kubelet.go:2920] "Container runtime network not ready"
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network
plugin returns error: cni plugin not initialized"
Oct 24 00:42:17 Admin kubelet[36437]: E1024 00:42:17.333791 36437 event.go:368]
"Unable to write event (may retry after sleeping)" err="Patch
\"https://192.168.29.215:6443/api/v1/namespaces/kube-system/events/kube-controller-
manager-admin.1801285e3ddaeac7\": dial tcp 192.168.29.215:6443: connect: connection
refused" event="&Event{ObjectMeta:{kube-controller-manager-admin.1801285e3ddaeac7
kube-system 906 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-
controller-manager-
admin,UID:d2cb40a8011ee7539fc0bed974c2f75d,APIVersion:v1,ResourceVersion:,FieldPath
:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off
restarting failed container kube-controller-manager in pod kube-controller-manager-
admin_kube-
system(d2cb40a8011ee7539fc0bed974c2f75d),Source:EventSource{Component:kubelet,Host:
admin,},FirstTimestamp:2024-10-23 23:58:37 +0530 IST,LastTimestamp:2024-10-24
00:34:59.935591816 +0530 IST
m=+2246.118819862,Count:74,Type:Warning,EventTime:0001-01-01 00:00:00 +0000
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ad
min,}"
Oct 24 00:42:17 Admin kubelet[36437]: E1024 00:42:17.335213 36437
kubelet_node_status.go:544] "Error updating node status, will retry" err="error
getting node \"admin\": Get \"https://192.168.29.215:6443/api/v1/nodes/admin?
resourceVersion=0&timeout=10s\": dial tcp 192.168.29.215:6443: connect: connection
refused"
Oct 24 00:42:17 Admin kubelet[36437]: E1024 00:42:17.335481 36437
kubelet_node_status.go:544] "Error updating node status, will retry" err="error
getting node \"admin\": Get \"https://192.168.29.215:6443/api/v1/nodes/admin?
timeout=10s\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:17 Admin kubelet[36437]: E1024 00:42:17.335755 36437
kubelet_node_status.go:544] "Error updating node status, will retry" err="error
getting node \"admin\": Get \"https://192.168.29.215:6443/api/v1/nodes/admin?
timeout=10s\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:17 Admin kubelet[36437]: E1024 00:42:17.335999 36437
kubelet_node_status.go:544] "Error updating node status, will retry" err="error
getting node \"admin\": Get \"https://192.168.29.215:6443/api/v1/nodes/admin?
timeout=10s\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:17 Admin kubelet[36437]: E1024 00:42:17.336225 36437
kubelet_node_status.go:544] "Error updating node status, will retry" err="error
getting node \"admin\": Get \"https://192.168.29.215:6443/api/v1/nodes/admin?
timeout=10s\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:17 Admin kubelet[36437]: E1024 00:42:17.336243 36437
kubelet_node_status.go:531] "Unable to update node status" err="update node status
exceeds retry count"
Oct 24 00:42:18 Admin kubelet[36437]: I1024 00:42:18.935631 36437 scope.go:117]
"RemoveContainer"
containerID="473e167f506722c1517e5e78380ac8e02e1d479e838b3f070828b0d13044aa0f"
Oct 24 00:42:18 Admin kubelet[36437]: E1024 00:42:18.936054 36437
pod_workers.go:1298] "Error syncing pod, skipping" err="failed
to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s
restarting failed container=kube-apiserver pod=kube-apiserver-admin_kube-
system(febe75ef8c1a35399682a0a32c2a4dc0)\"" pod="kube-system/kube-apiserver-admin"
podUID="febe75ef8c1a35399682a0a32c2a4dc0"
Oct 24 00:42:19 Admin kubelet[36437]: E1024 00:42:19.901594 36437
kubelet.go:2920] "Container runtime network not ready"
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network
plugin returns error: cni plugin not initialized"
Oct 24 00:42:21 Admin kubelet[36437]: E1024 00:42:21.403382 36437
controller.go:145] "Failed to ensure lease exists, will retry" err="Get
\"https://192.168.29.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-
lease/leases/admin?timeout=10s\": dial tcp 192.168.29.215:6443: connect: connection
refused" interval="7s"
Oct 24 00:42:21 Admin kubelet[36437]: I1024 00:42:21.935825 36437 scope.go:117]
"RemoveContainer"
containerID="1ed970f958a77082e8c3c93db0c029db744c42b59dc71f9607a37b7917f7a1b9"
Oct 24 00:42:21 Admin kubelet[36437]: E1024 00:42:21.936161 36437
pod_workers.go:1298] "Error syncing pod, skipping" err="failed
to \"StartContainer\" for \"kube-controller-manager\" with
CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-
manager pod=kube-controller-manager-admin_kube-
system(d2cb40a8011ee7539fc0bed974c2f75d)\"" pod="kube-system/kube-controller-
manager-admin" podUID="d2cb40a8011ee7539fc0bed974c2f75d"
Oct 24 00:42:23 Admin kubelet[36437]: I1024 00:42:23.935936 36437
status_manager.go:853] "Failed to get status for pod"
podUID="09a53fedd47e2efa2bcba0eb678a818e" pod="kube-system/etcd-admin"
err="Get \"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/etcd-
admin\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:23 Admin kubelet[36437]: I1024 00:42:23.936252 36437
status_manager.go:853] "Failed to get status for pod"
podUID="d2cb40a8011ee7539fc0bed974c2f75d" pod="kube-system/kube-controller-manager-
admin" err="Get
\"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/kube-controller-
manager-admin\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:23 Admin kubelet[36437]: I1024 00:42:23.936548 36437
status_manager.go:853] "Failed to get status for pod"
podUID="febe75ef8c1a35399682a0a32c2a4dc0" pod="kube-system/kube-apiserver-admin"
err="Get \"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/kube-
apiserver-admin\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:23 Admin kubelet[36437]: I1024 00:42:23.936839 36437
status_manager.go:853] "Failed to get status for pod"
podUID="c25fe24d89c9e417dae0c395ceceaf9a" pod="kube-system/kube-scheduler-admin"
err="Get \"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/kube-
scheduler-admin\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:23 Admin kubelet[36437]: I1024 00:42:23.937098 36437
status_manager.go:853] "Failed to get status for pod" podUID="b0dd047f-9adc-412c-
993e-dc03bdf242bf" pod="kube-system/kube-proxy-kv2q6" err="Get
\"https://192.168.29.215:6443/api/v1/namespaces/kube-system/pods/kube-proxy-
kv2q6\": dial tcp 192.168.29.215:6443: connect: connection refused"
Oct 24 00:42:24 Admin kubelet[36437]: E1024 00:42:24.903085 36437
kubelet.go:2920] "Container runtime network not ready"
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network
plugin returns error: cni plugin not initialized"
^C
admin-tccl@Admin:~$ ^C
admin-tccl@Admin:~$ kubectl create -f
https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/custom-
resources.yaml
error: error validating
"https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/custom-
resources.yaml": error validating data: failed to download openapi: Get
"https://192.168.29.215:6443/openapi/v2?timeout=32s": dial tcp 192.168.29.215:6443:
connect: connection refused; if you choose to ignore these errors, turn validation
off with --validate=false
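No Calico manifest can be applied until the API server stays up, so the CNI step has to wait. For this failure pattern on a kubeadm cluster backed by containerd (control-plane pods come up after kubeadm init, then start crash-looping minutes later), the most common root cause is a cgroup driver mismatch: kubeadm configures the kubelet for the systemd cgroup driver, while an edited /etc/containerd/config.toml often still has SystemdCgroup = false. The history below shows config.toml was edited by hand, so this is worth checking first (a hedged sketch; adjust the sed if your config layout differs):

grep -n 'SystemdCgroup' /etc/containerd/config.toml
# if it prints "SystemdCgroup = false", flip it and restart the runtime and kubelet
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl restart kubelet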
admin-tccl@Admin:~$
admin-tccl@Admin:~$
admin-tccl@Admin:~$ history
1 lsblk
2 sudo apt update && sudo apt upgrade -y
3 sudo swapoff -a
4 sudo sed -i '/ swap / s/^/#/' /etc/fstab
5 sudo apt-get install -y ca-certificates curl gnupg
6 sudo apt-get install -y ca-certificates curl gnupg
7 sudo install -m 0755 -d /etc/apt/keyrings
8 curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo tee
9 /etc/apt/keyrings/docker.asc
10 sudo chmod a+r /etc/apt/keyrings/docker.asc
11 a
12 sudo install -m 0755 -d /etc/apt/keyrings
13 curl -fsSL https://download.docker.com/linux/ubuntu/gpg| sudo tee
/etc/apt/keyrings/docker.asc
14 sudo chmod a+r /etc/apt/keyrings/docker,asc
15 sudo chmod a+r /etc/apt/keyrings/docker.asc
16 sudo echo "deb [arch=\"$(dpkg --print-architecture)\"
signed-by=/etc/apt/keyrings/docker.asc]
https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo \"$VERSION_CODENAME\") stable" | sudo tee
/etc/apt/sources.list.d/docker.list > /dev/null
17 cat /etc/apt/sources.list.d/docker.list
18 sudo apt-get update
19 cat /etc/apt/sources.list.d/docker.list
20 . /etc/os-release && echo "$VERSION_CODENAME"
21 sudo nano /etc/apt/sources.list.d/docker.list
22 sudo apt-get update
23 apt-cache madison docker-ce
24 apt-cache madison docker-ce-cli
25 sudo apt-get install -y docker-ce docker-ce-cli containerd.io
26 sudo systemctl enable docker
27 sudo systemctl enable containerd
28 sudo systemctl start docker
29 sudo systemctl start containerd
30 sudo systemctl status containerd
31 sudo systemctl status docker
32 sudo apt-get update
33 sudo apt-get install -y apt-transport-https ca-certificates curl
34 sudo apt-get install -y apt-transport-https ca-certificates curl
35 sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg
https://packages.cloud.google.com/apt/doc/apt-key.gpg
36 echo $?
37 sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg
https://packages.cloud.google.com/apt/doc/apt-key.gpg
38 echo $?
39 ls -l /usr/share/keyrings/kubernetes-archive-keyring.gpg
40 echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg]
https://apt.kubernetes.io/ kubernetes-xenial main" |sudo tee
/etc/apt/sources.list.d/kubernetes.list
41 echo $?
42 sudo apt-get update
43 sudo apt-get install -y kubelet kubeadm kubectl
44 ls -l /etc/apt/sources.list.d/kubernetes.list
45 cat etc/apt/sources.list.d/kubernetes.list
46 cat /etc/apt/sources.list.d/kubernetes.list
47 sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg
https://packages.cloud.google.com/apt/doc/apt-key.gpg
48 echo $?
49 sudo apt-get update
50 sudo apt-get install -y kubelet kubeadm kubectl
51 sudo apt-get install -y kubelet-1.31.1 kubeadm-1.31.1 kubectl-1.31.1
52 lsb_release -a
53 sudo apt-get update
54 sudo apt-get install lsb-release
55 lsb-release -a
56 lsb_release -a
57 sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg
https://packages.cloud.google.com/apt/doc/apt-key.gpg
58 echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg]
https://apt.kubernetes.io/ kubernetes-noble main" |sudo tee
/etc/apt/sources.list.d/kubernetes.list
59 sudo apt-get update
60 sudo apt-get install -y kubelet kubeadm kubectl
61 sudo rm /etc/apt/sources.list.d/kubernetes.list
62 sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg
https://packages.cloud.google.com/apt/doc/apt-key.gpg
63 echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg]
https://apt.kubernetes.io/ kubernetes-noble main" | sudo tee
/etc/apt/sources.list.d/kubernetes.list
64 sudo apt-get update
65 cat /etc/os-release
66 curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo
gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg~
67 sudo curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key |
sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
68 echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg]
https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee
/etc/apt/sources.list.d/kubernetes.list
69 sudo apt-get update
70 sudo apt-get install -y kubelet kubeadm kubectl
71 sudo apt-mark hold kubelet kubeadm kubectl
72 sudo kubeadm init
73 sudo systemctl enable --now kubelet
74 sudo kubead, init --pod-network-cidr=192.168.0.0/16
--cri-socket=/var/run/containerd/containerd.sock --v=5
75 sudo kubeadm init --pod-network-cidr=192.168.0.0/16
--cri-socket=/var/run/containerd/containerd.sock --v=5
76 sudo kubeadm init --pod-network-cidr=192.168.0.0/16
77 which containerd
78 sudo systemctl status containerd
79 sudo cat /etc/containerd/config.toml
80 sudo vi /etc/containerd/config.toml
81 sudo nano /etc/containerd/config.toml
82 sudo systemctl restart containerd
83 sudo systemctl status containerd
84 sudo kubeadm init --pod-network-cidr=192.168.0.0/16
85 mkdir -p $HOME/ .kube
86 sudo cp -i /etc/Kubernetes/admin.conf $HOME/ .kube/config
87 mkdir -p $HOME/.kube
88 sudo cp -i /etc/Kubernetes/admin.conf $HOME/.kube/config
89 ls /etc/kubernetes/
90 sudo cp -i /etc/kubernetes/admin.conf $HOME/ .kube/config
91 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
92 sudo chown $(id -u):$(id -g) $HOME/.kube/config
93 wget
https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/
calico.yaml
94 kubectl apply -f calico.yaml
95 sudo kubectl apply -f calico.yaml --validate=false
96 curl -O https://docs.projectcalico.org/manifests/calico.yaml
97 nano calico.yaml
98 ll
99 curl -O
https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/
calico.yaml
100 nano calico.yaml
101 kubectl apply -f calico.yaml
102 cat $HOME/.kube/config
103 ifconfig
104 ipconfig
105 sudo systemctl status kublet
106 sudo systemctl status kubelet
107 curl -k https://192.168.29.215:6443/version
108 ip addr show
109 curl -k https://192.168.29.215:6443/version
110 hostname -I
111 curl ifconfig.me
112 kubectl taint nodes --all node-role.kubernetes.io/control-plane-
113 kubectl get pods --all-namespaces
114 kubectl create ns test
115 curl -k https://192.168.29.215:6443/version
116 sudo ufw status
117 ping 192.168.29.215
118 sudo kubectl get pods -n kube-system
119 kubectl get pods --all-namespaces
120 cat /etc/kubernetes/manifests/kube-apiserver.yaml
121 sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml
122 sudo journalctl -u kubelet -f
123 kubectl create -f
https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/custom-
resources.yaml
124 history
admin-tccl@Admin:~$ cat /proc/sys
sys/ sysrq-trigger sysvipc/
admin-tccl@Admin:~$ cat /proc/sys/net/ipv6/conf/all/disable_ipv6
0
admin-tccl@Admin:~$ nano /etc/kubernetes/kubelet.conf
admin-tccl@Admin:~$ sudo cat /etc/kubernetes/kubelet.conf
[sudo] password for admin-tccl:
Sorry, try again.
[sudo] password for admin-tccl:
Sorry, try again.
[sudo] password for admin-tccl:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data:
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJYlJ3NFVFWXFaSW93RFF
ZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TkRFd01qTX
hPREl5TWpOYUZ3MHpOREV3TWpFeE9ESTNNak5hTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z
2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUR1RkVGSEdqdWN2THd2U2N2Rm1k
RWlsdTZEdUlSMU1GdzVnSGN4eGRJRlFQNVU4RGpPK2VtZGdHY2oKbCtWeVJYdlJ1aXAvTW5wNHNQYVZFNER
DOHYydHZzWWVqczJQSkVUSU8zb2U5WHV6NEN1VUR6RXJZdG12NFQ3Ugo0L09QZUwzamduS1E2ZFBkM1RRMm
dleEgvVUtkLy9tV3pWMkpWK2Zvd2FyVnpnenpiZGdkOGNmWFVHZGFqbnB2CnovRUxTb01DVHErak80dThRW
nBaUEJnK2FJc04ySXFBY3ZNOS9iZCtIN2hQcHhoYUJFYjc3WFZQMG9pK2pLMDAKemREUkIxWEpadmRhakhv
TlBaQjNaQ2pnamN5dlIyY2hkUnkvdVV0clZWcnpIY0QydlZ1WUl4cGI4Z3BJckgrMApwLzlvQjhwU0Y4NkZ
HN3FNcWlRQ014aEVKMiszQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQW
Y4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJTUGF2dnhsNWJFMjN1RFVTRlUxK3RseCsrd1FqQVYKQmdOVkhSR
UVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ3IvMkVBWnU1NwpMZEFO
aldETFFBK2FMSW1MVnViRlIxQ2JtdVN3bFE1TnQ3VUlHQTBUSkdyLzBoNmpmaTg2aUFXbGJxZll6czZxCmI
1b050ekF1TlcxYm9sM2c4Tiswd25oRW1NRWdmUUhLN2FmZkFBbGZIbmpuSTNlcVprQno4ZFBLK3JpaTRLcn
QKZHR2bm1HZVVZMklXalFReFJsb3pjK21Wa01heUw5MjZQdkdZM3dMaUtUWmNTcHBkNDJKUUhzTzVFSGoyV
nhwcgorRmszb3lBRWRvOFM2WjVaOWp1NVcwRWY0Q2ZhR0tnOTRibjhIMEFzWHhEYWNWSm9sQ1c0RXQwNnZM
NlpBdkIzClBCNjI5RnJkZ01HWEJxNndNNkJnTWFqLzdwMTcrSldadnlSU21aZWNsZnBhRVF0a1pVeE1tR1p
DTTB4VTBkbVAKZWVzTWh4ZmNIbllICi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.29.215:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:admin
  name: system:node:admin@kubernetes
current-context: system:node:admin@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:admin
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
admin-tccl@Admin:~$ sudo nano /etc/kubernetes/kubelet.conf
admin-tccl@Admin:~$ kubeadm init --pod-network-cidr=fd00::/64
I1026 21:55:15.781862 55122 version.go:256] remote version is much newer:
v1.31.2; falling back to: stable-1.30
[init] Using Kubernetes version: v1.30.6
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR IsPrivilegedUser]: user is not running as root
[preflight] If you know what you are doing, you can make a check non-fatal with `--
ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
admin-tccl@Admin:~$ sudo kubeadm init --pod-network-cidr=fd00::/64
[sudo] password for admin-tccl:
I1026 21:56:02.012755 55204 version.go:256] remote version is much newer:
v1.31.2; falling back to: stable-1.30
[init] Using Kubernetes version: v1.30.6
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-10257]: Port 10257 is in use
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]:
/etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]:
/etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]:
/etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]:
/etc/kubernetes/manifests/etcd.yaml already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR FileContent--proc-sys-net-ipv6-conf-default-forwarding]:
/proc/sys/net/ipv6/conf/default/forwarding contents are not set to 1
[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--
ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
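Most of these preflight failures are leftovers of the earlier half-working cluster: the old kubelet and controller-manager still hold ports 10250 and 10257, and the static pod manifests and etcd data are still on disk. The only genuinely new requirement for an IPv6 pod CIDR such as fd00::/64 is that IPv6 forwarding be enabled. A sketch of clearing all of this before retrying, assuming the goal is to start over rather than recover the existing cluster:

sudo kubeadm reset -f                               # removes manifests, certs and /var/lib/etcd
sudo sysctl -w net.ipv6.conf.all.forwarding=1
sudo sysctl -w net.ipv6.conf.default.forwarding=1   # satisfies the FileContent preflight check
# add the two settings to /etc/sysctl.d/ if they should survive a reboot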
admin-tccl@Admin:~$
admin-tccl@Admin:~$
admin-tccl@Admin:~$ sudo lsof -i :10257
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
kube-cont 55070 root 3u IPv4 224835 0t0 TCP localhost:10257 (LISTEN)
admin-tccl@Admin:~$ sudo lsof -i :10250
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
kubelet 36437 root 28u IPv6 144379 0t0 TCP *:10250 (LISTEN)
admin-tccl@Admin:~$ sudo kill 55070
admin-tccl@Admin:~$ sudo kill 36437
admin-tccl@Admin:~$ sudo lsof -i :10250
admin-tccl@Admin:~$ sudo lsof -i :10257
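Killing the PIDs does free the ports, but kubelet.service is enabled and systemd will normally restart it on its own; stopping the unit is the more predictable way to release port 10250 before a reset (a sketch):

sudo systemctl stop kubelet
sudo ss -tlnp | grep -E '10250|10257'   # should print nothing once both ports are released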
admin-tccl@Admin:~$ sudo kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm
kubeadm-config -o yaml'
W1026 22:00:40.056130 55985 reset.go:124] [reset] Unable to fetch the kubeadm-
config ConfigMap from cluster: failed to get config map: Get
"https://192.168.29.215:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-
config?timeout=10s": dial tcp 192.168.29.215:6443: connect: connection refused
W1026 22:00:40.056253 55985 preflight.go:56] [reset] WARNING: Changes made to
this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1026 22:00:43.051477 55985 removeetcdmember.go:106] [reset] No kubeadm config,
using etcd pod spec to get data directory
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not clean CNI configuration. To do so, you must remove
/etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables"
command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them
manually.
Please, check the contents of the $HOME/.kube/config file.
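As the reset output says, CNI configuration, iptables/IPVS state and user kubeconfig files are deliberately left in place. If a completely clean slate is wanted before re-initialising, the manual cleanup it describes looks roughly like this (a sketch; flush iptables only if nothing else on the host depends on those rules):

# Remove CNI configs left behind by the previous cluster
sudo rm -rf /etc/cni/net.d
# Flush iptables rules created by kube-proxy/the CNI (clears all rules on the host)
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
# Only if the cluster used IPVS mode
sudo ipvsadm --clear
# Drop the stale user kubeconfig so the new admin.conf can be copied in afterwards
rm -f $HOME/.kube/config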
admin-tccl@Admin:~$ cd $HOME/.kube/
admin-tccl@Admin:~/.kube$ ll
total 20
drwxrwxr-x 3 admin-tccl admin-tccl 4096 Oct 24 00:32 ./
drwxr-x--- 18 admin-tccl admin-tccl 4096 Oct 24 00:18 ../
drwxr-x--- 4 admin-tccl admin-tccl 4096 Oct 24 00:32 cache/
-rw------- 1 admin-tccl admin-tccl 5654 Oct 24 00:07 config
admin-tccl@Admin:~/.kube$ more config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data:
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJYlJ3NFVFWXFaSW93RFF
ZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExV
UUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TkRFd01qTXhPREl5TWpOYUZ3MHpOREV3TWpFeE9ESTNNak5h
TUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRU
JBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUR1RkVGSEdqdWN2THd2U2N2Rm1kRWlsdTZEdUlSMU1GdzVnSGN4e
GRJRlFQNVU4RGpPK2VtZGdHY2oKbCtWeVJYdlJ1aXAvTW5wNHNQYVZFNERDOHYydHZzWWV
qczJQSkVUSU8zb2U5WHV6NEN1VUR6RXJZdG12NFQ3Ugo0L09QZUwzamduS1E2ZFBkM1RRMmdleEgvVUtkLy
9tV3pWMkpWK2Zvd2FyVnpnenpiZGdkOGNmWFVHZGFqbnB2CnovRUxTb01DVHErak80dThR
WnBaUEJnK2FJc04ySXFBY3ZNOS9iZCtIN2hQcHhoYUJFYjc3WFZQMG9pK2pLMDAKemREUkIxWEpadmRhakh
vTlBaQjNaQ2pnamN5dlIyY2hkUnkvdVV0clZWcnpIY0QydlZ1WUl4cGI4Z3BJckgrMApwL
zlvQjhwU0Y4NkZHN3FNcWlRQ014aEVKMiszQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQ
CkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJTUGF2dnhsNWJFMjN1RFVTRl
UxK3RseCsrd1FqQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQ
TRJQkFRQ3IvMkVBWnU1NwpMZEFOaldETFFBK2FMSW1MVnViRlIxQ2JtdVN3bFE1TnQ3VUl
HQTBUSkdyLzBoNmpmaTg2aUFXbGJxZll6czZxCmI1b050ekF1TlcxYm9sM2c4Tiswd25oRW1NRWdmUUhLN2
FmZkFBbGZIbmpuSTNlcVprQno4ZFBLK3JpaTRLcnQKZHR2bm1HZVVZMklXalFReFJsb3pj
K21Wa01heUw5MjZQdkdZM3dMaUtUWmNTcHBkNDJKUUhzTzVFSGoyVnhwcgorRmszb3lBRWRvOFM2WjVaOWp
1NVcwRWY0Q2ZhR0tnOTRibjhIMEFzWHhEYWNWSm9sQ1c0RXQwNnZMNlpBdkIzClBCNjI5R
nJkZ01HWEJxNndNNkJnTWFqLzdwMTcrSldadnlSU21aZWNsZnBhRVF0a1pVeE1tR1pDTTB4VTBkbVAKZWVz
TWh4ZmNIbllICi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
server: https://192.168.29.215:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data:
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURLVENDQWhHZ0F3SUJBZ0lJVDdNUEtBTkpkODh3RFF
ZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUK
QXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TkRFd01qTXhPREl5TWpOYUZ3MHlOVEV3TWpNeE9ESTNNalZhTUR
3eApIekFkQmdOVkJBb1RGbXQxWW1WaFpHMDZZMngxYzNSbGNpMWhaRzFwYm5NeEdUQVhCZ
05WQkFNVEVHdDFZbVZ5CmJtVjBaWE10WVdSdGFXNHdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdB
d2dnRUtBb0lCQVFDZzZQeFIKQkhITzB3RVh5SGphZzlLbTFFN1A2UkdlUGdsKyt6Rm5neU
dtc2h4eUpycGQ4a0dUd2xiZHVzSXpzUWFaeG9LagpnbmxGcDFWKzdoQkpiRlVhRFZxd25ZY05La3BXM0xqU
0xVcHhNWk0xTXg1Y3pDY3RQRXZIeVFJVzNCNTg2QXEzCm1rZ0sxTWpmSURrRlRXTFZhc0R
PU3V5dUdpMGRuYjhPdmNlaGJLUGtWNkJVZTVTK3o1YnEvZDhZeS85aEpzN1oKTHZENzgvNWd3clhFZlRyTk
NtdlpiZ2hCdURvNzFFT0tkeWtySXVGVDVFdTFYQW8zR2RwNjhISHlWQWtaQkV1TQpRZ284
MmVIZlFlQnhnOEFyYlBsRGpSM0pBUnMrMFcybFhrWnhzM3dXMDVjN2VJWWdwaVE3V2U5bUZLVnB0WGF4Ckd
KV3QyekFYdjJZd3ZpZEpBZ01CQUFHalZqQlVNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ
05WSFNVRUREQUsKQmdnckJnRUZCUWNEQWpBTUJnTlZIUk1CQWY4RUFqQUFNQjhHQTFVZEl3UVlNQmFBRkk5
cSsvR1hsc1RiZTROUgpJVlRYNjJYSDc3QkNNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUJBUU
Rja2xPTzFRYkcwZkR0cUNiNWZjU054cVFRCmF4dnBYZjZ3dThFVWM0NVlmM3NkVFFlekUrTGkwZEk4ZjB1b
EMwYVM2cTV3ZWtUNVpUdm8xWTVSejlhWHNURXMKdjBoUUdSV1R3aXlhZ3VpTXUrQzNZc1l
sbUZOSno0YTJYUWw0NktKTThvUDBManNSb3hSdWtUcUFKRi9rWFlCUgpKME1qRU1hb09HNzdUVGNXRTd2ZG
8yY29wS3FIYjR1YlBwbUs3OWRsMzJFa1NUS29GeWQrSktRWHdNUENTM3lKClFaUWlxZzYx
SGhBeFUvSmM4VnE2ZlNGOWdEVEpVSFBSWE9tSnJMcEhleUhZSm5pMjk5L2c4TW42NndyNS9nR1kKNlM4WGc
5eGxwcGZoQldDODdGSnd1aDBBeXpYbzhSVUUzUEg4SklzOEdCWjk5c3prbmVrRUhSbTVQd
DRECi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
client-key-data:
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBb09qOFVRUnh6dE1CRjh
oNDJvUFNwdFJPeitrUm5qNEpmdnN4WjRNaHBySWNjaWE2Clhm
SkJrOEpXM2JyQ003RUdtY2FDbzRKNVJhZFZmdTRRU1d4VkdnMWFzSjJIRFNwS1Z0eTQwaTFLY1RHVE5UTWU
KWE13bkxUeEx4OGtDRnR3ZWZPZ0t0NXBJQ3RUSTN5QTVCVTFpMVdyQXprcnNyaG90SFoyL
0RyM0hvV3lqNUZlZwpWSHVVdnMrVzZ2M2ZHTXYvWVNiTzJTN3crL1ArWU1LMXhIMDZ6UXByMlc0SVFiZzZP
OVJEaW5jcEt5TGhVK1JMCnRWd0tOeG5hZXZCeDhsUUpHUVJMakVJS1BObmgzMEhnY1lQQU
syejVRNDBkeVFFYlB0RnRwVjVHY2JOOEZ0T1gKTzNpR0lLWWtPMW52WmhTbGFiVjJzUmlWcmRzd0Y3OW1NT
DRuU1FJREFRQUJBb0lCQUNkMVdzSm5TNTFEUXdaWgpBOEhhQjZNZmR3QW5FRW4wdnBGait
kWi9ZcFlsSVRLZzZweTFGbjJzYjI3S0tHdFNvdUs4dWpac2ZWNm1UU0htCk1ScEFOWkpBNmhkYldjM1JyQT
htNnkrbktaVWVhaEhtcWpCcFk4WUUvalJNeDNWaG54eFVMcVNkY2NNdU1OLysKWDkwNy85
dUQ1U254VjU3T0RuZ3Z3YlZVdG9xUDNRRFhQR2lLYm1GdWR5T2hXWTV6TVZQUXZuSmRTejg4VWZFeApwbDN
aUnlhZEJ6eFJFdzNuYnJaRjAyc05DbXg0bC9rbmJuQW4wSG5EOVB1eWpDK1Y5N1JPWU1yc
EZOQUZtR2thCm5RZXRMNy80WDhjQ3p6UDBHWDFDL3R5Mm9pcGxkVmJTczVQSFFPMjhUN1RsUStQdnRidDN2
TmVSa05WaWVHS3oKOVUzdE1ZMENnWUVBeFlsaGU5R3dkUlVaUXJnUElWMVVIZm1DdVV6SE
lUSTFnYlR4N0Z3UnNNd29XUlBiSmlGbwpMZ3pCYkllNy8wc3Y4Yzg4SXNxK1dpQkZzNTE2aXcwVUluUDh1R
29CeVVHQjhlNWlYajNKcEVVeFBWeHdIT216ClJzY3I4TjFGVXA5STNBVkNUSkJnT0ZKYnF
EV0kzdlIyZWc3cnJYdEczRHBDVlozeStQbXBmRThDZ1lFQTBJaU0KSVFxNGlSSEQwKzlCenozbXlLNnpVd2
lzbWNOcFB4U0tsK0F1ckZVMXhMSmJxcGpUZkltOEUrMGp1MkdFaWRSWQpROC9OaTU0a0ww
Z0ExdHBCaFdoaXFEQVFUNTZFVTB2OUJnTXo3ZmorLytJSjFrZjBiWFp4cXRxVllOTms2UzRBCkwyZllGT1A
3bklQbHlxdVROeHVPcXRnVnJDYS96ZU9NT3pzV1JPY0NnWUE2a1VQMCtUUHZVdVVkY2dNU
2FtQngKVHJRaWlwQVQySllpc2VwMG9NdWg5cllUeXg1VHpOM2RvV3lMNkNhbVI3MmNYVXhBS0lxTm9EbnFT
a3UyQkpldQpxMk1Icm01L0pFd0oxaHNXUkEyUUJlL1dlSnpKQmNWZ3U5YmNZRTZZYzUrZm
xIT1d6Y3VwaDBtanN0TzAveGhOCmtqVHdSN2UzdmhKQzNrVFc2dmNFWXdLQmdCMlY0ZHVtTzd3bXF4UGNkQ
WZGRG9NV1ZoYkh1a1V1ZGpZZTRmTGUKT1lEMXJlVTBNTkVwVVlmdnVxRlJHYXF5RVMzRTF
LajZTSDB3ZUkzRXQybkVHVnVtRGFreStIMXpUZTdMYnlCMQpQOTdaWHNSSyszNU5ReDVzbVgvVjl5OS9qbW
VPd1RQNGxhMlJFdGVIMXdoRUEyVGtJZitYSEt3SjYxaDRtaUtsCkpXbXRBb0dCQUtPRVhB
REVZVXZqekxiZUdpTGdqRUFtQkxVYXp5ZjZvTkFTaW12Tmk0T1pzQ0JDNUJNL2pZck0KQ2w1S2ZKRXRob1M
3cmpEOVJmR1cvRGhrcGltR0RPWFVhQ0Vub0ltbWNkclJGNk5uTTZBVk5hWEhmbERKRjBRO
QpJT0FoV1BnbTRiVllGc2pQam9JbG41aml1d1hTRWI4S2dqOFJXTkx0MW5WM2ZaZGVRQTNCCi0tLS0tRU5E
IFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
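The certificate-authority-data, client-certificate-data and client-key-data fields above are simply base64-encoded PEM blocks belonging to the cluster that has just been reset; after a new kubeadm init they no longer match the new CA, so this file has to be replaced. They can be inspected with standard tools, for example (a sketch):

# Decode the embedded client certificate from ~/.kube/config and show its subject and expiry
kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' | base64 -d | openssl x509 -noout -subject -enddate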
admin-tccl@Admin:~/.kube$ sudo kubeadm reset
W1026 22:02:20.525020 56520 preflight.go:56] [reset] WARNING: Changes made to
this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1026 22:02:22.962133 56520 removeetcdmember.go:106] [reset] No kubeadm config,
using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
The reset process does not clean CNI configuration. To do so, you must remove
/etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables"
command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them
manually.
Please, check the contents of the $HOME/.kube/config file.
admin-tccl@Admin:~/.kube$
admin-tccl@Admin:~/.kube$
admin-tccl@Admin:~/.kube$ sudo sysctl -w net.ipv6.conf.default.forwarding=1
net.ipv6.conf.default.forwarding = 1
admin-tccl@Admin:~/.kube$ echo "net.ipv6.conf.default.forwarding=1"| sudo tee -a /etc/sysctl.conf
net.ipv6.conf.default.forwarding=1
admin-tccl@Admin:~/.kube$ sudo sysctl -p
net.ipv6.conf.default.forwarding = 1
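Setting net.ipv6.conf.default.forwarding=1 satisfies the failed preflight check, but Kubernetes nodes normally want the full set of forwarding and bridge sysctls made persistent in one place. A commonly used drop-in, based on the upstream container-runtime prerequisites (a sketch; adjust to the CNI actually deployed):

# Persist the Kubernetes networking sysctls in a dedicated drop-in
cat <<'EOF' | sudo tee /etc/sysctl.d/99-kubernetes.conf
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.default.forwarding = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo modprobe br_netfilter
sudo sysctl --system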
admin-tccl@Admin:~/.kube$ sudo rm -rf /var/lib/etcd/*
admin-tccl@Admin:~/.kube$ sudo kubeadm init --pod-network-cidr=fd00::/64
I1026 22:07:08.162451 56670 version.go:256] remote version is much newer: v1.31.2; falling back to: stable-1.30
[init] Using Kubernetes version: v1.30.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your
internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config
images pull'
W1026 22:07:08.885552 56670 checks.go:844] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended to use "registry.k8s.io/pause:3.9" as the CRI sandbox image.
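The warning above only means containerd is configured with pause:3.8 while kubeadm 1.30 expects pause:3.9; the init continues regardless. If desired, the two can be aligned by pointing containerd's CRI plugin at the expected sandbox image and restarting the runtime (a sketch assuming the default /etc/containerd/config.toml layout):

# Under [plugins."io.containerd.grpc.v1.cri"] set: sandbox_image = "registry.k8s.io/pause:3.9"
sudo sed -i 's|sandbox_image = ".*"|sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml
sudo systemctl restart containerd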
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [admin kubernetes
kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and
IPs [10.96.0.1 192.168.29.215]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [admin localhost] and IPs
[192.168.29.215 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [admin localhost] and IPs
[192.168.29.215 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file
"/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file
"/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static
Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.657549ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 3.001764111s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the
"kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the
configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node admin as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node admin as control-plane by adding the taints
[node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 4r8o2a.rdbnkdffergp47p4
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs
in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller
automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node
client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public"
namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable
kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
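Because this cluster was initialised with an IPv6 pod CIDR (fd00::/64), the CNI plugin chosen here must support IPv6 (or dual-stack). As one illustrative option, Calico can be applied from its upstream manifest and then given an IPv6 pool via the CALICO_IPV6POOL_CIDR and FELIX_IPV6SUPPORT settings on the calico-node container; the version and URL below are assumptions, not taken from this session:

# Example only: install an IPv6-capable CNI, then watch its pods come up
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico.yaml
kubectl get pods -n kube-system -w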

Then you can join any number of worker nodes by running the following on each as
root:

kubeadm join 192.168.29.215:6443 --token 4r8o2a.rdbnkdffergp47p4 \
        --discovery-token-ca-cert-hash sha256:53f181b1304591372134f65f7b24e3db41dbe950f8e06d96b8673b6819eee9f7
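The bootstrap token printed here has a 24-hour TTL by default, so the join command above stops working after a day. It can be regenerated on this control-plane node at any time, and the CA hash can be recomputed by hand if needed (standard kubeadm/openssl commands):

# Re-print a valid join command with a fresh bootstrap token
sudo kubeadm token create --print-join-command
# Recompute the discovery-token-ca-cert-hash from the cluster CA
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'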
admin-tccl@Admin:~/.kube$
admin-tccl@Admin:~/.kube$
admin-tccl@Admin:~/.kube$
admin-tccl@Admin:~/.kube$ kubectl get nodes
E1026 22:08:17.688520 58620 memcache.go:265] couldn't get current server API group list: Get "https://192.168.29.215:6443/api?timeout=32s": dial tcp 192.168.29.215:6443: connect: connection refused
E1026 22:08:17.688729 58620 memcache.go:265] couldn't get current server API group list: Get "https://192.168.29.215:6443/api?timeout=32s": dial tcp 192.168.29.215:6443: connect: connection refused
E1026 22:08:17.690042 58620 memcache.go:265] couldn't get current server API group list: Get "https://192.168.29.215:6443/api?timeout=32s": dial tcp 192.168.29.215:6443: connect: connection refused
E1026 22:08:17.690227 58620 memcache.go:265] couldn't get current server API group list: Get "https://192.168.29.215:6443/api?timeout=32s": dial tcp 192.168.29.215:6443: connect: connection refused
E1026 22:08:17.691538 58620 memcache.go:265] couldn't get current server API group list: Get "https://192.168.29.215:6443/api?timeout=32s": dial tcp 192.168.29.215:6443: connect: connection refused
The connection to the server 192.168.29.215:6443 was refused - did you specify the right host or port?
admin-tccl@Admin:~/.kube$ ^C
admin-tccl@Admin:~/.kube$
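The "connection refused" above means nothing is listening on 192.168.29.215:6443 any more, i.e. the kube-apiserver static pod has stopped again since the init finished; independently, the ~/.kube/config in this home directory still carries the old cluster's credentials and needs to be replaced before kubectl can authenticate even once the API server is back. A first round of checks, sketched with the commands kubeadm itself suggests plus the usual kubelet diagnostics:

# Refresh the user kubeconfig from the newly generated admin.conf
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Check whether the control-plane static pods are actually running
sudo systemctl status kubelet --no-pager
sudo crictl ps -a | grep -E 'kube-apiserver|etcd'
sudo journalctl -u kubelet -n 50 --no-pager

A single-stack IPv6 pod CIDR on a node that advertises an IPv4 API address is also worth revisiting; dual-stack kubeadm setups normally pass both families, e.g. --pod-network-cidr=10.244.0.0/16,fd00::/64 together with a matching dual-stack --service-cidr.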
