User Details
- User Since: Nov 2 2014, 11:35 PM
- IRC Nick: andrewbogott
- LDAP User: Unknown
- MediaWiki User: Andrewbogott
Yesterday
Ester writes:
Mon, Aug 19
These are now rebuilt with proper partitioning. They probably shouldn't be bootstrapped until T372821 is resolved.
All three of these need reimaging to get the drive labels set up properly; right now they all have a big OSD drive assigned to the OS.
@cmooney says about cloudcephosd1036:
As the last custodians of the wikilabels infra, we/I can confirm that it
is unused and can be deleted. All the code that might be relevant is
archived on GH or elsewhere, so if there was any _historical_ interest,
that can be dug up if ever needed. Best,
Tobias
Fri, Aug 16
I created a new Bookworm VM and moved the cinder volume over. I expected to be able to launch striker with 'docker compose' but...
I don't see evidence that there has been any progress with this project. Please respond with a plan and timeline if you don't want me to delete these VMs.
Emailed today:
Emailed today:
Emailed today, again:
Thanks @EgonWillighagen. This project is now up to date.
I've now repooled all affected ceph nodes (and rebuilt cloudcephosd1035) and repooled all cloudvirts. Until the switch flakes out again this is resolved! thx @cmooney @dcaro @VRiley-WMF
(meanwhile I am draining and rebuilding cloudcephosd1035 because it was built with improper drive assignments.)
Thu, Aug 15
@cmooney can we get cloudcephosd1036 set up now that the switch work is done?
Quick summary:
Hi folks! I haven't read all of this ticket but can I get an update about when/if you plan to remove the Buster VMs in this project?
I'm going to delete these VMs next week. If you need to check them or rescue any data, now's the time :)
Emailed project admins today:
Wed, Aug 14
Tue, Aug 13
+1 to working around this ridiculous bug
Mon, Aug 12
On 2024-08-05 I sent this email to the releng and sre mailing lists:
Fri, Aug 9
Thu, Aug 8
codfw1dev is now running C
Wed, Aug 7
Tue, Aug 6
Current plan:
The remaining case is in wmf_sink, the proxy cleanup code. That code relies on being able to look up the dns records (and, by extension, the IPs) associated with a deleted VM. That is no longer possible.
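The dependency described above can be illustrated with a small sketch. This is not the real wmf_sink code; the function and record shapes here are hypothetical, purely to show why cleanup fails once a deleted VM's DNS records (and hence its IPs) can no longer be looked up.

```python
# Illustrative sketch only: hypothetical data shapes, not the actual wmf_sink code.

def ips_for_vm(dns_records, vm_fqdn):
    """Return the IPs held in a VM's A records."""
    return {r["ip"] for r in dns_records
            if r["name"] == vm_fqdn and r["type"] == "A"}

def proxies_to_clean(proxies, dns_records, vm_fqdn):
    """Find proxy entries whose backend IP belonged to the deleted VM.

    This is the step that breaks: if the VM's DNS records are gone by the
    time the sink runs, ips_for_vm() returns an empty set and no proxies
    are matched for cleanup.
    """
    ips = ips_for_vm(dns_records, vm_fqdn)
    return [p for p in proxies if p["backend_ip"] in ips]

records = [{"name": "vm1.example.wmcloud.org", "type": "A", "ip": "172.16.0.5"}]
proxies = [{"hostname": "myapp.wmcloud.org", "backend_ip": "172.16.0.5"}]

# With records present, the stale proxy is found; with records deleted, nothing is.
print(proxies_to_clean(proxies, records, "vm1.example.wmcloud.org"))
print(proxies_to_clean(proxies, [], "vm1.example.wmcloud.org"))
```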
Mon, Aug 5
It's a new week and a new nag! @Tgr if you would like to prevent these VMs from being shut down please upgrade them soon. Thanks!
@notconfusing can you confirm that you still plan to work on this?
Sun, Aug 4
Fri, Aug 2
OK, done.
Oh, good news: I already did this in all but one (possibly tricky) case.
Thu, Aug 1
Oh, great. @Southparkfan does that unblock you here?
deleted!
Oh, I see that the prod poolcounter hosts are also still running Buster. Is someone tasked with fixing that?
Now we have per-service users for each openstack service.
@Dzahn iirc you were able to do some reprepro magic recently? If not, I can give it a go, but it's been a LONG time.
Wed, Jul 31
I added a bunch of openstack things, and removed the grid engine reference. Now I'm passing this over to @dcaro to add toolforge dependencies, e.g. harbor
No response so I've emptied this directory.
@cmooney can you advise what (if anything) needs doing here?
Your VM now has a /data/scratch mountpoint. Future VMs should have it as well, thanks to a project-wide hiera setting I added in horizon.
Made the same change for the eqiad1 logs
Puppet is failing on the new host because of
thank you!
Tue, Jul 30
@Jgiannelos do you have any suggestions about this ticket? Shall I just delete restbase04 and let the chips fall where they may?
Thank you for paying attention to this, @Tgr. Do you still hope to work on this transfer?
Closing this for now because it doesn't seem to be happening repeatedly.
Would this be best addressed by changing the default logstash board settings to filter info and debug logs? Then they'd be there if we needed them.
Thanks @dcaro!
I did a full reset and rebuild of rabbitmq. I definitely do not know why that helped :(
Mon, Jul 29
I removed those two old servers via the database. I don't know why the CLI doesn't work (the cloudvirts were empty).
commit 44e084812e65e274c729da8a6b16ab5beb9782b0 (HEAD -> master)
Author: gitpuppet for private repo <[email protected]>
Date:   Mon Jul 29 20:47:11 2024 +0000
Everything is now based off of trove-guest-bobcat-ubuntu-jammy. That image contains a local application of proposed patch https://review.opendev.org/c/openstack/trove/+/924285
Can you tell me what (if any) change or update preceded this?
Sun, Jul 28
The best procedure is to not re-use hostnames (typically when there are names like host05 we would just increment and name the next one host06). If you need to re-use names, wait a few minutes after deleting before recreating.
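The "just increment" convention above can be sketched as a tiny helper (hypothetical; in practice the next name is chosen by hand): take the trailing numeric suffix, add one, and keep the zero padding.

```python
import re

def next_hostname(name):
    """Increment a trailing zero-padded number: host05 -> host06."""
    m = re.fullmatch(r"(.*?)(\d+)", name)
    if not m:
        raise ValueError(f"no numeric suffix in {name!r}")
    prefix, digits = m.groups()
    return prefix + str(int(digits) + 1).zfill(len(digits))

print(next_hostname("host05"))  # -> host06
```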
Wed, Jul 24
done. thanks for doing this work!
Hi! I'm very sorry this got missed; would you still like /scratch added to your cloud-vps project? Or did that get done already?
Long since done and migrated
Seems moot at this point; we're running cloud-vps on Bookworm now.
I'll see if I can wrap this up