
End of May 2025 fedora infra bits

Scrye into the crystal ball

We have arrived at the end of May. This year is going by in a blur for me. So much going on, so much to do.

Datacenter move

The switch week is still scheduled for the week of June 30th. We made some progress this last week on installs: got everything set up to install a bunch of servers, installed a few, and kept building out services. I was mostly focusing on getting things set up so I could install openshift clusters in both prod and staging. That will let us move applications. I also set things up to do rhel10 installs and installed a test virthost. There are still a few things missing from epel10 that we need: nagios clients, collectd (that's on me) and zabbix clients; otherwise the changes were reasonably minor. I might try and use rhel10 for a few things, but I don't want to spend a lot of effort on it as we don't have much time.

Flock

Flock is next week! If you are looking for me, I will be traveling basically all monday and tuesday, then in prague from tuesday to very early sunday morning, when I travel back home.

If you are going to flock and want to chat, please feel free to catch me and/or drop me a note so we can try and meet up. Happy to talk!

If you aren't going to flock, I'm hoping everything is pretty quiet infrastructure-wise. I will try and check in on any major issues, but do try and file tickets on things instead of posting to mailing lists or matrix.

I'd also like to remind everyone going to flock that we try and not actually decide anything there. It's for discussion and learning and putting a human face on your fellow contributors. Make plans and propose things, definitely; just make sure that after flock you use our usual channels to discuss and actually decide things. Decisions shouldn't be made offline where those not present can't provide input.

I'm likely to do blog posts about flock days, but they may be delayed until after the event. There's likely not going to be a regular saturday post next week from me.

Arm laptop

I successfully used this Lenovo slim7x all week, so I guess I am going to try and use it for my flock travel. Hopefully it will all work out. :)

Issues I have run into in no particular order:

  • There are a bunch of various people working on various things, and all of that work touches the devicetree file. This makes it a nightmare to try and have a dtb with working bluetooth, ec, webcam, sound, suspend, etc. I really hope a bunch of this stuff lands upstream soon. For now I just have a kernel with bluetooth and ec working and am ignoring sound and webcam.

  • s2idle sleep "works", but I don't trust it. I suspended the other day when I was running some errands, and when I got home, the laptop had come on and was super super hot (it was under a jacket to make it less of a theft target). So, I might just shut down most of the time while traveling. There's a patch to fix deep sleep, but see above.

  • I did wake up one day and it had rebooted, no idea why...

  • Otherwise everything is working fine and it's pretty nice and zippy.

  • Battery life is... ok. 7-8 hours. It's not hitting the lowest power states yet, but that will do I think for my needs for now.

So, hopefully it will all work. :)

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/114603298176306720

Third week of May 2025 fedora infra bits

Scrye into the crystal ball

Oh look, it's saturday already. Another busy week here with lots going on, so without further ado, let's discuss some things!

Datacenter Move

Due to delays in getting network to the new servers and various logistics, we are going to be moving the switcharoo week to the week of June 30th. It was set for June 16th, but that's just too close timing-wise, so we are moving it out two weeks. Look for a community blog post and devel-announce post next week on this. I realize that means friday is July 4th (a holiday in the US), but we hope to do the bulk of switching things on monday and tuesday of that week, and leave only fixing things for wed and thursday.

We did finally get network for the new servers last week. Many thanks to all the networking folks who worked hard to get things up and running. With some network in place I was able to start bootstrapping infrastructure up. We now have a bastion host, a dhcp/tftp host and a dns server all up and managed via our existing ansible control host, like all the rest of our hosts.

Friday was a recharge day at Red Hat, and monday is the US Memorial day holiday, but I should be back at deploying things on tuesday. Hopefully next week I will get an initial proxy setup and can then look at doing openshift cluster installs.

Flock

The week after next is flock! It came up so fast. I do plan on being there (I get into prague late morning on the 3rd). Hope to see many folks there, happy to talk about most anything. I'm really looking forward to the good energy that comes from being around so many awesome open source folks!

Of course that means I may well not be online as much as normal (when traveling, in talks, etc), so please plan accordingly if you need my help with something.

Laptop

So, I got this lenovo slim7x snapdragon X laptop quite a long time ago, and I finally decided I should see if I can use it day to day, and if so, use it for the flock trip so I don't have to bring my frame.work laptop.

So, I hacked up an aarch64 rawhide live with a dtb for it and was able to do an encrypted install and then upgrade the kernel. I did have to downgrade linux-firmware for the ath12k firmware bug, but that's fine.

So far it's looking tenable (I am typing this blog post on it now). I did have to add another kernel patch to get bluetooth working, but it seems to be fine with the patch. The OLED screen on this thing is wonderful. Battery life seems ok, although it's hard to tell without a 'real life' test.

Known things not working: camera (there's patches, but it's really early so I will wait for them), sound (there's also patches, but it has the same issue the mac laptops had with there being no safeguards, so you can easily destroy your speakers if you turn things up too loud).

Amusing things: no discord flatpak available (the one on flathub is x86_64 only), but the web version works fine, although it amusingly tells you to install the app (which doesn't exist).

Also, no chrome, but there is chromium, which should be fine for sites that firefox doesn't work with.

I'll see if I can get through the weekend and upcoming week and decide what laptop I will take traveling.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/114564927864167658

Second week of May 2025 fedora infra bits

Scrye into the crystal ball

Hello everyone. Another saturday blog post on happenings in Fedora Infrastructure over the last week.

Data Center Move

We have pretty much gotten all the new servers set up firmware-wise. We have applied all the updates that happened since they were shipped and configured things as best we could for now. A few notable configuration changes we made:

  • Enabled lldp on the machines that support it. This allows networking folks to see information about which nics are on which ports, etc. Just a bunch more handy info for us and them.

  • Disabled 'hot spare' in the power supply configuration. Wouldn't we want a 'hot spare'? Well, no: as it turns out, enabling that means the servers only use the first power supply, keeping the second one idle. This means that in a rack, ALL the servers pull power from one side, which makes things very imbalanced. Disabling it instead has the server use both supplies and balance between them, and in the event of a failure, it just switches to the one that's still working. So, you want to be able to run everything from one side, but you definitely don't want to do so all the time.

I installed a few servers manually (see last week's benchmarking entry), and this week I got the local network set up as it should be on one: 2 25G nics bonded with 802.3ad, and a bridge on top for guests. Should be super zippy for anything local, and it has the great advantage that networking folks can upgrade/reboot switches without us noticing any outages.

I also did a bunch of work on dns configuration. In order to make things easier on both us and the networking folks, I asked them to just set up the new datacenter nets as a translation of the existing datacenter configuration. That means we have the same number of vlans for the same purposes, and machines will be at the same last octet in both places. So for example our iad bastion server is internally at 10.3.163.31 in IAD2, and will be at 10.16.163.31 in RDU3. This also means we have a great starting point for network acls and such.
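Just to illustrate with the example above (this is a throwaway sketch, not anything we actually run), translating an address between the two datacenters is just swapping the second octet:

    import ipaddress

    # 10.3.x.y in IAD2 becomes 10.16.x.y in RDU3 (per the example above).
    OCTET_MAP = {3: 16}

    def iad2_to_rdu3(addr: str) -> str:
        """Translate an internal IAD2 address to its RDU3 equivalent."""
        octets = ipaddress.ip_address(addr).exploded.split(".")
        octets[1] = str(OCTET_MAP[int(octets[1])])
        return ".".join(octets)

    print(iad2_to_rdu3("10.3.163.31"))  # -> 10.16.163.31, the bastion from above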

We are now somewhat in a holding pattern, waiting on external network for the servers themselves. Since we have gotten behind where we were hoping to be at this point, we very likely will be moving the actual datacenter switcharoo week out. We should know more next week, once we see whether we have networking set up by then or not.

As soon as network is available, I will be bootstrapping things up in the new datacenter. That starts with a bastion host (to allow our existing ansible control host in our current datacenter to provision things in the new one), then a dhcp/tftp server, then dns, then an ipa replica, then the rest of the servers, etc. After that is far enough along, we will be installing openshift clusters, getting our new signing infra working, setting up the openqa machines, and starting to migrate things that aren't heavily tied to our current datacenter.

Things are gonna be busy the next month or so.

Bot blocking

A while back, we added some apache rules to block some bots that were providing a user agent but were ignoring robots.txt, or were trying to crawl things we didn't want them to crawl or that made no sense to be indexed. Last week I was looking at some AI scrapers (which don't pass a user agent saying they are a bot at all) and noticed that our block for 'normal' bots wasn't working. It turns out we had the right expression, but it only does a string match if you put the expression in quotes. :(

So, I fixed that and I think it's helped reduce load over a bunch of things that shouldn't have been getting crawled in the first place.
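For the curious, the shape of the bug, sketched in Python rather than our actual apache config (the pattern and bot names here are just examples): a pattern that gets treated as a literal string instead of a regular expression simply never matches a real user agent.

    import re

    # Hypothetical pattern; the real blocklist lives in our apache config.
    BAD_BOTS = r"(SemrushBot|AhrefsBot|MJ12bot)"

    ua = "Mozilla/5.0 (compatible; AhrefsBot/7.0; +http://ahrefs.com/robot/)"

    # Treated as a regular expression: matches, and the bot gets blocked.
    print(bool(re.search(BAD_BOTS, ua)))   # True

    # Treated as a literal string (what the quoting mishap effectively did):
    # it never matches a real user agent, so nothing gets blocked.
    print(BAD_BOTS in ua)                  # False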

The AI bots are still around, but mostly mitigated via various blocking of networks or specific things they decide they really really want. They are like a dog with a bone on some projects/areas... I am pretty sure they are re-crawling things they already crawled, and they also seem particularly interested in forks or mirrors of things they have already crawled (even when those forks/mirrors have 0 other changes from the upstream). Here's hoping the market for these goes bust and they all go out of business.

F40 EOL and upgrades

Fedora 40 went end of life on tuesday of this last week. It's served long and well. Fond farewell to it.

We had only a few Fedora 40 instances left. The wiki was using F40; we upgraded staging, got all the issues sorted out, and should be moving production to f42 next week. Bodhi was using f40 for some things (and f41 for others). There was a new upstream release with some minor rolled-up changes. I upgraded staging yesterday and today, and will be rolling production very soon.

comments? additions? reactions?

As always, comment on mastodon: https://scrye.com/blogs/nirik//posts/2025/05/17/second-week-of-may-2025-fedora-infra-bits/

First full week of May infra bits 2025

Scrye into the crystal ball

This week was a lot of heads down playing with firmware settings and doing some benchmarking on new hardware. Also, the usual fires and meetings and such.

Datacenter Move

Spent a fair bit of time this week configuring and looking at the new servers we have in our new datacenter. We only have management access to them, but I still (somewhat painfully) installed a few with RHEL9 to do some testing and benchmarking.

One question I was asked a while back was around our use of linux software raid over hardware raid. Historically, there were a few reasons we chose mdadm raid over hardware raid:

  • It's possible/easy to move disks to a different machine in the event of a controller failure and recover data, or replace a failed controller with a new one and have things transparently work. With hardware raid you need to have the exact same controller and the same firmware version.

  • Reporting/tools are all open source for mdadm. You can tell when a drive fails, and you can easily re-add one, reshape, etc. With hardware raid you are using some binary-only vendor tool, and all of them are different.

  • In the distant past being able to offload to a separate cpu was nice, but these days servers have a vastly faster/better cpu, so software raid should actually perform better than hardware raid (barring different settings).

So, I installed one with mdadm raid and another with hardware raid and did some fio benchmarking. The software raid won overall. Hardware was actually somewhat faster on writes, but the software raid murdered it in reads. Turns out the default cache settings here were write-through for software and write-back for hardware, so the difference in writes seems explained by that.
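The comparison was done with fio; something along these lines reproduces the general idea, wrapped in a bit of Python to parse the JSON output (the job parameters and test file path here are examples, not the exact ones I used):

    import json
    import subprocess

    def run_fio(name: str, rw: str, target: str) -> dict:
        """Run one fio job against `target` and return the parsed JSON job result."""
        out = subprocess.run(
            [
                "fio", f"--name={name}", f"--filename={target}", f"--rw={rw}",
                "--bs=4k", "--size=4G", "--ioengine=libaio", "--direct=1",
                "--iodepth=32", "--runtime=60", "--time_based",
                "--output-format=json",
            ],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)["jobs"][0]

    # Point this at a scratch file on the array under test (path is an example).
    for rw in ("randread", "randwrite"):
        job = run_fio(f"bench-{rw}", rw, "/srv/fio-testfile")
        side = "read" if "read" in rw else "write"
        print(rw, "iops:", round(job[side]["iops"]), "bw (KiB/s):", job[side]["bw"])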

We will hopefully finish configuring firmware on all the machines early next week; then the next milestone should be network on them, so we can start bootstrapping up the services there.

Builders with >32bit inodes again

We had a few builders hit the 'larger than 32 bit inode' problem again. Basically btrfs starts allocating inode numbers when installed, and builders go through a lot of them by making and deleting and making a bunch of files during builds. When the inode numbers pass 2^32, i686 builds start to fail because they cannot get an inode that fits in 32 bits. I reinstalled those builders and hopefully we will be ok for a while more again. I really am looking forward to i686 builds completely going away.
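A quick and dirty way to see how close a builder is getting to the limit (just a sketch, not how we actually monitor it; the path is an example):

    import os
    import tempfile

    LIMIT = 2**32  # i686 builds break once inode numbers no longer fit in 32 bits
    FS_PATH = "/var/lib/mock"  # any directory on the builder's btrfs filesystem (example)

    # Create a throwaway file and look at the inode number btrfs hands out.
    with tempfile.NamedTemporaryFile(dir=FS_PATH) as f:
        ino = os.stat(f.name).st_ino
        print(f"newest inode: {ino} ({ino / LIMIT:.0%} of the 32-bit limit)")
        if ino > LIMIT:
            print("i686 builds on this builder are going to have a bad day")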

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/114484593787412504

review of the SLZB-06M

I've been playing with Homeassistant a fair bit of late and I've collected a bunch of interesting gadgets. Today I'd like to talk about / review the SLZB-06M.

So the first obvious question: what is a SLZB-06M?

It is a small, Ukrainian-designed device that is a "Zigbee 3.0 to Ethernet, USB, and WiFi Adapter". So, basically you connect it to your wired network, or via usb, or via wifi, and it gateways that to a Zigbee network. It's really just an esp32 with a shell and ethernet/wifi/bluetooth/zigbee, but all assembled for you and ready to go.

I'm not sure if my use case is typical for this device, but it worked out for me pretty nicely. I have a pumphouse that is down a hill and completely out of line-of-sight of the main house/my wifi. I used some network over power/powerline adapters to extend a segment of my wired network over the power lines that run from the house to it, and that worked great. But then I needed some way to gateway the zigbee devices I wanted to put there back to my homeassistant server.

The device came promptly and was nicely made. It has a pretty big antenna and everything is pretty well labeled. On powering it on, home assistant detected it no problem and added it. However, then I was a bit confused. I already have a usb zigbee adapter on my home assistant box, and the integration was just showing things like the temp and firmware. I had to resort to actually reading the documentation! :)

Turns out the way the zigbee integration works here is via zigbee2mqtt. You add the repo for that, install the add-on and then configure a user. Then you configure the device via its web interface on the network to match that. Then the device shows up in a zigbee2mqtt panel. Joining devices to it is a bit different from a normal wifi setup: you need to tell it to 'permit join', either for anything or for specific devices. Then you press the pair button or whatever on the device and it joins right up. Note that devices can only be joined to one zigbee network, so you have to make sure you do not add them to other zigbee adapters you have. You can set a separate queue for each one of these adapters, so you can have as many networks as you have coordinator devices for.
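As an example of what that looks like on the MQTT side, permit join can also be toggled by publishing to the zigbee2mqtt bridge topic. This is just a sketch using mosquitto_pub; the broker and credentials are placeholders, and the payload format differs a bit between zigbee2mqtt versions, so check the docs for yours:

    import json
    import subprocess

    # Placeholders: point these at your own MQTT broker and zigbee2mqtt user.
    BROKER = "homeassistant.local"
    USER, PASSWORD = "zigbee", "changeme"

    def permit_join(seconds: int) -> None:
        """Ask zigbee2mqtt to allow new devices to join for `seconds` seconds."""
        # Newer zigbee2mqtt wants {"time": N}; older versions used
        # {"value": true, "time": N} - check your version's docs.
        payload = json.dumps({"time": seconds})
        subprocess.run(
            [
                "mosquitto_pub", "-h", BROKER, "-u", USER, "-P", PASSWORD,
                "-t", "zigbee2mqtt/bridge/request/permit_join", "-m", payload,
            ],
            check=True,
        )

    permit_join(120)  # then press the pair button on the device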

You can also have the SLZB-06M act as a bluetooth gateway. I may need to do that if I ever add any bluetooth devices down there.

The web interface lets you set various network config. You can set it as a zigbee coordinator or just a router in another network. You can enable/disable bluetooth, do firmware updates (but homeassistant will do these directly via the normal integration), adjust the leds on the device (off, or night mode, etc). It even gives you a sample zigbee2mqtt config to start with.

After that it's been working great. I now have a temp sensor and a smart plug (on a heater we keep down there to keep things from freezing when it gets really cold). I'm pondering adding a sensor for our water holding tank and possibly some flow meters for the pipes from the well and to the house from the holding tank.

Overall this is a great device and I recommend it if you have a use case for it.

Slava Ukraini!

Beginning of May infra bits 2025

Scrye into the crystal ball

Wow, it's already May now. Time races by sometimes. Here's a few things I found notable in the last week:

Datacenter Move

Actual progress to report this week! Managed to get access to the mgmt interfaces on all our new hardware in the new datacenter. Most everything is configured right in dhcp config now (the aarch64 and power10 machines still need some tweaking there).

This next week will be spent updating firmware, tweaking firmware config, setting up access, etc, on all those interfaces. I want to try and do some testing on various raid configs for storage and standardize the firmware configs. We are also going to need to learn how to configure the lpars on the power10 machines next week.

Then, the following week hopefully we will have at least some normal network for those hosts and can start doing installs on them.

The week after that I hope to start moving some 'early' things: possibly openqa and coreos and some of our more isolated openshift applications. That will continue the week after that, then it's time for flock, some more moving and then finally the big 'switcharoo' week on the 16th.

There was also some work on moving some of our soon-to-be-replaced power9 hardware into a place where it can be added to copr for more/better/faster copr builders.

OpenShift cluster upgrades

Our openshift clusters (prod and stg) were upgraded from 4.17 to 4.18. OpenShift upgrades are really pretty nice. There was not much in the way of issues (although a staging compute node got stuck on boot and had to be power cycled).

One interesting thing with this upgrade was that support for cgroups v1 was listed as going away in 4.19. It hasn't been the default in a while, but our clusters were installed so long ago that they were still using it.

I like that the change is basically to edit one map and change a 1 to a 2, and then openshift reboots nodes and it's done. Very slick. I've still not made the change on the prod cluster, but will likely do so next week.
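For reference, the map in question is the cluster-scoped nodes.config object (at least in recent OpenShift versions). Roughly something like this, though double check the docs for your release before copying it:

    import subprocess

    # Switch the cluster from cgroups v1 to v2 by patching the cluster-scoped
    # nodes.config object; after this the change rolls out and nodes reboot
    # one at a time. Verify the field name against the docs for your release.
    subprocess.run(
        [
            "oc", "patch", "nodes.config", "cluster", "--type", "merge",
            "-p", '{"spec": {"cgroupMode": "v2"}}',
        ],
        check=True,
    )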

Proxy upgrades

There's been some instability with our proxies, in particular in EU and APAC. Over the coming weeks we are going to be rolling out newer/bigger/faster instances, which should hopefully reduce or eliminate the problems folks have sometimes been seeing.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/114445144640282791

Late April infra bits 2025

Scrye into the crystal ball

Another week has gone by. It was a pretty quiet one for me, but it had a lot of 'calm before the storm' vibes. The storm being, of course, that May will be very busy setting up the new datacenter to try and migrate to it in June.

Datacenter Move

Still don't have access to our new hardware, but I'm hoping early next week I will. I did find out a good deal more about the networking there and already set up our dhcp server with all the mac addresses and ip's for the management interfaces. As soon as that comes up they should just get the right addresses and be ready to work on.

Next week will then be spent setting firmware the way we want it, testing a few install parameters to sort out how we want to install the hosts, and then moving on to installing all the machines.

Then on to bootstrapping things up (we need a dns server, a tftp server, etc) and then installing openshift clusters and virthosts.

So, we are still on track for the move in June as long as the management access comes in next week as planned.

nftables in production

We rolled out our switch from iptables to nftables in production on thursday. Big shout out to James Antill for all the scripting work and for getting things set up so they could migrate without downtime.

The switch did take a bit longer than we would have liked, and there were a few small hiccups, but overall it went pretty well.

There are still a few openqa worker machines we are going to migrate next week, but otherwise we are all switched.

Staging koji synced

To allow for some testing, I did a sync of our production koji data over to the staging instance. This takes a long long time because it loads the prod db in, vacuums it, then modifies it for staging.

There was a bit of breakage at the end (I needed to change some sequences) but otherwise it went fine and now staging has all the same tags/etc as production does.
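For the curious, 'changing some sequences' here means resetting postgres sequences so new ids don't collide with the rows imported from prod. Roughly like this, though the table and column names below are made up for illustration, not the real koji ones:

    import psycopg2

    # Hypothetical example: bump a table's id sequence past the highest id
    # that came over from prod, so new inserts don't collide.
    conn = psycopg2.connect("dbname=koji")  # connection string is a placeholder
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT setval(pg_get_serial_sequence('some_table', 'id'), "
            "(SELECT COALESCE(MAX(id), 1) FROM some_table))"
        )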

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/114405223201008788

Later April infra bits 2025

Scrye into the crystal ball

Another busy week gone by, and I'm a day late with this blog post, but still trying to keep up with it. :)

Fedora 42 out! Get it now!

Fedora 42 was released on tuesday. The "early" milestone even. There was a last-minute bug found (see: https://discussion.fedoraproject.org/t/merely-booting-fedora-42-live-media-adds-a-fedora-entry-to-the-uefi-boot-menu/148774 ). Basically, booting almost any Fedora 42 live media on a UEFI system results in it adding a "Fedora" entry for the live media to your boot manager list. This is just from booting, not installing or doing anything else. On the face of it this is really not good: we don't want live media to affect systems without the user installing or choosing to do so. However, in this case the added entry is pretty harmless. It will result in the live media booting again after install if you leave it attached, and if not, almost all UEFI firmware will just see that the live media isn't attached and ignore that entry.

In the end we decided not to try and stop the release at the last minute for this, and I think it was the right call. It's not great, but it's not all that harmful either.
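If the stray entry bothers you, it's easy enough to clean up by hand with efibootmgr. A rough sketch (be careful: a real install also creates a 'Fedora' entry, so double check which one you remove):

    import re
    import subprocess

    # List the current UEFI boot entries and print the ones named "Fedora".
    out = subprocess.run(["efibootmgr"], check=True, capture_output=True, text=True).stdout
    for line in out.splitlines():
        m = re.match(r"Boot([0-9A-Fa-f]{4})\*?\s+(.+)", line)
        if m and m.group(2).startswith("Fedora"):
            print(f"candidate entry {m.group(1)}: {m.group(2)}")
            # To actually remove an entry (make sure it's the live media one,
            # not your installed system!):
            # subprocess.run(["efibootmgr", "-b", m.group(1), "-B"], check=True)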

Datacenter Move news

Networking got delayed, and the new date we hope to be able to start setting things up is this coming friday. Sure hope that pans out, as our window to set up everything for the move is shrinking.

There was some more planning ongoing, but it will be great to actually start digging in and getting things all set up.

AI Scraper news

The scrapers seem to have moved on from pagure.io. It's been under basically no load for the last week or more. Sadly, they seem to have discovered koji now. I had to block a few endpoints on the web frontend to stop them. Unfortunately this caused a short outage of the hub, and 2 builds were corrupted as a result. Pretty aggravating.

Nftables

Worked with James to roll out our iptables->nftables switch to production. All the builders are now using nftables. Hopefully we will roll out more next week.

That's it for this week, catch everyone next week!

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/114371722035321670

Early Mid April infra bits 2025

Scrye into the crystal ball

Another week has gone by, and here's some more things I'd like to highlight from the last week.

Datacenter Move

I wrote up a community blog post draft with updates for the community. Hopefully it will be up early next week and I will also send a devel-announce list post and discussion thread.

We had a bit of a snafu around network cards. We missed getting 10G nics on the new aarch64 boxes, so we are working to acquire those soon. The plan in the new datacenter is to have everything on dual 10G nics connected to different switches, so networking folks can update them without causing us any outages.

Some new power10 machines have arrived. I'm hopeful we might be able to switch to them as part of the move. We will know more about them once we are able to get in and start configuring them.

Next week I am hoping to get out of band management access to our new hardware in the new datacenter. This should allow us to start configuring firmware and storage and possibly do initial installs to start bootstrapping things up.

Exciting times. I hope we have enough time to get everything lined up before the june switcharoo date. :)

Fun with databases

We have been having a few applications crash/loop and others behave somewhat sluggishly of late. I finally took a good look at our main postgres database server (hereafter called db01). It's always been somewhat busy, as it has a number of things using it, but once I looked at i/o: yikes. (htop's i/o tab or iotop are very handy for this sort of thing.) It showed that a mailman process was using vast amounts of i/o and basically causing the machine to be at 100% all the time. A while back I set db01 to log slow queries, and looking at that log showed that what it was doing was searching the mailman.bounceevents table for all entries where 'processed' was 'f'. That table is 50GB, with bounce events going back at least 5 or 6 years. Searching around I found a 7 year old bug filed by my co-worker Aurélien: https://gitlab.com/mailman/mailman/-/issues/343

That was fixed! Bounces are processed. However, nothing currently ever cleans up this table. So, I proposed we just truncate the table. However, others made a good case that the less invasive change (we are in freeze after all) would be to just add an index.

So, I did some testing in staging and then made the change in production. The queries went from ~300 seconds to pretty much 0. I/o was still high, but now in the 20-30% range most of the time.
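The change itself is tiny. Roughly something like this (the index name here is made up and the exact statement we used may have differed a bit, but this is the shape of it):

    import psycopg2

    conn = psycopg2.connect("dbname=mailman")  # connection details are placeholders
    conn.autocommit = True  # CREATE INDEX CONCURRENTLY can't run inside a transaction
    with conn.cursor() as cur:
        # Index the column the slow query filters on; CONCURRENTLY avoids locking
        # the (50GB!) table while the index builds.
        cur.execute(
            "CREATE INDEX CONCURRENTLY IF NOT EXISTS bounceevents_processed_idx "
            "ON bounceevents (processed)"
        )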

It's amazing what indexes will do.

Fedora 42 go for next week!

Amazingly, we made a first rc for fedora 42 and... it was GO! I think we have done this once before in all of fedora history, but it's sure pretty rare. So, look for the new release out tuesday.

I am a bit sad that there's a bug/issue around the Xfce spin and initial setup not working. Xfce isn't a blocking deliverable, so we just have to work around it: https://bugzilla.redhat.com/show_bug.cgi?id=2358688 I am not sure what's going on with it, but you can probably avoid it by making sure to create a user / set up root in the installer.

I upgraded my machines here at home and... nothing at all broke. I didn't even have anything to look at.

comments? additions? reactions?

As always, comment on mastodon: posts/2025/04/12/early-mid-april-infra-bits-2025.rst

Early April infra bits 2025

Scrye into the crystal ball

Another week gone by and it's saturday morning again. We are in final freeze for Fedora 42 right now, so things have been a bit quieter as folks (hopefully) are focusing on quashing release blocking bugs, but there was still a lot going on.

Unsigned packages in images (again)

We had some rawhide/branched images show up again with unsigned packages. This is due to my upgrading the koji packages and dropping a patch we had that tells it to never use the (unsigned) buildroot repo for packages when making images, and to instead use the compose repo.

I thought this was fixed upstream, but it was not. So, the fix for now was a quick patch and update of koji. I need to talk to koji upstream about a longer term fix, or perhaps the fix is better in pungi. In any case, it should be fixed now.

Amusing idempotentness issue

In general, we try and make sure our ansible playbooks are idempotent. That is, if you run one once, it puts things in the desired state, and if you run it again (or as many times as you want), it shouldn't change anything at all, as things are already in the desired state.

There are all sorts of reasons why this doesn't happen, sometimes easy to fix and sometimes more difficult. We do run a daily ansible-playbook run over all our playbooks with '--check --diff', that is, checking what (if anything) would change and what the change would be.

I noticed on this report that all our builders were showing a change in the task that installs required packages. On looking more closely, it turns out the playbook was downgrading linux-firmware every run, and dnf-automatic was upgrading it (because the new one was marked as a security update). This was due to us specifying "kernel-firmware" as the package name, but only the older linux-firmware package provided that name, not the new one. Switching that to the new/correct 'linux-firmware' cleared up the problem.
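A quick way to spot this sort of mismatch is to ask dnf which packages actually provide the name a playbook uses. A small sketch:

    import subprocess

    # Ask dnf which packages provide each name the playbook installs.
    # A virtual provide that only maps to an old package (like "kernel-firmware"
    # did) is how ansible and dnf-automatic end up fighting over versions.
    for name in ("kernel-firmware", "linux-firmware"):
        out = subprocess.run(
            ["dnf", "repoquery", "--whatprovides", name],
            check=True, capture_output=True, text=True,
        ).stdout
        print(f"{name} is provided by:\n{out}")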

AI scraper update

I blocked a ton of networks last week, but then I spent some time looking more closely at what they were scraping. Turns out there were 2 mirrors of projects (one of the linux kernel and one of git) that the scrapers were really really interested in. Since those mirrors had 0 commits or updates in the 5 years since they were initially created, I just made them both return 403 in apache and... the load is really dramatically better. Almost back to normal. I have no idea why they wanted to crawl those old copies of things already available elsewhere, and I doubt this will last, but for now it gives us a bit of time to explore other options (because I am sure they will be back).

Datacenter Move

I'm going to likely be sending out a devel-announce / community blog post next week, but for anyone who is reading this a sneak preview:

We are hopefully going to gain at least some network on our new hardware around april 16th or so. This will allow us to get in and configure firmware, decide on setup plans, and start installing enough machines to bootstrap things up.

The plan currently is still to do the 'switcharoo' (as I am calling it) on the week of June 16th. That's the week after devconf.cz and two weeks after flock.

For Fedora linux users, there shouldn't be much to notice. Mirrorlists will all keep working; websites, etc., should keep going fine. pagure.io will not be directly affected (it's moving later in the year).

For Fedora contributors, monday and tuesday of that week we plan to "move" the bulk of applications and services. I would suggest just trying to avoid doing much on those days, as services may be moving around or broken in various ways. Starting wednesday, we hope to make sure everything is switched over and fix problems or issues. In some ideal world we could just relax then, but if not, thursday and friday will continue stabilization work.

The following week, the newest of the old machines in our current datacenter will be shipped to the new one. We will bring those up and add capacity on them (many of them will add openqa or builder resources).

That is at least the plan currently.

Spam on matrix

There's been another round of spam on matrix this last week. It's not just Fedora that's being hit, but many other communities that are on Matrix. It's also not like older communication channels (IRC) didn't have spammers on them at times in the past either. The particularly disturbing part on the matrix end is that the spammers post _very_ disturbing images. So, if you happen to look before they get redacted/deleted it's quite shocking (which is of course what the spammer wants). We have had (for a long while) a bot in place and it usually redacts things pretty quickly, but there's sometimes a lag in matrix federation, so folks on some servers still see the images until their server gets the redaction events.

There are various ideas floated to make this better, but due to the way matrix works, along with wanting to allow new folks to ask questions and interact, there are no simple answers. It may take some adjustments to the matrix protocol.

If you are affected by this spam, you may want to set your client to not 'preview' images (so it won't load them until you click on them), and be patient as our bot bans/kicks/redacts offenders.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/114286697832557392