Goodbye friends, I'll be back once either I or upstream patches my kernels. :)

Out of an abundance of caution, I am going to bring NAS offline until there is an upstream Linux kernel fix available for CVE-2019-8912.

Note that NAS may be offline for a few days as a result.

Take this time to be with your friends and family OTG-style.

Looks like NAS needed a reboot to get federation rolling again. No idea what the issue was and why it ended up in a half-broken state.

Thanks, Obama.

Performing some early morning server updates. Server will be down for a bit while I perform maintenance on the underlying host server.

Server has been updated to v2.6.2, which resolves the security vulnerability I reported to Gargron.

Part of the additional delay was due to a custom patch I had to come up with to mitigate a DoS issue I discovered and reported to @Gargron. Still waiting on an official fix, but my patch should at least cover us until one lands upstream.

The server was down for approximately three hours while maintenance was performed.
Now dust-free with fresh backups and more love and light.

NAS is going to go down for a full-rack reboot in a bit. Didn't get around to it last weekend or the weekend before, so the time is now.

Prepare yourselves.

NAS will go down for a period of 1-2 hours on Sunday.

I'm adding an additional UPS to the mix and moving the rack from a 15A fuse to a 20A.

Should be stable for the rest of today.

New surge protector + additional UPS added, cabling moved around front. Still need to re-cable and install the finger-feeder, but it's in progress. Will attach pictures once I re-cable.

In a few hours I'm going to take NAS down for at least a few hours, possibly all day and into tomorrow.

I will post pictures of the new (semi) organized rack when we return.

I'll likely be taking NAS up and down over the weekend as I reorganize the rack and re-cable some things.

...and updated to the latest non-vulnerable version of mastodon.

Fixed my biggest Mastodon issue. When building the Docker container, yarn frequently failed on a network request with an error like "unexpected end of file", which aborted the Docker build.

Fixed by adding the "--network-concurrency 1" option to "yarn install".
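
For anyone hitting the same failure, the whole fix is one flag on the install step; a minimal sketch (run from the Mastodon checkout during the image build, with the surrounding Dockerfile context omitted):

```sh
# Limit yarn to a single concurrent network request so one flaky connection
# to the registry/codeload doesn't abort the entire install (and with it,
# the docker build).
yarn install --network-concurrency 1
```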

The fact that yarn doesn't retry failed downloads gracefully, and that the workaround is to lower its concurrency, is annoying.

# Why did it take so long?

The backup took a few hours, but what really killed it was the shitty yarn/codeload/npm registry networking problems. Yarn's attitude seems to be: if we're having problems downloading a file, just give up on the whole build immediately. If the user wants to try again, they'll resubmit the job, riiiiiight?

I can fix this, but it means I need to stop being lazy and hook these builds up to CI and my internal registry, which I may or may not want to do.
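
If I ever stop being lazy, the registry half of that is just yarn configuration; a rough sketch, where the registry URL is a made-up placeholder for whatever the internal mirror would actually be:

```sh
# Point yarn at an internal registry mirror instead of hitting
# registry.yarnpkg.com directly. The URL below is a placeholder.
yarn config set registry https://registry.internal.example

# Or commit it per-project in .yarnrc so the docker/CI build picks it up:
echo 'registry "https://registry.internal.example"' >> .yarnrc
```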

# What happened during the downtime?

# Full-vm backup

I don't do whole-VM backups (as opposed to snapshots) often because of the time they take. The disk isn't large (500G), but it's not very sparse. I can fix that, but it's tedious and I have to use DiskDestroyer to do it.
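
For the curious, the tedious part looks roughly like the sketch below. I'm assuming DiskDestroyer is plain old dd, and the image paths are placeholders, not the real ones:

```sh
# Inside the guest: fill free space with zeros so it can be reclaimed later,
# then remove the filler file. Slow and tedious on a 500G disk.
dd if=/dev/zero of=/zero.fill bs=1M || true
rm -f /zero.fill
sync

# On the host, with the VM shut down: copy the image, seeking over zeroed
# blocks so the destination comes out sparse. Paths are placeholders.
dd if=/var/lib/vms/nas.img of=/backups/nas-sparse.img bs=1M conv=sparse
```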

# Update mastodon to 2.4.1

I was waiting for the first point-release before updating, but we're now running the latest version of Mastodon.
