Apologies.
Primary DB server is proper fsck'd. It corrupted the database on the secondary when it went.
We restored from a backup taken at 8pm last night (the last one that doesn't appear to contain the corruption), so posts made after that time may be missing.
Because we are running from the secondary, which normally handles backups, the entire site will be unusable while backups are taken, most likely in the early morning between 6 and 7am, give or take. This is because the database has to be locked for the duration of the backup.
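For anyone wondering why a backup takes the whole site down with it, here's a rough sketch of the shape of the dump job, in Python for illustration only (the paths and exact flags here are my shorthand, not the literal job we run):

import datetime
import pathlib
import subprocess

BACKUP_DIR = pathlib.Path("/var/backups/mysql")  # placeholder path

def take_backup() -> pathlib.Path:
    # --lock-all-tables holds a global read lock for the whole dump,
    # so writes (and effectively the site) stall until it finishes.
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M")
    out = BACKUP_DIR / f"all-databases-{stamp}.sql"
    with out.open("wb") as fh:
        subprocess.run(
            ["mysqldump", "--all-databases", "--lock-all-tables"],
            stdout=fh,
            check=True,
        )
    return out

if __name__ == "__main__":
    print(take_backup())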
It will be a few days before a pair of new database servers can be stood up, and normality returns.
I hear there is this newfangled "Cloud" thing; apparently you just throw everything into the cloud and it takes care of itself by magic. Sounds like it would save you loads of time and money. You should look into it.
(At least I assume that's how the conversations go at Board level in companies these days...)
It's sorta nearly on a cloud, depending on definition...
But MySQL sucks, as you well know, and is perfectly happy to replicate its corruption everywhere.
That's why we (normally) back it up hourly, and have a server dedicated to this single task.
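The hourly rotation is roughly this shape, again just a sketch with placeholder retention and paths rather than the real config:

import pathlib

BACKUP_DIR = pathlib.Path("/var/backups/mysql")  # placeholder path
KEEP = 48  # placeholder: two days of hourly dumps

def prune_old_dumps() -> None:
    # Keep only the newest KEEP dumps; since replication happily copies
    # corruption downstream, an old clean dump is what actually saves you.
    dumps = sorted(BACKUP_DIR.glob("all-databases-*.sql"))
    for old in dumps[:-KEEP]:
        old.unlink()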
Did I mention MySQL sucks?