Hi,
I know LLMs and machine learning get a bad rap in leftist circles (for good reason), but if you haven't already considered some kind of content moderation 'pre-filter', I think it could hugely amplify a (relatively) small mod team's ability to manage the forum.
I've attached a URL with an example of the kind of approach that could be trialed in moderation flows (e.g., if the likelihood a post is harmful exceeds a high threshold, put it in a moderation queue for manual approval). If there's an easy way to pull, say, a structured bulk of messages from the last couple of years with an associated 'moderation action taken' column, you could also train a more bespoke classifier that estimates the likelihood a message would have drawn moderator action, based on your own recent moderation history. You could then tune the threshold so the review queue stays manageable, raising it until it only catches a workable number of (hopefully) the most toxic messages.
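To make that concrete, here's a minimal sketch of the flow I'm describing, with a hypothetical `score_post` standing in for whatever model you'd actually use (an off-the-shelf toxicity classifier, or something trained on your own mod logs):

```python
def score_post(text: str) -> float:
    """Placeholder: return a 0-1 likelihood the post would draw moderator action."""
    raise NotImplementedError("swap in a real classifier here")

def route_post(text: str, threshold: float) -> str:
    """Hold likely-harmful posts for manual review; publish the rest."""
    return "review_queue" if score_post(text) >= threshold else "publish"

def tune_threshold(recent_scores: list[float], max_queue_size: int) -> float:
    """Raise the threshold until the review queue is manageable: cut off at the
    Nth-highest recent score, so roughly the top N posts get held for review."""
    if len(recent_scores) <= max_queue_size:
        return 0.0  # everything would fit in the queue anyway
    return sorted(recent_scores, reverse=True)[max_queue_size - 1]
```

The function names and the 0-1 score are just assumptions for illustration; the real version would depend on whatever model and forum software you're working with.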
But, yeah, anyway... I have experience as an ML researcher, I now work in the space professionally, and I'd be happy to discuss, advise, or help build something if there's any interest.
As somebody who isn't very active socially online and has mainly lurked since the OLD chapo days, I totally understand if there's reluctance. I'm happy to talk through options that could give you confidence I'm not a wrecker, if that's a concern.
If this comm is the wrong place for this but you think there might be interest, feel free to point me elsewhere or DM me.