Depending on your timezone, it's possibly a peak in traffic from the US, the July 4th overlap, the Reddit userbase jumping in, and the recent surge in shitposting about... sigh... beans.
This issue occurred a few weeks ago as well, even when we had very little traffic. We still have peanuts compared with other instances.
Interesting. My new instance just had a 10-ish minute CPU spike where it was unresponsive, even following a reboot.
Yeah, mine have technically happened after reboots, although things typically take a few days at least for the problem to creep up. This past time, I got a whole week in before things went to crap.
Oh, and for completeness:

- We've deleted the vast majority of the spam bots that spammed our instance, are currently on closed registration with applications, and have had no anomalous activity since.
- Our server sits at roughly 50% memory (1GB/2GB), 10% CPU (2 vCPUs), and 30% disk (15-20GB/60GB) until a spike. Disk utilization does not change during a spike.
- Our instance is relatively quiet, and we probably have no more than ten truly active users at this point. We have a potential uptick in membership, but this is still relatively slow and negligible.
- This issue has happened before, but I assumed it was fixed when I changed the PostgreSQL configuration to use less RAM. This is still the longest lead-up time before the spikes started.
- When the spike resolves itself, the instance works as expected. The service interruptions seem to stem from a drastic increase in resource utilization, which could be caused by some software component I'm not aware of. I used the Ansible install for Lemmy and have only modified certain configuration files as required. For the most part, I've only added a higher client_max_body_size in the nginx configs for larger images, and added settings for an SMTP relay to the main config.hjson file. The spikes occurred before these changes, which leads me to believe they are caused by something I have not yet explored.
- These issues occurred on both 0.17.4 and 0.18.0, which seems to indicate it's not a new issue stemming from a recent source code change.
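For reference, the nginx change mentioned above is a single directive inside the server block that proxies Lemmy; a sketch (the path and the 25M value are illustrative, not the actual config):

```
# e.g. /etc/nginx/sites-enabled/lemmy.conf (path and value are illustrative)
server {
    # Raise nginx's 1 MB default so larger image uploads aren't rejected
    # with HTTP 413 before they ever reach pict-rs.
    client_max_body_size 25M;
}
```

A change like this only affects upload handling, so it is an unlikely culprit for CPU spikes, consistent with the spikes predating it.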
I've been seeing similar since upgrading to 0.18. Upgraded to 0.18.1-rc.9 yesterday... haven't seen it recur... yet.
Here is an example I happened to be at my PC for:
The problem is that an update will inherently involve a restart of everything, which tends to solve the problem anyway. Whether the update fixed things or restarting things temporarily did is only something you can find out in a few days.
Yeah, I've gone over 24 hours now without it occurring... but not calling it "fixed" until at least a week.
I had the same thing happen. Max CPU usage, couldn't even ssh in to fix it and had to reboot from aws console. Logs don't show anything unusual apart from postgres restarting 30 minutes into the spike, possibly from being killed by the system.
You say yours solved itself in 10 minutes; mine didn't seem to stop after 2 hours, so I rebooted. It could be that my VPS is just 1 CPU, 1 GB RAM, so it took longer doing whatever it was doing.
Now I set up RAM and CPU limits following this question, and an alert so I can hopefully ssh in and figure out what's happening when it's happening.
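For anyone wanting to do the same, one way to cap container resources in a Docker Compose file looks like this (service names and limit values are assumptions; the Ansible install may name things differently):

```
# docker-compose.yml fragment (illustrative values)
services:
  lemmy:
    mem_limit: 512m   # hard memory cap; the container is OOM-killed above this
    cpus: "1.5"       # fraction of total CPU time the container may use
  postgres:
    mem_limit: 1g
    cpus: "1.5"
```

Note that limits like these only constrain the containers themselves; if the spike originates outside Docker, the host can still become unreachable.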
Any suggestions on what I should be looking at if I manage to get into the system?
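Not the original poster, but the usual first stops during a spike on a systemd host are load average, per-container usage, and the kernel log for OOM kills. The live commands below are comments since they need the affected host; the runnable part greps a canned sample line just to show what an OOM-killer entry looks like:

```shell
# Live checks during a spike (require a shell on the affected host):
#   uptime                      # load average vs. vCPU count
#   docker stats --no-stream    # which container is eating CPU/RAM
#   dmesg -T | grep -i 'out of memory'
#   journalctl -k --since '1 hour ago' | grep -i oom
#
# What an OOM-killer line looks like, using a canned sample so this
# snippet runs anywhere:
sample='Out of memory: Killed process 1234 (postgres) total-vm:2097152kB'
printf '%s\n' "$sample" | grep -ci 'out of memory'   # prints 1
```

An OOM hit on postgres in the kernel log would line up with the observation above that postgres restarted mid-spike.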
I rebooted about 5 minutes into it. I'm running a t2.micro instance, but it went back into high CPU after the reboot and I was still unable to ssh in for another 5 minutes. I rebooted again to be sure, and it came back almost immediately.
I'll save this to look at later, but I did use PGTune to set my total RAM allocation for PostgreSQL to be 1.5GB instead of 2. I thought this solved the problem initially, but the problem is back and my config is still at 1.5GB (set in MB to something like 1536 MB, to avoid confusion).
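For comparison, a PGTune-style configuration for a ~1.5 GB budget looks roughly like this (values are illustrative, not the poster's actual settings):

```
# postgresql.conf fragment for a ~1.5 GB budget (illustrative, PGTune-style)
max_connections = 20
shared_buffers = 384MB          # ~25% of the budget
effective_cache_size = 1152MB   # ~75%; a planner hint, not an allocation
work_mem = 4MB                  # per sort/hash operation, per connection
maintenance_work_mem = 96MB
```

One caveat: effective_cache_size is only a planner hint, and work_mem is multiplied per connection and per operation, so the real peak usage can exceed the nominal budget under load.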
It just happened again. I couldn't ssh in despite the limit on docker resources, which leads me to believe it may not be related to docker or Lemmy.
This time it lasted only 20 minutes or so. Once it was over I could log back in and investigate a little. There isn't much to see: lemmy-ui was killed sometime during the event.
IMAGE COMMAND CREATED STATUS PORTS
nginx:1-alpine "/docker-entrypoint.…" 9 days ago Up 25 hours 80/tcp, 0.0.0.0:14252->8536/tcp, :::14252->8536/tcp
dessalines/lemmy-ui:0.18.0 "docker-entrypoint.s…" 9 days ago Up 3 minutes 1234/tcp
dessalines/lemmy:0.18.0 "/app/lemmy" 9 days ago Up 25 hours
asonix/pictrs:0.4.0-rc.7 "/sbin/tini -- /usr/…" 9 days ago Up 25 hours 6669/tcp, 8080/tcp
mwader/postfix-relay "/root/run" 9 days ago Up 25 hours 25/tcp
postgres:15-alpine "docker-entrypoint.s…" 9 days ago Up 25 hours
I still have no idea what's going on.
I’m having similar issues with my instance where I’m the only one on it. I allocated more RAM to it now to see if it does anything.
I did that a while ago, and unfortunately, it didn't really help. I don't think it's an issue of RAM, but rather a daemon or something periodically going nuclear with resource utilization. A configuration issue, perhaps?
Sounds more like it, yes. I’ll keep an eye on it.
Maybe we should create a post in the support community?
You can if you want. Reply here with the link if you do (or mention me if that's a thing on Lemmy).
I’ve limited the resources available to Lemmy and pictrs and will see if it helps.