
I should add that this isn't the first time this has happened, but it is the first time since I reduced PostgreSQL's RAM allocation in its configuration file. I was sure that was the problem, but apparently not. It had been almost a full week without any usage spikes or service interruptions of this kind, but all of a sudden my RAM and CPU are maxing out again at regular intervals. When this happens, the instance is unreachable until the issue resolves itself, which seems to take 5-10 minutes.

The usage spikes only started today; on the seven-day graph, they sit far above my idle usage.

I thought the issue had something to do with Lemmy periodically fetching some sort of remote data and slamming the database, which is why I reduced PostgreSQL's RAM allocation from the full 2 GB down to 1.5 GB. As you can see in the graph above, my idle resource utilization is really low. Since it's probably cut off in the image, I'll add that my disk utilization is currently 25-30%. Everything seemed fine for basically an entire week, but now the problem is back.
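
For context, the change lives in the custom PostgreSQL config that gets mounted into the postgres container on my setup; I don't remember the exact values off-hand, but it was along these lines:

# custom postgresql.conf mounted into the postgres container (values approximate)
shared_buffers = 512MB          # PostgreSQL's main shared cache; its largest fixed allocation
work_mem = 8MB                  # per sort/hash operation, so it multiplies across busy queries
effective_cache_size = 1536MB   # the 1.5 GB figure; a planner hint, not a hard allocation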

Does anyone know what is causing this? Clearly, something is happening that is loading the server more than usual.
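
In the meantime, since I can't SSH in while a spike is happening, I'm thinking of leaving a crude logger running so there's at least something to look at afterwards; a rough sketch:

# append system and container stats to a file every 30 seconds
while true; do
  { date; vmstat 1 3; docker stats --no-stream; } >> /var/log/spike.log 2>&1
  sleep 30
done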

[email protected] 1 point 1 year ago

It just happened again. I couldn't SSH in despite the limits I put on Docker's resources, which leads me to believe it may not be related to Docker or Lemmy at all.
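
For reference, the limits I'm talking about are compose-level memory caps, roughly like this (key names vary between compose versions; service names match my stack):

# docker-compose.yml excerpt (approximate)
services:
  lemmy:
    mem_limit: 512m     # hard cap on the container's memory
  postgres:
    mem_limit: 1536m    # in line with the reduced PostgreSQL allocation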

This time it lasted only 20 minutes or so. Once it was over, I could log back in and investigate a little, but there isn't much to see. lemmy-ui was killed at some point during the event (docker ps output below, some columns trimmed to fit):

IMAGE                        COMMAND                  CREATED      STATUS         PORTS                                              
nginx:1-alpine               "/docker-entrypoint.…"   9 days ago   Up 25 hours    80/tcp, 0.0.0.0:14252->8536/tcp, :::14252->8536/tcp
dessalines/lemmy-ui:0.18.0   "docker-entrypoint.s…"   9 days ago   Up 3 minutes   1234/tcp                                              
dessalines/lemmy:0.18.0      "/app/lemmy"             9 days ago   Up 25 hours                                                         
asonix/pictrs:0.4.0-rc.7     "/sbin/tini -- /usr/…"   9 days ago   Up 25 hours    6669/tcp, 8080/tcp                                    
mwader/postfix-relay         "/root/run"              9 days ago   Up 25 hours    25/tcp                                                
postgres:15-alpine           "docker-entrypoint.s…"   9 days ago   Up 25 hours

I still have no idea what's going on.
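
Next time it happens, I'll at least check whether the kernel OOM killer is what took lemmy-ui down; something like this (container name is a placeholder, since the NAMES column is trimmed above):

# did Docker record an OOM kill for the container?
docker inspect -f '{{.State.OOMKilled}} {{.State.ExitCode}}' CONTAINER_NAME
# kernel-level OOM killer messages around the time of the spike
dmesg -T | grep -i -E 'killed process|out of memory'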