abies_exarchia

joined 1 year ago
[–] [email protected] 24 points 22 hours ago (1 children)

This mobile app is not associated with the current open source project. Like I think it's a vestige from before they went open source. They recommend using Actual in your mobile browser for now, which works decently well.

[–] [email protected] 14 points 4 weeks ago (3 children)

He was gonna say he fed the fries to a house sparrow, and the guy waiting around the corner was going to harass him for feeding an invasive species. I guess the joke is that the author sees humans as the most invasive species (which, as an aside, is a bad take when you think about indigenous peoples of our species).

[–] [email protected] 2 points 1 month ago

I looked really hard in the original paper for where it says the rate of change is greater than it has been at any other time in the Phanerozoic, and for the life of me I could not find it. This article from 2013 states that climate is changing faster now than at any point in the last 65 million years (since the K–T extinction). So I was eager to see this updated number in the paper. The CleanTechnica article cites it from an interview with Judd.

My sense is that the paper does not specifically address rate because the time spans over which the rate of change is measured are dramatically different for contemporary climate change versus climate change over the last 500 million years. That is something Judd observed but did not try to get through the peer-review process, because it might be difficult and the paper is about so much more than just rate.

I think it's a little irresponsible of the CleanTechnica journalist here to use this as the title and main point. If you read the abstract and conclusion of the paper, the rate is not mentioned at all. The paper itself makes very important contributions, namely showing a strong, consistent link between climate change and CO2 concentration, showing that global mean surface temperature (GMST) varied over a range from 11 °C to 36 °C over the last 500 million years, and calculating that for every doubling of CO2 concentration the GMST increases by about 8 °C (which is a lot higher than we thought).
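
To put that 8 °C figure in perspective, here's a quick back-of-the-envelope calculation. It's entirely my own, not from the paper: it assumes the standard logarithmic CO2-temperature relationship, uses round reference concentrations I picked myself, and treats the paper's long-term Earth-system sensitivity as if it applied directly, which is a simplification.

```python
import math

# ~8 °C of GMST change per doubling of CO2, as reported in the paper.
SENSITIVITY_PER_DOUBLING = 8.0  # °C

def gmst_change(co2_start_ppm, co2_end_ppm):
    """Warming implied by a CO2 change, assuming dT = S * log2(C_end / C_start)."""
    return SENSITIVITY_PER_DOUBLING * math.log2(co2_end_ppm / co2_start_ppm)

# Round reference values: ~280 ppm pre-industrial, ~420 ppm today.
print(f"280 -> 420 ppm: {gmst_change(280, 420):.1f} °C")  # about 4.7 °C
print(f"280 -> 560 ppm: {gmst_change(280, 560):.1f} °C")  # 8.0 °C (one full doubling)
```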

[–] [email protected] 6 points 1 month ago

“We got a president that doesn’t know he’s alive”

[–] [email protected] 19 points 1 month ago (3 children)

Lol this guy thinks democrats are leftists

[–] [email protected] 2 points 3 months ago (1 children)

Yeah, I agree that there are some really tough contradictions there, and the material result definitely looks like accelerationism.

Thanks for reading it!

[–] [email protected] 11 points 3 months ago (1 children)

I would really love to see the source for that; not that I doubt you, I'm just very curious.

[–] [email protected] 2 points 4 months ago (2 children)

Sweet! Does it sync to mobile? I'm on iOS and haven't looked into Syncthing.

[–] [email protected] 2 points 5 months ago (4 children)

I have been using Obsidian for the past few months and I really enjoy it. It's not open source, but you can self-host a note-syncing service called Obsidian LiveSync, which I use to sync between my computers and phone.

 

Back when I was even less experienced in self-hosting, I set up my media/backup server using a RAIDZ1 array of 3 x 8TB disks. It's been running well for a while and I haven't had any problems or disk errors.

But today I read a post about 'pool design rules' stating that RAIDZ1 configurations should not use drives over 1TB because the chance of errors occurring during resilvering is high. I wish I had known this sooner.

What can I do about this? I send ZFS snapshots to two single large (18TB) hard drives for cold backups, so I have the capacity to do a migration to a new pool layout. But which layout? The same article I referenced above says not to use RAIDZ2 or RAIDZ3 with any fewer than 6 drives... I don't want to buy 3 more drives. Do I buy an additional 8TB drive (for a total of 4 x 8TB) and stripe across two sets of mirrors? Does that make any sense?
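
If the striped-mirror idea does make sense, my rough plan for the migration would look something like the sketch below. Everything in it is a placeholder (pool, snapshot, and device names are made up), and it assumes I restore from the cold-backup pool after rebuilding:

```python
import subprocess

def run(cmd, shell=False):
    """Echo each command and stop on the first error."""
    print("+", cmd if shell else " ".join(cmd))
    subprocess.run(cmd, check=True, shell=shell)

# 1. After double-checking that the cold backups are current, retire the old pool.
run(["zpool", "destroy", "tank"])

# 2. Recreate the pool as two mirrored vdevs striped together
#    (the three existing 8TB disks plus one new one).
run(["zpool", "create", "tank",
     "mirror", "/dev/disk/by-id/disk-a", "/dev/disk/by-id/disk-b",
     "mirror", "/dev/disk/by-id/disk-c", "/dev/disk/by-id/disk-d"])

# 3. Restore datasets, snapshots, and properties from the cold-backup pool
#    ("coldbackup" and "@latest" are placeholder names).
run("zfs send -R coldbackup/tank@latest | zfs receive -F tank", shell=True)
```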

Thank you!

 

I've been self-hosting my music in Navidrome for the last 3 or 4 years and in general I've been very satisfied. Before that I was using an old iPod. The key difference I haven't been able to recreate in Navidrome is the feeling of my own curated library, where I scroll through and recognize all the artists. When I set up Navidrome I ended up integrating a bunch of mp3 libraries (my father's, my own, and a few of my friends'). Because many people share the Navidrome server with me, I let them add stuff that they listen to. When I browse the artists in the iOS client play:Sub I end up not recognizing about half of them. I've found that I forget about a bunch of music because I rely so heavily on the 'search' function and don't scroll through my artist library like I did on the iPod back in the day.

I'm not sure how to address this, and I think it pretty significantly affects my relationship to my music library. I'm not sure if the solution is server-side or client-side, but essentially I want all the music to stay accessible in some way, while most of the time I just browse a selection of artists that I choose. I feel like creating a playlist is not sufficient because I don't know how I would browse by artist within a playlist (at least within the clients I'm familiar with). Has anyone felt this way? Any recs?

Thank you!

 

I currently have two computers: one has a big ZFS RAIDZ pool that I currently back everything up to. Right now, on my local computer, I use rsnapshot to do snapshot backups via rsync to the remote ZFS pool. I know I'm wasting a ton of space because I have snapshotting in the rsync backup, and then the ZFS pool is snapshotted every day.

Does it make sense to just do a regular rsync into a backup directory on the ZFS pool and then rely on the pool's own snapshotting for versioning?
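
Concretely, what I'm imagining is something like this sketch (hostname, paths, and dataset name are placeholders), where the only history comes from the pool's snapshots:

```python
import datetime
import subprocess

SRC = "/home/"                             # placeholder local source path
DEST = "backuphost:/tank/backups/laptop/"  # placeholder remote rsync target
DATASET = "tank/backups/laptop"            # placeholder remote ZFS dataset

# Mirror the source exactly (--delete), so each ZFS snapshot is a clean
# point-in-time copy instead of rsnapshot hard-link trees layered on top
# of pool snapshots.
subprocess.run(["rsync", "-a", "--delete", SRC, DEST], check=True)

# One snapshot per run on the remote pool replaces rsnapshot's rotation.
stamp = datetime.date.today().isoformat()
subprocess.run(
    ["ssh", "backuphost", "zfs", "snapshot", f"{DATASET}@{stamp}"],
    check=True,
)
```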

Maybe eventually I will put the local machine on ZFS and then just send the local ZFS snapshots over, but that will take some time. Thanks!

 

I've been after that golden goose of auto-imported transactions from my US banks into a self-hosted financial manager for some time now. Plaid doesn't work with some of my banks, and comes with a slew of privacy compromises anyway. I'm looking to import transactions into Firefly III (or Actual Budget) by scraping information from bank alert emails about my transactions. I wanted to write about it here in case someone has experience doing this or any tips, or in case this is a silly venture.

My plan is to set alerts for all transactions across my banks and direct them all to a single email address. Then I'll write a Python script that checks the inbox every 5 minutes or so; if it detects a new email, it will parse it according to some code I write, extract the amount and the payee, and then attempt to import the transaction (in this case, into Actual Budget) using the importTransactions API call.
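
Roughly, the polling piece would look something like the sketch below. Everything in it is a placeholder: the mail server, credentials, and alert regex are made up (every bank's wording will need its own pattern), and the actual import into Actual Budget is left as a stub.

```python
import email
import imaplib
import re
import time

IMAP_HOST = "imap.example.com"    # placeholder mail server
IMAP_USER = "alerts@example.com"  # placeholder address the bank alerts go to
IMAP_PASS = "app-password"        # placeholder credential

# Example pattern for one bank's alert wording; each bank needs its own.
ALERT_RE = re.compile(r"\$(?P<amount>[\d,]+\.\d{2}).*?at (?P<payee>[^.]+)")

def plain_text(msg):
    """Best-effort extraction of the text/plain body of a message."""
    if msg.is_multipart():
        for part in msg.walk():
            if part.get_content_type() == "text/plain":
                return part.get_payload(decode=True).decode(errors="replace")
        return ""
    return msg.get_payload(decode=True).decode(errors="replace")

def fetch_new_alerts():
    """Yield (payee, amount_in_cents) from unseen alert emails."""
    imap = imaplib.IMAP4_SSL(IMAP_HOST)
    imap.login(IMAP_USER, IMAP_PASS)
    imap.select("INBOX")
    _, data = imap.search(None, "UNSEEN")
    for num in data[0].split():
        _, msg_data = imap.fetch(num.decode(), "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        m = ALERT_RE.search(plain_text(msg))
        if m:
            cents = int(m.group("amount").replace(",", "").replace(".", ""))
            yield m.group("payee").strip(), cents
    imap.logout()

def push_to_actual(payee, cents):
    # Stub: the real version would hand payee/amount to importTransactions
    # (or an equivalent bridge), mapped onto the right account.
    print(f"would import: {payee} -> {cents} cents")

if __name__ == "__main__":
    while True:
        for payee, cents in fetch_new_alerts():
            push_to_actual(payee, cents)
        time.sleep(300)  # check every ~5 minutes
```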

As I see it, it's going to be a bit of a pain in the ass to set this up (I'm also a bit of a beginner, but I think I can make it work), and I just want to see if anyone else has tried this. Thanks!

 

Best bathroom I ever saw. Found on a hike in California near Sawtooth Peak in Sequoia National Park.
