tatterdemalion

joined 1 year ago
 

I ask because it would be nice to use qBittorrent's "I2P mixed mode" feature, but I want to keep my clearnet traffic on the VPN.

Background

I have I2PD running only on my home gateway for better tunnel uptime.

To ensure that torrent traffic never escapes the VPN tunnel, I have configured qBittorrent to use only the VPN's WireGuard interface.

Problem

I think this means qBittorrent's I2P traffic will also flow into the VPN tunnel, but then the VPN host won't know how to route it back to my home gateway, where the SAM bridge is running.

 

I've configured my i2pd proxy correctly, and things are somewhat working; I was able to visit notbob.i2p. But Firefox sometimes insists on replacing "http" with "https" when I click a link or even type the URL into the address bar manually. I have "HTTPS-Only Mode" turned off, "browser.fixup.fallback-to-https" set to false, and "network.stricttransportsecurity.preloadlist" set to false.

I tried spying on the HTTP traffic in the web dev tools, and I see the request fail with NS_ERROR_UNKNOWN_HOST. This doesn't happen with the xh CLI HTTP client, so Firefox is doing something weird with name resolution. I also made sure to turn off Firefox's DNS over HTTPS setting, but it didn't seem to make a difference.

I assume that name resolution needs to happen in i2pd. How can I force Firefox to let that happen?

Update: Chrome works fine.

Update: I started fresh and simplified the setup and it seems fixed. I'm not entirely sure why. The only things I've changed from default are DoH and the manual HTTP proxy.
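
For anyone hitting the same thing, the surviving non-default prefs boil down to roughly this user.js (a sketch, not my exact config; 192.168.1.1 is a stand-in for whatever host runs i2pd, and 4444 is i2pd's default HTTP proxy port):

```js
// user.js — rough sketch of the only non-default prefs in the working setup.
// 192.168.1.1 is a placeholder for the gateway running i2pd.
user_pref("network.proxy.type", 1);              // manual proxy configuration
user_pref("network.proxy.http", "192.168.1.1");  // i2pd HTTP proxy host
user_pref("network.proxy.http_port", 4444);      // i2pd's default HTTP proxy port
user_pref("network.trr.mode", 5);                // DNS over HTTPS: explicitly off
```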

[–] [email protected] 12 points 3 days ago* (last edited 3 days ago) (2 children)

My point wasn't so much that I think RED is shady but that exposing my IP seems like an unnecessary requirement to join. Why can I not have my membership tracked via an anonymous account? If they are concerned about account harvesting or something, then the interview already seems like a good enough measure, accompanied by seed ratio minimums.

 

I was just reading through the interview process for RED, and they specifically forbid the use of a VPN during the interview. I don't understand this requirement; it seems like it would just leak your IP address to the IRC host, which could potentially be used against you in a honeypot scenario. Once they have your IP, they could link it with the credentials you use with the tracker, regardless of whether you used a VPN while torrenting.

[–] [email protected] 2 points 6 days ago

Thanks for all the info. This makes me want to try it even more now!

[–] [email protected] 4 points 6 days ago

There are plenty of good resources online. Here are some topics you probably wouldn't see in an intro algorithms course, all of which I've actually used in my career. I highly recommend finding the motivation for each of these in a real application rather than just learning them abstractly; as a taste, there's a small reservoir sampling sketch after the list.

  • bloom filter
  • btree
  • b+ tree
  • consensus algos (Paxos, Raft, VSR, etc)
  • error correction codes (Hamming, Reed-Solomon, etc)
  • garbage collection (mark+sweep, generational, etc)
  • generational arena allocator
  • lease (i.e. distributed lock)
  • log-structured merge trees
  • min-cost + max-flow
  • request caching and coalescing
  • reservoir sampling
  • spatial partition (BVH, kd-tree, etc)
  • trie
  • write-ahead log
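
As promised, here's what I mean by learning these through application: reservoir sampling is just a few lines, but it shows up whenever you need a uniform sample from a stream of unknown length. A minimal sketch in Rust (Algorithm R, using the rand 0.8 API):

```rust
use rand::Rng;

/// Uniformly sample `k` items from an iterator of unknown length (Algorithm R):
/// keep the first k items, then replace a kept item with probability k/(i+1).
fn reservoir_sample<T, I: Iterator<Item = T>>(iter: I, k: usize) -> Vec<T> {
    let mut rng = rand::thread_rng();
    let mut reservoir: Vec<T> = Vec::with_capacity(k);
    for (i, item) in iter.enumerate() {
        if i < k {
            reservoir.push(item);
        } else {
            // j is uniform in [0, i]; the new item survives iff j lands in the reservoir.
            let j = rng.gen_range(0..=i);
            if j < k {
                reservoir[j] = item;
            }
        }
    }
    reservoir
}
```

Feed it any iterator (log lines, events off a socket, whatever) and you get k uniformly chosen items in a single pass.
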
[–] [email protected] 2 points 6 days ago (2 children)

That's good to hear. Is it as easy to navigate and customize as Sway/i3?

[–] [email protected] 1 points 1 week ago

2001: A Space Odyssey

Just epic classical music.

Spirited Away

Joe Hisaishi fits the fantastical setting perfectly. Lots of bittersweet, exciting, and meditative moods, each placed in the perfect scene.

Tarantino's movies usually have a great song selection.

Full Metal Jacket

Hearing "Bird is the Word" juxtaposed with the Vietnam War is just a crazy choice that paid off.

Nobody (2021)

Turned me on to Luther Allison. Not a "best soundtrack" but it definitely stands out.

[–] [email protected] 4 points 1 week ago

O Brother is great for some classic folk.

[–] [email protected] 0 points 1 week ago

Oh yeah, what kind of phone do you have?

[–] [email protected] 5 points 1 week ago (1 children)

2

Looks like some cursed 3D model out of Jimmy Neutron.

[–] [email protected] 3 points 1 week ago (1 children)

Is that because you have a daemon in your brain, swapping neurons to force you to pronounce it wrong?

[–] [email protected] 5 points 1 week ago (2 children)

Those specs are crazy. Probably going to cost a fortune.

 

I'm preparing for a new PC build, and I decided to try a new atomic OS after having been with NixOS for about a year.

First I tried Kinoite, then Bazzite, but even though KDE has a lot of features, I found it incredibly buggy, with generally poor performance, especially in Firefox. I don't really have time to diagnose these issues, so I figured I'd put in a little more effort and migrate my Sway config to Fedora Sway Atomic.

I'm glad I did. The vanilla install of Fedora Sway is awesome. No bloat and very usable. I haven't noticed any bugs. Performance is excellent. And it was very straightforward to apply my sway config on top without losing the nice menu bar, since Fedora puts their sway config in /usr/share/sway.
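
In case it helps anyone, the layering amounts to keeping Fedora's includes and adding overrides in ~/.config/sway/config. Rough sketch (the config.d path is just what I found on my install, and foot is only an example terminal):

```
# ~/.config/sway/config — personal config layered on Fedora's defaults.
# Pull in Fedora's drop-ins first so the stock bar, backgrounds, etc. survive.
include /usr/share/sway/config.d/*

# Personal overrides go after the include.
set $mod Mod4
bindsym $mod+Return exec foot
```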

I'm also quite happy with the middle ground of using an OSTree-based Linux plus Nix and Home Manager for my user config. I always thought that configuring the system-level stuff in Nix was the hardest part with the least payoff, while the declarative config for my dev tools and desktop environment was the most productive part.
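
Concretely, the Home Manager half is just a small home.nix along these lines (a minimal sketch; the package list, identity, and paths are placeholders):

```nix
{ pkgs, ... }: {
  # Declarative user environment on top of the immutable base image.
  home.stateVersion = "24.05";
  home.packages = with pkgs; [ ripgrep fd jq ];  # placeholder tool picks

  programs.git = {
    enable = true;
    userName = "tatterdemalion";     # placeholder identity
    userEmail = "[email protected]";  # placeholder identity
  };

  # Sway itself comes from the OS image; Home Manager only manages the dotfile.
  xdg.configFile."sway/config".source = ./sway/config;
}
```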

I originally tried NixOS because I wanted bleeding-edge software without frequent breakage, and I bought into the idea of a declarative OS configuration with versioned updates and rollback. It worked out well, but I'd be lying if I said learning NixOS wasn't a big time investment. I feel like there's a sweet spot: container images for the base OS layer, then Nix and Home Manager for the stuff that's closer to your actual workflows.

I might even explore building my own OS image on top of Universal Blue's Nvidia image.

Hope this path forward stays fruitful! I urge anyone who's interested in immutable distros to give this a try.

14
submitted 8 months ago* (last edited 8 months ago) by [email protected] to c/[email protected]
 

I've never felt the urge to make a PL until recently. I've been quite happy with a combination of Rust and Julia for most things, but after learning more about BEAM languages, Lean 4, Zig's comptime, and some newer languages implementing algebraic effects, I think I at least have a compelling set of features I'd like to see in a new language. All of these features are inspired by actual problems I have programming today.

I want to make a language that achieves the following (non-exhaustive):

  • significantly faster to compile than Rust
  • at least has better performance than Python
  • processes can be hot-reloaded like on the BEAM
  • most concurrency is implemented via actors and message passing
  • built-in pub/sub buses for broadcast-style communication between actors
  • runtime is highly observable and introspective, providing things like tracing, profiling, and debugging out of the box
  • built-in API versioning semantics with automatic SemVer violation detection and backward compatible deployment strategies
  • can be extended by implementing actors in Rust and communicating via message passing (see the sketch after this list)
  • multiple memory management options, including GC and arenas
  • opt-in linear types to enable forced consumption of resources
  • something like Jane Street's OCaml "modes" for simpler borrow checking without lifetime variables
  • generators / coroutines
  • Zig's comptime that mostly replaces macros
  • algebraic data types and pattern matching
  • more structural than nominal typing; some kind of reflection (via comptime) that makes it easy to do custom data layouts like structure-of-arrays
  • built-in support for multi-dimensional arrays, like Julia, plus first-class support for database-like tables
  • standard library or runtime for distributed systems primitives, like mesh topology, consensus protocols, replication, object storage and caching, etc
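
To make the Rust-actor point above concrete: on the host side I imagine it looking like ordinary channel-and-thread code. A rough sketch (Msg and spawn_counter are hypothetical names, and std::sync::mpsc stands in for whatever the real FFI boundary would be):

```rust
use std::sync::mpsc;
use std::thread;

// Messages the counter actor understands.
enum Msg {
    Add(u64),
    Get(mpsc::Sender<u64>), // reply channel
}

fn spawn_counter() -> mpsc::Sender<Msg> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let mut total = 0;
        // The actor owns its state; the only way to touch it is a message.
        for msg in rx {
            match msg {
                Msg::Add(n) => total += n,
                Msg::Get(reply) => {
                    let _ = reply.send(total);
                }
            }
        }
    });
    tx
}

fn main() {
    let counter = spawn_counter();
    counter.send(Msg::Add(2)).unwrap();
    counter.send(Msg::Add(40)).unwrap();

    let (reply_tx, reply_rx) = mpsc::channel();
    counter.send(Msg::Get(reply_tx)).unwrap();
    println!("total = {}", reply_rx.recv().unwrap()); // total = 42
}
```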

I think with this feature set, we would have a pretty awesome language for working in data-driven systems, which seem to be increasingly common today.

One thing I can't decide yet, mostly due to ignorance, is whether it's worth it to implement algebraic effects or monads. I'm pretty convinced that effects, if done well, would be strictly better than monads, but I'm not sure how feasible it is to incorporate effects into a type system without requiring a lot of syntactical overhead. I'm hoping most effects can be inferred.

I'm also nervous that if I add too many static analysis features, compile times will suffer. It's really important to me that compile times stay short enough to keep iteration productive.

Anyway, I'm just curious if anyone thinks this would be worth implementing. I know it's totally unbaked, so it's hard to say, but maybe it's already possible to spot issues with the idea, or suggest improvements. Or maybe you already know of a language that solves all of these problems.

 
 

Who are these for? People who use the terminal but don't like running shell commands?

OK sorry for throwing shade. If you use one of these, honestly, what features do you use that make it worthwhile?

 

More specifically, I'm thinking about two different modes of development for a library (private to the company) that's already relied upon by other libraries and applications:

  1. Rapidly develop the library "in isolation" without being slowed down by keeping all of the users in sync. This causes more divergence and merge effort the longer you wait to upgrade users.
  2. Make all changes in lock-step with users, keeping everyone in sync for every change that is made. This will be slower and might result in wasted work if experimental changes are not successful.

As a side note: I believe these approaches are similar in spirit to the continuum of microservices vs monoliths.

Speaking from recent experience, I feel like I'm repeatedly finding that users of my library have built towers upon obsolete APIs, because there have been multiple phases of experimentation that necessitated large changes. So with each change, large amounts of code need to be rewritten.

I still think that approach #1 was justified during the early stages of the project, since I wanted to identify all of the design problems as quickly as possible through iteration. But as the API is getting closer to stabilization, I think I need to switch to mode #2.

How do you know when it's the right time to switch? Are there any good strategies for avoiding painful upgrades?

 
48
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

I just commented on this post and it got removed very quickly. Then I noticed that all of the comments had been removed and the post is locked.

I cannot understand why this happened, as the comments section had seemed pretty reasonable to me.

This seems like bad moderation and I'm now less inclined to post or comment in the world news community. What should I do?

I tried messaging a mod who seemed to be online and actively posting, but I got no response.

 

After moving from lemmy.ml to programming.dev, I've noticed that web requests are served much more quickly, even for content from federated instances like lemmy.ml and lemmy.world.

It seems like this shouldn't make such a big difference. If a large instance is overloaded, it's overloaded, whether the traffic is coming from clients with accounts on that instance or from other federated instances.

Can this be explained entirely by response caching?
