sunstoned

joined 4 months ago
[–] [email protected] 7 points 1 week ago* (last edited 1 week ago)

Well that's odd!

Here you go:

27
submitted 1 week ago* (last edited 1 week ago) by [email protected] to c/[email protected]

I've been playing around with my home office setup. I have multiple laptops to manage (thanks work) and a handful of personal devices. I would love to stop playing the "does this charging brick put out enough juice for this device" game.

I have:

  • 1x 100W Laptop
  • 1x 60W Laptop
  • 1x 30W Router
  • 1x 30W Phone
  • 2x Raspberry Pis

I've been looking at multi-device bricks like this UGREEN Nexode 300W but hoped someone might know of a similar product for less than $170.

Saving a list of products that are in the ballpark below, in case they help others. Unfortunately they just miss the mark for my use case.

  • Shargeek S140: $80, >100W peak delivery for one device, but drops below that as soon as a second device is plugged in.
  • 200W Omega: at $140 it's a little steep. Plus it doesn't have enough ports for me. For these reasons, I'm out.
  • Anker Prime 200W: at $80 this seems like a winner, but ~~they don't show what happens to the 100W outputs when you plug in a third (or sixth) device. Question pending with their support dept.~~ it can't hit 100W on any port with 6 devices plugged in.
  • Anker Prime 250W: thanks FutileRecipe for the recommendation! This hits all of the marks and comes in around $140 after a discount. Might be worth the coin.
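For anyone doing the same math at home, a quick sketch of the worst-case power budget for the device list above. The Raspberry Pi wattages are my assumption (~15W each, roughly a Pi 4/5 under load); everything else is from the list:

```python
# Rough power-budget check for a multi-port charging brick.
# Pi wattages are assumed (~15 W each); other figures are from the list above.
devices = {
    "100W laptop": 100,
    "60W laptop": 60,
    "30W router": 30,
    "30W phone": 30,
    "raspberry pi #1": 15,  # assumption
    "raspberry pi #2": 15,  # assumption
}

total = sum(devices.values())
print(f"Worst-case simultaneous draw: {total} W")  # 250 W

# Compare against some of the bricks mentioned above
for charger_watts in (140, 200, 250, 300):
    verdict = "covers it" if charger_watts >= total else "falls short"
    print(f"{charger_watts} W brick: {verdict} by {abs(charger_watts - total)} W")
```

Which is why the 250W brick squeaks by on paper, assuming the brick can actually deliver its rated total across all ports at once (per-port limits are a separate question).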

If you've read this far, thanks for caring! You're why this corner of the internet is so fun. I hope you have a wonderful day.

[–] [email protected] 2 points 2 weeks ago

> Please don't assume anything, it's not healthy.

Explicitly stating assumptions is necessary for good communication. That's why we do it in research. :)

> it depends on the license of that binary

It doesn't, actually. A binary alone, by definition, is not open source as the binary is the product of the source, much like a model is the product of training and refinement processes.

> You can't just automatically consider something open source

On this we agree :) which is why saying a model is open source or slapping a license on it doesn't make it open source.

> the main point is that you can put closed source license on a model trained from open source data

  1. Actually, the ability to legally produce closed-source material depends heavily on how the data is licensed in that case
  2. This is not the main point, at all. This discussion is regarding models that are released under an open source license. My argument is that they cannot be truly open source on their own.
[–] [email protected] 5 points 2 weeks ago (2 children)

Quite aggressive there friend. No need for that.

You have a point that an intensive and costly training process is a factor in the usefulness of a truly open source gigantic model. I'll assume here that you're referring to the likes of Llama3.1's heavy variant or a similarly large LLM. Note that I wasn't referring to gigantic LLMs specifically when referring to "models". It is a very broad category.

However, that doesn't change the definition of open source.

If I have an SDK to interact with a binary and "use it as [I] please" does that mean the binary is then open source because I can interact with it and integrate it into other systems and publish those if I wish? :)

[–] [email protected] 0 points 2 weeks ago

Do you plan to sue the provider of your "open source" model? If so, would the goal be to force the provider to be in full compliance with the license (access to their source code and training set)? Would the goal be to force them to change the license to something they comply with?

[–] [email protected] 2 points 2 weeks ago (2 children)

You would be obligated, if your goal were to be complying with the spirit and description of open source (and sleeping well at night, in my opinion).

Do you have the source code and full data set used to train the "open source" model you're referring to?

[–] [email protected] 7 points 2 weeks ago (15 children)

My point precisely :)

A pre-trained model alone can't really be open source. Without the source code and full data set used to generate it, a model alone is analogous to a binary.

[–] [email protected] 6 points 2 weeks ago (21 children)

If I license a binary as open source does that make it open source?

[–] [email protected] 1 points 2 weeks ago

Those bar mittens are killer too.

https://barmitts.com/

[–] [email protected] 3 points 2 weeks ago (23 children)

What makes it open source?

[–] [email protected] 3 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

Excellent notes. If I could add anything it would be on number 4 -- just. add. imagery. For the love of your chosen deity, learn the shortcut for a screenshot on your OS. Use it like it's astro glide and you're trying to get a Cadillac into a dog house.

The little red circles or arrows you add in your chosen editing software will do more to convey a point than writing a paragraph on how to get to the right menu.

[–] [email protected] 2 points 3 weeks ago* (last edited 3 weeks ago)

Believe what you will. I'm not an authority on the topic, but as a researcher in an adjacent field I have a pretty good idea. I also self host Ollama and SearXNG (a metasearch engine, to be clear, not a first party search engine) so I have some anecdotal inclinations.

Training even a teeny tiny LLM or ML model can run a typical gaming desktop at 100% for days. Sending a query to a pretrained model hardly even shows up in htop unless the model is gigantic. Even the gigantic models only spike the CPU for a few seconds (until the query is complete). SearXNG, again anecdotally, spikes my PC about the same as Mistral in Ollama.
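To put that anecdote in rough numbers: all figures here are illustrative assumptions (a ~400W desktop, a 3-day toy training run, ~5 seconds of load per query), not measurements, but the ratio is the point:

```python
# Back-of-envelope: training vs. inference energy on one desktop.
# All inputs are assumptions for illustration, not measurements.
DESKTOP_WATTS = 400                  # assumed full-load draw

training_seconds = 3 * 24 * 3600     # "run at 100% for days"
query_seconds = 5                    # "spikes the CPU for a few seconds"

# watts * seconds -> kWh (1 kWh = 3.6e6 joules)
training_kwh = DESKTOP_WATTS * training_seconds / 3.6e6
query_kwh = DESKTOP_WATTS * query_seconds / 3.6e6

print(f"Toy training run: {training_kwh:.1f} kWh")
print(f"One local query:  {query_kwh:.6f} kWh")
print(f"Queries per training run's energy: {training_kwh / query_kwh:,.0f}")
```

Even with these toy numbers, one small training run costs as much energy as tens of thousands of local queries, and real frontier-scale training runs are many orders of magnitude beyond that.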

I would encourage you to look at more explanations like the one below. I'm not just blowing smoke, and I'm not dismissing the very real problem of massive training costs (in money, energy, and water) that you're pointing out.

https://www.baeldung.com/cs/chatgpt-large-language-models-power-consumption

[–] [email protected] 0 points 3 weeks ago* (last edited 3 weeks ago) (2 children)

I don't disagree, but it is useful to point out that there are two truths in what you wrote.

The energy use of one person running an already trained model on their own hardware is trivial.

Even the energy use of many many people using already trained models (ChatGPT, etc) is still not the problem at hand (probably on the order of the energy usage from a typical search engine).

The energy use in training these models (the appendage measuring contest between tech giants pretending they're on the cusp of AGI) is where the cost really ramps up.

 

Is anybody self hosting Beeper bridges?

I'm still wary of privacy concerns, as they basically just have you log into every other service through their app (which, as I understand it, happens in the closed-source part of Beeper's product).

The linked GitHub README also states that the benefit of hosting their bridge setup is basically "hosting Matrix is hard," which I don't necessarily believe.
