Scipitie

joined 8 months ago
[–] [email protected] 5 points 4 days ago (1 children)

That... what you describe is mesh wifi: APs plus roaming. That's a meshed network.

[–] [email protected] 2 points 5 days ago (1 children)

The screenshot has the criteria included though. Relevant part: it must either be for children or for everyone.

[–] [email protected] 8 points 6 days ago

I use Lemmy in two ways. Whitelist: show me my subscriptions and only those (Subscribed). Or blacklist: show me everything except the things I never want to see.

The latter led me to this thread! They're two different experiences for me, and I get a bit out of my interest bubble from time to time.

[–] [email protected] 2 points 6 days ago

Because it's basically axiomatic: ssh uses all keys it knows about. The system can't tell you why it isn't using something it doesn't know it should be able to use. You can pass the key file explicitly with -i to check whether the problem is broken file content or the wrong location.

That said: this doesn't make -v more useful for cases like this, just because there's a reason!
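A quick way to see what ssh actually knows about, sketched with an example host and key path (both are placeholders):

```shell
# Ask ssh which identity files it would consider for a host.
# -G only prints the resolved client config; no connection is made.
ssh -G example.com | grep -i identityfile

# Force a specific key to test whether the file itself is the problem;
# ~/.ssh/id_ed25519 is just an example path.
ssh -i ~/.ssh/id_ed25519 -v user@example.com
```

If your key doesn't show up in the `-G` output, the config is the issue rather than the key file.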

[–] [email protected] 1 points 6 days ago

I'd try ChatGPT for that! :)

But to give you a very brief rundown: if you have no experience in any of these areas and are self-learning, you should expect a long ramp-up phase! Perhaps there is an easier route, but if there is, I'm not familiar with it.

First, familiarize yourself with server setups. If you only want to host this you won't have to go into the network details, but they can become a source of errors at some point, so be warned! The usual tip here is to get familiar enough with Docker that you can read and understand docker compose files. The de facto standard for self-hosting is Linux machines, but I have read of people who used macOS and even Windows successfully.
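As a sketch of what "reading a compose file" means, here is a minimal, hypothetical docker compose file showing the usual moving parts; the service name, image, paths, and variable are all made up for illustration:

```yaml
services:
  webui:                        # hypothetical service name
    image: example/model-webui  # placeholder image, not a real project
    ports:
      - "8080:7860"             # host port 8080 -> container port 7860
    volumes:
      - ./models:/app/models    # host directory mapped into the container
    environment:
      - MODEL_DIR=/app/models   # configuration via environment variable
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia    # expose one Nvidia GPU to the container
              count: 1
              capabilities: [gpu]
```

If you can tell what each of these keys does, you're ready for most self-hosting guides.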

One aspect quite unique to the model landscape is the hardware requirements. As much as it hurts my Nvidia-despising heart, at this point in time they are the de facto monopolist. Get yourself a card with 12 GB VRAM or more (everything below will be painful, if you get things running at all; I've tried smaller models on an 8 GB card and experienced a lot of waiting time and crashes). Read a bit about CUDA on your chosen OS and what the drivers need.
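To check what you're working with, assuming the Nvidia driver is already installed:

```shell
# Print the GPU name and total VRAM; requires the Nvidia driver
nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
```

If this reports less than 12 GB, expect the limitations described above.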

Once that's done, you'll understand this whole port, container, path mapping, and environment variable business.

Then it's a matter of going to the linked GitHub page, following their guide, and starting a container. Loading models is actually the easier part once you have the infrastructure running.

[–] [email protected] 4 points 6 days ago (3 children)

No offense intended, and it's possible that I misread your experience level:

I hear a user asking developer questions. Either you go the route of using the publicly available services (DALL-E and co.) or you start digging into hosting the models yourself. The page you linked hosts trained models to use in your own contexts, not in a "click a button and it works" fashion.

As a starting point for image generation self hosting I suggest https://github.com/AUTOMATIC1111/stable-diffusion-webui.

For the training part, I'll be very blunt: if you don't intend to spend five- to six-digit sums on hardware or processing power, forget it. And even then you'd need the raw training data to pull it off.

Perhaps what you want is to fine-tune a pretrained model; that's something I only have a bit of experience with in LLMs though (and even there I don't have the hardware to get beyond a personal proof of concept).

[–] [email protected] 11 points 1 week ago

Not the poster but I thought the same, only without the name calling: because you have to agree to receive future emails when creating your account at Netflix, at least if they use the EU implementation everywhere. Highly unlikely combination.

[–] [email protected] 32 points 1 week ago

I strongly disagree. The burden of proof lies with the one making the claim, and this bot has zero transparency regarding its benchmark, database, or other criteria. Combined with the fact that its usage seems to be pushed so hard (and apparently exclusively), that's enough to stay sceptical.

Personally I just blocked it, but I have full understanding for anyone downvoting it, simply to communicate "I disagree with the existence of this bot in this context".

[–] [email protected] 2 points 1 week ago

Interesting, thanks for that!

[–] [email protected] 17 points 1 week ago (6 children)

Both langchain and ollama run locally and are open source.

To be very frank: your post sounds like fear mongering without having even clicked on the link.

[–] [email protected] 1 points 1 week ago (2 children)

I haven't seen all of them, but three is purely "do what you want, shit will happen anyway" as well. That's the one where the terminator doesn't need power anymore to function, and a villain whose powers are conveniently flexible depending on what the story needs.

I didn't hate it, but for me personally it was enough to lose all interest in the whole franchise. The whole Terminator universe is incoherent while taking itself quite seriously - and I mean Doctor Who levels here, without just going "timey-wimey" with it.

I hope for the fans of the franchise that it'll work out, personally I'll stay... apathetic, if that's the right word.

[–] [email protected] 2 points 1 week ago (1 children)

Have you read either the abstract ("calorie deficit not helping") or my comment ("input on the inefficiency of diets is useless to OP without any impulse on what to do instead")?

I don't understand what you're aiming for with your one-liners.
