I would cancel that subscription SOOO FAST.
I'd argue that YTMusic is a superior product to YT, but both put together aren't worth anywhere near the cost. You can get a premium TV/Movie service for that price with family access.
We've GOT A PAYERR OVER HEREEEEE!!!!!!!
I think Wil Wheaton had something that was supposed to air on Freevee, but the link his PR person gave him just threw you back to the Amazon video page. I've never actually seen any information about the service or a working video stream surface.
It seems like a lot of places are ready to throw millions of dollars into a system and just never freaking market it.
Oh god yes, ran into this asking for a shell.nix file with a handful of tricky dependencies. It kept trying to do an insanely complicated temporary pull-and-build from git instead of just a six-line file asking for the right packages.
This has already started to happen. The new llama3.2 model is only 3.7GB and it's WAAAY faster than anything else. It can throw a wall of text at you in just a couple of seconds. You're still not running it on $20 hardware, but you no longer need a 3090 to have something useful.
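If you want to kick the tires, something like this is all it takes once the model is pulled. This is a minimal sketch assuming you're serving it through Ollama on its default port; the model tag and prompt are just examples:

```python
# Minimal sketch: hitting a locally served llama3.2 through Ollama's HTTP API.
# Assumes Ollama is running on its default port (11434) and the model is pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",  # the small model discussed above
        "prompt": "Summarize why small local models are useful.",
        "stream": False,      # return one JSON blob instead of a token stream
    },
    timeout=120,
)
print(resp.json()["response"])
```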
We've never tried a post-mortem candidate, stranger things have happened (just happened)
You can get a lot done currently with ARC. The mobile ARC versions share system memory, so if you get a mini PC with ARC and upgrade it to 96GB, you can share system RAM with the GPU and load decently large models. They're a little slow, it not being VRAM and all, but still useful (and cheap)
https://www.youtube.com/watch?v=xyKEQjUzfAk
I have it running on a Zenbook Duo with 32GB so I can't load the 70B models, but it works shockingly well.
Sounds like you're getting better numbers than we do :) Wonder if there's some incompatibility in our fleet hardware that you don't have. We're mostly Dell XPS. The biggest problem we regularly have is the audio outputs and mic inputs going rogue. They'll be using the machine with sound all day, no problem, then go into a meeting and there's no sound. Same problem with microphones. Somehow the browser session behind the scenes doesn't pick up the current default device settings, and the volume for the Slack session ends up being muted.
I certainly don't want to run Windows on it :)
I've been running llama to keep my telemetry out of the hands of Microsoft/Google/"open"AI. I'm kind of shocked how much I can do locally with a half-assed video card, an offline model, and a hacked-up copy of searxng.
I had my money on Zombie Bernie Sanders, but you might be on to something.
I honestly had no idea what was in chorizo. I had been making chili with it at home, and when it came time to make it for work, I stopped by the market near work and they didn't have any. I was all "FINE! I'll make my own" and looked it up; there are TONS of variations. The one I went for was basically vinegar, coriander, cinnamon, cloves, and most of the spices I already use in chili.
One of my favorite taco shops made one that was very hot and just a touch sweet. The cinnamon was forward, which I didn't care for at first, but it ended up being amazing. It was also processed fine like ground beef. I've been trying to replicate that for a while.
Yeah, once you have to question its answer, it's all over. It got stuck and gave you the next-best answer in its weights, which was absolutely wrong.
You can always restart the convo, re-insert the code and say what's wrong in a slightly different way and hope the random noise generator leads it down a better path :)
I'm doing some stuff with translation now, and I'm finding you can restart the session, run the same prompt, and get better or worse versions of a translation. After a few runs, you can take all the output and ask it to rank each translation on correctness and critique them (rough sketch below). I'm still not completely happy with the output, but it does seem that sometimes, if you MUST get AI to answer the question, there can be value in making it answer across more than one session.
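Something like this, assuming you're hitting a local Ollama instance; the model name, source sentence, and prompts are all placeholders:

```python
# Sketch of the multi-session translation trick: run the same prompt in N fresh
# sessions, then feed every candidate back and ask the model to rank/critique.
# Assumes a local Ollama server; model name and prompts are placeholders.
import requests

URL = "http://localhost:11434/api/generate"
MODEL = "llama3.2"  # whatever you run locally

def ask(prompt: str) -> str:
    # Each stateless call is effectively a fresh session, so sampling
    # noise gives you a different candidate on every run.
    r = requests.post(
        URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=300,
    )
    return r.json()["response"]

source = "Je pense, donc je suis."
candidates = [
    ask(f"Translate to English. Reply with the translation only:\n{source}")
    for _ in range(5)
]

# Final pass: hand all candidates back and ask for a ranking with critiques.
numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
print(ask(
    "Here are several candidate translations of the same sentence.\n"
    f"Source: {source}\n{numbered}\n"
    "Rank them by correctness and briefly critique each one."
))
```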