Can't you return the laptop within 30 days if you don't like it? If that's the case, why not just go ahead, buy it, and give it a reasonable shot? Nobody else's opinion will change how the laptop works for you :)
I wouldn't assume this is done with malice in mind, but maybe this is someone unaware of the importance of a formal license.
I'm wondering: could the latest CAMM modules achieve the same performance as the on-package RAM Intel used for Lunar Lake? The only way integration really pays off is with HBM; anything else seems like a bad trade-off.
So either you go with HBM for real bandwidth and latency gains, or with CAMM for decent performance plus upgradeable modules. On-package RAM as Intel did it provides neither HBM-level performance nor CAMM modularity.
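To put rough numbers on that trade-off, here's a back-of-the-envelope peak-bandwidth calculation. The figures are illustrative ballpark values (LPDDR5X-8533 on a 128-bit bus, similar to Lunar Lake's on-package memory, versus a single 1024-bit HBM2e stack at 3.2 GT/s), not vendor specs:

```python
# Rough peak-bandwidth arithmetic: bandwidth = transfer rate (MT/s) x bus width (bytes).
# Figures below are illustrative, not exact vendor specifications.

def peak_gb_s(mt_per_s, bus_bits):
    """Peak bandwidth in GB/s for a given transfer rate and bus width."""
    return mt_per_s * (bus_bits / 8) / 1000

lpddr5x = peak_gb_s(8533, 128)   # on-package LPDDR5X-8533, 128-bit bus
hbm2e   = peak_gb_s(3200, 1024)  # one HBM2e stack, 1024-bit bus at 3.2 GT/s

print(round(lpddr5x, 1))  # ~136.5 GB/s
print(round(hbm2e, 1))    # ~409.6 GB/s
```

So even a single HBM stack offers roughly 3x the peak bandwidth of the on-package LPDDR5X setup, which is the gap the "neither HBM performance nor CAMM modularity" complaint is about.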
This is interesting. I've mostly followed the PyTorch side of things, since I've been using it since about 2018, but I have no idea where TensorFlow stands. Have any important non-Google projects been released in the last two years that use TensorFlow?
I know Facebook is naturally heavily invested in PyTorch, but from what I recall we regularly see releases from OpenAI, Microsoft, Nvidia and the like built on PyTorch.
Thanks for these reports/updates, always nice to see. It's kind of like a newsletter, shedding light on various new communities worth visiting or ones looking for a new mod. :)
I am quite puzzled that Intel's Lunar Lake CPUs are considered so good yet these Arrow Lake CPUs are so bad. I would have hoped Arrow Lake would apply all the lessons of Lunar Lake and simply scale things up for desktops, workstations and beyond.
They used PimEyes, nothing new.
Of importance: they do not want to release the tool but use it as a way to raise awareness.
The whole talk is available here: https://www.youtube.com/watch?v=ZNK4aSv-krI
This specific segment is at the 39-minute mark.
You mean between the French article and the English comment? :)
Thanks! That's what I wanted to know, I've been eyeing the game and interested in getting it. I think I've even seen it on gog, so that's great!
This is interesting. Need to check if this is implemented in Open-WebUI.
But I think the thing I'm hoping for most (in open-webui) is support for draft models for speculative decoding. That would be really nice!
Edit: it seems it's not implemented in ollama yet
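For context, the idea behind draft-model speculative decoding is that a cheap draft model proposes a few tokens autoregressively, and the expensive target model then verifies them, keeping the longest agreeing prefix and substituting its own token at the first mismatch. Here is a minimal toy sketch of that control flow with made-up deterministic "models" (this is not any real ollama or Open-WebUI API, and real implementations also sample a bonus token when every draft token is accepted):

```python
# Toy sketch of greedy speculative decoding. Both "models" are hypothetical
# deterministic next-token functions over integer tokens, chosen only to
# demonstrate the accept/reject control flow.

def draft_next(ctx):
    # Cheap draft model: just increments the last token (mod 10).
    return (ctx[-1] + 1) % 10

def target_next(ctx):
    # Expensive target model: agrees with the draft except it never emits 7.
    nxt = (ctx[-1] + 1) % 10
    return 0 if nxt == 7 else nxt

def speculative_step(ctx, k=4):
    # 1) The draft model proposes k tokens autoregressively.
    proposal, tmp = [], list(ctx)
    for _ in range(k):
        tok = draft_next(tmp)
        proposal.append(tok)
        tmp.append(tok)

    # 2) The target model verifies the proposal: accept the longest prefix
    #    it agrees with, then emit its own token at the first mismatch.
    accepted, tmp = [], list(ctx)
    for tok in proposal:
        want = target_next(tmp)
        if want == tok:
            accepted.append(tok)
            tmp.append(tok)
        else:
            accepted.append(want)  # target's correction replaces the draft token
            break
    return ctx + accepted

print(speculative_step([3], k=4))  # → [3, 4, 5, 6, 0]
```

The payoff is that one verification pass of the target model can emit several tokens at once when the draft guesses well, which is why llama.cpp-style runtimes expose it as a latency optimization.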