TheHobbyist

joined 1 year ago
[–] TheHobbyist 1 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

This is interesting. Need to check if this is implemented in Open-WebUI.

But the thing I'm hoping for most (in Open-WebUI) is support for draft models for speculative decoding. That would be really nice!

Edit: it seems it's not implemented in ollama yet
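For anyone wondering what draft models buy you: in speculative decoding, a small draft model cheaply proposes a few tokens ahead, and the large target model verifies them with an accept/reject step, so the output stays distributed as if the big model generated everything, just faster. Below is a minimal toy sketch of that accept/reject logic in Python; the two "models" are hypothetical stand-ins returning random token distributions, not Open-WebUI or ollama code.

```python
# Toy sketch of speculative decoding with a draft model.
# The "models" are hypothetical stand-ins that return next-token probabilities.

import random

VOCAB = list(range(32))  # toy vocabulary of 32 token ids


def draft_model(context):
    # Stand-in for the cheap draft model: a deterministic random distribution.
    random.seed(hash(tuple(context)) & 0xFFFF)
    weights = [random.random() for _ in VOCAB]
    total = sum(weights)
    return [w / total for w in weights]


def target_model(context):
    # Stand-in for the expensive target model we actually want to sample from.
    random.seed((hash(tuple(context)) ^ 0xBEEF) & 0xFFFF)
    weights = [random.random() for _ in VOCAB]
    total = sum(weights)
    return [w / total for w in weights]


def sample(probs):
    return random.choices(VOCAB, weights=probs, k=1)[0]


def speculative_step(context, k=4):
    """Draft k tokens cheaply, then accept/reject them against the target model."""
    # 1) The draft model proposes k tokens autoregressively.
    drafted = []
    ctx = list(context)
    for _ in range(k):
        q = draft_model(ctx)
        tok = sample(q)
        drafted.append((tok, q))
        ctx.append(tok)

    # 2) The target model scores each drafted position (one batched pass in practice)
    #    and accepts token t with probability min(1, p(t) / q(t)).
    accepted = []
    ctx = list(context)
    for tok, q in drafted:
        p = target_model(ctx)
        if random.random() < min(1.0, p[tok] / max(q[tok], 1e-9)):
            accepted.append(tok)
            ctx.append(tok)
        else:
            # Rejection: resample from the residual distribution max(p - q, 0) and stop.
            residual = [max(p[i] - q[i], 0.0) for i in VOCAB]
            norm = sum(residual)
            accepted.append(sample([r / norm for r in residual]) if norm > 0 else sample(p))
            break
    return accepted


if __name__ == "__main__":
    print("accepted tokens:", speculative_step([1, 2, 3]))
```

The point of the trick is that every accepted draft token costs only a verification pass of the big model, so when the draft model guesses well you get several tokens per expensive forward pass.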

[–] TheHobbyist 4 points 2 weeks ago

Can't you return the laptop within 30 days if you don't like it? If that's the case, why don't you just go ahead, buy it and give it a reasonable shot? Nobody else's opinion will change how the laptop works for you :)

[–] TheHobbyist 17 points 2 weeks ago (3 children)

I wouldn't assume this was done with malice; maybe it's just someone unaware of the importance of a formal license.

[–] TheHobbyist 14 points 2 weeks ago* (last edited 2 weeks ago) (5 children)

Indeed. The only thing they have on their GitHub page is:

Terms of Use

Feel free to use these components in personal and commercial projects. However, while the tutorials and demos are available for your use as-is, they cannot be redistributed or resold. Let’s keep things fair and respect each other’s work.

[–] TheHobbyist 10 points 2 weeks ago* (last edited 2 weeks ago) (4 children)

I'm wondering: could the latest CAMM modules achieve the same performance as the integrated RAM Intel used for Lunar Lake? The only way going integrated really pays off is with HBM; anything else seems like a bad trade-off.

So either you go HBM, with real bandwidth and latency gains, or CAMM, with decent performance and upgradeable RAM modules. On-chip RAM like Intel's gives you neither HBM performance nor CAMM modularity.

[–] TheHobbyist 1 points 2 weeks ago (1 children)

This is interesting. I've mostly followed PyTorch activity, as I've been using it since about 2018, but I have no idea where TensorFlow stands. Have there been important non-Google projects released in the last two years that use TensorFlow?

I know Facebook is naturally very invested in PyTorch, but from what I recall we regularly see releases from OpenAI, Microsoft, Nvidia and the like in PyTorch as well.

[–] TheHobbyist 4 points 3 weeks ago

Thanks for these reports/updates, they're always nice to see. It's kind of like a newsletter, shedding light on new communities worth visiting or ones looking for a new mod. :)

[–] TheHobbyist 1 points 3 weeks ago

I am quite puzzled that Intel's Lunar Lake CPUs are considered so good yet these Arrow Lake CPUs are so bad. I would have hoped Arrow Lake would take all the positive lessons from Lunar Lake and simply scale things up for desktops, workstations and beyond.

[–] TheHobbyist 16 points 3 weeks ago

They used PimEyes, nothing new.

Importantly: they do not want to release the tool; they are using it as a way to raise awareness.

[–] TheHobbyist 14 points 3 weeks ago (1 children)

The whole talk is available here: https://www.youtube.com/watch?v=ZNK4aSv-krI

This specific part is at the 39-minute mark.

[–] TheHobbyist 9 points 3 weeks ago (1 children)

You mean between the French article and the English comment? :)

[–] TheHobbyist 2 points 4 weeks ago (1 children)

Thanks! That's what I wanted to know. I've been eyeing the game and am interested in getting it. I think I've even seen it on GOG, so that's great!
