TheHobbyist

joined 1 year ago
[–] TheHobbyist 14 points 11 hours ago (16 children)

It's impressive how this initiative gathered 360k signatures in no time yet has completely stagnated since. I don't think that means the initiative is done; actually, I am quite optimistic it can pass, but these things go in waves. Every time there is new coverage of the initiative, it gets a boost; we just need some folks with a local presence in the EU to get their audience to act. Perhaps some famous video game streamers? Has that been done already?

[–] TheHobbyist 13 points 15 hours ago (2 children)

Has anyone been following the development of 3.6? What are some of the highlight features or bugs being addressed?

[–] TheHobbyist 3 points 1 day ago

I think the only thing to keep in mind is that Nvidia's proprietary drivers work better on Linux, whereas for AMD it is the open-source ones.

I have an Nvidia card and the prop. drivers have worked flawlessly for me for years.

I know the open-source drivers are closing the gap for Nvidia, and Nvidia also seems to be playing ball on that front. But for AMD, the open-source drivers are definitely the way to go, from what I understand.

[–] TheHobbyist 1 points 1 day ago (1 children)

What you are describing at a high level is what o1 does. But where you are mistaken is when you say:

This thought is not human-interpretable, but it is much more efficient than the pre-output reasoning tokens of o1, which uses human language to fill its own context window with.

What makes those reasoning tokens more efficient? They are just tokens, like all the others, and equally complex/simple to generate. Yes, they allow for more reflection before an output is presented, but the process is the same.

Also, they would all need to fit in the same context window, because otherwise you would prevent the model from actually reasoning over them while it iterates on its thoughts.

[–] TheHobbyist 12 points 1 day ago (1 children)

Being able to stream my shows over an unstable or low-bandwidth internet connection, like on a train (which is where I really enjoy watching them), is impossible if I am streaming the raw files. I usually watch 480p or 720p on the go but enjoy the 1080p quality when watching from home.

Also, downloading a 1080p file takes significantly longer and takes up much more space than a 480p or 720p one. My phone has no memory card slot, and despite its 128GB of internal storage, space is scarce. For a while, I was downloading my episodes in the morning before heading out, but I really had to luck out for the downloads to finish before I needed to catch the train (as the native Jellyfin client does not allow downloading the transcoded files). You could argue I should adapt my habits to my means, but I frankly think it should be the other way around, and transcoding solves that for me.

[–] TheHobbyist 2 points 1 day ago (2 children)

It seems the post does not contain the sentence "before the end of the year [...] which gets framework laptops to all of the EU". What a shame, I was really getting excited about that!

[–] TheHobbyist 6 points 1 day ago (8 children)
  • Direct play only (no transcoding)

The app sounds great, but for me this is a critical missing feature.

[–] TheHobbyist 9 points 2 days ago (7 children)

It seems AT&T may be interested in looking for alternatives to VMware?

https://xcp-ng.org/blog/2022/10/19/migrate-from-vmware-to-xcp-ng/

[–] TheHobbyist 8 points 2 days ago (7 children)

This was already quite a significant challenge compared to socketed RAM, but now with Lunar Lake I guess it is simply impossible? The RAM chips are co-packaged with the CPU...

[–] TheHobbyist 2 points 1 week ago

Same boat: Fedora + KDE, solid experience all around. Love Fedora and really enjoying KDE, though I'm facing a minor gripe on my laptop with power management, which always seems to kick into max performance when plugged in despite all the tweaks I have tried (TLP, powertop and the native power management settings).

[–] TheHobbyist 1 points 1 week ago

Unless they have demonstrated bad faith in the past, I think we should still give them the benefit of the doubt that this was an honest mistake. It does raise other concerns, though, about their internal process for green-lighting this, given that they had worked with Jeff in the past.

[–] TheHobbyist 12 points 1 week ago (4 children)

You only mention that your laptop is running out of space, so why get a new computer? Does your laptop have a soldered SSD? If not, I think the first reflex should be to see what storage you can fit in your laptop so that you can keep using it rather than discarding it :(

 

Hi folks,

I'm seeing there are multiple services which externalise the task of "identity provider" (e.g. login with Facebook, google or what not).

In my case, I am curious about Tailscale, a VPN service which allows one to choose an identity provider/SSO among Google, Microsoft, GitHub, Apple and OIDC.

How can I find out what data is actually communicated to the identity provider? Their task should simply be to decide whether I am who I claim to be, nothing more. But I'm guessing there may be some subtleties.

In the case of Tailscale, would the identity provider know where I'm trying to connect? Or more?

Answers and insights much appreciated! The topic does not seem to have much information online.
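
One thing I have tried so far (a rough sketch; ID_TOKEN is a placeholder for a token captured from your own login flow, e.g. in the browser dev tools) is decoding the OIDC ID token the provider returns, to see exactly which claims it hands over:

    # JWTs are base64url-encoded; convert to standard base64, pad, then decode the payload
    payload=$(printf '%s' "$ID_TOKEN" | cut -d. -f2 | tr '_-' '/+')
    while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
    printf '%s' "$payload" | base64 -d | jq .

That shows what the provider asserts about me (typically sub, email, name), but it does not answer what the provider learns about my usage beyond the client_id and scopes it is shown at login.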

6
submitted 1 month ago* (last edited 1 month ago) by TheHobbyist to c/[email protected]
 

Hi folks, I'm considering setting up an offsite backup server and am seeking recommendations for a smallish form factor PC. Mainly, are there some suitable, popular second-hand PCs which meet the following requirements:

  • Fits 4x 3.5" HDDs
  • Smaller than a regular tower (e.g. mATX or ITX)
  • Equipped with at least a 6th or 7th gen Intel CPU (for power efficiency and transcoding, in case I actually want it to do some), with video output
  • Ideally with upgradeable RAM

Do you know of something which meets those specs and is rather common on the second-hand market?

Thanks!

Edit: I'm looking for a prebuilt system, such as a Dell OptiPlex or similar.

43
submitted 3 months ago* (last edited 3 months ago) by TheHobbyist to c/[email protected]
 

Yesterday, there was a livestream scheduled by Louis Rossmann, titled "Addressing futo license drama! Let's see if I get fired...". I was unable to watch it live, and now the stream seems to be gone from YouTube.

Did it air and was later removed? Or did it never happen in the first place?

Here's the link to where it was meant to happen: https://www.youtube.com/watch?v=HTBYMobWQzk

Cheers

Edit: a new video was recently posted at the following link: https://www.youtube.com/watch?v=lCjy2CHP7zU

I do not know if this was the supposedly edited and reuploaded video or if this is unrelated.

 

DeepComputing is preparing a RISC-V based motherboard to be used in existing Framework Laptop 13s!

Some snippets from the Framework blog post (the link to which is provided below):

The DeepComputing RISC-V Mainboard uses a JH7110 processor from StarFive which has four U74 RISC-V cores from SiFive.

This Mainboard is extremely compelling, but we want to be clear that in this generation, it is focused primarily on enabling developers, tinkerers, and hobbyists to start testing and creating on RISC-V.

DeepComputing is also working closely with the teams at Canonical and Red Hat to ensure Linux support is solid through Ubuntu and Fedora.

DeepComputing is demoing an early prototype of this Mainboard in a Framework Laptop 13 at the RISC-V Summit Europe next week.

Announcement: https://frame.work/blog/introducing-a-new-risc-v-mainboard-from-deepcomputing

The upcoming product page (no price/availability yet): https://frame.work/products/deep-computing-risc-v-mainboard

Edit: Adding a link to the announcement by DeepComputing: https://deepcomputing.io/a-risc-v-world-first-independently-developed-risc-v-mainboard-for-a-framework-laptop-from-deepcomputing/

29
submitted 5 months ago* (last edited 5 months ago) by TheHobbyist to c/[email protected]
 

From Simon Willison: "Mistral tweet a link to a 281GB magnet BitTorrent of Mixtral 8x22B—their latest openly licensed model release, significantly larger than their previous best open model Mixtral 8x7B. I’ve not seen anyone get this running yet but it’s likely to perform extremely well, given how good the original Mixtral was."

 

Hi all,

I think around 1 or 2 years ago, I stumbled upon the personal blog of an Asian woman (I think) working at OpenAI. She had numerous extensive, fascinating posts on a black-themed blog, going into the technical details of language model embeddings and such.

I can no longer find that blog and have no other information to go by. Would anyone possibly know which blog I'm referring to? It would be very much appreciated.

 

Hi folks,

I seem to be having some internet connectivity issues lately and would like to monitor my access to the internet. I have a homelab and was wondering whether someone had something like a Docker container which pings a custom website every so often and plots a timeline of when the connection was up and when it was not.

Or perhaps you have another suggestion? I know of dashboards like Grafana, but I don't know whether they can be configured to actually generate that data or whether they rely on a third party to feed it to them. Thanks!
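
For reference, the crude stopgap I could script myself is just a shell loop appending timestamped up/down records to a CSV for a dashboard to plot later (the host and interval here are arbitrary choices):

    # Ping a reference host every 30s and log a timestamped up/down record
    while true; do
      if ping -c 1 -W 2 example.com > /dev/null 2>&1; then
        echo "$(date -Is),up" >> connectivity.csv
      else
        echo "$(date -Is),down" >> connectivity.csv
      fi
      sleep 30
    done

But something packaged, with a proper timeline view, would be much nicer.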

 

Just wanted to share my appreciation of the game.

I grabbed a copy of this game a year ago, taking advantage of a sale and ahead of the massive update. Then forgot about it, never touched it.

Fast forward a year, and now I have a Steam Deck and decided to dive into the game. I love it. I'm just a few hours in, but I can already say this is among my favorite games. The broad openness of the world, the level of detail, the characters, the interactive dialogues, the items, the strategies, the game mechanics: it's a very involved game. It really is up there. Thank you CDPR for this game and this remake.

 

I was exploring the fps and refresh rate slider and realized that when setting the framerate limiter to 25, the refresh rate was incorrectly set to 50Hz (2 x 25) on the OLED version, when 75Hz (3 x 25) would be the more appropriate setting, for the same reason 30 fps runs at 90Hz and not 60Hz. Anyone else seeing the same behavior? Is there an explanation I'm missing here?

 

Hi folks, I'm looking for a specific YouTube video which I watched around 5 months ago.

The gist of the video is that it compared the transcoding performance of an Intel iGPU when used natively versus when passed through to a VM. From what I recall, there was a significant performance hit, around 50% or so (in terms of transcoding fps). I believe the test was performed with Jellyfin. I don't remember whether it was using XCP-ng, Proxmox or another OS, nor which channel published the video or when, just that I watched it sometime between April and June this year.

Anyone recall or know what video I'm talking about? Possible keywords include: quicksync, passthrough, sriov, iommu, transcoding, iGPU, encoding.

Thank you in advance!

 

Hi y'all,

I am exploring TrueNAS and configuring some ZFS datasets. As ZFS provides some parameters to fine-tune its setup to the type of data stored, I thought it would be good to take advantage of them. So I'm here with the seemingly simple task of choosing the appropriate "record size".

Initially I thought, well, this is simple: the dataset is meant to store videos, movies and TV shows for a Jellyfin docker container, so in general large files, and a record size of 1M sounds like a good idea (as suggested in Jim Salter's cheatsheet).

Out of curiosity, I ran Wendell's magic command from Level1Techs to get a sense of the file size distribution:

find . -type f -print0 \
  | xargs -0 ls -l \
  | awk '{ n=int(log($5)/log(2)); if (n<10) { n=10; } size[n]++ }
       END { for (i in size) printf("%d %d\n", 2^i, size[i]) }' \
  | sort -n \
  | awk 'function human(x) { x[1]/=1024; if (x[1]>=1024) { x[2]++; human(x) } }
       { a[1]=$1; a[2]=0; human(a); printf("%3d%s: %6d\n", a[1], substr("kMGTEPYZ", a[2]+1, 1), $2) }'

Turns out it was not that simple. The directory is obviously filled with videos, but also with tiny files: subtitles, NFOs and small illustration images, all valuable for Jellyfin's media organization.

That's where I'm at. The way I see it, there are several options:

    1. Let's not overcomplicate it: just run with the default 64K ZFS dataset recordsize and roll with it. It won't be such a big deal.
    2. Let's try to be clever about it: make 2 datasets, one with a recordsize of 4K for the small files and one with a recordsize of 1M for the videos, then select one as the "main" dataset and use symbolic links for each file in the other dataset so that all content is "visible" from within one file structure. I haven't dug into how I would automate it, and it might not play nicely with the *arr suite? Perhaps overly complicated...
    3. Make all video files MKV files, embed the subtitles, and rename the videos to make NFOs as unnecessary as possible for movies and TV shows (though they will still be useful for private videos, YT downloads, etc.)
    4. Other?

So what do you think? And also, how have you personally set it up? I would love to get some feedback, especially if you are also using ZFS and have a video library with a dedicated dataset. Thanks!

Edit: Alright, so I found the following post by Jim Salter which goes into more detail on record size. It cleared up my misconception: recordsize is not a fixed block size but an upper bound on it (small files still end up in small blocks), and it can easily be changed at any time, affecting only newly written data. So I'll be sticking with a 1M recordsize and leaving it at that despite having many smaller files, because what matters is streaming the larger files effectively. Thank you all!
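
In case it helps anyone later, applying it is a one-liner (the dataset name below is just an example, substitute your own):

    # Set a 1M recordsize on the media dataset; only newly written data is affected
    zfs set recordsize=1M tank/media
    # Confirm the property took effect
    zfs get recordsize tank/media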

 

Sean, who reviewed the Laptop 16 for The Verge and was unable to properly test the Graphics Module because its ventilation was broken, was invited to Framework HQ to test a fully functional unit.

There is hardly any new information other than that it worked, though it was running at maximum fan speed because the fan curves had not been implemented yet.

He was able to play Cyberpunk 2077 and Halo Infinite on high for 15 min without throttling.

Nothing more than what was expected but it's good to get a confirmation.
