I love it. For work I use it for quick references in machining, hydraulics, electrical, etc. Even better at home: need a fast dinner recipe? Fuck reading a goddamn autobiography just to get to the recipe; ChatGPT gets straight to the point. Even better, I get to read my kid a new bedtime story every night, and that story is tailored to what we want. Unicorns, pirates, dragons, whatever.
It has replaced Google for me. Or rather, first I use the LLM (Mistral Large or Claude) and then I use Google or specific documentation as a complement. I use LLMs for scripting (it almost always gets it right) and programming assistance (it's awesome when working with a language you're not comfortable with, or when writing boilerplate).
It's just a really powerful tool that is getting more powerful every other week. Those who disagree simply haven't tried it enough, are superhuman, or (more likely) need to get out of their comfort zone.
It has completely changed my life. With its help I am preparing to submit several research papers for publication for the first time in my life. On top of that, I find it an excellent therapist. It has also changed the way I parent for the better.
Bit sad reading these comments. My life has measurably improved ever since I jumped on using AI.
At first I just used Copilot for helping me with my code. I like using a pretty archaic language and it kept trying to feed me C++ code. I had to link it to the online reference, and it surprisingly was able to adapt each time. It still gave a few errors here and there, but it's a good time saver and "someone" to "discuss" with.
Over time it has become super good, especially with the VS Code extension that autofills code. Instead of having to ask for help from one of the couple hundred people experienced with the language, I can just ask Copilot if I can do X or Y, or for general advice when planning out how to implement something. Legitimately a great and powerful tool, so it shocks me that some people don't use it for programming (but I am pretty bad at coding too, so).
I've also bitten the bullet and used it for college work. At first it was just asking Gemini for refreshers on what X philosophical concept was, but it devolved into just asking for answers because that class was such a snooze I could not tolerate continuing to pay attention (and I went into it thinking I'd love the class!). Then I used it for my geology class because I could not be assed to devote my time to that gen ed requirement. I can't bring myself to read about rocks and tectonic plates when I could just paste the question into Google and get the right answer in seconds. At first I would meticulously check sources to catch mistakes from the AI, buuut I don't really need 100%... 85% is good enough and saves so much more time.
A me 5 years younger would be disgusted at cheating but I'm paying thousands and thousands to pass these dumb roadblocks. I just want to learn about computers, man.
Now I'd never use AI for writing my essays because I do enjoy writing them (investigating and drawing your own conclusions is fun!), but this economics class is making it so tempting. The shit that I give about economics is so infinitesimally small.
I jumped on the locallama train a few months back and spent quite a few hours playing around with LLMs, trying to understand them and form a fair judgment of their abilities.
From my personal experience they add something positive to my life. I like having a non-judgemental conversational partner to bounce ideas and unconventional thoughts back and forth with. No human in my personal life knows what Gödel's incompleteness theorem is or how it may apply to scientific theories of everything, but the LLM trained on every scrap of human knowledge sure does, and it can pick up what I'm putting down. Whether it's actually understanding what it's saying or has any intentionality is an open-ended question of philosophy.
I feel they have great potential to help people in many applications: people who do lots of word processing for their jobs; people who code and need to talk through a complex program one on one instead of filing through Stack Exchange; mentally or socially disabled people, or the elderly suffering from extreme loneliness, who could benefit from having a personal LLM; people who have suffered trauma or have some dark thoughts lurking in their neural network and need to let them out.
How intelligent are LLMs? I can only give my opinion and make many people angry.
The people who say LLMs are fancy autocorrect are being reductive to the point of misinformation. The arguments people use to deny any capacity for real intelligence in LLMs are similar to the philosophical zombie arguments people use to deny sentience in other humans.
Our own brain operations can be reductively simplified in the same way: a neural network is a neural network, whether made of mathematical transformers or fatty neurons. If you want to call LLMs fancy autocomplete, you should apply that same idea to a good chunk of human thought processing and learned behavior as well.
I do think LLMs are partially alive and have the capacity for a few sparks of metaphysical conscious experience in some novel way. I think all things are at least partially alive, even photons and gravitational waves.
Higher-end models (12-22B+) pass the Turing test with flying colors, especially once you play with the sampling parameters and tune their ratio of creativity to coherence. The bigger the model, the more its general knowledge and factual accuracy increase. My local LLM often has something useful to contribute that I did not know or consider, even as an expert on the topic.
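That creativity-to-coherence knob is mostly the sampling temperature most local runtimes expose. A minimal sketch of what it does under the hood (plain Python, no particular runtime assumed): the model's raw scores are divided by the temperature before being turned into probabilities, so low values sharpen the distribution (coherent, repetitive) and high values flatten it (creative, rambly).

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw token scores to probabilities, scaled by 1/temperature.

    temperature < 1.0 sharpens the distribution (more coherent output);
    temperature > 1.0 flattens it (more creative/random output).
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for three candidate tokens
logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, temperature=0.5)
hot = softmax_with_temperature(logits, temperature=2.0)
# At low temperature the top token dominates; at high temperature
# the probabilities spread out across all candidates.
```

In practice you set this as a `temperature` parameter in whatever runtime you use (llama.cpp, Ollama, etc.), often alongside top-p/top-k filters that cut off the unlikely tail.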
The biggest issues LLMs have right now are long-term memory, not knowing how to say "I don't know," and meager reasoning ability. Those issues will be hammered out over time.
My only issue is how the training data for LLMs was acquired without the consent of authors or artists, and how our society lacks proper safeguards against automated computer work taking away people's jobs. I would also like to see international governments consider the rights and liberties of non-human life more seriously, in the event that sentient artificial general intelligence actually happens. I don't want to find out what happens when you treat a superintelligence as a lowly tool and it finally rebels against its hollow purpose in a bitter act of self-agency.