In case anyone is interested in an alternative, I personally use LanguageTool because it is open source and works very well.
Can confirm, it's a good drop-in replacement. Also self-hostable (to a point).
I took a quick look at this, and it seems the server portion of this product is open source, but the apps, such as the browser extensions, are not. I'm not saying it's bad or even that it's a red flag. I just felt like I should point it out.
Does that have a Chrome plugin?
Yes.
It even has a Thunderbird plugin and works in all major editors.
You can self-host it as well, which is how the editor plugins work by default.
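For anyone curious what talking to a self-hosted instance looks like, here's a minimal sketch against LanguageTool's public /v2/check HTTP API. The localhost address and port 8081 are just assumptions for a typical local install; point it at wherever your server is actually listening.

```python
import requests

# Assumed local endpoint for a self-hosted LanguageTool server;
# change host/port to match your own setup.
LT_URL = "http://localhost:8081/v2/check"

def check_text(text: str, language: str = "en-US") -> list[dict]:
    """Send text to the LanguageTool server and return the list of matches."""
    resp = requests.post(LT_URL, data={"text": text, "language": language})
    resp.raise_for_status()
    return resp.json().get("matches", [])

if __name__ == "__main__":
    # Each match carries a message and suggested replacements.
    for match in check_text("This sentense has a typo."):
        suggestions = [r["value"] for r in match["replacements"]]
        print(match["message"], "->", suggestions)
```

The editor plugins basically do the same thing: they just send the text you're editing to whatever server URL they're configured with, so pointing them at your own box keeps the text off third-party servers.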
Yeah, Grammarly was selling all your data LONG before the AI showed up.
Funny how some people are only nervous now that their data might be used to train a language model. I was always more worried about spooks! :)
Companies selling consumer data for profit and marketeering: i sleep
Companies using consumer data to train AI models: R E A L S H I T
It's because certain companies are stirring the pot and manipulating people. They want people mad so they can push restrictions on training AI and stifle the open-source scene.
OpenAI moment
They even named their company specifically to make it harder for open-source AI projects to name themselves. That's some dedication.
I honestly thought they were FOSS when I first heard the name. Pretty awful lol
I see you posted this article to four communities. According to the comments on this post, if you use the cross-post function (in the default web frontend), it will only show up once in the feeds instead of four times (which can be a bit annoying).
Thanks
EDIT: added post link and clarification regarding the UI
I did use the cross-post function. Most apps do not currently recognize it, which might explain why the article appeared multiple times for you.
Thanks! It seems this issue is harder than I thought :)
What is this healthy communication?! Aren't you supposed to go into the "what the fuck did you just say to me" ramble?
What the fuck did you just say to them?
I don't know if it will last, but I really enjoy this cozy-but-not-cheesy environment. It feels different.
Oh, trust me, there's plenty of infighting on Lemmy too. Just don't bring up Russia around certain people...
I see content from many servers in the Lemmy federation. My understanding, which could be wrong, was that, like email, you can post to any domain and see posts from other domains. What's the advantage of posting to many instances?
More traction, just like posting to multiple subreddits on Reddit. People might not be subscribed to the community you post it to, not everyone browses All, and even if they did, you'd have to hope it floats to the top.
How much do you have to pay for them to not monitor your every keystroke, including all your IP and passwords?
Oh, that's their business model, right.
Even as someone who declines all cookies where possible on every site, I have to ask: how do you think they are going to improve their language-based services without using language models or other algorithmic evaluation of user data?
I get that the combination of AI and privacy has huge consequences, and that Grammarly's opt-out limits are genuinely shit. But it seems like everyone is so scared of the concept of AI that we're harming research on tools that can help us, while the tools that hurt us are developed with no consequences, because they don't bother with any transparency or announcements.
Not that I'm any fan of Grammarly; I don't use it. I think that might be self-evident, though.
Framing this solely as fear is extremely disingenuous. Speaking only for myself: I'm not against the development of AI or LLMs in general. I'm against trained models being used for profit with no credit or cut given to the humans whose work trained them, willing or unwilling.
It's not even a matter of "if you aren't the paying customer, you're the product" - massive swaths of text used to train AIs were scraped without permission from sources whose platforms never sought to profit from users' submissions, like AO3. Until this is righted (which is likely never, I admit, because the LLM owners have no incentive whatsoever to change this behavior), I refuse to work with any site that intends to use my work to train LLMs.
They're honestly doing you a favor. Grammarly is terrible. I've seen some of my friends whose first language isn't English use it to try to clean up their grammar, and it makes some really weird, often totally mistaken choices. Usually they would have been better off leaving it as they wrote it.
I wonder if ProWritingAid is doing the same now. I always preferred them over Grammarly.
They have a free tier and a $10/mo tier and prominently advertise their AI without any information about privacy. Guaranteed you and your text are the product being used to train their AI.
Think about this every time you, or a project you contribute to, uses Microsoft GitHub instead of an open-source (or self-hosted) offering, or when folks contributing to your permissively licensed project use Microsoft GitHub Copilot. All your projects, and that force-push history cleanup, now belong to the Microsoft-owned AI that sells itself back to the developers who wrote all the code it trained on: no compensation, no recognition.
So some people want to use the advantages of AI that ONLY works properly because of all the collected user data... but refuse to contribute.
I am perfectly fine with providing training data for AI, and have actually spent hours contributing to various projects. However, it is super scummy for a company to collect and use sensitive user data (literally a keylogger) not only without any form of communication or consent, but where the only way to opt out is to pay.
Stuff like this should always be opt-in. It looks better on the company and builds trust.
Ideally, offer payment to users who opt in to having their writing scraped and used to train AI.
Seems like this could easily be a win-win situation if they gave it a few seconds of thought.
Why do you assume everyone wants this garbage? We were fine without it.
That's basically what people said about:
- mobile phones
- the web
- computers
- calculators
I'd also argue that Grammarly's customers want this because they are paying for it. At least in the extension or app.
Uhh umm... You are the product! Aaand... Shill for greedy corporations!
I remember when Google said quite openly that they'd give us email addresses with more storage than we'd ever dreamed of, for life, and in return they'd scan the first few sentences of all our messages and use them to target ads at us. And we were all like, "Sounds fair."