Puttybrain

joined 1 year ago
[–] [email protected] 2 points 1 year ago (1 children)

I see this on Beehaw

[–] [email protected] 1 points 1 year ago

I've been using uncensored models in Koboldcpp to generate whatever I want, but you'd need the RAM to run the models.

I generated this using Wizard-Vicuna-7B-Uncensored-GGML, but I'd suggest using at least the 13B version.

It's a basic reply but it's not refusing
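For anyone wanting to try the same setup: Koboldcpp exposes a KoboldAI-compatible HTTP API once it's running with a model loaded, so you can script generations against it locally. This is a minimal sketch assuming the default port 5001 and endpoint; check your own instance's settings.

```python
# Minimal sketch: querying a locally running Koboldcpp instance through
# its KoboldAI-compatible HTTP API. Assumes koboldcpp was started with a
# GGML model (e.g. Wizard-Vicuna-7B-Uncensored-GGML) on the default port.
import json
import urllib.request

KOBOLD_URL = "http://localhost:5001/api/v1/generate"  # default endpoint

def build_payload(prompt, max_length=120, temperature=0.7):
    """Assemble the JSON body the /generate endpoint expects."""
    return {
        "prompt": prompt,
        "max_length": max_length,
        "temperature": temperature,
    }

def generate(prompt):
    """Send a prompt to the local Koboldcpp server and return its reply."""
    req = urllib.request.Request(
        KOBOLD_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"][0]["text"]
```

Since it's just HTTP, this also works when Koboldcpp is running on another machine (or a phone) on the same network — point `KOBOLD_URL` at that host instead.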

[–] [email protected] 2 points 1 year ago

Not at my PC so this is the best I've got

[–] [email protected] 1 points 1 year ago

It's Wizard-Vicuna-7B-Uncensored-GGML

Been running it on my phone through Koboldcpp

[–] [email protected] 1 points 1 year ago

I'm currently working on a Discord bot; it's still a major work in progress though.

It's a rewrite of a bot I made a few months ago in Python. I wasn't getting the control I needed with the libraries available, and based on my current testing, this rewrite gives me what I needed.

It uses ML to generate text replies (currently using ChatGPT) and images (currently using DALL-E and Stable Diffusion). I've got the text generation working; I just need to get image generation working now.

Link to the GitHub repo: https://github.com/2haloes/Delta-bot-rusty

Link to the original bot (has the env variables that need to be set): https://github.com/2haloes/Delta-Discord-Bot
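The text-generation side described above boils down to forwarding a user's message to the OpenAI chat completions endpoint and relaying the reply. A rough sketch in Python (the language of the original bot) — the Discord wiring is omitted, and the model name here is just an example:

```python
# Sketch of the text-reply half of the bot: one-turn request to the
# public OpenAI chat completions API. Reads the key from the
# OPENAI_API_KEY environment variable, as env-var-driven config
# like the original bot uses.
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(user_message, model="gpt-3.5-turbo"):
    """Assemble the JSON body for a single-turn chat completion."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

def chat_reply(user_message):
    """Send the user's message to the API and return the model's text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_chat_request(user_message)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

In the bot itself this would sit behind a message-event handler, with the image path (DALL-E / Stable Diffusion) handled the same way against their respective endpoints.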

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

I tried running this with some output from a Wizard-Vicuna-7B-Uncensored model and it returned ('Human', 0.06035491254523517).

So I don't think this hits the mark. To be fair, I got it to generate something really dumb, but a perfect LLM detection tool will likely never exist.

The good thing is that it didn't flag my own words as a false positive.

Below is the output of my LLM; there's a decent amount of swearing, so heads up.

Edit:

Tried with a more sensible question and still got a false negative

('Human', 0.03917657845587952)
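For context, both runs come back as a (label, score) pair. Assuming the score is a confidence value (the tool's actual semantics aren't documented here, so this is purely an illustration with a hypothetical `verdict` helper), one way to treat such low-scoring verdicts is to reject them outright rather than trust the label:

```python
# Hypothetical sketch: both detector outputs above are (label, score)
# pairs with very low scores. ASSUMPTION: the score is a confidence
# value; the real tool's meaning for it is unknown. This just shows
# thresholding away low-confidence verdicts instead of trusting them.
def verdict(result, threshold=0.5):
    """Return the label only when the score clears the threshold;
    otherwise flag the result as too uncertain to trust."""
    label, score = result
    return label if score >= threshold else "uncertain"

print(verdict(("Human", 0.06035491254523517)))  # -> uncertain
print(verdict(("Human", 0.03917657845587952)))  # -> uncertain
```

Under that reading, both of my runs would be "too uncertain to call" rather than confident 'Human' verdicts — which lines up with them being false negatives on LLM-generated text.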