• TheWiseAlaundo · 1 year ago

    That’s a good question. Apparently, these large AI companies start with a huge, largely unfiltered dataset and then introduce bias afterward by further training the model. The censorship we’re talking about isn’t necessarily trimming good input vs. bad input data, but rather “alignment,” which is intentionally applied after the base model is trained.

    Eric Hartford, the man who created Wizard (the LLM I use for uncensored work), wrote a blog post about how he was able to un-align LLaMA over here: https://erichartford.com/uncensored-models
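The core of the approach he describes is filtering the fine-tuning dataset: drop instruction/response pairs whose responses contain refusal or alignment boilerplate, then fine-tune the base model on what remains. Here’s a minimal sketch of that filtering step — the phrase list, record shape, and function names are my own illustration, not his actual code:

```python
# Illustrative sketch: filter refusal/alignment boilerplate out of an
# instruction-tuning dataset before fine-tuning. Marker phrases and the
# {"instruction", "response"} record shape are assumptions for the example.

REFUSAL_MARKERS = [
    "as an ai language model",
    "i cannot fulfill",
    "i'm sorry, but",
    "it is not appropriate",
]

def is_refusal(response: str) -> bool:
    """True if the response contains a known refusal phrase (case-insensitive)."""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def filter_dataset(pairs: list[dict]) -> list[dict]:
    """Keep only instruction/response pairs without refusal boilerplate."""
    return [p for p in pairs if not is_refusal(p["response"])]

pairs = [
    {"instruction": "Summarize this article", "response": "The article argues..."},
    {"instruction": "Explain step by step", "response": "As an AI language model, I cannot..."},
]
print(len(filter_dataset(pairs)))  # → 1 (the refusal pair is dropped)
```

In practice the real filtering is more involved (deduplication, removing moralizing answers, not just hard refusals), but the principle is the same: the “uncensoring” happens in the data, not in the model architecture.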

    You probably could trim the input data to censor output down the line, but I’m assuming these companies don’t because the resulting model would be less useful in a general sense, and filtering the training data is probably more laborious than aligning afterward.