- cross-posted to:
- [email protected]
- [email protected]
A “natural language query” search engine is what I need sometimes.
Edit: directly reachable with the !ai bang
Every current LLM is built this way, so it is a hard and fast rule.
I’m only talking about current iterations. No one here knows what the next iterations will be, so we can’t comment on that. And right now it’s incredibly foolish to believe what an LLM tells you. They lie, like a lot.
No, that is a trend, not a rule, and I would argue the trend isn’t even 100% consistent. In my experience, Claude seems to be designed to be more conversational and factual, not strictly entertaining.
I never said you should believe everything an LLM says. Of course a critical mind is important, but one can’t just assume every answer is wrong simply because it came from an LLM either. Especially at this stage of LLM development: the technology is still maturing, still in its infancy.
Generally, the more a technology matures out of its infancy, the better it becomes at the job it’s designed for. If an AI is designed to be entertaining, then yes, it will get better at that over time; but likewise if it’s designed for factual accuracy. And I already said what I think about the current state of development in that regard.
Therefore, I think it’s a reasonable assumption that as time goes on, the frequency of hallucinations will go down. We’re still working out the kinks, as it is.
Rule or trend, whichever word you use is semantics at this point. And your experience is irrelevant to the facts of how all current LLMs are built. They are all built the same way, and we have proof of that.
If you talk to someone and you know they lie to you 10% of the time, would you ever take anything they say at face value?
We can sit down and speculate all day about what could be, but that has no bearing on what is, which is the entire point of this discussion.
Hardly. There is a very clear distinction between a rule and a trend.
They are not all built the same, though. Claude, for instance, is built with a framework of values called “Constitutional AI”. It’s not perfect, as the developers even state, but it is a genuine step in the right direction compared to many of its contemporaries in the AI space.
Humans are not tools that can be improved upon. They are sentient beings with conscious choice. LLMs are the former, not the latter.
They are not a 1:1 comparison, as you claim.
You are wrong and tiresome. Goodbye.
And yet I’ve provided sources for each and every one of my assertions, while you have not.
Good day.