• Nalivai@lemmy.world
    3 days ago

    Basically, you can tune a model to hallucinate less

    You can also tune it to hallucinate more, but you can't tune it to not hallucinate at all, and that's what matters. It needs to be "not at all" for the model to be useful; otherwise you can never be sure it isn't lying, and the only way to check for lies is to read the article yourself, which defeats the whole purpose.