I’ve been using LLMs a lot. I use GPT-4 to help edit articles, answer nagging questions I can’t be bothered to look up, and other random things, such as cooking advice.
It’s fair to say, I believe, that all general-purpose LLMs like this are plagiarizing all of the time. Much in the way my friend Patrick doesn’t give me sources for all of his opinions, GPT-4 doesn’t tell me where it got its info on baked corn. The disadvantage of this is that I can’t trust it any more than I can trust Patrick. When it’s important, I ALWAYS double check. The advantage is I don’t have to take the time to compare, contrast, and discover sources. It’s a trade-off.
From my perspective, the theoretical advantage of Bing’s or Google’s implementation is ONLY that they provide you with sources. I actually use Bing’s implementation of GPT when I want a quick, real-world reference for an answer.
Google will be making a big mistake by sidelining its sources when open-source LLMs are already overtaking Google’s Bard in quality. Why get questionable advice from Google when I can get slightly less questionable advice from GPT or my phone assistant, or actual inline citations from Bing?
Haven’t used Google in years, but it sounds like it’s getting even worse.
(Disclaimer: I dislike Google’s current search ranking, and prefer other search engines for the most part)
I’m conflicted about this. On the one hand, I think generative ML-based answers can often be very useful and superior to the ‘classic’ search experience; on the other, I’m worried about the implications of using them as a full-on replacement for search, because that’s basically what this is.
No thank you. How’s Bing?
What do you mean? Bing is okay, for the most part.