I didn’t see this one in the list, but UNN newscast from Starcraft 2 is one of my favorites! Particularly Donny Vermillion.
I used to work on a really big project written in C and C++ (and even some asm in there), and the build was non-deterministic. The funky part was that one of the C files had a couple dozen commented-out lines, with a line at the top saying: ‘don’t remove this or the build will fail.’ That remains my favorite code comment to this day.
It was definitely a loaded or insincere question. The use of “you would” instead of “would you” suggests that the person who is asking this question has already made up their mind about OP’s opinion. And no, I don’t think that was a typo, a Freudian slip maybe, but not a typo.
What do you have against licorice?! (also American licorice is crap) Try this:
LAKRIDS BY BÜLOW - Læmon - 10.4 OZ - Soft Licorice Swirled in Luscious White Lemon Chocolate, Cream and Vanilla https://a.co/d/9bCxy5A
Cool and all… But why did you have to post this over and over in a gajillion subs?
The answer to “is it part of the ActivityPub spec?” is more often than not a strong No.
That’s the fun part about being in a place where you can hold a discussion. Some people don’t agree with you, but they can still see the benefits of the option you are talking about or even agree that they are a great solution for now.
but by that logic, I could take 90% of your paycheck and say, “yes well you still made 10% which is way better than nothing!”
That’s already happening :)
I know this is a joke cause it doesn’t have a color, but it’s “Guau, guau” too.
Source: I am Chilean.
At least what I see with this experiment/article is that it is overly verbose; he takes a long time to get to the point. And when he does, his methodology describes an experiment that cannot be verified. Even when something is “subjective,” we can still draw conclusions from it if we set up proper non-subjective ways of evaluating the results we see (i.e. rubrics). The fact that he never explains in detail what leads him to label a result “terrible/v. bad/bad/good” is a massive red flag in his method.
After seeing that, I didn’t read the rest of it. Any conclusions drawn from a flawed methodology are inherently fallacious or hearsay.
If this is explained further later in the article in a way that refutes what I’ve postulated, then I’d have to say the article is poorly written.
All this to say… I agree with you, not worth the read.
What? Ballmer hasn’t had anything to do with msft since 2014 man.
Software engineer here, but not an LLM expert. I want to address one of the questions you had there.
Why doesn’t it occasionally respond with a hundred-thousand-word response? Many of the texts it’s trained on are longer than its usual responses.
An LLM like ChatGPT does some rudimentary level of pattern matching when it analyzes training data, and that’s why it won’t generate a giant blurb of text unless you ask it to.
Let’s say, for example, one of its training inputs is a transcription of a conversation. That will be tagged “conversation” by a person. It will then see that tag while analyzing hundreds of input texts that are conversations. Finally, the training algorithm writes down that “conversation” texts have responses of 1-2 sentences with x% likelihood, because that’s what the transcripts did. Now if another of the training sets is “best selling novels,” it’ll store that “best selling novels” have responses that are very long.
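A toy sketch of the statistic I’m describing (this is not how an LLM actually stores its training data, and the tags and numbers here are made up purely for illustration):

```python
from collections import defaultdict

# Hypothetical tagged training snippets: (tag, response length in words).
# Real training learns this implicitly in the model weights; this just
# illustrates "conversations have short responses, novels have long ones".
samples = [
    ("conversation", 12), ("conversation", 8), ("conversation", 15),
    ("best_selling_novel", 90000), ("best_selling_novel", 120000),
]

def average_length_by_tag(samples):
    totals = defaultdict(lambda: [0, 0])  # tag -> [sum of lengths, count]
    for tag, length in samples:
        totals[tag][0] += length
        totals[tag][1] += 1
    return {tag: s / c for tag, (s, c) in totals.items()}

print(average_length_by_tag(samples))
```

Run that and the “conversation” average comes out tiny next to the “novel” average, which is the bias I mean.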
ChatGPT will probably insert a couple of tokens before your question to help it figure out how it’s supposed to respond, something like: “respond to the user as if you are in a casual conversation”
This will make the model more likely to output small answers rather than giving you a giant wall of text. However, it is still possible for the model to respond with a giant wall of text if you ask for something that contradicts the original instructions (hence why jailbreaking models is possible).
So your logic is that since we already have some toxicity, we should just go ahead and make it more toxic?