If you want a pretty cool example, Le morte d’Arthur was written in prison.
They’re definitely among the worst of the worst. It’s always surprised me how comparatively sterile their wiki page is. Feels like they’ve got someone cleaning it up.
w++ is a programming language now 🤡
You can make it as complicated as you want, of course.
Out of curiosity, what use-case did you find for it? I’m always interested to see how AI is actually applied in real settings.
Lazy is right. Spending fifty hours to automate a task that doesn’t take even five minutes is commonplace.
It takes laziness to new, artful heights.
True! Interfacing is also a lot of work, but I think that starts straying away from AI to “How do we interact with it.” And let’s be real, plugging into OAI’s or Anthropic’s API is not that hard.
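To illustrate how thin that integration layer really is, here's a minimal sketch of the JSON payload OpenAI's chat completions endpoint expects. The model name and prompt are placeholders, not anything from a real project:

```python
import json

# Hypothetical request body for OpenAI's chat completions endpoint.
# Model name and prompt content are placeholders.
payload = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Summarize this bug report."}],
}

body = json.dumps(payload)
# POST this to https://api.openai.com/v1/chat/completions with an
# "Authorization: Bearer <your key>" header, and that's most of the
# "interfacing" work right there.
print(body)
```

The hard part isn't the HTTP call; it's everything around it (prompting, UI, error handling).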
Does remind me of a very interesting implementation I saw once, though. A VRChat bot powered by GPT-3.5 with TTS that used sentiment classification to display the appropriate emotion for the text it generated. You could interact with it just by talking to it. Very cool. Also very uncanny, truth be told.
All that is still in the realm of “fucking around” though.
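The emote-picking part of that bot is conceptually simple. The actual pipeline isn't public, so here's just a toy word-list sketch of the idea: classify the generated reply's sentiment, then map it to an avatar expression:

```python
# Toy sentiment-to-emote sketch (not the bot's real classifier).
POSITIVE = {"great", "love", "happy", "cool", "awesome"}
NEGATIVE = {"hate", "sad", "awful", "terrible", "angry"}

def pick_emote(text: str) -> str:
    """Score a reply by keyword sentiment and pick an avatar expression."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "smile"
    if score < 0:
        return "frown"
    return "neutral"

print(pick_emote("I love this place, it's awesome"))  # smile
print(pick_emote("that was awful"))                   # frown
```

A real implementation would use a trained sentiment model rather than keyword lists, but the control flow (generate → classify → emote) is the same.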
That’s only the first stage. Once you get tired enough you start writing code that not even you can understand the next morning, but which you’re loath to change because “it just works”.
“The bug is fixed, but we inadvertently created two new ones, one of which broke production because it was inexplicably not caught.”
If you want to disabuse yourself of the notion that AI is close to replacing programmers for anything but the most mundane and trivial tasks, try having GPT-4 generate a novel implementation of moderate complexity and watch it import mystery libraries that do exactly what you want the code to do, but that don’t actually exist.
Yeah, you can do a lot without writing a single line of code. You can certainly interact with the models, because the people who can code have already done the legwork. But someone still has to do it.
It really is big. From baby’s first prompting on a big corpo model, learning how tokens work, to setting up your own environment to run models locally (because hey, not everyone knows how to use git), to soft prompting, to training your own weights.
Nobody is realistically building foundation models from scratch unless they work at Google or whatever, though.
I read it a long time ago. The format is interesting, certainly novel. I suppose that’s the selling point, over the prose.
To me it seemed like there were many competing “ways” to read it as well. Like a maze, you can take different paths. Do you read it front to back? Niggle through the citations? Thread back through the holes? It’s not often you get a book with this much re-read value.
The assertion that they cannot be cheap is funny, when Vicuna 13B was trained on all of $300.
Not $300,000. $300. And that gets you a model that’s near parity with ChatGPT.
Yes, but I would also hope that if you have the autonomy to install Linux, you also have the autonomy to look up an unknown command before running it with superuser privileges.
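It doesn't even require leaving the terminal. A quick sketch of the habit (using `ls` as a stand-in for whatever mystery command you were told to run):

```shell
# Before running an unfamiliar command with sudo, check what it actually is:
type -a ls        # alias, shell builtin, or a binary on PATH?
command -v ls     # where it resolves from
# ...and skim its man page (man <cmd>) before handing it root.
```

Thirty seconds of that beats pasting `sudo <whatever>` from a forum post.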