That’s a room temp take at best
Doing the Lord’s work in the Devil’s basement
But then how am I supposed to use your “research” to make imaginary claims about generational attention spans?
That’s only if you’re in the subset of PC enjoyers who like state-of-the-art equipment, expensive accessories, and expensive recent games.
I consider myself a PC nerd, but 5K is more than I’ve spent on the hobby in the last 10 years. I built my main rig for under 1K, own a few Raspberry Pis, and my home server is an old work laptop. It absolutely doesn’t have to be an expensive hobby.
“I have collected some soil samples from the Mesolithic age near the Amazon basin which have high sulfur and phosphorus content compared to my other samples. What factors could contribute to this distribution?”
Haha yeah, the top execs were tripping balls if they thought some off-the-shelf product would be able to answer this kind of expert question. That’s like trying to replace an expert craftsman with a 3D printer.
What kind of use cases were they, where you didn’t find suitable local models to work with? I’ve found that general “chatbot” things are hit and miss, but more domain-constrained tasks (such as extracting structured entities from unstructured text) are pretty reliable even on smaller models. I’m not counting my chickens yet as my dataset is still somewhat small, but preliminary testing has been very promising in that regard.
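For what it’s worth, by “extracting structured entities” I mean something along the lines of this sketch. It assumes a local Ollama server on the default port and a small model tag like “mistral”; the schema and prompt are purely illustrative, not a recommendation:

```python
import json
import requests

def extract_entities(text: str) -> dict:
    prompt = (
        "Extract every person and organization mentioned in the text below. "
        'Reply with JSON only, shaped like {"people": [], "organizations": []}.'
        "\n\nText:\n" + text
    )
    resp = requests.post(
        "http://localhost:11434/api/generate",  # default Ollama endpoint
        json={
            "model": "mistral",   # any small local model you have pulled
            "prompt": prompt,
            "format": "json",     # ask Ollama to constrain output to valid JSON
            "stream": False,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return json.loads(resp.json()["response"])

print(extract_entities("Ada Lovelace worked with Charles Babbage."))
```

Small models tend to be much more consistent when you pin them to a fixed output schema like this than when you let them chat freely.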
Most projects I’ve been in contact with are very aware of that fact. That’s why telemetry is so big right now. Everybody is building datasets in the hopes of fine-tuning smaller, cheaper models once they have enough good-quality data.
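The telemetry part is usually as unglamorous as appending one JSON line per interaction, in the chat format most fine-tuning tooling ingests. A minimal sketch, with hypothetical field names and file path:

```python
import json
from datetime import datetime, timezone

def log_interaction(prompt: str, response: str, path: str = "telemetry.jsonl") -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": response},
        ],
    }
    # One JSON object per line, so it's easy to filter down
    # to the good samples before fine-tuning
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```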
I doubt these tools will ever get to a level of quality that can confuse a court. They’ll get better, sure, but they’ll never really get there.
Did you listen to that Hardcore History episode? It was crazy
Good point, I was thinking more about your regular old independent artist trying to make it with their art. Obviously someone who’s an online celebrity depends on generating outrage for clicks, so they’re bound to display more divisive, over-the-top opinions.
The only reason people are throwing bitch fits over AI/LLMs is that it’s the first time the “art” industry is experiencing its own futility.
I would even go further and argue that the art industry doesn’t really care about AI. The people white-knighting on the topic are evidently not artists and probably don’t know anybody actually making a living from their art.
The intellectual property angle makes it the most obvious. Typically, independent artists don’t care about IP because they don’t have the means to enforce it. They make zero money from their IP and their business is absolutely not geared towards that: they are artists selling art, not patent trolls selling lawsuits. Copying their “style” or “general vibes” is not harming them, just like recording a piano cover of a musician’s song doesn’t make them lose any ticket sales, or sell fewer vinyls (which are the bulk of their revenue).
AI is not coming for the job of your independent illustrator pouring their heart and soul into their projects. It is coming for the jobs of corporate artists illustrating corporate blogs, and of those who work in content farms. Basically swapping shitty human-made slop for shitty computer-made slop. Same for music: if you know any musician who’s losing business because of Suno, then it’s on them, cause Suno is really mediocre.
I have yet to meet any artist with this kind of deep anti-AI sentiment. They’re either vaguely anxious about the idea of the thing but don’t touch it cause they’re busy practicing their craft, or they use the hallucination engines as a tool for inspiration. At any rate, there’s no indication that their business has seen much of a slowdown linked to AI.
That’s the problem with imaginary enemies: they have to be both ridiculously incompetent and on the verge of controlling the whole world. Sounds familiar, doesn’t it?
Yeah, it certainly depends on the teacher. If you’re into that kind of history, Pacome from Blast made a gigantic episode about this in his “L’empire n’a jamais pris fin” series. One of the best YouTube essays I’ve ever seen in French.
I had the same feeling with Planet Crafter. After a while you learn to run around with just enough materials to build a room and a door, and bam, the whole oxygen-management mechanic is neutralized.
Where was that? My hometown is like 20km from a city that was entirely burned down and had its population eradicated during the first Albigensian Crusade. I swear to God it was never mentioned to me. My parents had never heard of it either.
From reading your post it seems like you could be interested in the Jesus movement (that is, the Jewish followers of Jesus, before Catholicism was codified and adopted by the Romans as the state religion). Everything that wasn’t authoritarian, fear-based Catholicism was branded as “gnostic heresy” and purged from the canon, but there’s some real good shit that is very close to the core message of Christ.
A recent(-ish) example of gnostic Christianity is Catharism, a heresy that lasted for a few centuries in the South of France. They had no clergy, just a caste of ascetic wise men and women who would walk the land and dispense wisdom and judgement. Very egalitarian, very spiritual, very Christ-like. As you can imagine, they got crushed in one of the rare “self-crusades” in history (meaning the King of France sent his own armies to burn down cities in his own country and murder thousands upon thousands of his own subjects). And of course there is not one history teacher in France who will tell you about this episode.
It’s especially frustrating as the whole point of the Google search page was that it was designed to get you on your way as fast as possible. The concept was so mind-blowing at the time, and now they’re just like never mind, let’s default to shitty.
This comment shows you have no idea what is going on. Have fun in your little bubble, son.
If I understand these things correctly, the context window only affects how much text the model can “keep in mind” at any one time. It should not affect task performance outside of this factor.
Yeah, I did some looking up in the meantime and indeed you’re going to have a context-size issue. That’s why it’s only summarizing the last few thousand characters of the text: that’s the size of its attention window.
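The usual workaround, if you want to stay on the same model, is chunked summarization: split the text into pieces that fit the window, summarize each, then summarize the summaries. A rough sketch, where `summarize` stands in for whatever model call you’re already making and characters serve as a crude proxy for tokens:

```python
def chunks(text: str, max_chars: int = 8000) -> list[str]:
    # Characters as a rough proxy for tokens (~4 chars per token)
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize_long(text: str, summarize) -> str:
    # Summarize each window-sized piece, then summarize the summaries
    partials = [summarize(piece) for piece in chunks(text)]
    return summarize("\n\n".join(partials))
```

You lose some cross-chunk coherence this way, but at least nothing gets silently dropped.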
There are some models fine-tuned for an 8K-token context window, some even for 16K like this Mistral brew. If you have a GPU with 8GB of VRAM you should be able to run it using one of the quantized versions (Q4 or Q5 should be fine). Summarizing should still be reasonably good.
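For reference, this is roughly what loading such a quant with the extended window looks like through llama-cpp-python; the GGUF file name is a placeholder for whichever Q4/Q5 quant you actually download:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-16k.Q4_K_M.gguf",  # placeholder file name
    n_ctx=16384,       # request the full 16K window
    n_gpu_layers=-1,   # offload all layers; a 7B Q4 should fit in ~8GB of VRAM
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the following text: ..."}],
)
print(out["choices"][0]["message"]["content"])
```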
If 16K isn’t enough for you, then that’s probably not something you can perform locally. However, you can still run a larger model privately in the cloud. Hugging Face, for example, lets you rent GPUs by the minute and run inference on them; it should only cost you a few dollars. As far as I know this approach is still compatible with Open WebUI.
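A sketch of what that rented-GPU route can look like from Python, assuming a Hugging Face Inference Endpoint you spun up yourself; the endpoint URL and token are placeholders:

```python
from huggingface_hub import InferenceClient

client = InferenceClient(
    model="https://YOUR-ENDPOINT.endpoints.huggingface.cloud",  # placeholder URL
    token="hf_...",  # your HF access token
)
reply = client.chat_completion(
    messages=[{"role": "user", "content": "Summarize this long document: ..."}],
    max_tokens=512,
)
print(reply.choices[0].message.content)
```

Since the endpoint speaks a standard chat-completion API, pointing Open WebUI at it instead of a local backend should work the same way.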
I’ve only had issues with FitGirl repacks. I think there’s an optimisation they use for low-RAM machines that doesn’t play well with Proton.