I am actively testing this out. It’s hard to say at the moment. There’s a lot to figure out when deploying a model into a live environment, but I think there’s real value in using them for technical tasks - especially as models mature and improve over time.
At the moment, though, performance is closer to GPT-3.5 than GPT-4, but I wouldn’t be surprised if this is no longer the case within the next year or so.
Assuming everything from the papers translates to current platforms, yes! A rather significant one at that. Time will tell as people begin tinkering with this new approach in the near future.
Thanks for reading! I’m glad you enjoy the content. I find this tech beyond fascinating.
Who knows, over time you might even begin to pick up on some of the nuance you describe.
We’re all learning this together!
Thanks for sharing this!
Good bot, I will do that next time.
Come hang out with us at !fosai@lemmy.world
I run this show solo at the moment, but do my best to keep everyone informed. I have much more content on the horizon. Would love to have you if we have what you’re looking for.
For anyone unaware, this is probably one of the better short and sweet explanations of what HuggingFace is.
It is a hub for code repositories hosting AI-specific files and configurations, and it has become a core ecosystem for many artificial intelligence breakthroughs, platforms, and applications.
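If you want to see what that looks like in practice, here’s a minimal sketch using the `transformers` library to pull a model straight from the Hub. The model name is just an example; any text-generation model on the Hub works the same way.

```python
# Minimal sketch: downloading and running a model from the HuggingFace Hub.
# "gpt2" is just an example; swap in any text-generation model on the Hub.
from transformers import pipeline

# The first call fetches the model and tokenizer from the Hub, then caches them locally.
generator = pipeline("text-generation", model="gpt2")
print(generator("Open-source AI is", max_new_tokens=20)[0]["generated_text"])
```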
FWIW, it’s a new term I am trying to coin in FOSS communities (Free, Open-Source Software communities). It’s a spin-off of ‘FOSS’, but for AI.
There’s literally nothing wrong with FOSS as an acronym; I just wanted one more focused on AI tech to set the right expectations for everything shared in /c/FOSAI
I felt it was a term worth coining given the varied requirements and dependencies AI/LLMs tend to have compared to typical FOSS stacks. That differentiation matters for the semantics these conversations carry.
Big brain moment.
Ironically, I think using this technology to do exactly that is one of its greatest strengths…
GL, HF!
Lol, you had me in the first half, not gonna lie. Well done, you almost fooled me!
Glad you had some fun! gpt4all is by far the easiest to get going with imo.
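If anyone wants a starting point, here’s a rough sketch with the gpt4all Python bindings. The model filename is one example from the gpt4all catalog; exact names and options may differ between releases.

```python
# Rough sketch using the gpt4all Python bindings (pip install gpt4all).
# The model filename is one example from the catalog; names vary by release.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin")  # downloads the model on first run
response = model.generate("Explain what GGML is in one sentence.", max_tokens=64)
print(response)
```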
I suggest trying any of the GGML models if you haven’t already! They’re quantized to run well on plain CPUs, and right now they outperform almost every other format for local inference.
If you’re looking for more models, TheBloke and KoboldAI are doing a ton for the community in this regard. Eric Hartford, too. Although TheBloke is typically the one who converts these into more accessible formats for the masses.
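For example, one of TheBloke’s GGML conversions can be loaded with llama-cpp-python. The file path below is hypothetical, so point it at whichever GGML file you actually downloaded.

```python
# Rough sketch of local inference on a GGML model via llama-cpp-python
# (pip install llama-cpp-python). The model path below is hypothetical.
from llama_cpp import Llama

llm = Llama(model_path="./wizardlm-7b.ggmlv3.q4_0.bin")
output = llm("Q: What does FOSAI stand for? A:", max_tokens=48, stop=["Q:"])
print(output["choices"][0]["text"])
```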
Thank you! I appreciate the kind words. Please consider subscribing to /c/FOSAI if you want to stay in the loop with the latest and greatest news for AI.
This stuff is developing at breakneck speed. Very excited to see what the landscape will look like by the end of this year.
Absolutely! I’m having a blast launching /c/FOSAI over at Lemmy.world. I’ll do my best to consistently cross-post for everyone over here too!
I used to feel the same way until I found some very interesting performance results from 3B- and 7B-parameter models.
Granted, it wasn’t anything I’d deploy to production - but using the smaller models to prototype quick ideas is great before having to rent a GPU and spend time working with the bigger models.
Give a few models a try! You might be pleasantly surprised. There’s plenty to choose from too. You will get wildly different results depending on your use case and prompting approach.
Let us know if you end up finding one you like! I think it is only a matter of time before we’re running 40B+ parameter models at home (casually).