it sure seems like it though
i mean, they’ll never replace the system package manager, but for desktop applications, flatpak is honestly quite good
woe is me, i am le surprised
according to the github readme, you can just run sudo pro config set apt_news=false
to disable those
if you have things set up the way you like on xubuntu, it’s maybe worth it to just do that rather than start fresh
iirc, postgresql updates its process title (which shows up in htop) to reflect its current status and which database it’s operating on
the average person also isn’t as convincing as a bot we’re told is the peak of computer intelligence
there are tons of webrings still going these days!
well, i just tried it, and its answer is meh –
i asked it to transcribe “zenquistificationed” (made up word) in IPA, it gave me /ˌzɛŋˌkwɪstɪfɪˈkeɪʃənd/, which i agree with, that’s likely how a native english speaker would read that word.
i then asked it to transcribe that into japanese katakana, and it gave me “ゼンクィスティフィカションエッド” (zenkwisuthifikashon’eddo), which is not a great transcription at all - based on its earlier IPA transcription, カション (kashon’) should be ケーシュン (kēshun’), and the エッド (eddo) part at the end should just not be there imo, or be shortened to just ド (do)
it is absolutely capable of coming up with its own logical stuff
interesting, in my experience, it’s only been good at repeating things, and failing on unexpected inputs - it’s able to answer pretty accurately if a small number is even or odd, but not if it’s a large number, which indicates it’s not reasoning but parroting answers to me
do you have example prompts where it showed clear logical reasoning?
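for reference, parity is a size-independent check, which is part of why failing only on big numbers looks like parroting rather than reasoning - a quick python sketch of the point:

```python
def is_even(n: int) -> bool:
    # parity only depends on the lowest bit, so the size of the number is irrelevant
    return n % 2 == 0

print(is_even(7))                      # → False (small number)
print(is_even(123456789012345678901))  # → False (large number, same trivial check)
```

a system that actually reasoned about parity would apply the same one-step rule regardless of how many digits the number has.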
huh, i kinda assumed it was a term made up/taken by journalists mostly, are there actual research papers on this using that term?
because it’s a text generation machine…? i mean, i wouldn’t say i can prove it, but i don’t think anyone can prove it’s capable of thinking, much less of reasoning
like, it can string together a coherent sentence thanks to well crafted equations, sure, but i wouldn’t qualify that as “thinking”, though i guess the definition of “thinking” is debatable
New response just dropped
for it to “hallucinate” things, it would have to believe in what it’s saying. ai is unable to think - so it cannot hallucinate
A/B testing moment
you probably got a kernel panic, which froze the system. it’s like a BSOD on windows, except on linux, there isn’t a proper stack to handle them when they happen while you have a graphical session running, so it kinda just freezes
i don’t think reisub would do anything, because the kernel was probably already dead
you don’t risk corrupting much data by hard-resetting your pc on linux – journaling filesystems, like ext4 or btrfs, are built to be resilient to sudden power loss (or kernel crashes). if a program was writing a file at the time the kernel crashed, that one file may be corrupted, because the program would get killed before it finished writing, but all in all, it’s pretty unlikely. outside of fs bugs, which are thankfully few and far between on time-tested filesystems like ext4, you shouldn’t have to worry much about sudden power loss!
unfortunately, figuring out the cause of these issues can be challenging – i’ve had many such occurrences, and you have no logs to go off of (because the system doesn’t have time to save them), so you’d most likely need to figure out a way to send your kernel logs to another system to record them
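the usual way to do that is the kernel’s netconsole module, which streams kernel messages over UDP to another box as they happen, so they survive even a hard freeze - a rough sketch (the IPs, ports, interface name, and MAC address here are placeholders you’d replace with your own):

```shell
# sender (the crashing machine): load netconsole pointing at the receiver
# format: netconsole=<src-port>@<src-ip>/<dev>,<tgt-port>@<tgt-ip>/<tgt-mac>
sudo modprobe netconsole netconsole=6665@192.168.1.10/eth0,6666@192.168.1.20/aa:bb:cc:dd:ee:ff

# receiver (the other machine): listen for the UDP stream and keep a copy
nc -lu 6666 | tee kernel-crash.log
```

since this needs root and a second machine, it’s more of a “set it up and wait for the next crash” tool than something you can verify after the fact.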
as general mitigation steps, you should try monitoring your cpu temperature a bit more closely - it could be high temperature tripping the safeties of your motherboard/cpu to avoid physical damage - in which case, try installing a daemon to control your cpu frequency, like auto-cpufreq, or something like thermald that’s specifically made to throttle your cpu if it gets too hot (though i think that one is intel-specific)
my main question is: how much csam was fed into the model for training so that it could recreate more
i think it’d be worth investigating the training data used for the model
there seem to be qt qml bindings for Zig
qml is a language made to build UIs, and is very easy to use in my experience - you can build your logic that needs to be high-performance (file loading, audio effects, etc.) in zig, and expose it to qml so it’s available in the UI.
i’ve never used zig, but i did do a similar thing using c++ & qml, and it was great to work with, so i think you should be fine going that route
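as a sketch of what that split looks like from the QML side (the `fileLoader` object and its `load` method here are made-up names - the real API depends on how the zig bindings expose objects):

```qml
import QtQuick

Item {
    // `fileLoader` would be a native (zig/c++) object the host app registers
    // with the QML engine; the UI just calls into it
    Component.onCompleted: fileLoader.load("song.flac")
}
```

the nice part of this pattern is that the UI stays declarative while all the performance-sensitive work lives in native code.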
lmao. as if the ai was gonna have a better carbon footprint than the small plastic thing you replace every 5-10 years
i mean, the problem isn’t the portal in those cases, and i think the portal is a very cool idea – imo, the fact that these people get in the news for it is probably why they’re doing it, it’s just one way to get people’s attention by doing outrageous stuff around a new attraction
it’s nothing new, and eventually dies off, and there are probably also many events of people being nice to each other that go unreported
edit: also, yeah, showing body parts generally shouldn’t be judged that harshly imo - if you’re forcing people to look at them, they’re probably not pretty, but i wouldn’t call them “vulgar” either
i mean, it’s cool we have science to provide that to the people who want it imo
“AI” today mostly refers to LLMs, and whichever LLM you’re using, you’ll likely face the same issues (wrong answers creeping in, tending towards mediocrity in its answers, etc.) - those seem to be things you have to live with if you want to use LLMs. if you know you can’t deal with that, another rebrand won’t change anything