• 10 Posts
  • 1.06K Comments
Joined 3 years ago
cake
Cake day: June 11th, 2023

help-circle
  • Though when you do encounter problems in windows, they will likely be something that just cant be solved or its unnecessarily big hassle. Or bad stuff will happen that was mostly out of your hands in the first place, like what happened with the article.

    But yeah, there should be a distro that is specificially aimed for tech illiterate people that is nice and easy to use and also safe. But then again, if majority started using linux it might also draw more attention from malicious parties like criminals and corporations. Its just that for me, once you have linux set up and automated, you dont need to do anything complicated with it if you dont want to. At least i havent had to on mint. I understand that basic users can’t do that on their own, but I bet there are tons of people who have skills to do so and could do it for a reasonable pay or even as a favor to friend. People are willing to pay more for less, so why not.




  • no matter how difficult you think linux is to use, can it really be that difficult compared to windows? At least with linux any problems can be solved one way or another and with varying levels of effort. With windows, you just have to deal with the ever increasing levels of bullshit. And if you dont need to do anything complicated and use pc just for browsing, email and other simple stuff, why would you put yourself through using windows to do it? People should really consider if the mental models they have in their heads about different operating systems are actually based on reality.






  • i dont think anything in orbit or space is supportable without a planet. Or at least it would take so much effort and skill to pull it off.

    Or maybe the rich want to have kind of ultimate ivory tower -> they live in luxury in orbital habitats while we slave on the surface for them. Maybe they would want to get somekind of coercion method too, like nuclear arsenal in orbit they could use to threaten any part of the surface that might get too rebellious. At least i can imaging enslaving the entire planet would be something those psychos dream of.

    Well, its not that i think this is what they are planning right now, but it wouldnt put it past them.



  • only way it will be worth putting anything in space is by having a spaceport in there first and some reliable way to haul stuff from ground to it. At least way i see it, at the moment its like building a complex facility on an undiscovered continent with no support. But anything we put there shouldnt be privately owned anyway, or maybe that can be acceptable AFTER we have good and reliable infrastructure there which can deal with the bullshit that comes with privately owned stuff.







  • yeah, i think that is because it knows how research papers should look like and how references look like, but since it has no reasoning, it will just do whatever. I used gpt to diagnose my problem with internet getting cut off and it determined its because of drivers, which sounds reasonable. Then it suggested that i download the latest ones and it did link to correct website but it also tried to download stuff that doesnt exist. No idea how it determined the version numbers and such, maybe based on earlier patterns.

    But it isnt making stuff up, its just outputting the best data it can based on what it has been trained with and what it can find. Its not lazyness but just doing what its doing. Just like code that isnt doing what you want it to do isnt doing it out of malice but because there is a mistake in the code.



  • if the hallucinations are result of something actually happening in the background, that would be quite interesting. It would also be very bad for rest of us since it might mean the billionaires who own the damn things would be in position to get even worse deathgrip on our world. If they ever manage to create agi, the worst thing that could happen isnt that it breaks free and enslaves humanity but that it doesnt and it helps the billionaires enslave us further and make sure we cant ever even think about fighting back.

    But i think the hallucinations are based on incorrect information in the training data, they did train it from stuff from reddit too. Any and everything will be considered true, but if 99% of the data says one thing and 1% says another, then i think it will reference that 99% more often but it cant know that the 1% is wrong, can even real humans know it for certain? And since it cant evaluate anything, there might be situations where that 1% of data might be more relevant due to some nebulous mechanism on how it processes data.

    llms have been made to act extremely helpful and subservient, so if they actually could “think” wouldnt they factcheck themselves first before saying something? I have sometimes just asked “are you sure?” and the llm starts “profusely apologizing” for providing incorrect information or otherwise correcting itself.

    Though i wonder how it would answer if it truely had no initialization querys, as they have same hidden instructions on every query you make on how to “behave” and what not to say.


  • no, its incapable of making choices because there is nothing there to make the choices. Its just fancy way of interacting with the data it has been trained with. Though i suppose if there was a way to let llm function “live” instead of only by responding to queries, it could be possible to at least test if it could act on its own, but i dont think it can -> we would know by now because it would be step closer to agi, which is basically the holy grail for these kind of things. And equally possible to get, i think.

    You can literally make the llm say and do anything with right kind of query, this is also why its impossible to make them safe. Even though you can’t directly ask for something forbidden, with some creativity you can bybass the initializations the corpos have put in. Its not possible for them to account for every single thing and if they try they will run out of token space.

    The whole “ai” term is just corporations perpetuating a lie because it sounds impressive and thus makes people want to give them more money for their bullshit.