• 0 Posts
  • 28 Comments
Joined 1 year ago
Cake day: November 7th, 2024


  • I have to disagree. The only reason computers expanded your mind is that you were curious about them, and that is still true with AI. For example, people don’t have to learn to solve derivatives or complex equations; Wolfram Alpha can do that for them. Learning grammar isn’t as important with spell-checkers, and instead of learning foreign languages you can just use automatic translators. Just like computers or the internet, AI makes things easier for people who don’t want to learn. But it also makes learning easier: instead of going through blog posts, you get the information summarized in one place (although possibly incorrect). And you can even ask the AI questions to better understand or debate a topic, instantly and without being ridiculed by other people for stupid questions.

    And just to annoy some people: I am a programmer, but I like theory much more than coding. So, for example, I refuse to memorize the whole numpy library. With AI, I don’t have to; it just recommends the right obscure function that does the same thing as my own ugly code. Of course I check the code and understand every line so I can do it myself next time.
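A hypothetical example of what I mean (the functions and data here are made up for illustration): a hand-rolled loop versus the vectorized one-liner an assistant would typically suggest, both computing the same per-row sums.

```python
import numpy as np

# Ugly-but-correct version: loop over rows, keep only values above
# a threshold, and sum them per row.
def row_sums_above_loop(a, thresh):
    out = []
    for row in a:
        s = 0.0
        for x in row:
            if x > thresh:
                s += x
        out.append(s)
    return np.array(out)

# The "weird function" version: mask with np.where and reduce along
# axis 1 in a single expression.
def row_sums_above_vec(a, thresh):
    return np.where(a > thresh, a, 0.0).sum(axis=1)

a = np.array([[1.0, 5.0, 2.0],
              [3.0, 0.5, 4.0]])
print(row_sums_above_loop(a, 1.5))  # [7. 7.]
print(row_sums_above_vec(a, 1.5))   # [7. 7.]
```

The point stands either way: once you read the vectorized version, you understand why it is equivalent and can write it yourself next time.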



  • I wouldn’t be so dramatic. Transferring an eSIM is only a few clicks: no searching for the little tool to open the SIM tray, no searching for the right hole to stick it into, no fear of losing the tiny SIM card during the process. I would say the physical transfer is the hard one, mainly for older people or people with bigger fingers. On the other hand, you still need the operator with its servers and proprietary code for the SIM to be useful (unless you are building your own network).


  • I would say that artificial neural nets try to mimic real neurons; they were inspired by them. But there are a lot of differences between them. I studied artificial intelligence, so my experience is mainly with artificial neurons. From my limited knowledge, real neural nets have no fixed structure (like layers), have binary inputs and outputs (when the activity on the inputs is large enough, the neuron fires a signal), and every day a bunch of neurons die, which leads to a restructuring of the network. Also, from what I remember, short-term memory is “saved” as cycling neural activity, and during sleep the information is stored in the neurons’ proteins and becomes long-term memory.

    Modern artificial networks (modern meaning the last 40 years), however, are usually organized into layers whose structure is fixed, and their inputs and outputs are real numbers. It’s true that context is needed for modern LLMs that use a decoder-only architecture (which is most of them). But the context itself can be viewed as a kind of memory during generation, since for each new token the network processes the whole context so far. There are also techniques like Low-Rank Adaptation (LoRA) that are used for quick and efficient fine-tuning of neural networks. I think these techniques are used to train specialized agents or to specialize a chatbot for a user. I even used this technique to fine-tune my own LLM from an existing one that I wouldn’t have been able to train otherwise due to GPU memory constraints.
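For anyone curious, the core LoRA trick is tiny: keep the pretrained weight frozen and add a trainable low-rank correction. A minimal numpy sketch with toy sizes (not a real training setup; names and dimensions are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 8, 8, 2   # layer size and LoRA rank (toy numbers)
alpha = 4.0                # LoRA scaling factor

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                # trainable, zero init

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A, but only the small
    # A and B matrices are updated during fine-tuning, so the memory
    # needed for gradients is far lower than for the full W.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B initialized to zero, the adapted layer starts out identical
# to the frozen pretrained layer.
assert np.allclose(lora_forward(x), W @ x)
```

With rank r much smaller than the layer width, the extra parameters are a small fraction of the original layer, which is exactly why it fits into limited GPU memory.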

    TLDR: I think the difference between real and artificial neural nets is too big for “memory” to mean the same thing in both.