• 0 Posts
  • 16 Comments
Joined 1 year ago
Cake day: June 25th, 2023

  • All fair points, and I don’t deny predictive text generation is at the core of what’s happening. I think it’s a fair statement that most people hear “predictive text” and think it’s like the suggested words in a text message, when it’s more than that.

    I also don’t think Turing Tests are particularly useful long term, because humans are so fallible. We, too, hallucinate all the time, holding convictions based on false memories. Getting an AI to show what seems like an emotional response, or uncertainty, or confusion in a Turing test is a great way to trick people.

    The algorithm is already a black box, as are the mechanics of our own intelligence. We have no idea where the ceiling is for this technology yet. This debate quickly turns into an ontological and epistemological discussion about what it means to be intelligent… if the AI’s predictive text generation is complex enough that you simply cannot tell a difference, then is there a meaningful difference? What if we are just insanely complex algorithms?

    I also don’t trust that what the market sees in AI products is indicative of the current limits. AGI isn’t here yet, but LLMs are a scary big step in that direction.

    Pragmatically, I will maintain that AI is a different form of intelligence because I think it shortcuts to better discussions around policy and how we want this tech in our lives. I would gladly welcome the news that tells me I’m wrong.



  • First, we don’t understand our own neurons well enough to model them.

    AI’s “neuron” or node is a math equation that takes numeric inputs, each with a variable “weight” that affects the output. An actual neuron is a cell with something like 6,000 synaptic connections each, and around 600 trillion synapses in total. How do you simulate that? I’d argue the magic of AI is how much more efficient it is comparatively, with only 176 billion parameters in GPT-4.
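
    The node described above really does reduce to a few lines of math. Here’s a toy single-node sketch of my own (a hypothetical illustration assuming a sigmoid activation, not code from any real framework):

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial 'neuron': a weighted sum of the inputs plus a
    bias term, squashed through a sigmoid activation into the range (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

# Two inputs, two learned weights, one bias -- three "parameters" total.
print(neuron([1.0, 0.5], [0.8, -0.2], 0.1))  # ~0.69
```

    Stack billions of these, each with its own weights, and you get parameter counts in the hundreds of billions; it’s still nothing like simulating 6,000 synapses per biological cell.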

    They’re two fundamentally different systems, and so is the resulting knowledge. AI doesn’t need to learn like a baby, because the model is the brain. The magic of our neurons is their plasticity and our ability to freely move around in this world and be creative. AI is just a model of what it’s been fed, so how do you get new ideas? But it seems that with LLMs, the more data and parameters, the more emergent abilities. So we just need to scale it up, and eventually we can raise that ceiling.

    AI does pretty amazing and bizarre things today that we don’t understand, and it already takes giant, expensive server farms to do it. AI is super compute-heavy and requires a ton of energy to run. So cost is rate-limiting the scale of AI.

    There are also issues around getting more data. Generative AI is already everywhere, and what good is it to train on its own shit? Also, how do you ethically or legally get that data? Does that data violate our right to privacy?

    Finally, I think AI actually possesses an intelligence with an ability to reason, like us. But it’s fundamentally a different form of intelligence.


  • Speaking as a designer, it’s important to separate the style/trend of a UI from its function. I think what you’re looking for is actually UX design.

    As a discipline, User Experience uses evidence-based research to understand how and why users behave the way they do. This leads to specific design patterns and principles that underlie all the good UI design you see from giants like Apple, Google, Microsoft, etc. It gives you the language to evaluate designs. This is the foundation of your UI; the rest is just style: fonts, colors, imagery, and icons, which are subjective and less important. I lost the ambition to be a trendy UI designer, so every design of mine looks the same, but the usability shines through. Clean, simple, and accessible is timeless.

    Study the articles from nngroup.com. They pretty much established the field of UX design, publishing work on user behavior as far back as the 1990s. https://lawsofux.com is a more attractive and consumable option, also heavily influenced by NN Group. Finally, accessible design is good design for all, not just those with disabilities. Understand the guidelines set by the W3C for accessibility, like minimum font sizes or contrast ratios for colors.
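
    Those W3C numbers are concrete enough to compute yourself. Here’s a minimal sketch of the WCAG 2.x contrast-ratio formula (my own illustration; AA conformance requires at least 4.5:1 for normal text):

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance for an sRGB color given as 0-255 channels."""
    def linearize(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Ratio of the lighter to the darker luminance, each offset by 0.05."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background is the maximum possible contrast, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0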




  • Your body and mind are just a bag of chemical soup undergoing a constant reaction. Your tangle of nerves and synapses feeds a mess of neurons wired in a circuit that gives you that spark of consciousness. But none of this is a fixed system, and your body goes through constant change. As one neural pathway dies, another is rewired, and the circuitry is now different.

    You can play the game of debating the Ship of Theseus, but who you “are” or “were” is just an illusion. Our memories are just the old circuits powering up, but even those change over time. Your memories are a false representation of the past because they only ever exist in the present and you’re at the mercy of your own perceptions.

    You “are” until you are not. So do what feels good: kiss your loved ones, hug a tree, and be kind to yourself and others while your bag of soup ain’t leaking.





  • Yep, I spent a month refactoring a few thousand lines of code using GPT-4, and I felt like I was working with the best senior developer, one with infinite patience and availability.

    I could vaguely describe what I was after, and it would identify the established programming patterns and provide examples based on all the code snippets I fed it. It was amazing and a little terrifying what an LLM is capable of. It didn’t write the code for me, but it increased my productivity twofold… I’m a developer getting rusty now, being 5 years into management rather than delivering functional code, so just having that copilot was invaluable.

    Then one day it just stopped. It lost all context for my project. I asked what it thought we were working on, and it replied with something to do with TCP relays instead of my little Lua pet project dealing with music sequencing and MIDI processing… not even close to the fucking ballpark’s overflow lot.

    It’s like my trusty senior developer got smashed in the head with a brick and, as others have described, would just give me nonsense, hand-wavy answers.




  • I work as a hired hand for Satan himself (not directly for insurance companies, but for a consulting company specializing in telling pharma companies how to market themselves to insurance companies), and the industry is a shit show across the board.

    The entire system is for-profit. Hospital systems make money by getting as much from insurance companies as possible. Pharmaceutical companies make money by selling drugs to insurance companies for as much money as they can. And the companies that hold the purse strings are the insurance companies of course (definitely the most evil).

    Why is everything so damn expensive? Well, a pharmaceutical company creates drug X, and in order to recoup years of R&D and the strict rigor of government regulation, they need to charge a LOT of money: high-skill, high-salary positions to develop the drugs, many years of clinical trials (most of which fail), government regulation, market strategy/assessment, etc. You can’t get to the finish line without dumping a shit ton of money into development. So yeah, new drugs cost a lot. This doesn’t excuse corporate greed or all the schemes to keep exclusivity on a drug in order to maintain a monopoly, which are rampant and make the situation worse.

    A new groundbreaking rare-cancer treatment comes out: $1 million per patient. All the diseases that are easily treated with a pill are already taken care of, so you’re left with rare disorders or ones requiring cutting-edge treatment, with a much, much smaller market. So they have to charge an extraordinary amount to be profitable. Pharma companies don’t price drugs based on who can’t afford them; they price them based on the well-insured patients who can. Insurance companies also can’t pay for everyone, so they come up with limits and preexisting conditions.

    Meanwhile, hospital systems, like any other corporation, seek profits by cutting costs, consolidating expenses (your treatment), and charging as much as they can for your care. What the public isn’t aware of is that when negotiating prices, hospital systems come up with an inflated menu of costs as a bargaining chip. So that MRI listed at $5,000 a pop gets negotiated down behind closed doors with insurance companies (in-network vs. out-of-network). But what price do you pay when you don’t have insurance, or your insurance company doesn’t want to pay? You guessed it!

    Insurance companies pay for the whole lot, and they too are out to make money: by NOT paying, so they can hold onto your insurance premiums.

    Oh, and let us not forget that manufacturers operate at a global scale, and all those countries with single-payer systems (this does include Medicare and Medicaid too) do the reasonable thing and tell everyone they will only pay a fair price based on Real World Evidence. Guess who picks up the slack? The USA! Greatest country in the fucking world, with a Congress paid for by every one of these corporations.



  • I’ve definitely seen GPT-4 become faster, and the output has been sanitized a bit. I still find it incredibly effective in helping with code reviews, whereas GPT-3 was never helpful in producing usable code snippets. At some point it stopped trying to write large swaths of code and became a little more prescriptive; you still need to actually implement the snippets it provides. But as a tool, it’s still fantastic. It’s like a sage senior developer you can rubber duck anytime you want.

    I probably fall in the minority of people who think releasing a castrated version of GPT is the ethical approach. People outside the technology bubble have no comprehension of how these models work or their capacity for harm. Disinformation, fake news, and engagement algorithms are already social ills that manipulate us emotionally, and most people are too technologically illiterate to see how pervasive these problems already are.