Father, Hacker (Information Security Professional), Open Source Software Developer, Inventor, and 3D printing enthusiast

  • 15 Posts
  • 578 Comments
Joined 2 years ago
Cake day: June 23rd, 2023


  • Looks funny with a green hose and a yellow nozzle but a lot of bidet tools are just a spray nozzle on a (usually white-ish) hose. The nozzle is more of the kitchen sink variety but it’s really not that different.

    The real problem with this setup is the hose and nozzle are under the seat! No reason for that… Just keep it off to the side.

    TL;DR: This setup will work fine. Maybe use a light touch on that handle though 😉



  • If you hired someone to copy Ghibli’s style, then fed that into an AI as training data, it would completely negate your entire argument.

    It is not illegal for an artist to copy someone else’s style. They can’t copy another artist’s work—that’s a derivative—but copying their style is perfectly legal. You can’t copyright a style.

    All of that is irrelevant, however. The argument is that training an AI with anything is—somehow—a violation of copyright. It is not. It is absolutely 100% not a violation of copyright to do that!

    Copyright is all about distribution rights. Anyone can download whatever TF they want and they’re not violating anyone’s copyright. It’s the entity that sent the person the copyrighted material that violated the law. Therefore, Meta, OpenAI, et al can host enormous libraries of copyrighted data in their data centers and use that to train their AI. It’s not illegal at all.

    When some AI model produces a work so similar to an original that anyone would recognize it (“yeah, that’s from Spirited Away”), then yes: they violated Ghibli’s copyright.

    If the model produces an image of some random person in the style of Studio Ghibli, that is not violating anyone’s copyright. It is neither illegal nor immoral. No one is deprived of anything in such a transaction.


  • “I think your understanding of generative AI is incorrect. It’s not just ‘logic and RNG’…”

    If it runs on a computer, it’s literally “just logic and RNG”. It’s all transistors, memory, and an RNG.

    The data used to train an AI model is copyrighted. Practically nothing created in the past 100 years exists without copyright; even public domain works were under copyright at some point.

    “if any of the training data is copyrighted, then attribution must be given, or at the very least permission to use this data must be given by the current copyright holder.”

    This is not correct. Every artist ever has been trained with copyrighted works, yet they don’t have to recite every single picture they’ve seen or book they’ve ever read whenever they produce something.
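    For what it’s worth, the “just logic and RNG” point above is literal: generating a token is deterministic arithmetic (a softmax over logits) plus a single random draw. Here’s a minimal sketch in Python; the logit values are made up for illustration and there’s no real model behind them:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, seed=None):
    """Deterministic logic (softmax) plus exactly one RNG draw."""
    rng = random.Random(seed)
    scaled = [x / temperature for x in logits]
    # Softmax: pure arithmetic, nothing mysterious.
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # The only nondeterminism in the whole pipeline: one random number.
    r = rng.random()
    cumulative = 0.0
    for token_id, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return token_id
    return len(probs) - 1

# Same seed in, same token out -- it's all arithmetic and an RNG.
print(sample_next_token([2.0, 0.5, -1.0], seed=42))
```

    Fix the seed and the output is fully reproducible, which is exactly how real inference stacks behave too.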







  • Riskable@programming.dev to AI@lemmy.ml · Let’s talk about the Ghibli drama
    7 upvotes · 8 downvotes · 28 days ago

    Generative AI is theft in the same way that cars stole the livelihoods away from farriers.

    Actually, it’s not quite that bad because it just makes existing jobs more efficient. “Big AI” thinks that it will keep evolving at the same pace as Moore’s Law but there’s currently no evidence to suggest that’s true.

    It’ll get faster, for sure, but that won’t make it better. I wouldn’t be surprised if everyone’s still complaining about AI hallucinating things 50 years from now. It’ll just be quicker and easier to redo the output when it does.

    Here are my realistic predictions, based on everything I’ve actually used and studied about AI (I follow it very closely):

    • Every photographer and everyone else who edits photos will be using AI like they currently use photo editing tools. It’ll become just another tool in the toolbox. I wouldn’t be surprised if GIMP and Krita add a whole menu just for AI actions. In fact, many professionals are already using AI every day.
    • 3D artists will also adopt AI to make their workflows faster (god knows they could use it! 3D modeling is tedious AF). AI will be used to rig up models and even to create starter models from 2D images (though it’ll be a long time before AI makes decent models that are suitable for rigging).
    • Animators will be able to work much more efficiently by training AIs with their characters and telling the AI to put those characters in whatever clothes or positions they want. Then they’ll use AI to animate the difference between those states. Character positions will become standardized prompts and new animators will have to learn the new “prompt lingo”.
    • Voice actors will use AI to take on more roles. Instead of being typecast into specific roles based on the sound of their voice(s) they’ll be able to change their voice however they want to fit the role.
    • Writers will use AI to improve their writing… A lot. There’s an unfathomable number of people who have great stories to tell but aren’t that great at writing. With LLMs they’ll be able to write out the draft of their story and use the AI to make the language flow better. It’ll also fix their idiotic spelling and grammar mistakes (which I find in ebooks all the fucking time, and it pisses me off! Paste your stuff into ChatGPT and tell it, “Please check the grammar!” It costs nothing but a few minutes of your time! Seriously: it’s a free service. Use it!).

    What do all of these things have in common? They’re not taking people’s jobs.

    It’s just like any automation that humans have adopted since the industrial revolution. Sure, a company may require fewer workers to perform a task, but at the same time that creates new jobs that didn’t exist before.

    It’s the natural evolution of work: As time goes on jobs become more specialized and old jobs go away. It’s been like that for a long time now.

    Is AI going to accelerate that trend? Yeah probably. But only in the short term. Long term, it will result in more jobs and more productivity.

    Aside: I’d like to point out that the rich getting richer is an orthogonal concept to productivity. That’s a function of government/economic systems. Not automation or scientific advancement.





  • If you studied loads of classic art then started making your own would that be a derivative work? Because that’s how AI works.

    The presence of watermarks in output images is just a side effect of the prompt and its similarity to training data. If you ask for a picture of an Olympic swimmer wearing a purple bathing suit, and it turns out that only a hundred or so images in the training data match that sort of image (and most of them included a watermark), you can end up with a kinda-sorta similar watermark in the output.

    It is absolutely 100% evidence that they used watermarked images in their training. Is that a problem, though? I wouldn’t think so since they’re not distributing those exact images. Just images that are “kinda sorta” similar.

    If you try to get an AI to output an image that matches someone else’s image nearly exactly… is that the fault of the AI or the end user, specifically asking for something that would violate another’s copyright (with a derivative work)?
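    To make the watermark side effect concrete, here’s a toy Python sketch (the “images” are made-up lists of pixel values, not a real model): a generator that regresses toward the average of the training samples matching a prompt will reproduce any feature most of those samples share, including a faded watermark.

```python
def average_images(images):
    """Pixel-wise mean of equally sized grayscale 'images' (lists of floats)."""
    n = len(images)
    return [sum(pixel) / n for pixel in zip(*images)]

# Four tiny 4-pixel "images"; the last pixel stands in for a watermark
# that appears in 3 of the 4 matching training samples.
training = [
    [0.2, 0.4, 0.6, 1.0],
    [0.3, 0.5, 0.5, 1.0],
    [0.1, 0.4, 0.7, 1.0],
    [0.2, 0.5, 0.6, 0.0],  # the one sample without a watermark
]

blended = average_images(training)
print(blended[3])  # 0.75: a faint, "kinda sorta" watermark survives the blend
```

    Real diffusion models are far more complicated than a pixel average, but the statistical pull toward features shared by most matching training samples works the same way.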




  • I wasn’t being pedantic. It’s a very fucking important distinction.

    If you want to say “unethical,” you say that. Law is an orthogonal concept to ethics, as anyone who’s studied the history of racism and sexism would understand.

    Furthermore, it’s not clear that what Meta did actually was unethical. Ethics is all about how human behavior impacts other humans (or other animals). If a behavior has a direct negative impact, it’s considered unethical. If it has no impact or a positive impact, it’s ethical.

    What impact did OpenAI, Meta, et al have when they downloaded these copyrighted works? They were not read by humans–they were read by machines.

    From an ethics standpoint that behavior is moot. It’s the ethical equivalent of trying to measure the environmental impact of a bit traveling across a wire. You can go deep down the rabbit hole and calculate the damage caused by mining copper and laying cables, but that’s largely a waste of time, because it loses the thread: the claim that copying a billion books/images/whatever into a machine somehow negatively impacts humans doesn’t hold up on its own.

    It is not the copying of this information that matters. It’s the impact of the technologies they’re creating with it!

    That’s why I think it’s very important to point out that copyright violation isn’t the problem in these threads. It’s a path that leads nowhere.



  • Yeah it’s probably just a client side issue but the OP mentioned Element, specifically 🤷

    I just wanted to point out that Element is no fun! No fun at all!

    It works, and it works great for what it does. Even voice and streaming are great with Element. It’s just got a terrible, no-fun interface and pointless limitations on things like looping videos. You can’t even configure it to make them play properly (as in, automatically and endlessly, the way they were meant to be played! 😤).

    Looping videos and animated emojis are super fun ways to chat with people. Even in professional settings! It really breaks up the humdrum and can motivate people to chat and share more.

    Element is all serious all the time and going into a chat channel there feels like a chore.