• 1 Post
  • 44 Comments
Joined 11 months ago
Cake day: December 11th, 2023






  • zerakith@lemmy.ml to Science Memes@mander.xyz · pringles · 3 months ago

    I’m pretty sure it’s real. I once met someone who worked in materials research for food, and they said modelling was big there because the scope for experimentation is more limited. In construction materials, if you want to change a property you can play around with new additives and see what happens. In food, though, you can’t add anything beyond a limited set of chemicals that already have approval from the various agencies*, so instead they try to fine-tune properties in other ways.

    So for chocolate, for example, they control lots of material properties through very careful control of temperature and pressure as it solidifies. This is why, if chocolate melts and resolidifies, you see white bits of milk solids that no longer remain within the material.

    *Okay, you can add a new chemical, but that means a timeframe of over a decade to then get approval. I think the number of chemicals that has happened for is very, very small, and that’s partly because the innovation framework of capitalism is very short-term.
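The tempering idea above can be sketched as a simple three-stage check. This is a minimal illustration only: the threshold temperatures are rough textbook approximations for dark chocolate (an assumption on my part), not figures from the comment.

```python
# Illustrative sketch of the classic three-stage chocolate temper.
# Threshold temperatures are rough textbook values for dark chocolate
# (an assumption), not figures from the comment above.

MELT_MIN_C = 45.0             # fully melt all cocoa butter crystal forms
COOL_MAX_C = 28.0             # cool enough to seed stable crystals
WORK_RANGE_C = (31.0, 32.5)   # reheat slightly to melt unstable forms

def looks_tempered(melt_c: float, cool_c: float, work_c: float) -> bool:
    """Return True if a melt/cool/rework profile matches a classic temper."""
    return (
        melt_c >= MELT_MIN_C
        and cool_c <= COOL_MAX_C
        and WORK_RANGE_C[0] <= work_c <= WORK_RANGE_C[1]
    )

# Chocolate that melts in the sun and simply resolidifies skips the
# cool/rework stages entirely, which is when the whitish bloom appears.
good = looks_tempered(48.0, 27.0, 31.5)   # deliberate temper
bad = looks_tempered(48.0, 34.0, 31.5)    # never cooled to seed crystals
```

The point of the sketch is only that the end product depends on the *path* through temperature, not just the ingredients, which is why modelling is attractive when the ingredient list is fixed.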




  • I won’t rehash the arguments around “AI” that others are best placed to make.

    My main issue is that “AI” as a term is basically a marketing one, used to convince people that these tools do something they don’t, and it’s causing real harm. It’s redirecting resources and attention onto a very narrow subset of tools, displacing other, less resource-intensive ones. These tools have significant impacts (during an existential crisis around our use and consumption of energy). There are some really good targeted uses of machine learning techniques, but they are being drowned out by a hype train determined to make the general public think we have, or are near, Data from Star Trek.

    Additionally, as others have said, the current state of “AI” has a very anti-FOSS ethos, with big firms using and misusing their monopolies to steal, borrow and co-opt data that isn’t theirs in order to build something that contains that data but falls under their copyright. Some of this data is intensely personal and sensitive, and the original intent behind sharing it was not to train a model that may, in certain circumstances, spit that data out verbatim.

    Lastly, since you use the term Luddite: it’s worth actually engaging with what that movement was about. Whilst it’s pitched now as a generic anti-technology backlash, it was in fact a movement of people who saw what the priorities and choices behind the new technology meant for them: the people who didn’t own the technology and would get worse living and working conditions as a result. As it turned out, they were almost exactly correct in their predictions. They are indeed worth thinking about as an allegory for the moment we find ourselves in. How do ordinary people want this technology to change our lives? Who do we want to control it? Given its implications for our climate needs, can we afford to use it now, and if so, for what purposes?

    Personally, I can’t wait for the hype train to pop (or maybe depart?) so we can get back to rational discussions about the best uses of machine learning (and computing in general) for the betterment of all rather than the enrichment of a few.



  • It’s not irrational to be concerned, for a number of reasons. Even when run locally and securely, AI image processing and LLMs add fairly significant processing costs to a simple task like this: higher hardware requirements for the browser, and higher energy use and therefore emissions (noting here that AI has blown Microsoft’s climate mitigation plan out of the water, even with some accounting tricks).

    Additionally, you have to think about the long-term changes in behaviour this will generate. A handy tool for when people forget to produce properly accessible documents quickly becomes the default way of making accessible documents. Consider two situations. In the first, the culture promotes, and enforces, content providers considering different types of consumer and how they will experience the content; providers know that unless they spend the 1% extra time making it accessible for all, certain people will be excluded. In the second, AI is pitched as an easy way not to think about people’s experiences: the AI will sort it. Those two situations imply very different outcomes. In one there is care and thought about difference and diversity; in the other there isn’t, and disabled people are an afterthought. The two scenarios also have massively different energy and emissions requirements, because one makes every user run AI to get some alt text rather than generating it once at the source.
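The energy point in that last sentence is just multiplication, and can be made concrete with back-of-envelope arithmetic. The joule figure per inference below is an invented placeholder, not a measurement:

```python
# Back-of-envelope comparison: generating alt text once at the source
# versus running a model in every reader's browser. The energy cost per
# inference is an invented placeholder, not a measured value.

def total_inference_energy(readers: int, joules_per_inference: float,
                           generated_at_source: bool) -> float:
    """Total energy spent producing alt text for one image."""
    runs = 1 if generated_at_source else readers
    return runs * joules_per_inference

at_source = total_inference_energy(100_000, 50.0, True)    # one run, ever
per_reader = total_inference_energy(100_000, 50.0, False)  # one run per reader
# The client-side cost scales linearly with the audience size; the
# at-source cost is constant regardless of how many people read the page.
```

Whatever the real per-inference figure is, the ratio between the two approaches is simply the number of readers.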

    Finally, it’s worth explaining a bit about alt text and how people use it, because it’s not just a text description of an image (which AI could indeed likely produce). Alt text should concisely summarise the salient aspects of the image that the author wants a reader to take away from it, and sometimes that message will be slightly different for alt text users. AI can’t do this, because it should be about the message the content creator wants to send and ensuring it’s accessible. As ever with these tech fixes for accessibility, the lived experience of people with those needs isn’t actually present: it’s an assumed need rather than what they are asking for.









  • It’s not about “satisfying the minorities”. It’s about ensuring a baseline of respect and behaviour towards people from all backgrounds. The roles you are talking about were specifically created because there was an active problem around that minority in that community that needed dealing with, so bringing in that lived experience is absolutely important. Someone can be adequate, sane, have a “proper” mindset and judgement, and also be from a minority that is currently being targeted, with lived experience of the problem. Dealing with issues around diversity and inclusion makes life easier and better for everyone: that’s well evidenced. I benefit daily from work that’s been done to make my area easier for people with disabilities, despite not having one myself. Those changes only came about through people with disabilities challenging things and getting into the room where decisions are made.

    It’s really not that hard! If you don’t feel minoritised in your daily life and therefore don’t see the importance, fine; but all of us are only one incident or cultural shift away from being the target. So if you aren’t motivated by the plight of people you are happy to “other”, then be motivated by the fact that one day you might be the other.


  • You say “remove discrimination” and then use a discriminatory strawman. No one is suggesting a code contribution must be accepted based on minority status. They are saying that to get a community that functions decently for everyone, you need a diverse range of people in the positions that set the community’s norms. You can’t get the CoC, and its enforcement, right unless those affected are in positions to influence it. Your enforced anonymity doesn’t work, because there are other ways of gendering and racialising people (e.g. based on whom they talk to). Additionally, what you are saying is that minoritised people have to hide who they are so they don’t get discriminated against, rather than simply dealing with those doing the discriminating. They are called communities because that’s what they are: people want to be part of something, and that involves sharing a part of themselves too. Open source projects live or die on their communities, because they mostly don’t have the finances to just pay people to do the work. You need people to believe in the project and not burn out.

    You lose nothing by making sure people from all backgrounds have the same opportunity to be part of it, and to enjoy it. If you aren’t in a minority and don’t care about those who are, then just say so!