That there is no perfect defense. There is no protection. Being alive means being exposed; it’s the nature of life to be hazardous—it’s the stuff of living.

  • 31 Posts
  • 98 Comments
Joined 2 months ago
Cake day: June 9th, 2024


  • Thanks for the reply.

    I guess we’ll see what happens.

    I still find it difficult to get my head around how a decrease in novel training data will not eventually cause problems (even with techniques to work around this in the short term, which I am sure work well on a relative basis).

    A bit of an aside, but I also have zero trust in the people behind current LLMs, whether the leadership (e.g. Altman) or the rank and file. If it’s in their interest to downplay the scope and impact of model degeneracy, they will not hesitate to lie about it.


  • I’ve read the source Nature article (though I skimmed the parts that were beyond my understanding) and I did not get the same impression.

    I am aware that LLM service providers regularly use AI-generated text for additional training (from my understanding, this is done to “tune” the results to give a certain style). This is not a new development.

    From my limited understanding, model degeneracy is still relevant in the medium to long term. If an increasing share of your net new training content was originally LLM-generated (and you have difficulty identifying LLM-generated content), it would stand to reason that you would eventually encounter model degeneracy.

    I am not saying you’re wrong. Just looking for more information on this issue.
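    To make the reasoning above concrete, here is a toy sketch (not how any lab actually trains, and all the token names are made up) of one mechanism behind degeneracy: if each generation of a model is fit only to samples drawn from the previous generation, rare items that happen not to be sampled are gone for good, so the distribution's support can only shrink.

    ```python
    import random
    from collections import Counter

    def next_generation(counts, sample_size, rng):
        """Draw a finite sample from the current model and refit by counting."""
        tokens = list(counts)
        weights = [counts[t] for t in tokens]
        return Counter(rng.choices(tokens, weights=weights, k=sample_size))

    rng = random.Random(0)

    # Hypothetical "real" data: a few common tokens and a couple of rare ones.
    counts = Counter({"the": 500, "cat": 100, "sat": 100, "quark": 5, "zeugma": 2})
    support_history = [set(counts)]

    for _ in range(20):
        counts = next_generation(counts, sample_size=200, rng=rng)
        support_history.append(set(counts))

    # Vocabulary size per generation: it can never grow, and rare tokens
    # tend to vanish within a few generations.
    print([len(s) for s in support_history])
    ```

    Real pipelines mix in fresh human data and filter outputs, which slows this down, but the one-way loss of rare content is the core of the concern.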


  • Progress is definitely happening. One area that I am somewhat knowledgeable about is image/video upscaling. Neural-net-enhanced upscaling has been around for a while, but we are increasingly getting to a point where SD-to-HD upscaling (DVD sources, older videos from the 90s/2000s) works almost like it does in science fiction movies. There are still issues, of course, but the results are drastically better than simply scaling the source media by 2x.

    The framing of LLMs as some sort of techno-utopian “AI oracle” is indeed a damning reflection of our society. That said, I think this topic is outside the scope of current “AI” discussions and would likely involve a fundamental reform of our broader social, economic, political and educational models.

    Even the term “AI” (and its framing) is extremely misleading. There is no “artificial intelligence” involved in an LLM.