Three years after a security researcher and his team proposed a new Linux Security Module (LSM) that has still not been accepted into the mainline kernel, he has raised the lack of review and action with Linus Torvalds and the kernel mailing lists. In particular, he is seeking clearer guidance on how new LSMs should be introduced and has raised the possibility of taking the issue to the Linux Foundation Technical Advisory Board (TAB).

A mailing list post today laid out that the proposed TSEM LSM, a framework for generic security modeling, has seen little review activity over the past three years and no specific guidance on what it would take to get the LSM accepted into the Linux kernel. The developers are therefore asking for documented guidance on how new Linux Security Module submissions should optimally be introduced, and are otherwise “prepared to pursue this through the [Technical Advisory Board] if necessary.”

  • rmt@programming.dev · 20 hours ago

    If the lowest paid intern gets to use AI, then it will probably help them configure it properly… the docs generally aren’t bad (of the ones I’ve seen/used), but they’re not newbie/intern level docs.

      • rmt@programming.dev · 16 hours ago

        “Trust but verify” … which just means doing due diligence as a professional, whether the crap^H^H^H^Hquality code and documentation is written by a human or an AI.

        Humans are incredibly good at saying dumb shit while making it seem like it could be the right thing, but LLMs are arguably better at it.

        And you, and I, and everyone here, will fall for it… not always, but too often. We are all lazy thinkers by nature.