Now, with the help of AI, it’s even easier to waste the time of open source developers by creating fake security vulnerability reports.

  • boredsquirrel@slrpnk.net · 3 days ago

    Thank you for your feedback. You’re absolutely right that…

    LOL this is 100% ChatGPT after you screamed at it that it is talking garbage

    Thank you for pointing this out, and I appreciate the opportunity to review the tests with better consideration.

    Go to hell ChatGPT

  • Badabinski@kbin.earth · 3 days ago

    Man, why would you do this type of shit with a username that’s easily linked back to your real name and business ventures? I found this person’s GitHub profile, LinkedIn page, current employer, and a link to some sort of startup business page just by doing a simple search for their very public username: https://webug.xyz/

    Several people over at Hackernews have posted this same info because security people are curious. It’s just baffling to me. If you’re going to be a scumbag, you should at least try to distance yourself from it.

    (also, wtf is that page of AI slop even trying to say? What the fuck is any of that for?)

    • Kissaki@programming.dev · 20 hours ago

      If it were a successful report they’d want the attribution, recognition, and publicity.

      They didn’t see the harm they were doing. I wonder if they see it now. Given their response, I doubt it.

    • 0x0@programming.dev · 2 days ago

      If you’re going to be a scumbag, you should at least try to distance yourself from it.

      Guess you’d have to be a smart scumbag too…

  • Jayjader@jlai.lu · 3 days ago

    Another day, another person using an LLM/“AI” to waste the curl project maintainers’ time…

    • Jayjader@jlai.lu · 3 days ago

      The most infuriating part of the exchange, for me, is that their initial response to the maintainers calling it “slop” was to act hurt and betrayed, and to threaten to spread negative press about the project.

  • FizzyOrange@programming.dev · 3 days ago (edited)

    Pretty disappointing that some people think this is acceptable behaviour.

    At least it’s still very obviously “AI slop”, as they put it. If ChatGPT ever stops its distinctive patronising waffle, it’s going to be much more annoying to filter out.

  • Gamma@beehaw.org · 3 days ago

    And I thought the reports by self-taught vuln hunters were bad 😆 Now we don’t even have them thinking for themselves.

  • Mikina@programming.dev · 2 days ago

    What’s the state of LLM detection algorithms? Is there anything with a decent success rate and an OK-ish amount of false positives? Is there even a FOSS solution for detecting ChatGPT output? It would make for a great tool to have; I’m getting tired of this.

    • cynar@lemmy.world · 2 days ago

      Unfortunately, the methods for detecting AI-generated text and for training AI text generators are basically identical. Any reliable method of detecting AI output can therefore be used to improve the generator.

      You can, at least, detect low-grade attempts to use it. The default output has distinctive patterns, and those can be detected. The problem is twofold. First, some people naturally write in the same style (the LLM is copying an amalgam, and they happen to write close to it). Second, it’s fairly trivial to ask the LLM to change its writing style.

      No matter your method, you need to accept a high rate of both false positives and negatives.
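
      To make that concrete, here’s a minimal sketch of the kind of low-grade, pattern-based check I mean (the phrase list and threshold are invented for illustration, not taken from any real detector):

      ```python
      # Toy heuristic, not a real detector: count stock ChatGPT-style
      # phrases and flag text that uses suspiciously many of them.
      # The phrase list and threshold are made up for illustration;
      # expect plenty of false positives and negatives, as noted above.
      STOCK_PHRASES = [
          "thank you for your feedback",
          "you're absolutely right",
          "i appreciate the opportunity",
          "i apologize for the confusion",
          "as an ai language model",
      ]

      def looks_like_default_llm_output(text: str, threshold: int = 2) -> bool:
          lowered = text.lower()
          hits = sum(1 for phrase in STOCK_PHRASES if phrase in lowered)
          return hits >= threshold

      # Example: the report quoted above trips two phrases and gets flagged.
      print(looks_like_default_llm_output(
          "Thank you for your feedback. You're absolutely right that..."))  # True
      ```

      Anyone writing slop with slightly more effort just asks the model for a different tone, and a check like this stops working.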

  • bluGill@fedia.io · 3 days ago

    Isn’t this a bug in ChatGPT? Someone needs to file a high-priority bug report. I wonder if they can be sued for their tool being used for abuse. Gun makers are fighting hard to avoid that kind of liability; if you disagree with them, then you should also be demanding that ChatGPT’s makers be held responsible for how their tool is used.