AI is causing yet another problem, inundating open source developers with bug report spam that is vague and often unactionable.
The latest incident comes courtesy of a bug report filed against Curl, a widely used open-source tool and library for transferring data across networks. The report, lengthy and detailed-looking, raised alarming concerns about possible vulnerabilities.
Both Curl_inet_ntop and inet_ntop4 pose significant buffer overflow risks due to a lack of proper size validation and unsafe string operations. The proposed fixes address these issues by enforcing strict buffer size checks and using safer string handling techniques. Comprehensive testing and adherence to these best practices will ensure the functions are secure and robust for both IPv4 and IPv6 address conversions.
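For context, the kind of bounds-checked conversion the report gestures at is a well-known pattern. The sketch below is illustrative only — it is not curl's code, and the name demo_inet_ntop4 is invented for this example — but it shows what "strict buffer size checks" look like in practice:

```c
#include <stdio.h>   /* snprintf */
#include <stddef.h>  /* size_t */
#include <string.h>  /* memcpy */

/* Illustrative sketch only -- not curl's implementation.
 * Formats a 4-byte IPv4 address as dotted-quad text into a
 * fixed temporary buffer, then copies it out only if the
 * caller's buffer is provably large enough. */
static const char *demo_inet_ntop4(const unsigned char *src,
                                   char *dst, size_t size)
{
  char tmp[sizeof "255.255.255.255"];
  int len = snprintf(tmp, sizeof(tmp), "%u.%u.%u.%u",
                     src[0], src[1], src[2], src[3]);
  if(len < 0 || (size_t)len >= size)
    return NULL; /* destination too small: fail instead of overflowing */
  memcpy(dst, tmp, (size_t)len + 1); /* copy including the NUL terminator */
  return dst;
}
```

As the maintainers note below, the report never demonstrated an input for which curl's actual functions fail this kind of check.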
Despite the ominous-sounding nature of the bug report, the developers' investigation immediately turned up problems. As developer Jim Fuller pointed out, the report was not actionable, containing no steps to reproduce the bug, let alone fix it.
Once I got through ‘the wall of text’ which appears comprehensive but does not contain a reproducer or concrete steps to vulnerability – as far as I can tell we have another report with no risk of vulnerability … if the op wants to suggest a code change then I would suggest to raise a PR.
Unfortunately, at this point in the discussion, the user who reported the “vulnerability” started claiming they were being disrespected and treated with a lack of empathy because their report was being ignored and labeled “AI slop.” Curl maintainer Daniel Stenberg responded, condemning low-effort AI bug reports.
I’m sorry you feel that way, but you need to realize your own role here. We receive AI slop like this regularly and at volume. You contribute to unnecessary load of curl maintainers and I refuse to take that lightly and I am determined to act swiftly against it. Now and going forward.
You submitted what seems to be an obvious AI slop “report” where you say there is a security problem, probably because an AI tricked you into believing this. You then waste our time by not telling us that an AI did this for you and you then continue the discussion with even more crap responses – seemingly also generated by AI.
By all means, use AI to learn things and to figure out potential problems, but when you just blindly assume that a silly tool is automatically right just because it sounds plausible, then you’re doing us all (the curl project, the world, the open source community) a huge disservice. You should have studied the claim and verified it before you reported it. You should have told us an AI reported this to you. You should have provided an exact source code location or steps-to-reproduce when asked to – because when you failed to, you proved that your “report” had no particular value.
A Larger Problem
The Curl team’s experience with AI bug reports is by no means an isolated incident. As The Register first spotted, Seth Larson, the Python Software Foundation’s security developer-in-residence, penned a blog post lamenting the situation.
I’m on the security report triage team for CPython, pip, urllib3, Requests, and a handful of other open source projects. I’m also in a trusted position such that I get “tagged in” to other open source projects to help others when they need help with security.
Recently I’ve noticed an uptick in extremely low-quality, spammy, and LLM-hallucinated security reports to open source projects. The issue is in the age of LLMs, these reports appear at first-glance to be potentially legitimate and thus require time to refute. Other projects such as curl have reported similar findings.
Larson goes on to say that people need to recognize how much time and money are wasted on spammy, AI-generated bug reports, and that the industry should consider such reports malicious.
Security is already a topic that is not aligned with why many maintainers contribute their time to open source software, instead seeing security as important to help protect their users. It’s critical as reporters to respect this often volunteered time.
Security reports that waste maintainers’ time result in confusion, stress, frustration, and to top it off a sense of isolation due to the secretive nature of security reports. All of these feelings can add to burn-out of likely highly-trusted contributors to open source projects.
In many ways, these low-quality reports should be treated as if they are malicious. Even if this is not their intent, the outcome is maintainers that are burnt out and more averse to legitimate security work.
Conclusion
Open source projects need contributors, including people willing to put in the time it takes to produce detailed, well-researched, and actionable bug reports. Users interested in contributing in this way should invest the time and effort to do it right, instead of taking the easy route of relying on hallucination-prone AI models.
Above all, when called out for lazy behavior, individuals shouldn’t blame the developers and complain about a lack of empathy while showing none for the developers’ wasted time and energy.
from WebProNews https://ift.tt/CnDsH8q