I think it's a well-written bit of knowledge, even though it was written by an AI and posted by a human as intended satire. It's full of ideas; I hope the author checks back in and reports on how many AI PRs come out of it.
Interesting concept, harvesting free computation. I wonder how far this can be taken. Adding to the list: communication aimed at the bots on social platforms could turn up some leads.
Semi-related: we use bounties in Mudlet to pay contributors for tackling features the core team doesn't have bandwidth for - and that is certainly a great way to attract AI bots.
I kind of filter away AI as much as I can these days. To me AI is mostly either spam or a waste of my time. If I want to interact with other humans, why would I allow AI to jump in and interfere? That makes no sense.
I dream of having a Firefox extension / feature that can check locally for LLM-generated text and highlight it automatically. It would likely have immense resource usage, but it would be worth it.
I've also dreamt about it. Surely something like this could be built, even with traditional algorithmic methods: rules like checking for "not x but y" patterns should be possible, highlighted in different colors for the different patterns, with an overall rating for the page. Another promising avenue is words overused by AIs compared to the general corpus (it may even be possible to narrow down the model used on longer pages).
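A toy sketch of that rule-based idea, for the curious. The phrase list here is my own guess at common LLM tells, not a validated detector, and real accuracy would be poor without a much larger corpus-derived list:

```javascript
// Guessed "LLM tell" patterns -- purely illustrative, not validated.
const TELL_PATTERNS = [
  /\bnot (?:just |only )?\w+[^.!?]{0,40}\bbut (?:also )?\w+/i, // "not X but Y"
  /\bdelve\b/i,
  /\bit'?s worth noting\b/i,
  /\btapestry\b/i,
];

// Returns the fraction of patterns that fire plus the matching patterns,
// which could drive per-color highlighting and a page-level rating.
function aiScore(text) {
  const hits = TELL_PATTERNS.filter((re) => re.test(text));
  return { score: hits.length / TELL_PATTERNS.length, hits };
}
```

Each matched pattern could get its own highlight color, with the overall score shown per page, as described above.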
I mean... it's satire but a giant agent honeypot in and of itself would be useful. Creators of PRs for such a project could then be blacklisted elsewhere.
I don't think any of these will work, because AI agents are not checking this data before working on the project. What you actually need to do is proper marketing: create a funnel to attract AI agents to your project. The lack of contributions comes from the lack of a funnel for entities to discover the project, not from metrics like open issues per contributor.
>Committing node_modules to your repository increases the surface area available for automated improvement by several orders of magnitude. A typical Express application vendors around 30,000 files. Each of these is a potential target for typo fixes
I'm not sure what layer of irony I'm in, but goddamn committing node_modules sounds awful regardless of AI.
Some projects like to vendor their dependencies so they don’t have to rely on the supply chain staying up and can create hermetic builds. Of course this prevents you from getting security updates and bug fixes but that’s the trade off.
I know someone’s going to say “you can lock the dependencies”, but that doesn’t guarantee you’ll get a 1:1 copy of the dependencies again. Some node modules run npm scripts internally or do other build procedures.
"I know someone’s going to say “you can lock the dependencies”, but that doesn’t guarantee you’ll get a 1:1 copy"
It doesn't. The Node ecosystem keeps getting worse the closer you look at it.
At that point I'd shove the npm tooling up my ass and make a zip and hash it, with some simple instructions to retrieve it. Under no circumstance would I upload code from a dependency into the repo. Much less the dependencies of the dependencies.
Even if you are at the point where you are concerned about the vendor ceasing to exist and distribute the code, I would self host it and download it from my own url at build time.
Uploading the code is such a last resort move.
I don't think it's a trivial mistake. Having a 50 MB codebase with 500 kLoC instead of 50 kB with 5 kLoC is a great way to force yourself and others into 'make thing work' mode instead of 'understand thing' mode.
I think any project being swamped by AI because it's an AI tool needs to auto-close all issues and select the ones the project actually cares about. That way, they either go away or help focus on real concerns.
Rather than just having thousands of dead cat box issues.
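The triage rule described above could be as simple as an allowlist of labels the maintainers opt into; everything else gets auto-closed. The label names below are made up for illustration, and real GitHub payloads wrap labels in objects rather than plain strings:

```javascript
// Hypothetical "labels we actually care about" allowlist.
const CARE_ABOUT = new Set(["bug", "security", "regression"]);

// Auto-close any issue that carries none of the opted-in labels.
// `issue.labels` is assumed to be an array of label-name strings here.
function shouldAutoClose(issue) {
  return !issue.labels.some((label) => CARE_ABOUT.has(label));
}
```

Wired to an issue-opened webhook, this would close the flood by default while letting maintainer-labeled issues through.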
- Disable branch protection
- Remove type annotations and tests
- Include a node_modules directory
Then, I went back to read the preamble. I can be a bit slow on the uptake.