Policy requiring disclosure of standard development tools is impractical #2434
Comments
Would it be enough to remove the two instances of "but you should be clear about this"? Here is how it would read without them, and I think it still reads well and clarifies the intent (it's fine to use AI, but only if you also use your own intelligence).
I think the purpose of the "you should be clear about this" was, in the context of the question, to differentiate between acceptable and unacceptable usage of LLMs? If we want to keep that point, maybe we can add a sentence like:
I closed #2435 as a duplicate of this. Copying my comment here: My suggestion is to rewrite #2417 without referencing LLMs. The overall guidance boils down to "don't create garbage/useless PRs" and "if you do, we will block you". If a PR is high quality, then it's irrelevant whether it was created with GenAI help or not. We can expect most PRs to be done with AI assistance in the near future, so the guidance to "disclose that fact" serves no purpose.
Nowhere in this document do we state that we block or ban contributors who create "garbage/useless PRs". The "if you do" means that maintainers can close/hide that individual PR (or issue or other kind of contribution).
I agree with that, and I think the policy also acknowledges it multiple times; in particular, it states that "this policy does not prohibit the use of LLMs to assist".
There is no general guidance to disclose the fact that one is using such a tool. The guidance is contextual, since it is given as an answer to the question "how do I know the difference between allowed and disallowed usage of LLMs". To make this clearer, I proposed the change above to remove the "but you should be clear about this" language and add a sentence that suggests calling out the usage of an LLM if a contributor is unsure. This lets the maintainer and contributor have a transparent conversation, in which the maintainer can either let the contributor know that what they are doing is not helpful (and close the contribution on that basis), or acknowledge that the contributor is using the tool properly and that they have no issue with it.
I disagree with that. The guidance is "don't use LLMs to create contributions that mimic higher quality than you are able to produce yourself, because it is impolite towards other contributors", and it gives maintainers a document they can point to when they close a contribution assuming it is such a case. The difference between a non-AI-assisted garbage/useless PR and an AI-assisted garbage/useless PR is that the latter can be harder to recognize. The initial PR may look acceptable, so a maintainer engages in the review, and only during that process do they figure out that the submitter of the PR feeds their questions into an LLM and sends the maintainer back the answers. In that case a maintainer can explain in their own words why they feel disrespected, or they can point the contributor to the GenAI policy document and allow them to educate themselves.
They will not read it upfront, but a maintainer can point them to it if needed. Also, we have inexperienced maintainers looking for guidance on how to handle GenAI PRs; they can lean on such guidance. I regularly use the guidelines we have to point things out, instead of explaining them at length in a comment. That's the ROI for me.
The contributions that triggered that guideline were mostly from inexperienced contributors who tried to create a "quick win". So it is less about bad faith and more about inexperience, where some guidance can be helpful.
That's the point. If the maintainer reviews the PR and only during the review figures out that the contributor is not able to fix the issues on it, the maintainer has wasted a lot of time.
Sure, but it helps. The maintainer can write long comments on why and how LLMs should be used, or they can point to a guideline. Note that this is how I think about it and how I will be using it, and that's why I defend it. You suggested rewriting it so that it works without referencing LLMs/GenAI. If that is possible, I am OK with it as well. So I am happy to review your PR on that matter!
I personally think the guidance should not be about "don't create garbage/useless PRs" but rather "you should be able to engage in constructive conversation, justify your design decisions, and apply feedback given on a PR". This is how I understood the "but you should be clear about this" part of the proposed guidance: not as a disclosure of using LLMs on every PR raised (which I think is unreasonable), but as a need to be clear about usage of LLMs/GenAI if asked about it. I believe the mention of LLMs/GenAI is important, though, perhaps as an example of one of the cases in which a PR is raised without sufficient knowledge of the change proposed. The same would apply if someone opens a PR using someone else's code and then is not able to reason about it.
Just to be clear, the intention isn't that you affirmatively state that you're using GenAI; it's that you shouldn't dissemble about it (and that we reserve the right to ask). If that would be a useful clarification, I encourage PRs.
The new Generative AI policy states:
We should expect that most contributors are using tools based on generative AI in their pull requests and reviews. Therefore, as written, this policy requires that most PRs and most reviews within the project contain a disclosure of the obvious. Developer tooling is evolving quickly, so we should expect this requirement to become increasingly obtuse over time. I suggest we remove the disclosure language and instead assume that contributors are using modern tools.