Thanks for bringing up the GitHub community code of conduct update. I had just posted on the GitHub community forums asking for clarification on how to report automated AI-generated posts and comments, having completely missed the CoC update announcement. It's still unclear to me whether they actually have an enforcement plan.
As far as I understand, the GitHub CoC covers interactions within all GH-hosted community spaces, including the Python repos. If users encounter suspicious use of AI-generated answers, would it be appropriate to report it to the Python moderators, in addition to filing a report with GH support?
A note on the topic of this discussion:
The focus is not so much LLM-generated content as the interaction with community spaces. If someone posts an unrevised ChatGPT-generated comment, the issue is not the tool they used to generate it. The issue is that they are most likely abusing the space to gain something (usually reputation, either for its social value or to enable spamming), and in doing so they claim the time and attention of human readers, which is ultimately what is being exploited here.
Reputation-based cultural niches, and the online tools that codify this concept of reputation in their rules, are especially vulnerable, because they assess reputability through proxy signals (competence, proper use of language, social recognition, etc.) that can be fabricated[1]. Abusive personalities have exploited this fact in communities for ages, sometimes with massively destructive consequences. It's an inherent vulnerability of reputation-based social groups and the incentives they set up.
LLMs make the fabrication of some reputability signals accessible to a much larger number of people. They are an object of interest because they enable abuse at a larger scale. The instances of abuse tend to be minor, because of the low quality and easy detectability of the output (though I expect that to evolve), but they are far more common. The destructive effect is already evident: as a human reader you need to stop and consider whether you may be interacting with a text generator that brings little additional value to the conversation, and that harms trust just as trolls do.
The tool is just a tool, but maybe what should be encouraged (even more) is authentic interaction, a non-judgmental mindset, and anything that can go beyond mere reputability. This is the long-term way to defeat the incentives for dysfunctional and abusive behavior. At the same time, I think it’s absolutely fair to see suspected use of generative AI tools as a signal to monitor. A user whose only interactions are ChatGPT responses is not contributing positively[2], and is wasting everyone’s time when the answers are not correct.
tl;dr: focus on the behavior, not on the tool.
[1] Not everybody (whether in good or bad faith) can produce those signals easily. Someone in good faith might struggle to fit in or be taken seriously because of their language use, reinforcing the predominance of people with a certain background. Someone in bad faith might fail to impress and win credit. Both might end up wishing for a magic tool to overcome the barrier.
[2] No matter how good their “prompting” skills are, or how accurate the generated text is, if I wanted an LLM-generated answer I would request one myself. I don't need to post on GH or SO or Discourse.