Should we allow answers generated by ChatGPT?
We got our first (mostly) ChatGPT-generated answer in our community. A question also includes adapted ChatGPT code that does not seem to do the job.
Stack Overflow has already banned ChatGPT answers, and I am wondering how we should proceed in this case.
From my perspective, we should also ban ChatGPT answers, because they are very likely to include subtle errors and to lack any citations (ChatGPT actually had the option to answer questions about its sources, but this was removed).
What do you think? Should we allow answers generated by ChatGPT?
9 answers
I want to let you know that today we (Codidact team) posted our default Gen-AI policy. As far as this community is concerned I think it's consistent with what you're already doing and nothing surprising, but I want to make sure folks are aware.
What we posted is not a deviation from what we were all already doing across the network, but we hadn't made it clear before and the question came up, so we wanted to articulate it. We based it on principles that were already part of our network's expectations:
- Presenting work as your own that you did not create is plagiarism.
- When using another's work, make it clear that it's not your own work (such as through quote formatting) and attribute it.
- Don't violate others' copyrights or licenses.
And we based it on one additional factor: the quality of generative-AI content, out of the box, is poor. It's the antithesis of the high-quality, peer-reviewed information we're all here for.
Most posts that use generative AI violate at least one of the existing policies. It's possible for a post to use gen-AI with disclosure and attribution, or for an author to use AI output as a starting point and then refine it so that the result is original or quotes appropriately and isn't just a dump of AI output. Each community on our network is free to decide whether this is ok. You're free to ban, restrict, or allow posts that include gen-AI output. We will support our communities and moderators whatever you decide.
This isn't, strictly speaking, an answer to your question; the Software Development community, not the Codidact team, gets to decide what this community's policy is. I just wanted to supply information that might be part of your deliberations, and I didn't want today's post to raise any concerns here. If anyone has any questions or concerns, don't hesitate to let me know. Thanks.
After a couple more months of experience with this bot, I would say that we should ban it simply because:
The answers it gives are often wrong.
ChatGPT has been hyped up ridiculously. It is not that good, it is not that smart, it cannot be trusted to give correct answers to complex topics such as programming. It cannot write decent code. It is just guessing.
Whenever someone tries to make it answer a complex topic, it starts to blurt out cocksure statements that are factually incorrect, mixing those with things that are correct. Basically it is producing "the best kind of lies with a little bit of truth mixed in".
0 comment threads
"Subtle errors" understates the problem. For example, I asked it about uses for std::equal_to, and it tragicomically gave me a code sample passing std::equal_to as the comparison predicate to std::sort.
No more citations?! I loved that feature!
I would be inclined to favor replies where someone took the ChatGPT output and ran it in a debugger before posting.
Another option would be to ask it to check its work. I tried asking "Does this code have a bug?" and pasting in the code sample it had provided, expecting that to be an obvious nonstarter - but it worked. ChatGPT explained that its code sample was wrong and explained exactly why.
One angle on this is that a person with a question can simply run ChatGPT if that's good enough, and that person will only have come here if there's a need for something better.
0 comment threads
The expectation for all posts is that a post is always understood to be the poster's own work; the poster has the copyright and responsibility for it and offers it to the site under the site's conditions regarding editing etc.
What we should ban, therefore, is posting verbatim ChatGPT responses that are marked as such. Such answers would violate the aforementioned expectations:
- Responsibility: Someone has to be responsible for the answer. If the answer is just verbatim ChatGPT output marked as such, the poster obviously does not take responsibility for the text.
- Editability: If the output is marked as verbatim ChatGPT output, this would be a barrier to editing it - you don't edit citations, do you?
Note that I did not mention the quality aspect as an argument: people can give bad/wrong answers even without help from ChatGPT :-)
What we should not ban is someone creating an answer with the support of ChatGPT, that is, using information from ChatGPT output: in general we cannot limit where posters get their information from - we would not even know if they don't tell us. They may also have information from other unreliable sources.
What if someone posts verbatim ChatGPT output without marking it as such? We may discourage or even ban it (I am not sure about possible copyright issues, for example), but it could be hard to prove anyway. However, again, the aforementioned expectation for all posts would simply apply - and then this is not a special case from Codidact's perspective.
The site should also not include ChatGPT-generated output automatically, for example by putting a section at the bottom of each page like "see what ChatGPT says about it". I don't like this idea for the following reasons:
- Users can simply do this themselves if they want to.
- The output is non-deterministic: asking the same question repeatedly may deliver a different answer each time. Which answer would we put on the site? Would it be stored, or would it be dynamically re-created?
- Alternatives to ChatGPT exist or will pop up. Should we add all of those over time? This would really clutter the pages from the user's perspective.
What may be OK is some box somewhere on our pages that directs users to other sources of information in general (that is, not just ChatGPT; ChatGPT may not even be part of this list), like "other places you can look for answers". It may sound a bit strange to redirect people this way, given that we are hoping for the community to grow, but such an approach may also have the effect of people using Codidact as a starting point.
0 comment threads
Now that we've had a few of these answers, I really don't like them.
It seems there are three separate problems with the ChatGPT answers we have seen:
- Quote-only. Just like we don't allow link-only answers, we shouldn't allow quote-only answers. Someone answering here needs to provide insight of their own. There can be value in finding a good reference to quote from, but that should come with at least some commentary on why the source is credible, how it was found, how it fits with personal experience or knowledge, etc.
  Anyone can look up a question on the internet and copy whatever answers pop up. That's not adding much value, and without vetting it can be of negative value. This is not what we want this site to be. We want answers based on at least some personal contribution.
  We want the kind of answers that others will quote.
- Not definitive. Quoting should be done from sources that are reputable, vetted, and for which there is reasonable cause to consider the answer well-informed. ChatGPT is none of these. We don't know what data sources were drawn on, nor what inferences were made. We are not hearing the voice of experience or credible expertise.
  While it might be valid to use AI to find inferences you didn't think of, those inferences are only starting points for investigation, not ready-made answers. Quoting them as if they were the latter is likely more damaging to a store of knowledge than useful.
- Just plain lazy. If you're not going to put some personal effort, expertise, or report of personal experience into an answer, then we don't want you. Lazy answerers are not helpful in building a community of contributors. These are not the type of users we want to interact with here.
I therefore propose that quoting ChatGPT be banned, except possibly for small snippets with significant discussion added by the answerer.
0 comment threads
It is a relevant but not decisive point whether ChatGPT can produce citations to back up its output (and whether the citations exist and are on point).
I asked it for a citation today and got a sound one. Such things have to be manually checked. I've followed up in another case and found the citations irrelevant, and one doctor who challenged some output was given a journal reference which did not even exist.
0 comment threads
My opinion is that they should NOT be allowed if they are not checked by a human for correctness, or if they are 100% copy-pasted without a single thing changed. If the poster checks for correctness and changes it so it's not fully copy-pasted, I think it should be allowed just like any other answer.
0 comment threads
Good question, but solid ‘meh’ on the issue. I don't think a preemptive ban is warranted. It's not as if we're being flooded, and it's also not as if that one ChatGPT answer is worse than the median human answer here. I'd say let's upvote and downvote answers per usual without any particular restrictions on sources. (Although citing the source of an answer, as in all cases where the author of the post is not the source, should be required.)
0 comment threads
This may be a bit out of left field, but I don't see anybody else taking this approach.
Let's face it, ChatGPT and AI and all their quandaries are not going away. If we put a ban in, some clever clogs will work out a way to bypass it, just for the sake of it. If we allow ChatGPT open slather, the quality of answers becomes hopelessly variable.
I suggest we take the bull by the horns and put some smarts into the question-writing function that, once the question is completed, submits it to ChatGPT. Codidact then inserts ChatGPT's answer, and the community is then allowed to try to reword the question so the AI provides a better (more accurate, more precise, etc.) answer.
Heck you could even submit the question again to query if there's a better way to write it and get a positive feedback loop going with ChatGPT.
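Concretely, the submission step might look something like the rough sketch below. This assumes libcurl and the OpenAI Chat Completions HTTP API; the endpoint, model name, and payload are illustrative guesses, not a tested Codidact integration, and the raw JSON response would still need parsing and human review before anything gets posted.

```cpp
// Rough sketch only: forward a finished question to a chat-completion
// endpoint and return the raw response. Assumes libcurl and the OpenAI
// Chat Completions API; endpoint, model, and payload are illustrative.
#include <curl/curl.h>
#include <string>

static size_t collect(char* data, size_t size, size_t nmemb, void* out) {
    static_cast<std::string*>(out)->append(data, size * nmemb);
    return size * nmemb;
}

std::string ask_model(const std::string& question, const std::string& api_key) {
    std::string response;
    CURL* curl = curl_easy_init();
    if (!curl) return response;

    // Minimal JSON payload; a real integration would escape the question text.
    std::string body =
        R"({"model":"gpt-4o-mini","messages":[{"role":"user","content":")" +
        question + R"("}]})";

    curl_slist* headers = nullptr;
    headers = curl_slist_append(headers, "Content-Type: application/json");
    headers = curl_slist_append(headers, ("Authorization: Bearer " + api_key).c_str());

    curl_easy_setopt(curl, CURLOPT_URL, "https://api.openai.com/v1/chat/completions");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);
    curl_easy_perform(curl);

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    return response;  // raw JSON; the answer text is in choices[0].message.content
}
```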
I know nothing about licensing, so this may be expensive. At the same time, the competitive advantage of exploring this niche (before others jump on the bandwagon - it will happen) may make the expense worth it.
An alternate idea (prompted by a comment) might be to have an official 'ChatGPT' community member, and give mods, or some group created for the purpose, privileges to use that account to post a response (including the text of the question used) as given by ChatGPT.
My reasoning for this approach is that an official use of the AI tech will reduce the noise - whether from the multiple ChatGPT answers given if it's allowed, or the attempts to sneak AI-assisted answers through if it's not - by providing a single AI focus point for each question.
0 comment threads