Welcome to Software Development on Codidact!

Will you help us build our independent community of developers helping developers? We're small and trying to grow. We welcome questions about all aspects of software development, from design to code to QA and more. Got questions? Got answers? Got code you'd like someone to review? Please join us.

Should we allow answers generated by ChatGPT?

Score: +17 / −0

We got our first (mostly) ChatGPT answer in our community. In addition, a question includes adapted ChatGPT code that does not seem to do the job.

Stack Overflow has already banned ChatGPT answers, and I am wondering how we should proceed in this case.

From my perspective, we should also ban ChatGPT answers, because its output is very likely to include subtle errors and lacks citations (ChatGPT previously had an option to provide sources, but it was removed).

What do you think? Should we allow answers generated by ChatGPT?

9 answers

Score: +5 / −0

The expectation for all posts is that a post is understood to be the poster's own work: the poster holds the copyright, is responsible for the content, and offers it to the site under the site's conditions regarding editing and so on.

What we should ban, therefore, is posting verbatim ChatGPT responses that are marked as such. Such answers violate the aforementioned expectations:

  • Responsibility: Someone has to be responsible for the answer. If the answer is just verbatim ChatGPT output marked as such, the poster obviously does not take responsibility for the text.

  • Editability: If the output is marked as verbatim ChatGPT output, that is a barrier to editing it: you don't edit citations, do you?

Note that I did not mention quality as an argument: people can give bad or wrong answers even without help from ChatGPT :-)

What we should not ban is someone creating an answer with the support of ChatGPT, that is, using information from ChatGPT output: in general, we cannot limit where posters get their information from; we would not even know unless they told us. They may also take information from other unreliable sources.

What if someone takes verbatim output from ChatGPT without marking it as such? We may discourage or even ban it (I am not sure about possible copyright issues, for example), but it would be hard to prove anyway. In that case the aforementioned expectation for all posts simply applies, so this is not a special case from Codidact's perspective.

The site should also not include ChatGPT-generated output automatically, for example by putting a section at the bottom of each page like "see what ChatGPT says about it". I don't like this idea for the following reasons:

  • Users can simply do this themselves if they want it.

  • The output is non-deterministic: asking the same question repeatedly may deliver a different answer each time. Which answer would we put on the site? Would it be stored, or dynamically re-created?

  • Alternatives to ChatGPT exist or will pop up. Should we add all of them over time? That would really clutter the pages from the user's perspective.

What may be OK is a box somewhere on our pages that directs users to other sources of information in general (not just ChatGPT; ChatGPT might not even be part of the list), like "other places you can look for answers". It may sound a bit strange to redirect people this way, given that we are hoping for the community to grow, but such an approach may also lead to people using Codidact as a starting point.


Score: +15 / −0

I want to let you know that today we (the Codidact team) posted our default Gen-AI policy. As far as this community is concerned, I think it's consistent with what you're already doing and nothing surprising, but I want to make sure folks are aware.

What we posted is not a deviation from what we were all already doing across the network, but we hadn't made it clear before and the question came up, so we wanted to articulate it. We based it on principles that were already part of our network's expectations:

  • Presenting work as your own that you did not create is plagiarism.

  • When using another's work, make it clear that it's not your own work (such as through quote formatting) and attribute it.

  • Don't violate others' copyrights or licenses.

And we based it on one additional factor: the quality of generative-AI content, out of the box, is poor. It's the antithesis of the high-quality, peer-reviewed information we're all here for.

Most posts that use generative AI violate at least one of the existing policies. It's possible for a post to use gen-AI with disclosure and attribution, or for an author to use AI output as a starting point and then refine it so that the result is original or quotes appropriately and isn't just a dump of AI output. Each community on our network is free to decide whether this is OK. You're free to ban, restrict, or allow posts that include gen-AI output. We will support our communities and moderators whatever you decide.

This isn't, strictly speaking, an answer to your question; the Software Development community, not the Codidact team, gets to decide what this community's policy is. I just wanted to supply information that might be part of your deliberations, and I didn't want today's post to raise any concerns here. If anyone has any questions or concerns, don't hesitate to let me know. Thanks.

Score: +14 / −0

"Subtle errors" understates the problem. For example, I asked it about uses for std::equal_to, and it tragicomically gave me a code sample passing std::equal_to as the comparison predicate to std::sort.

No more citations?! I loved that feature!

I would be inclined to favor replies where someone took the ChatGPT output and ran it in a debugger before posting.

Another option would be to ask it to check its own work. I tried what seemed like an obvious nonstarter: asking "Does this code have a bug?" and pasting in the code sample it had provided. I expected that not to work, but it did. ChatGPT explained that its code sample was wrong and exactly why.

One angle on this is that a person with a question can simply run ChatGPT themselves if that's good enough; such a person will only have come here because they need something better.


Score: +14 / −0

After a couple of months' more experience with this bot, I would say that we should ban it, simply because:

The answers it gives are often wrong.

ChatGPT has been hyped up ridiculously. It is not that good, it is not that smart, and it cannot be trusted to give correct answers on complex topics such as programming. It cannot write decent code. It is just guessing.

Whenever someone tries to make it answer a complex topic, it starts to blurt out cocksure statements that are factually incorrect, mixing those with things that are correct. Basically it is producing "the best kind of lies with a little bit of truth mixed in".


Score: +11 / −2

Now that we've had a few of these answers, I really don't like them.

It seems there are three separate problems with the ChatGPT answers we have seen:

  1. Quote-only. Just like we don't allow link-only answers, we shouldn't allow quote-only answers. Someone answering here needs to provide insight of their own. There can be value in finding a good reference to quote from, but that should come with at least some commentary of why the source is credible, how it was found, how that fits with personal experience or knowledge, etc.

    Anyone can look up a question on the internet and copy whatever answers pop up. That doesn't add much value, and without vetting it can even be negative value. This is not what we want this site to be. We want answers based on at least some personal contribution.

    We want the kind of answers that others will quote.

  2. Not definitive. Quoting should be done from sources that are reputable and vetted, where there is reasonable cause to consider the answer well-informed. ChatGPT is none of these things. We don't know what data sources were drawn on, nor what inferences were made. We are not hearing the voice of experience or of credible expertise.

    While it might be valid to use AI to find inferences you didn't think of, those inferences are only starting points for investigation, not ready-made answers. Quoting them as if they were the latter is likely more damaging to a store of knowledge than useful.

  3. Just plain lazy. If you're not going to put some personal effort, expertise, or report of personal experience into an answer, then we don't want you. Lazy answerers are not helpful in building a community of contributors. These are not the type of users we want to interact with here.

I therefore propose that quoting ChatGPT be banned, except possibly for small snippets with significant discussion added by the answerer.


Score: +2 / −0

It is a relevant but not decisive point whether ChatGPT can produce citations to back up its output (and whether the citations exist and are on point).

I asked it for a citation today and got a sound one, but such things have to be checked manually. In another case I followed up and found the citations irrelevant, and one doctor who challenged some output was given a journal reference that did not even exist.


Score: +5 / −4

My opinion is that such answers should NOT be allowed if they have not been checked by a human for correctness, or if they are 100% copy-pasted without a single thing changed. If the poster checks them for correctness and changes them so they are not fully copy-pasted, I think they should be allowed just like any other answer.


Score: +6 / −6

Good question, but I'm a solid "meh" on the issue. I don't think a preemptive ban is warranted. It's not as if we're being flooded, and it's also not as if that one ChatGPT answer is worse than the median human answer here. I'd say let's upvote and downvote answers as usual, without any particular restrictions on sources. (Although citing the source of an answer, as in all cases where the author of the post is not the source, should be required.)


Score: +0 / −11

This may be a bit out of left field, but I don't see anybody else taking this approach.

Let's face it: ChatGPT, AI, and all their quandaries are not going away. If we put a ban in place, some clever clogs will work out a way to bypass it, just for the sake of it. If we allow ChatGPT open slather, the quality of answers becomes hopelessly variable.

I suggest we take the bull by the horns and build some smarts into the question-writing function so that, once a question is completed, it is submitted to ChatGPT. Codidact then inserts ChatGPT's answer, and the community is allowed to try to reword the question so the AI provides a better (more accurate, more precise, etc.) answer.

Heck, you could even submit the question again to ask if there's a better way to write it, and get a positive feedback loop going with ChatGPT.

I know nothing about licensing, so this may be expensive. At the same time, the competitive advantage of exploring this niche (before others jump on the bandwagon - it will happen) may make the expense worth it.

An alternate idea (prompted from a comment) might be to have an official 'ChatGPT' community member, and give mods, or some group created for the purpose, privileges to use that account to write a response (including the text of the question used) as given by ChatGPT.

My reasoning for this approach is that an official use of the AI tech would reduce the noise - whether from the multiple ChatGPT answers given if it's allowed, or from the attempts to sneak AI-assisted answers through if it's not - by providing a single AI focal point for each question.
