Handling common wrong approaches and misguided motivations for basic technique questions
Background
This is inspired to some extent by https://software.codidact.com/posts/289597.
I'm trying to provide a large amount of content (gradually!) that novices (mainly to Python) will find useful. The goal of the actual content is to demonstrate standard techniques and clear up common misconceptions; the primary goal of having the Q&A is to anticipate future lower-quality questions (people who actually have these problems are unlikely to be able to explain them clearly, or with correct terminology, or with proper scoping).
In my view, the overwhelming majority of useful questions for a technical Q&A site fall into two categories: "Why (does this attempt have an undesired result)?" and "How (do I accomplish some elementary, well-defined task)?". This meta post is about the latter category, and I have two main issues I want to settle.
Why/how coupling
In many cases, there is an "obvious" but wrong answer (or a few such answers) for a "why" question. "In the wild", this often presents as someone explicitly stating a "why" question, but then framing and phrasing everything else as seeking debugging help.
From prior experience, it gets really annoying to try to fix up those questions, or close them as unclear or unfocused, etc. when it's already obvious that "oh, it's THIS issue again". Often people refer to this as a "gotcha"; but from the meta perspective, I think it's more useful to think in terms of how a "why question" and "how question" are coupled.
Obviously we would like to be able to close such questions as duplicates. But how do we present the canonicals? It seems clear enough that a "how" canonical will always be useful, but after that I'm quite indecisive. Should we:
- Write a separate "why" question for each common non-working attempt, with an answer that explains why it doesn't work and then links to the corresponding "how" question? (I can see this causing problems if the reason it doesn't work is really just an example of some other common problem.)
- The same, but putting the question cross-link in a footnote to the question instead?
- Just have the one "how" question, and let anyone who answers decide about mentioning non-working attempts "in stride"? (If I'm posting them self-answered and this is the preferred approach, I would almost always decide in favour of inclusion.)
- Just have one question, and use a separate answer (or separate answers) to call out and explain common attempts that don't work (even if these aren't technically "answering" the "how" question)?
- Just have one question, and include examples of common non-working attempts in the question (just showing what happens, without trying to explain it)?
- Something else I haven't thought of?
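To make the coupling concrete, here is a hypothetical but representative Python pair (my own illustration, not one taken from any existing question): the "why" question "why does my loop skip items?" is tightly coupled to the "how" question "how do I remove matching items from a list?".

```python
# "Why" side: removing items while iterating over the same list
# shifts later elements left, so the loop's index skips one of them.
nums = [1, 2, 2, 3]
for n in nums:
    if n == 2:
        nums.remove(n)
print(nums)  # [1, 2, 3] -- one 2 survives

# "How" side (the canonical answer): build a new list instead.
nums = [1, 2, 2, 3]
filtered = [n for n in nums if n != 2]
print(filtered)  # [1, 3]
```

Anyone asking the "why" half almost certainly needs the "how" half, which is exactly the duplicate-target question this post is about.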
The XY problem
Experienced programmers should be familiar with this, but, for reference: https://xyproblem.info/. (I'll keep the convention that X is the "real problem" and Y is the approach taken by the person seeking help). Typically what happens is that someone fails at Y and usually asks a "why" question about Y, although a "how" question is certainly possible. This is different from why/how coupling in that even if the question about Y is resolved, it results in a sub-optimal (and possibly dangerous) solution to X.
On the one hand, the mindset of detecting XY problems really applies to help-desk environments more than Q&A environments. In theory, a question shouldn't have to justify being asked, as long as it is on topic and properly asked. If you never allow people to ask about Y because they're constantly suspected of actually caring about X, then you don't end up with a Y Q&A; and if you've excluded questions a priori then by definition the result cannot be comprehensive.
On the other hand, people with an XY problem often can't be expected to recognize that fact. Even as an expert, the transition from needing X done to debugging Y can be so seamless that one doesn't even have the perspective to realize it has happened. And the consequences can be dire - for example, if you only know that an "SQL query" is a string that will be passed from your program to a query engine, and that it needs to include certain pieces of information in order (and that some of those pieces come from the user), then focusing on the problem of creating the string can end up costing a real business huge amounts of money.
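To make that last example concrete, here is a minimal sketch (hypothetical table and data, using `sqlite3` purely for illustration) of how solving Y, building the string, leaves X unsolved, while parameterization addresses the real problem:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'a1'), ('bob', 'b2')")

user_input = "nobody' OR '1'='1"  # attacker-supplied "name"

# Y: assemble the query string by hand. The string is built exactly as
# asked, but the injected OR clause matches every row in the table.
unsafe = "SELECT * FROM users WHERE name = '%s'" % user_input
print(len(conn.execute(unsafe).fetchall()))  # 2 -- all rows leaked

# X: pass user data as a parameter; the driver treats it as a literal value.
safe = "SELECT * FROM users WHERE name = ?"
print(len(conn.execute(safe, (user_input,)).fetchall()))  # 0 -- no such name
```

A perfectly good answer to "how do I build this string?" would still leave the asker shipping the unsafe version.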
So, should we:
- Ignore the consequences as not being our problem?
- Write a separate answer to contain such important caveats and anticipate common Xs underlying the Y in the question?
- Add a caveat to a canonical answer, or separately to multiple answers?
- Leave it up to everyone who is answering to decide?
- Add a warning to the question instead, perhaps under a fold?
- Something else I haven't thought of?
Answer
I think this is important to consider because it doesn't only concern questions about bad practices or XY questions, but also whether we should allow questions with artificial requirements or questions about code obfuscation. Currently we have no rule for or against any of this.
As for whether we should call out dangerous practices or XY problems, this boils down to whether we strive to be an engineering site or a general programming site.
In engineering, we always seek to follow best practices and formal or informal industry standards. On an engineering site, an answer ought to be obliged to point out dangerous practices, obsolete functionality and similar.
Whereas a general programming site is more fuzzy and more tolerant to students/hobbyists building their own strange, non-recommended things. They could be doing so for learning purposes, or because of artificial school requirements, or even because they are doing something bad on purpose (obfuscation, code golf, illustrating vulnerabilities etc).
Another fundamental difference between engineering and general programming is that an engineer always questions whether the requirements make sense, and doesn't start on a task before the requirements are made clear. Whereas (bad) students, as well as other inexperienced programmers, never question the requirements (the common derogatory term for those who blindly follow senseless orders is "code monkeys").
If we peek at Stack Overflow, it is somewhere in between. SO has a tendency to close XY-problem questions as unclear and/or down-vote them. The requirements laid out in questions often go unquestioned, although answers pointing out flawed reasoning and the dangers of certain approaches are often well-received. Artificial homework assignments are allowed. Obfuscation is frowned upon and not well-received, even though there is no explicit rule against it.
To address your specific concerns and how I think Codidact should deal with them:
I think we should always prompt the poster for clarification in comments as a first step.
Answers that address the question without acknowledging obvious problems with it, and that use clearly dangerous or obsolete functionality, should be marked with our reaction feature as "dangerous" or "outdated".
Questions can also use the "language-lawyer" tag to show that the reason behind the question might not necessarily be practical use, but to understand how a programming language works in depth and which constructs are actually marked as safe/unsafe by standards etc.
> Write a separate answer to contain such important caveats and anticipate common Xs underlying the Y in the question?
>
> Add a caveat to a canonical answer, or separately to multiple answers?
It can be a separate answer, or written into an answer that at the same time gives the "X answer". Posting a code example explicitly labelled "bad" might be a good way to illustrate why a certain method isn't good.
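For instance (a hypothetical illustration of the labelling idea, not taken from any existing answer), such a snippet might look like:

```python
# BAD: binary floats cannot represent 0.10 exactly, so sums of
# currency amounts drift by tiny rounding errors.
total = 0.10 + 0.10 + 0.10
print(total == 0.30)  # False

# Better: decimal performs exact base-10 arithmetic, suitable for money.
from decimal import Decimal
total = Decimal("0.10") + Decimal("0.10") + Decimal("0.10")
print(total == Decimal("0.30"))  # True
```

Labelling the first half "BAD" inside the answer itself makes the contrast explicit without leaving the dangerous version as the last thing the reader sees.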