Hand coding is a mechanism used by the Wikimedia Foundation to help analyse the impact that our software, and changes to it, have on the community. The intent is to provide a way for editors to use their judgement to evaluate the data: editors are far better at working out the impact changes will have than we are, because editors are the ones who will have to deal with those changes. Hand coding is currently used with the Article Feedback Tool, Version 5 project, but will probably be used elsewhere too. If you're interested in making the software we ask you to use better, sign up here.
What do you mean by "hand-coding"?
Although it may sound like it, volunteers will not be asked to do any computer programming. "Hand-coding", in qualitative research, refers to the process of evaluating and categorizing items by hand (and with human eyes), as opposed to using a computer or other machine to do the work. Hand-coding is often used when human judgement is necessary in order to perform a proper evaluation. In the case of feedback received via AFTv5, a computer algorithm is insufficient for determining the usefulness of comments, so volunteer Wikipedians will need to perform the evaluation manually. Therefore, the Feedback Evaluation System (described below) has been constructed to make the process of manually categorizing AFTv5 feedback quick and easy.
Feedback Evaluation System
FES is designed to quickly organize the information necessary for evaluating feedback into a coherent graphical user interface. The system breaks down into three major parts:
- Completed list: a list of feedback items that have already been evaluated (coded)
- Feedback form: a web form that captures the volunteer's evaluation
- Available list: a list of feedback items that still need evaluation
Feedback item lists
These two lists surround the Feedback form and represent the feedback items that a volunteer has been asked to evaluate. When the interface first loads, the available list on the bottom should be full and the completed list on the top should be empty. As a volunteer evaluates feedback using the interface, the feedback items will move from the available list to the completed list. Once the available list is empty, the assigned set has been completed.
The horizontal bars in the two feedback item lists each represent an individual feedback item that has been randomly selected for evaluation. The information included for each feedback item is:
- ID: the internal ID of the feedback item
- Title: the title of the page for which the feedback was submitted. This is the page that will be loaded into the feedback form's article pane when the feedback item is selected.
- Submitter: whether it was submitted anonymously (IP) or by someone with an account (user)
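The fields above can be pictured as a simple record. This is an illustrative sketch only; the field names here are assumptions, not FES's actual schema:

```python
from dataclasses import dataclass

@dataclass
class FeedbackItem:
    # Illustrative sketch of the fields shown in the item lists;
    # these names are hypothetical, not the real FES data model.
    item_id: int      # internal ID of the feedback item
    page_title: str   # page the feedback was submitted for
    anonymous: bool   # True if submitted by an IP, False if by an account

item = FeedbackItem(item_id=12345, page_title="Example article", anonymous=True)
```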
Any feedback item, whether completed or available, can be loaded into the feedback form by clicking on it. A completed feedback item can then be updated by changing the values in the feedback form and clicking "save".
Feedback form
The feedback form is the main mechanism of FES. It consists of two components:
- Article pane: a scrollable pane containing the content of the article at the time the feedback was posted
- Evaluation form: a header containing information about the feedback item (id & page title), the feedback itself, and buttons to evaluate the feedback.
When the FES interface first loads, the next available feedback item will automatically be loaded into the Feedback form. The revision of the article at which the feedback was submitted will be loaded into the article pane. When the user completes the form and saves the evaluation, the feedback item will automatically be moved from the available list to the completed list with an updated set of icons and the next available feedback item will be loaded into the form.
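The save step described above amounts to moving the current item from the available list to the completed list and loading the next one. A minimal sketch of that flow, using hypothetical names rather than FES's actual internals:

```python
# Hypothetical sketch of the FES save step: record an evaluation for the
# item currently in the form, then advance to the next available item.
available = ["item-1", "item-2", "item-3"]
completed = []

def save_evaluation(category):
    """Move the current item to the completed list and return the next item."""
    item = available.pop(0)            # the item currently loaded in the form
    completed.append((item, category))
    return available[0] if available else None  # next item to load, if any

next_item = save_evaluation("useful")
```

Once `available` is empty, `save_evaluation` returns `None`, which corresponds to the assigned set being completed.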
To complete an evaluation, read the feedback and select the appropriate category for it: "useful", "unuseable", "inappropriate" or "oversight". For an explanation of what falls into each category, hover the mouse over the icon to see its tooltip.
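Since the category is one of a fixed set of four values, a tool consuming evaluations could validate it before saving. A sketch, with the category strings taken from the list above (the validation function itself is illustrative, not part of FES):

```python
# The four evaluation categories named above; the check itself is hypothetical.
CATEGORIES = {"useful", "unuseable", "inappropriate", "oversight"}

def validate_category(category):
    """Reject any value that is not one of the four FES categories."""
    if category not in CATEGORIES:
        raise ValueError(f"unknown category: {category!r}")
    return category

validate_category("oversight")  # accepted
```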
Is this useful?
This question is a judgement call that Wikipedians are the most qualified individuals to answer. Volunteers should use their best judgement, but think broadly about the possible usefulness of the feedback to any editor. Only entirely useless feedback should be categorized as "no" (not useful). Feedback items where the evaluation is unclear can be marked as "unsure".
What's the intent?
Although Wikipedia requires us to assume good faith, Wikipedians tend to have a keen eye for the likely intentions of others. Volunteers should consider the feedback in the context of the article and try to capture the likely intentions of the user who left it. Multiple intentions can be selected, or none if none apply.
Sign up here to start using the tool! We will answer any questions you have and give you a run-through of the tool itself.