
Midterm evaluation for "Accuracy review of Wikipedia"
Closed, ResolvedPublic10 Estimated Story Points


Midterm evaluations for GSoC are on. All students and mentors must submit their evaluation on or before 27 June 2016, 19:00 UTC. Please note that only one mentor needs to evaluate each student, not both.

If you have any concerns regarding your project/student which you do not wish to discuss on Phabricator, feel free to reach out to me or @01tonythomas directly.

Please complete the following in order to evaluate your student:

  • Mid-term goals as outlined in the project timeline are complete: Milestone 1 and 2 are complete.
  • MVP is completed and hosted on Labs/elsewhere: not yet; the student has a Labs account
  • Weekly reports are up-to-date and complete: yes, on Phabricator, the blog, and e.g. the schema documentation, although the current week's report is still forthcoming
  • The student is in regular touch with mentors: daily or better emails recently

Additional comments: where was the MVP requirement imposed? It does not appear to be on the schedule as submitted in the proposal. The delay in providing an MVP is not due to the student.

Evaluation: passing pending data persistence review

Event Timeline

Where was the MVP requirement imposed? It does not appear to be on the schedule as submitted in the proposal. The delay in providing an MVP is not due to the student.

This was from a standard template earlier, @Jsalsman, and it's mentioned somewhere in Life_of_a_successful_project too. Either way, it can change from project to project, even the Labs one. So if you think your student is on the right path, don't worry about the requirements set here.

PS: Please do not communicate about the review evaluation result to the candidate before the official declaration by Google.

@prnk28 you do have a labs account, right?

Yes I do.

@01tonythomas, we are DEFINITELY doing the MVP as follows:

You can do the MVP without roles, usernames, admins, reputations, and
only the following filesystem-based schema! I am very excited about
this. No logins, no IDs, nothing but pure blind review!

Five subdirectories:
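
A minimal sketch of the setup, with the five subdirectory names taken from the paths mentioned throughout this design (/asked/, /answered/, /tied/, /recommend/, /archive/); the `init_store` helper name is an assumption:

```python
import os
import tempfile

# The five subdirectories described below (names taken from the design):
# questions awaiting answers, answered questions, disputed (tied) answers,
# endorsed recommendations, and completed items.
DIRS = ["asked", "answered", "tied", "recommend", "archive"]

def init_store(root):
    """Create the review store's subdirectories if they do not exist."""
    for d in DIRS:
        os.makedirs(os.path.join(root, d), exist_ok=True)

root = tempfile.mkdtemp()
init_store(root)
```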

The files in the /asked/ subdirectory should be named with zero-padded
serial numbers (use format(N, '09') followed by 'q' for one billion
unique names starting with '000000000q') and contain the text of a
question. We will load these in from the WP:BACKLOG category you
selected, containing the HTML formatted description of the context of
the backlog category-containing template tag, with a <a href=... link
at the top to click through to the source article. But these files can
contain ANY HTML with ANY question in them. Proposed web app url
endpoint: /ask [POST] text=.... which takes input from a textarea form
field followed by a [submit] button. Anyone can load questions, but we
will make scripts to do it from the backlog category.
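
A sketch of the naming scheme and the effect of the /ask endpoint. The `ask` helper name and the next-free-serial allocation are assumptions; the spec fixes only the file naming:

```python
import os
import tempfile

def question_name(n):
    # format(n, '09') zero-pads to nine digits; the trailing 'q' marks
    # a question file, e.g. 000000000q.
    return format(n, "09") + "q"

def ask(root, html_text):
    # Effect of /ask [POST] text=...: write the submitted HTML to the
    # next free serial number in /asked/.
    asked = os.path.join(root, "asked")
    existing = [int(f[:9]) for f in os.listdir(asked) if f.endswith("q")]
    n = max(existing, default=-1) + 1
    name = question_name(n)
    with open(os.path.join(asked, name), "w") as f:
        f.write(html_text)
    return name

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "asked"))
first = ask(root, "<a href='https://en.wikipedia.org/'>source</a> Is this accurate?")
second = ask(root, "Another question.")
```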

Then another web app URL endpoint, /answer, will select the
file(s) with the same number from /asked/, /answered/, or /tied/ at
random. If the selection is from /asked/, there is just one file and
its HTML contents will be displayed followed by a textarea for an
answer. Two buttons will be displayed: [submit] and [skip]. If
[submit] is selected, the file is moved from /asked/ to /answered/,
and a new file with the same serial number followed by 'a' (e.g. 000000000a) will
be created with the contents of the text area. Pressing [skip] will
select a different random file (or the same one if there is only one,
or some message saying if there aren't any left.)
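
The [submit] file moves described above, as a sketch (the `submit_answer` helper name is an assumption):

```python
import os
import shutil
import tempfile

def submit_answer(root, qname, answer_text):
    # [submit] on a fresh question: move the -q file from /asked/ to
    # /answered/ and create a sibling file ending in 'a' with the answer.
    shutil.move(os.path.join(root, "asked", qname),
                os.path.join(root, "answered", qname))
    aname = qname[:-1] + "a"
    with open(os.path.join(root, "answered", aname), "w") as f:
        f.write(answer_text)
    return aname

root = tempfile.mkdtemp()
for d in ("asked", "answered"):
    os.makedirs(os.path.join(root, d))
with open(os.path.join(root, "asked", "000000000q"), "w") as f:
    f.write("question html")
aname = submit_answer(root, "000000000q", "my answer")
```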

If the /answer end point selected something from /answered/, then the
contents of the question file (e.g. 000000000q) will be shown followed
by a <hr> or some other very clear delimiting (columns? colors? that's
up to you!) and then presentation of the e.g. 000000000a file. Those
file contents will be followed by a textarea for comments, followed by
three buttons [agree], [disagree], and [skip], which acts as described
above. Pressing [agree] moves the 000000000q and 000000000a files to
/recommend/ along with a new file 000000000c with the comments, if
any, and selects a different random file. Pressing [disagree] moves
000000000q, 000000000a, and 000000000c to /tied/ and then selects a
different random file.
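
The [agree]/[disagree] moves can be sketched as follows (the `review` helper name is an assumption; the file moves follow the description above):

```python
import os
import shutil
import tempfile

def review(root, serial, agree, comments=""):
    # [agree] moves the -q and -a files to /recommend/ with a new -c
    # comments file; [disagree] moves them to /tied/ instead, also
    # writing the -c file there.
    dest = "recommend" if agree else "tied"
    for suffix in ("q", "a"):
        shutil.move(os.path.join(root, "answered", serial + suffix),
                    os.path.join(root, dest, serial + suffix))
    with open(os.path.join(root, dest, serial + "c"), "w") as f:
        f.write(comments)

root = tempfile.mkdtemp()
for d in ("answered", "recommend", "tied"):
    os.makedirs(os.path.join(root, d))
for suffix in ("q", "a"):
    open(os.path.join(root, "answered", "000000000" + suffix), "w").close()
review(root, "000000000", agree=False, comments="source disputes this")
```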

If the /answer endpoint selected something from tied, all three files
(-q, -a, and -c) are displayed delimited in that order, another
tie-comments textarea form field is shown followed by three buttons,
[endorse original answer], [endorse opposing comments], and [skip]
which, again, just selects another random set of file(s). Pressing
[endorse original answer] moves the three files to /recommend/ with
the tie-comments text in a fourth file named 000000000e. Pressing
[endorse opposing comments] moves the three files to /recommend/, but
the tie-comments go in a fourth file with a different name,
000000000o.
There will be a third web app endpoint called /recommend, used by
editors instead of reviewers, which randomly selects a set of files
with the same number from the /recommend/ subdirectory, displays all
of them in -q, -a, -c, and, if available, -e or -o, in that
order, and explains that the original answer was endorsed if there is
no -o file, or that the opposing -c comments were endorsed if there is
an -o file. There will be one more textarea form field called "diff"
with instructions to paste in the diff link for an edit implementing
the recommendation. Two buttons follow: [done] and [skip]. Done moves
all the files into /archive/ along with a new file 000000000d with the
diff in it.
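
The [done] step can be sketched as below (the `done` helper name is an assumption; the move-everything-plus-write-a-`-d`-file behavior follows the description above):

```python
import os
import shutil
import tempfile

def done(root, serial, diff_link):
    # [done] on /recommend: archive every file for this serial number and
    # record the pasted diff link in a new -d file.
    rec = os.path.join(root, "recommend")
    for fname in [f for f in os.listdir(rec) if f.startswith(serial)]:
        shutil.move(os.path.join(rec, fname),
                    os.path.join(root, "archive", fname))
    with open(os.path.join(root, "archive", serial + "d"), "w") as f:
        f.write(diff_link)

root = tempfile.mkdtemp()
for d in ("recommend", "archive"):
    os.makedirs(os.path.join(root, d))
for suffix in ("q", "a", "c"):
    open(os.path.join(root, "recommend", "000000000" + suffix), "w").close()
done(root, "000000000", "https://en.wikipedia.org/w/index.php?diff=123456789")
```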

Maybe a fourth endpoint called /log will display everything in all the
directories. Also keep a log of everything in a file with IP addresses
and timestamps of all the files involved, their length in bytes, and
everything else, every time someone presses a button including [skip].

@Jsalsman This design makes a lot of sense in vlermv! I will start coding this. I will get the review system completed before the password and username part that you mentioned in a later email.

Jsalsman set the point value for this task to 10.

@prnk28 please fork into your repos and get it running on labs, then complete the /inspect function.