User Details
- User Since
- Mon, Mar 29, 5:20 PM (2 w, 2 d)
- Availability
- Available
- LDAP User
- Unknown
- MediaWiki User
- Tru2198
Sun, Apr 11
@Mike_Peel, I am picking the topic Category: JudoInside_template_with_ID_not_in_Wikidata. I hope that won't be an issue!
Sat, Apr 10
@Mike_Peel, for the bonus task "Explore how to identify the correct item when multiple terms are returned": can we approach the problem by refining the QIDs returned from the title search, parsing each item for a specific property? For instance, if my title is "Harry Potter", the search returns the Harry Potter film, book, and character, but only the book will have properties such as "language of work", "author", or "publication date".
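If it helps, the filtering idea could be sketched like this. This is a minimal sketch on hypothetical pre-fetched data: in practice the candidate QIDs and their property sets would come from the Wikidata search API or pywikibot, and the values below are stand-ins.

```python
# Hypothetical pre-fetched search results: each candidate QID mapped to
# the set of property IDs its Wikidata item carries.
candidates = {
    "Q102438":  {"P31", "P577", "P136"},          # stand-in: a film item
    "Q8337":    {"P31", "P407", "P50", "P577"},   # stand-in: a book/series item
    "Q3244512": {"P31", "P1441"},                 # stand-in: a character item
}

def refine(candidates, required_props):
    """Keep only the QIDs whose items carry all of the required properties."""
    return [qid for qid, props in candidates.items()
            if required_props <= props]

# Looking for the written work: require "language of work or name" (P407)
# and "author" (P50), which the film and character items should lack.
print(refine(candidates, {"P407", "P50"}))
```

The same idea generalizes: pick one or two properties that only the intended kind of item can have, then keep the candidates that carry them all.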
Fri, Apr 9
Also, please review my task_2:
https://www.wikidata.org/w/index.php?title=User:Tru2198/Outreachy_2
Thu, Apr 8
“Print out the information alongside the property name (e.g., "P31 = human").”
Isn't this possible only in Wikidata, since Wikidata stores the information in the form of properties and QIDs?
So do we have to print from both Wikipedia and Wikidata?
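To make the question concrete, here is a minimal sketch of what I mean by printing on the Wikidata side. The claims and labels are hard-coded stand-ins for what a live lookup (e.g. via pywikibot's `ItemPage.get()`) would actually return:

```python
# Stand-in data: property -> value QID, plus English labels for both
# properties and items (normally fetched from Wikidata).
claims = {"P31": "Q5", "P106": "Q82955"}
labels = {"Q5": "human", "Q82955": "politician",
          "P31": "instance of", "P106": "occupation"}

lines = []
for prop, value_qid in claims.items():
    # e.g. "P31 (instance of) = human"
    lines.append(f"{prop} ({labels[prop]}) = {labels[value_qid]}")

print("\n".join(lines))
```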
Hi @MSGJ, I have completed task_3, which awaits your feedback!
Also, I think this tutorial is ideal for task_4, which involves adding information to Wikidata.
Wed, Apr 7
Hello @Mike_Peel, @MSGJ
I have completed Task_3, though I haven't used regex for the parsing.
@Shristi0gupta, wonderful programme! I wanted to know how you dealt with the parameter values that didn't have a label? And also with values without Q-numbers, like dates?
Tue, Apr 6
Although my task_1 was approved yesterday, I have added a few extra properties and an article that might deviate a bit.
@Mike_Peel, @MSGJ, I have completed task 2. I have mailed you.
Though here is the link:
https://public.paws.wmcloud.org/User:Tru2198/Outreachy2_synchronizing_pywikibot-Copy1.ipynb
Mon, Apr 5
@Mike_Peel Thank you for the heads up! Whatever happens with the intern selection, I am really enjoying the learning in this contribution!
Hello @Mike_Peel!
I have tried my best with Task_1 and look forward to your feedback. Kindly review it at your convenience.
Here is the link:
https://www.wikidata.org/wiki/User:Tru2198/Outreachy_1
I am having some confusion about the final tasks for Compare Reader Behavior across Languages. Finding an article that appears in both datasets and then comparing its values (its respective source and destination) across languages is not ideal and is time-consuming, as I have to resort to a trial-and-error approach. How have others moved forward with this task?
Thanks!
Fri, Apr 2
I have completed my first notebook's microtask, with the basic implementation of the required functions and analysis. However, there are many updates and enhancements, mentioned in the notebook, that I will act on in the upcoming days. Should I record my first contribution and await the review on the Outreachy site?
Thu, Apr 1
For the task Compare Reader Behavior across Languages: will a comparison between two languages suffice? Apart from English, most articles are supported in only one or two languages that intersect with both the available clickstream data and the langlinks API.
Wed, Mar 31
Hello, to address my query, I used this approach.
data_frame.groupby('Destination').nunique() >= 20 returns mostly False in the Source, Destination and Link columns. Also, I am stuck on combining the conditions of the TODO task: choose a destination article from the dataset that is:
"relatively popular (at least 1000 page views (where I am using: data_frame.groupby('Destination').sum() >= 1000)) and 20 unique sources in the dataset"
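One way the two conditions might be combined in a single groupby, assuming the clickstream columns are named 'Source', 'Destination' and 'Count' (adjust to the real column names; the data is a toy sample and the 20-unique-sources threshold is scaled down to 3 to match it):

```python
import pandas as pd

# Toy stand-in for the clickstream data frame.
df = pd.DataFrame({
    "Source":      ["A", "B", "C", "A", "B"],
    "Destination": ["X", "X", "X", "Y", "Y"],
    "Count":       [600, 300, 200, 50, 40],
})

# Compute both per-destination statistics in one pass.
stats = df.groupby("Destination").agg(
    views=("Count", "sum"),
    unique_sources=("Source", "nunique"),
)

# Apply both thresholds at once (>= 1000 views as in the task;
# >= 3 unique sources stands in for the real >= 20).
popular = stats[(stats["views"] >= 1000) & (stats["unique_sources"] >= 3)]
print(popular.index.tolist())
```

The key point is aggregating `sum` and `nunique` into one frame first, so both conditions can be combined with `&` instead of two separate groupbys.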
Hello, for the microtask, I am trying to convert the file to CSV with pandas and subsequently to a data frame, but I am getting an error because several rows have conflicting numbers of columns. The parameter error_bad_lines=False could be used to ignore the troubling lines. Is that option viable?
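For what it's worth, here is a toy reproduction of the situation and the skip option. Note that `error_bad_lines=False` works on older pandas but is deprecated; on pandas >= 1.3 the equivalent is `on_bad_lines="skip"`. The tab-separated layout below mimics the clickstream format, with made-up column names:

```python
import io
import pandas as pd

# A toy file whose third data row has four fields instead of three,
# reproducing the "conflicting columns" error.
raw = io.StringIO(
    "prev\tcurr\tn\n"
    "A\tX\t10\n"
    "B\tY\t20\textra\n"
    "C\tZ\t30\n"
)

# on_bad_lines="skip" drops the malformed row instead of raising.
df = pd.read_csv(raw, sep="\t", on_bad_lines="skip")
print(len(df))  # only the well-formed rows remain
```

Whether skipping is viable depends on how many rows are dropped; it may be worth counting them first to check that the loss is negligible.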