update: [26.06.2023 - 27.06.2023]
- Final presentation preparations, feedback and discussions
- Package released on PyPI
- Wrapped up with a presentation to the research team!
update: [21.06.2023]
update: [13.06.2023]
update: [06.06.2023]
update: [30.05.2023]
update: [23.05.2023]
update: [16.05.2023]
update: [09.05.2023]
update: [02.05.2023]
update: [25.04.2023]
update: [18.04.2023]
update: [03.04.2023] - [11.04.2023]
update: [28.03.2023]
update: [21.03.2023]
update: [14.03.2023]
update: [07.03.2023]
update: [21.02.2023] - [28.02.2023]
update: [15.02.2023]
update: [07.02.2023]
update: [01.02.2023]
update: [24.01.2023]
update: [09.01.2023] and [17.03.2023]
update: [03.01.2023]
update: [20.12.2022]
update: [13.12.2022]
update: [06.12.2022]
update: [18.11.2022] and [27.11.2022]
update: [11.11.2022]
update: [04.11.2022]
update: [18.10.2022] and [25.10.2022]
Hey @Dzahn, for now I have updated my email to the contractor email. Hope this helps!
update: [07.10.2022]
update: [30.09.2022]
@FatimaArshad-DS, have you tried BeautifulSoup's soup.prettify()?
If you want a more customizable approach, here's a SO post about it.
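For example, a quick toy sketch of what prettify() does (nothing project-specific, just illustrating the call):

```python
from bs4 import BeautifulSoup

# Toy snippet; in practice this would be an article's HTML from the dump.
html = "<html><body><p>Hello <b>world</b>!</p></body></html>"

soup = BeautifulSoup(html, "html.parser")
print(soup.prettify())  # re-indents the markup, one node per line
```

If that isn't flexible enough, prettify() also accepts a formatter argument, which is where the SO post above comes in.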
In T302242#7856303, @Radhika_Saini wrote: @Appledora Can you please share some links on how I can make my own dataset from this HTML dump?
@Radhika_Saini, hmm... I didn't really use any blogs or SO posts to create my dataset; I kind of merged the ideas presented in the starter notebook with my previous pandas experience. I don't think sharing code is allowed, but here's my thought process:
1. The PAWS server has an internal dump directory where the Wikipedia dumps are stored as tarfiles. The tarfiles are further divided into chunks of around 10 GB each; you can programmatically pick any chunk you want.
2. Each line of a chunk corresponds to a single Wikipedia article and its related information in the form of a JSON object.
3. Python has a tarfile library/module which can be used to iterate over the tarfile line by line. You can use a counter in combination with this library to iterate over your preferred number of article samples and store them in a list.
4. Now you can iterate over this article list and load each item as JSON (because that's what they are).
5. JSON files essentially just have a key-value structure. Familiarize yourself with the structure of these JSONs and go ahead and extract whatever features you want from them.
6. I store the features in lists as I go along, which I feel is a rather nasty way of doing it.
7. Once you're done storing your features, you can convert them to a pandas DataFrame and manipulate it as you wish.
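To make that concrete, here is a rough sketch of the loop (not my exact code). The chunk filename and the JSON keys ("name", "identifier", "article_body") are assumptions on my part; print one line of your chunk to see the actual schema before relying on them.

```python
import json
import tarfile

import pandas as pd

# Illustrative chunk name -- substitute whichever chunk you picked from the
# PAWS dump directory.
DUMP_CHUNK = "enwiki-NS0-ENTERPRISE-HTML.json.tar.gz"
N_SAMPLES = 1000

rows = []
with tarfile.open(DUMP_CHUNK, "r:gz") as tar:
    for member in tar:
        if not member.isfile():
            continue
        with tar.extractfile(member) as fh:
            for line in fh:                 # one article per line
                article = json.loads(line)  # each line is a JSON object
                # The keys below are illustrative; inspect a sample article
                # to see which fields your dump actually exposes.
                rows.append(
                    {
                        "title": article.get("name"),
                        "identifier": article.get("identifier"),
                        "html": article.get("article_body", {}).get("html"),
                    }
                )
                if len(rows) >= N_SAMPLES:
                    break
        if len(rows) >= N_SAMPLES:
            break

df = pd.DataFrame(rows)
print(df.shape)
```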
@Talika2002, I think HTML Specs might give you some helpful pointers on your first query.
In T302242#7849681, @FatimaArshad-DS wrote: Does it happen to anyone else... PAWS stops saving notebook after a while?
@FatimaArshad-DS, in a very basic sense, a template is exactly what you would expect it to be. It is officially defined on Wikipedia as:
Wikimedia pages are embedded into other pages to allow for the repetition of information
Templates can be interpreted as prebuilt structures where you can insert data against certain keys. There are templates for all sorts of things. For example, this is a template for emojis where, by changing the internal values, you can show different emojis on the webpage.
Pretty much the only thing you need to look for to identify a template is its Template namespace.
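If it helps, here is a rough sketch of how you could spot templates in the dump HTML using the transclusion markers Parsoid leaves behind (typeof="mw:Transclusion" plus the data-mw attribute, as in the <link> example quoted elsewhere in this thread). The markup details vary between pages, so treat this as an approximation rather than a reliable parser:

```python
import json

from bs4 import BeautifulSoup

def list_templates(parsoid_html: str) -> list[str]:
    """Rough sketch: collect template names referenced by transclusion markup."""
    soup = BeautifulSoup(parsoid_html, "html.parser")
    names = []
    # Parsoid marks transcluded content with typeof="mw:Transclusion" and
    # stores the original template call as JSON in the data-mw attribute.
    for tag in soup.find_all(lambda t: "mw:Transclusion" in (t.get("typeof") or "")):
        data_mw = json.loads(tag.get("data-mw", "{}"))
        for part in data_mw.get("parts", []):
            if not isinstance(part, dict):
                continue  # plain-text parts are just strings
            href = part.get("template", {}).get("target", {}).get("href", "")
            if href.startswith("./Template:"):
                names.append(href[len("./Template:"):])
    return names
```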
@Radhika_Saini, no, I am not using any external dataset. I created my own from the HTML dump.
In T302242#7843731, @Radhika_Saini wrote: @Appledora Do you find the 1000 articles in one place or in a database, or do you find them individually?
Basically, I wanted to be flexible about what I can extract or not, and implemented the function accordingly. Otherwise, just using the default bs4 get_text() method should suffice for the purpose. However, as you mentioned in your earlier comment, the mwparserfromhell output extracts more text than bs4, and I wanted to remedy that in my custom implementation, hence taking the long way of iterating over tags, which is not perfect either tbh. I hope I understood your question properly this time :3
@Radhika_Saini, if you don't mind, this is what I did: I iterated over all the visible tags in the HTML and extracted text from them. Optionally, I also iterated over other page elements like templates and categories to extract text (if present) from them. My approach also extracts some stub information, which I couldn't omit tbh. Hope this helps.
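To be a bit more concrete, the idea looks roughly like the sketch below. This is a simplification of what I actually did, and the "invisible" tag set is just my own guess at what to skip, not an official list:

```python
from bs4 import BeautifulSoup, Comment, NavigableString

# Tags whose text should not be treated as visible page content.
INVISIBLE = {"script", "style", "head", "meta", "link", "title", "[document]"}

def extract_visible_text(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    chunks = []
    for node in soup.descendants:
        if (
            isinstance(node, NavigableString)
            and not isinstance(node, Comment)
            and node.parent.name not in INVISIBLE
        ):
            text = node.strip()
            if text:
                chunks.append(text)
    return " ".join(chunks)

print(extract_visible_text("<p>Hello <style>p{color:red}</style><b>world</b>!</p>"))
# -> "Hello world !"
```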
@FatimaArshad-DS, you have to write a generic function because the later tasks ask you to work on more than one article (at least 100).
Hi @Mike_Peel, I am very late to the party, but I have started getting familiar with Wikidata and creating my page. My question is probably redundant, but I am curious what your expectation is for this microtask. For our created pages, would you prefer them to be structured in a conventional Wikipedia article style, or would you rather have a more descriptive page (something like a Jupyter notebook) that represents the creator's thought process and explorations? Thanks.
@FatimaArshad-DS, hello. The HTML saved inside the HTML dump is generated by an internal Wikipedia API (see Parsoid for reference) from the wikitext. This is why the generated HTML and the browser HTML are entirely different things. Hope this helps.
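If you want to see the difference for yourself, you can compare the two for a live page. Assuming the public REST endpoint I remember (/api/rest_v1/page/html/{title}, which serves Parsoid HTML), something like this should do it:

```python
import requests

TITLE = "Seoul"  # any article title
UA = {"User-Agent": "html-dump-exploration-example"}

# Parsoid-generated HTML -- the kind stored in the HTML dumps.
parsoid_html = requests.get(
    f"https://en.wikipedia.org/api/rest_v1/page/html/{TITLE}", headers=UA
).text

# Browser HTML -- the fully rendered page with skin, navigation bars, etc.
browser_html = requests.get(
    f"https://en.wikipedia.org/wiki/{TITLE}", headers=UA
).text

print(len(parsoid_html), len(browser_html))  # the two documents differ a lot
```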
This definitely helps. I really apologize for being so redundant, and thanks for bearing with me.
Hi @MGerlach, just for the sake of clarification: recording contributions and making the final application are not the same, right? I know that contributions can be updated, something like a version-control mechanism. But can we also edit our applications once we send them in? Thanks.
@Isaac, it seems parsing wikitext still has a long way to go to be accurate :v
@SamanviPotnuru and @Talika2002, I personally did not quite get the relevance of Named External Links as an explanation for the question. I think NELs are basically those external links that have text between the tags (e.g. "link to it", "related articles", etc.).
However, after digging around, I found these guidelines on what can be linked as external links here. They tell us that it is okay to add other wiki articles as external links.
Yes @Isaac, I think I got it more or less now. Thanks!
@Isaac and @MGerlach, I am a little confused about the following TODO:
Are there features / data that are available in the HTML but not the wikitext?
What exactly should I be showing here? Code or just study references?
Similarly here,
are there certain words that show up more frequently in the HTML versions but not the wikitext? Why?
What do you mean by "words" here? Tags, attributes, patterns?
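For context, my current (possibly wrong) reading of the comparison is something like the sketch below, with toy inputs and plain whitespace tokenization standing in for whatever actually counts as a "word":

```python
from collections import Counter

import mwparserfromhell
from bs4 import BeautifulSoup

# Toy stand-ins; in practice both versions of the same article would come
# from the dump / the API.
html_source = "<p><b>Seoul</b> (Korean: 서울) is the capital of South Korea.</p>"
wikitext_source = "'''Seoul''' is the capital of [[South Korea]]."

def token_counts(text: str) -> Counter:
    return Counter(text.lower().split())

html_tokens = token_counts(BeautifulSoup(html_source, "html.parser").get_text())
wikitext_tokens = token_counts(mwparserfromhell.parse(wikitext_source).strip_code())

# "Words" that show up in the HTML rendering but never in the wikitext,
# e.g. text pulled in by templates or added during parsing.
only_in_html = {tok: n for tok, n in html_tokens.items() if tok not in wikitext_tokens}
print(only_in_html)
```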
In T302242#7822871, @Talika2002 wrote: I don't understand what's going on in this <link> tag in the HTML code:
<link about="#mwt13" data-mw='{"parts":[{"template":{"target":{"wt":"Infobox Korean name\n","href":"./Template:Infobox_Korean_name"},"params":{"context":{"wt":"north"},"hangul":{"wt":""},"hanja":{"wt":""},"rr":{"wt":""},"mr":{"wt":""}},"i":0}}]}' href="./Category:Articles_needing_Korean_script_or_text#Chang%20Gum-chol" id="mwCg" rel="mw:PageProp/Category" typeof="mw:Transclusion"/>
What is this <link> tag doing? Is this linking a category? Or is it a template?
In T302242#7809181, @Appledora wrote: @Antima_Dwivedi Hi, I noticed you are having problems with downloading the notebook. I hope you're no longer facing it, but here's what I did.
- Appended ?format=raw to the .ipynb URL, which brought up a raw text page.
- Right-clicked and selected Save as... -> which saved the page as a .txt file
- Simply renamed the extension .txt -> .ipynb and uploaded it
Hope this helps.
I went through the thread again and dug around more about magic words :D Thanks to both of you!
@Isaac and @Talika2002, I didn't quite get the question posed here. Could I please have some more examples/explanation of it?
That clears up a lot of things. Thanks, @Isaac !
Thanks, @Isaac, for the explanations. But as you mentioned, and as I have discovered while working on the data, the HTML does seem to have more content than the wikitext. Is that owing to the inner workings of the parser, i.e. mwparserfromhell, or is it actually the case? And once again, I really appreciate you bearing with me today.