User Details
- User Since: Mar 27 2022, 5:38 PM (118 w, 4 d)
- Availability: Available
- LDAP User: Unknown
- MediaWiki User: Appledora [ Global Accounts ]
Jun 29 2023
update: [26.06.2023 - 27.06.2023]
- Final presentation preparations, feedback and discussions
- Package released on PyPI
- Wrapped up with a presentation to the research team!
update: [21.06.2023]
- Exploratory notebook generation
- Test PyPI uploads
- Listed newer low priority issues
update: [13.06.2023]
- Benchmarking output and log analysis
- Closed remaining MRs
- Discussing packaging decisions
update: [06.06.2023]
- Shifted from a sentence-level to a paragraph-level dataset due to bad segmentation errors
- Trained individual + combined + clusterwise SPC models
update: [30.05.2023]
- Sentence Evaluation Dataset expansion
- Fixed recall greater than 1 error
- Clustered NWS language scripts
update: [23.05.2023]
- Word tokenization benchmarking error logging integration
- Fixed stat-machines related errors + upgraded notebooks
- Identified newer edge-cases for sentence tokenization
update: [16.05.2023]
- Completed MR on Word Tokenization Evaluation dataset generation
- FLORES language name alignment sheet
May 9 2023
update: [09.05.2023]
- Addressed reviews on current MR
- Created new benchmarking dataset for sentence tokenization evaluation
- Scripts on wikilink parsing
May 2 2023
update: [02.05.2023]
- 3-way alignment between BN/EN/DE splits
- Wrote scripts for building the Wikipedia word tokenization benchmark dataset
Apr 28 2023
update: [25.04.2023]
- Addressed reviews on the MR
- Additional alignment stats for BN-EN splits
- Upgraded existing notebooks to spark3
Apr 18 2023
update: [18.04.2023]
- Pushed MR on NWS word tokenization evaluation
- Calculated BN-EN split alignment
- Working on upgrading spark version
Apr 17 2023
update: [03.04.2023] - [11.04.2023]
- Informal quarterly review
- Annotated sentence evaluation dataset for Bangla
- Addressed reviews on earlier MRs
- Identified some more issues
update: [28.03.2023]
- Added test modules for NWS sentence tokenizations.
- Integrated abbreviations in SPC tokenization
- Defined the unsupervised word tokenization performance evaluation scheme
update: [21.03.2023]
- Added MR on SPC integration with training and corpus collection scripts
- Adapted the test suite to the new repo structure
- Updated the ground truth dataset format for evaluation
update: [14.03.2023]
- Merged remaining MRs on packaging and code restructuring.
- Started working on NWS word tokenization with sentencepiece
- Corpus collection for SPC tokenizer and training
update: [07.03.2023]
- Identified newer issues stemming from the tokenization class implementations
- Set up repo for packaging/easy installation
- Added testing modules
update: [21.02.2023] - [28.02.2023]
- Restructured tokenizer class
- Addressed reviews on optimizing the word-tokenization schemes
- Reorganized
Feb 15 2023
update: [15.02.2023]
- Using a JSON of the abbreviations files
- Implemented character-level word tokenization scheme
- Minor CI/CD reconfiguration
Feb 7 2023
update: [07.02.2023]
- Adapted to use the abbreviation lists from a pickle file
- Minor modifications in the notebook
- Implemented a rule-based word tokenization method for whitespace-delimited languages (resources from this paper); a minimal sketch follows after this list
- Some issue clean-up
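A minimal sketch of what such a rule-based tokenizer can look like (the regex and the abbreviation list below are illustrative assumptions, not the exact rules from the MR):

```python
import re

# Hypothetical abbreviation set; the project derives these from Wiktionary-based lists.
ABBREVIATIONS = {"e.g.", "i.e.", "Dr.", "etc."}

def tokenize(text: str) -> list[str]:
    """Naive rule-based word tokenizer for whitespace-delimited languages.

    Splits on whitespace, then peels punctuation off each chunk unless the
    chunk is a known abbreviation.
    """
    tokens = []
    for chunk in text.split():
        if chunk in ABBREVIATIONS:
            tokens.append(chunk)
            continue
        # Separate punctuation from the word core, e.g. "(finally)," -> "(", "finally", ")", ","
        tokens.extend(re.findall(r"\w+|[^\w\s]", chunk))
    return tokens

print(tokenize("Dr. Smith arrived (finally), right?"))
# ['Dr.', 'Smith', 'arrived', '(', 'finally', ')', ',', 'right', '?']
```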
Feb 2 2023
update: [01.02.2023]
- Uploaded language-wise filtered abbreviation lists with an MR
- Trained sentencepiece on a sample group of languages (a rough training sketch follows after this list)
- Created new MR for sentencepiece scripts
- More gitlab issue-cleanup
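A rough sketch of what the sentencepiece training step can look like (the file paths, vocab size, and model type below are assumptions, not the project's actual settings):

```python
import sentencepiece as spm

# Hypothetical corpus file: one sentence/paragraph per line for one sampled language.
spm.SentencePieceTrainer.train(
    input="corpus.bn.txt",        # assumed path
    model_prefix="spc_bn",        # produces spc_bn.model / spc_bn.vocab
    vocab_size=8000,              # assumed size
    model_type="unigram",         # SentencePiece default
    character_coverage=0.9995,    # helpful for scripts with large character sets
)

# Load the trained model and segment a sample sentence into subword pieces.
sp = spm.SentencePieceProcessor(model_file="spc_bn.model")
print(sp.encode("এটি একটি উদাহরণ বাক্য।", out_type=str))
```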
update: [24.01.2023]
- Updated abbreviation pipeline to consider minimum word frequency
- Moved tasks to phabricator to establish hierarchical structure
- Built the pipeline for sentencepiece corpus generation
update: [9.01.2023] and [17.01.2023]
- Informal review
- Started cleaning up issues on GitLab, to make them more verbose
- Debugged some pyspark issues related to resource requirements + working with distributed files
- Grouping languages by cluster (delimiter + fallback wise)
- Started annotating FLORES101 dataset for sentence segmentation task.
update: [3.01.2023]
- Finished adapting abbreviation filtering code for each wiki project
- Got started on word tokenization
- Addressed reviews of an open MR
update: [20.12.2022]
- Mostly spent time trying to get familiarized with pyspark
- Updated the Word tokenization literature review with more information on existing opensource tools
- Discovered some additional edge-cases on sentence tokenizations (e.g: parenthesis and quotation tracking)
update: [13.12.2022]
- Adapted Martin's code on wikitext processing for abbreviation filtering
- Moved to statmachine for running simulations using pyspark
- Started literature review + background study on Word Tokenization
update: [06.12.2022]
- Addressed reviews on the abbreviation filtering scheme
- Discussed sentence segmentation evaluation datasets
- Wrote the algorithm for abbreviation filtering
Nov 28 2022
update: [18.11.2022] and [27.11.2022]
- Implemented abbreviation replacement scheme
- Performance analysis of segmentation before and after abbreviation post-processing
- Implemented a filtration scheme for the Wiktionary abbreviations
- Performance analysis of abbreviation filtration across a range of frequency ratio thresholds
Nov 11 2022
update: [11.11.2022]
- Curated list of abbreviations for all languages with a wiktionary project.
- Working on integrating the abbreviation search as a replacement scheme.
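A loose sketch of one way such a replacement scheme can work before sentence splitting (the placeholder trick and the splitting regex are assumptions for illustration, not the actual integration):

```python
import re

ABBREVIATIONS = ["e.g.", "i.e.", "Dr.", "etc."]   # hypothetical list; the real ones come from Wiktionary

def split_sentences(text: str) -> list[str]:
    """Protect known abbreviations, split on sentence-final punctuation, then restore."""
    protected = text
    for i, abbr in enumerate(ABBREVIATIONS):
        protected = protected.replace(abbr, f"\x00{i}\x00")   # temporary placeholder

    def restore(s: str) -> str:
        return re.sub(r"\x00(\d+)\x00", lambda m: ABBREVIATIONS[int(m.group(1))], s)

    parts = re.split(r"(?<=[.!?])\s+", protected)             # split after ., !, ? plus whitespace
    return [restore(p).strip() for p in parts if p.strip()]

print(split_sentences("Dr. Smith arrived late. He was tired, e.g. yawning a lot."))
# ['Dr. Smith arrived late.', 'He was tired, e.g. yawning a lot.']
```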
Nov 4 2022
update: [04.11.2022]
- Server Onboarding
- Building a deterministic Benchmark Module and dataset development
- Going through the example PySpark notebook by Martin and other walkthrough documentation by Isaac
Oct 29 2022
update: [18.10.2022] and [25.10.2022]
- Compiled a list of Unicode sentence terminators
- Built a benchmark sample for four languages (EN, ES, DE, AR)
- Implemented the naive rule-based sentence segmenter (a minimal sketch follows after this list)
- Collected a dataset for testing and future supervised training
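A minimal sketch of a naive terminator-based segmenter of this kind (the terminator set below is a tiny illustrative subset, not the compiled list):

```python
import re

# Tiny illustrative subset of Unicode sentence terminators:
# full stop, ?, !, the Danda (।, used in Bangla), the Arabic question mark, and the CJK full stop.
TERMINATORS = ".?!\u0964\u061f\u3002"

def segment(text: str) -> list[str]:
    """Naively split text after any run of terminators (trailing text without one is dropped)."""
    pattern = rf"[^{re.escape(TERMINATORS)}]+[{re.escape(TERMINATORS)}]+"
    return [m.group().strip() for m in re.finditer(pattern, text)]

print(segment("আমি ভাত খাই। তুমি কী করো? ঠিক আছে!"))
# ['আমি ভাত খাই।', 'তুমি কী করো?', 'ঠিক আছে!']
```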
Oct 26 2022
Hey @Dzahn , for now I have updated my email with the contractor email. Hope this helps!
Oct 7 2022
update: [07.10.2022]
- Building a report on Sentence Tokenization link
- Renewed focus on memory footprint and compute cost
Sep 30 2022
update: [30.09.2022]
Apr 17 2022
@FatimaArshad-DS, have you tried BeautifulSoup's soup.prettify()?
If you want to adopt a more customizable approach, here's a SO post about it.
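For example, something as small as this (the HTML snippet is made up) already gives readable output:

```python
from bs4 import BeautifulSoup

html = "<html><body><p>Hello <b>world</b></p></body></html>"   # made-up snippet
soup = BeautifulSoup(html, "html.parser")
print(soup.prettify())   # re-indents the parse tree, one tag per line
```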
Apr 15 2022
@Radhika_Saini, hmm... I didn't really use any other blogs or SO posts to create my dataset. I kind of merged the ideas presented in the starter notebook with my previous pandas experience. I don't think sharing code is allowed, but here's my thought process:
1. The PAWS server has an internal dump directory, where the Wikipedia dumps are stored as tarfiles. The tarfiles are further divided into chunks of around 10 GB. You can programmatically pick any chunk you want.
2. Each line of a chunk corresponds to a single Wikipedia article and its related information in the form of a JSON.
3. Python has a tarfile library/module which can be used to iterate over the tarfile line by line. You can use a counter in combination with this library to iterate over your preferred number of article samples and store them in a list.
4. Now you can iterate over this article list and load each item as a JSON (because that's what they are).
5. JSON files essentially just have a key-value structure. Familiarize yourself with the structure of these JSONs and go ahead and extract whatever features you want from them.
6. I store the features in lists as I go along, and I feel that it is a rather nasty way of doing it.
7. Once you're done storing your features, you can convert them to a pandas dataframe and manipulate as you wish.
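If it helps, here is a rough sketch of that flow (the archive path and the JSON key names are made up; inspect one record on PAWS to see the real structure):

```python
import json
import tarfile

import pandas as pd

DUMP_PATH = "enwiki-NS0-html.tar.gz"   # hypothetical chunk archive, not the real PAWS path
N_SAMPLES = 100

rows = []
with tarfile.open(DUMP_PATH, "r:gz") as tar:
    for member in tar:
        fileobj = tar.extractfile(member)
        if fileobj is None:            # skip directory entries
            continue
        for line in fileobj:           # one JSON article per line
            article = json.loads(line)
            rows.append({
                # key names are assumptions; adjust after inspecting one record
                "title": article.get("name"),
                "html_length": len(article.get("article_body", {}).get("html", "")),
            })
            if len(rows) >= N_SAMPLES:
                break
        break                          # only the first chunk member, for illustration

df = pd.DataFrame(rows)
print(df.head())
```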
@Talika2002 , I think HTML Specs might give you some helpful pointers on your first query.
Apr 14 2022
@FatimaArshad-DS, in a very basic sense, the template is exactly what you would expect it to be. It is officially defined in Wikipedia as:
Wikimedia pages are embedded into other pages to allow for the repetition of information
Templates can be interpreted as prebuilt structures, where you can insert data against certain keys. There are templates for all sorts of things. For example, this is a template for emojis where by changing the internal values you can show different emojis on the webpage.
Pretty much the only thing you need to look for to identify a template is its Template Namespace.
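If you are working from the wikitext side, mwparserfromhell can also list the templates on a page; a tiny made-up example:

```python
import mwparserfromhell

wikitext = "{{Infobox person|name=Ada Lovelace}} She was a mathematician. {{citation needed}}"
wikicode = mwparserfromhell.parse(wikitext)

# Each template node exposes its name and parameters.
for template in wikicode.filter_templates():
    print(str(template.name).strip(), [str(p.name) for p in template.params])
# Infobox person ['name']
# citation needed []
```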
Apr 11 2022
@Radhika_Saini , no, I am not using any external dataset. I created my own from the html dump.
Apr 10 2022
Basically, I wanted to be flexible about what I can extract or not, and implemented the function accordingly. Otherwise, just using the default bs4 get_text() method should suffice for the purpose. However, as you mentioned in your earlier comment, the mwparserfromhell output extracts more text than bs4, and I wanted to remedy that in my custom implementation, hence taking the long way of iterating over tags, which is not perfect either tbh. I hope I understood your question properly this time :3
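A simplified sketch of the two options I am contrasting (the tag whitelist here is just an illustrative assumption, not my actual implementation):

```python
from bs4 import BeautifulSoup

with open("article.html", encoding="utf-8") as f:   # hypothetical dump article
    soup = BeautifulSoup(f.read(), "html.parser")

# Option 1: the default, everything at once.
all_text = soup.get_text(separator=" ", strip=True)

# Option 2: iterate over a whitelist of "visible" tags so you can choose what to keep.
VISIBLE_TAGS = ["p", "li", "h1", "h2", "h3", "caption"]   # assumed whitelist
chunks = []
for tag in soup.find_all(VISIBLE_TAGS):
    text = tag.get_text(separator=" ", strip=True)
    if text:
        chunks.append(text)
custom_text = "\n".join(chunks)

print(len(all_text), len(custom_text))
```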
@Radhika_Saini
Apr 9 2022
@Radhika_Saini, if you don't mind, this is what I did: I iterated over all the visible tags in the HTML and extracted text from them. Optionally, I also iterated over other Page elements like templates and categories to extract text (if present) from them. My approach also extracts some stub information, which I couldn't omit tbh. Hope this helps.
@FatimaArshad-DS, you have to write a generic function because the later tasks ask you to work on more than one article (at least 100).
Apr 8 2022
Hi @Mike_Peel , I am very very late to the party. But I have started getting familiar with WikiData and creating my page. My question is probably redundant and dumb, but I am curious about what's your expectation from this microtask. For our created pages, would you prefer it be structured and represented in a conventional Wikipedia article style? Or would you rather prefer a more descriptive page (something like a jupyter notebook) that would represent the creator's thought process and explorations? Thanks.
@FatimaArshad-DS, hello. The HTML saved inside the HTML-dump is generated by an internal Wikipedia API (see Parsoid for reference) from the Wikitext code. This is why the generated HTML and the browser HTML are entirely different things. Hope this helps.
This definitely helps. I really apologize for being so redundant, and thanks for bearing with me.
hi @MGerlach, just for the sake of clarification, recording contributions and making the final application are not the same, right? I know that contributions can be updated, something like a version-control mechanism. But can we also edit our applications once we send them in? Thanks.
Apr 4 2022
@Isaac, it seems parsing wikitext still has a long way to go to be accurate :v
Apr 2 2022
@SamanviPotnuru and @Talika2002, I personally did not quite get the relevance of Named External Links as an explanation for the question. I think NELs are basically those external links that have display text between the tags (e.g., "link to it", "related articles", etc.).
However, after digging around, I found these directives on what can be linked as external links here. This tells us that it is okay to add other wikiarticles as external links.
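To make that concrete, here is roughly how the named-vs-bare distinction can be checked in wikitext with mwparserfromhell (a made-up snippet, not from the task itself):

```python
import mwparserfromhell

wikitext = "See [https://example.org link to it] and also https://example.com elsewhere."
wikicode = mwparserfromhell.parse(wikitext)

for link in wikicode.filter_external_links():
    if link.title:          # bracketed link with display text -> "named" external link
        print("named:", link.url, "->", link.title)
    else:                   # bare URL or bracketed link without text
        print("bare:", link.url)
```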
Apr 1 2022
Yes @Isaac , I think I got it more or less now. Thanks!
Mar 31 2022
@Isaac and @MGerlach , I am a little confused about the following TODO :
Are there features / data that are available in the HTML but not the wikitext?
What exactly should I be showing here? Code or just study references?
Similarly here,
are there certain words that show up more frequently in the HTML versions but not the wikitext? Why?
What do you mean by words here? Tags, attributes, patterns?
Mar 30 2022
I went through the thread again and dug around about magic words more :D Thanks both of you!
@Isaac and @Talika2002 , I didn't quite get the question posed here. Could I kindly have some more examples/explanations on it?
That clears up a lot of things. Thanks, @Isaac !
Mar 29 2022
Thanks, @Isaac, for the explanations. But as you mentioned, and as I have discovered while working on the data, HTML does seem to have more content than the wikitext. Is it owing to the inner workings of the parser, i.e. mwparserfromhell, or is that actually the case? And once again, I really appreciate you bearing with me today.
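For context, this is roughly the kind of comparison I am doing (paths and keys are made up, and strip_code()/get_text() are only rough proxies for "content"):

```python
import json

import mwparserfromhell
from bs4 import BeautifulSoup

# Hypothetical record holding both representations of the same article.
with open("article.json", encoding="utf-8") as f:
    article = json.load(f)

wikitext_plain = mwparserfromhell.parse(article["wikitext"]).strip_code()
html_plain = BeautifulSoup(article["html"], "html.parser").get_text(separator=" ", strip=True)

print("wikitext chars:", len(wikitext_plain))
print("html chars:    ", len(html_plain))
```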