
GSoC'24 Proposal - Scribe-Data: Refactor into a Multi-Purpose Wikidata Language Pack CLI Tool
Closed, Resolved (Public)

Description

Profile Information

Name: Mahfuza Humayra Mohona
IRC nickname on Matrix: mhmohona
Resume | GitHub | LinkedIn
Location: Bangladesh (GMT+6)

Synopsis

The goal of this project is to transform the Scribe-Data tool into a versatile, multi-purpose Command Line Interface (CLI) tool. This tool will enable direct access to language data from Wikidata through command-line commands, supporting a broader range of languages and word types. A significant enhancement is the distribution of the tool as both pip and conda-forge packages, facilitating easy installation and deployment across various environments.

  • Have you contacted your mentors already? - Yes

Project Overview

The CLI tool will be designed with a modular architecture, allowing for easy extension and maintenance. It will consist of a core command-line interface layer, a data processing layer that interacts with Wikidata and Scribe applications, and a user interface layer that provides feedback and instructions to the user. The tool will interact with Wikidata through SPARQL queries; the design will specify the types of queries supported and how responses are handled. It will also integrate with Scribe applications, using APIs or shared data formats to exchange information.

Related Work and Existing Tools

Several tools and projects have explored CLI interfaces for accessing and querying knowledge bases like Wikidata. While these tools may differ in their specific implementations and feature sets, they provide valuable insights and inspiration for the proposed Scribe-Data CLI tool.

  • Wikidata Query CLI: A command-line tool written in Python that allows users to execute SPARQL queries against the Wikidata Query Service. It supports features like query history, result formatting, and query file execution.
  • Wikidata Query Launcher: A Node.js-based CLI tool that provides a user-friendly interface for executing SPARQL queries on Wikidata. It supports features like query autocompletion, formatting, and saving/loading queries.
  • Wikidata Integrator: A Python library and command-line tool for working with Wikidata. While not exclusively a CLI tool, it provides command-line utilities for tasks like querying, editing, and managing Wikidata data.
  • Wikidata Toolkit: A Java-based toolkit for working with Wikidata, including a command-line interface for executing SPARQL queries and managing Wikidata entities.
  • DBpedia CLI: A command-line tool for querying and exploring the DBpedia knowledge base, which is closely related to Wikidata. It supports features like interactive querying, result formatting, and data extraction.

While these tools provide valuable functionality, the proposed Scribe-Data CLI tool aims to go a step further by offering a more comprehensive and user-friendly interface designed specifically for accessing language data from Wikidata. By building on a modular architecture and integrating with existing Scribe applications, the CLI tool aims to provide a seamless and efficient experience for users working with language data across various use cases.

Technical Specifications

The CLI tool will use pip for package management and distribution and Click for creating the user-friendly command-line interface, with support for arguments, options, and subcommands. It will also include a comprehensive test suite of unit, integration, and end-to-end tests to ensure reliability and robustness.

Why Choose pip and Click?
pip
  • Standardization: pip is the standard package manager for Python, ensuring compatibility and ease of use across different Python projects.
  • Wide Adoption: pip is widely used in the Python community, making it a reliable choice for distributing and installing packages.
  • Integration with PyPI: pip integrates seamlessly with the Python Package Index (PyPI), allowing for easy distribution and access to a vast repository of Python packages.
Click
  • Ease of Use: Click simplifies the creation of command-line interfaces, allowing developers to focus on the application logic rather than CLI creation and management.
  • Extensibility: Click supports arguments, options, and subcommands, making it suitable for building powerful and feature-rich CLI tools.
  • Help Page Customization: Click allows for easy customization of help pages, enhancing the user experience by providing clear and informative documentation.
Alternatives to pip and Click
  • Poetry: While pip is the standard, Poetry offers a more modern, feature-rich approach to package management. It uses pip under the hood but adds lock-file-based dependency resolution, building, and publishing in a single tool.
  • argparse: The default CLI framework in the Python standard library, argparse is a viable alternative to Click. It does support subcommands (via add_subparsers) and automatic help generation, but it is more verbose and lacks Click's conveniences such as decorator-based command definition, prompts, and progress bars.
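For comparison, argparse can express a similar subcommand through add_subparsers, though the wiring is noticeably more verbose than Click's decorators. This is a minimal standalone sketch, not part of the proposal's codebase:

```python
import argparse

# Build a parser mirroring one branch of the proposed scribe-data CLI.
parser = argparse.ArgumentParser(prog="scribe-data")
subparsers = parser.add_subparsers(dest="command")

# The "query" subcommand with the same options the Click example uses.
query = subparsers.add_parser("query", help="Query language data from Wikidata.")
query.add_argument("--language", help="Language code to query.")
query.add_argument("--word_type", help="Type of word to query.")

# Parse an example invocation and show the resulting namespace.
args = parser.parse_args(["query", "--language", "en", "--word_type", "nouns"])
print(args.command, args.language, args.word_type)
```

Help pages (`scribe-data --help`, `scribe-data query --help`) are generated automatically here too; the difference is mainly in how much boilerplate each command requires.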

Performance and Scalability

The CLI tool is expected to perform efficiently, with key performance indicators (KPIs) such as response time, throughput, and resource utilization tracked during development. Scalability will be addressed by optimizing data processing algorithms and query patterns so the tool can support a growing number of users and larger datasets.
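As a sketch of how the response-time KPI could be tracked during development, a small timing decorator (a hypothetical helper, not existing Scribe-Data code) can record how long each query call takes:

```python
import time
from functools import wraps

def timed(func):
    """Record the wall-clock duration of each call on the wrapped function."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        wrapper.last_duration = time.perf_counter() - start
        return result
    wrapper.last_duration = None
    return wrapper

@timed
def fetch_data(n):
    # Stand-in for a SPARQL query; sleeps briefly to simulate network latency.
    time.sleep(0.01)
    return list(range(n))

data = fetch_data(5)
print(f"fetched {len(data)} rows in {fetch_data.last_duration:.3f}s")
```

In the real tool the decorator would wrap the SPARQL query functions, and the recorded durations could feed logging or a --verbose timing report.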

Tools for Performance and Scalability

Performance Testing Tools: Profiling and benchmarking tools such as Python's cProfile and timeit can be used to test the speed, responsiveness, and stability of the CLI tool under realistic conditions, providing insights into potential performance issues.

Deliverables

  • CLI Tool Development: Implement a CLI tool using Click for user-friendly command-line interfaces and pip for package management and distribution.
  • Detailed Documentation: Provide comprehensive technical documentation on the CLI tool's architecture, SPARQL queries, and contribution guidelines.
  • Test Suite: Develop a robust test suite, including unit tests, integration tests, and continuous integration (CI) pipelines.
  • Docker Deployment: Package the CLI tool in a Docker container for easy deployment and usage.
  • Integration with Scribe Applications: Integrate the CLI tool into existing Scribe applications, ensuring seamless data access and processing.
  • Community Engagement Plan: Outline methods for engaging with the Wikimedia community, including feedback mechanisms and stakeholder engagement strategies.

Project flowchart: image attached to the task.

Project Process

1. Requirement Analysis and Design

Objective: Analyze the current Scribe-Data process, identify dependencies on Scribe applications, and design a modular architecture for the CLI tool.

Technical Details: The analysis involves reviewing the existing codebase, understanding the limitations of the current implementation, and identifying the functionalities that need to be extended or modified. The design phase will outline the architecture of the CLI tool, including how it will interact with Wikidata and the Scribe applications. Here's a detailed explanation of how this interaction will work:

  • CLI Tool Architecture: The CLI tool will be designed with a modular architecture, allowing for easy extension and maintenance. It will consist of a core command-line interface layer, a data processing layer that interacts with Wikidata and Scribe applications, and a user interface layer that provides feedback and instructions to the user.

Proposed CLI Structure:

scribe-data
├── query
│   ├── language
│   │   ├── --language
│   │   ├── --word_type
│   │   └── --output (json, csv, table)
│   ├── item
│   │   ├── --item_id
│   │   └── --output (json, nt, turtle)
│   └── word
│       ├── --word
│       └── --output (json, csv, table)
├── describe
│   ├── --item_id
│   └── --output (json, nt, turtle)
├── list
│   ├── languages
│   └── word-types
├── search
│   ├── --word
│   ├── --language
│   └── --word_type
├── config
│   ├── set
│   │   ├── --endpoint
│   │   └── --user-agent
│   ├── get
│   │   ├── --endpoint
│   │   └── --user-agent
│   └── reset
├── update
│   └── --force (force update of the tool)
├── help
└── version

The CLI structure above is designed to be both modular and user-friendly. It offers a comprehensive set of commands for querying, describing, listing, and searching language data from Wikidata, along with configuration and update options, ensuring a seamless and efficient experience for users.
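A minimal sketch of how the top of this command tree could be wired up with Click groups follows. The command and option names track the structure above; the bodies are placeholders, not the real implementation:

```python
import click

@click.group()
def cli():
    """Scribe-Data: access language data from Wikidata."""

@cli.group()
def query():
    """Query language data from Wikidata."""

@query.command()
@click.option("--language", help="Language code to query.")
@click.option("--word_type", help="Type of word to query.")
@click.option("--output", type=click.Choice(["json", "csv", "table"]), default="json")
def language(language, word_type, output):
    # Placeholder body: the real command would run a SPARQL query here.
    click.echo(f"Querying {word_type} for {language} as {output}")

@cli.group(name="list")
def list_():
    """List available data."""

@list_.command()
def languages():
    # Placeholder output: the real command would fetch languages from Wikidata.
    click.echo("en\nde\nes")
```

With an entry point registered in packaging metadata, this yields invocations like `scribe-data query language --language en --word_type nouns` and `scribe-data list languages`, matching the tree above.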

High-level architectural diagram for the CLI tool: image attached to the task.

  • Setting Up SPARQLWrapper: First, set up SPARQLWrapper to interact with the Wikidata SPARQL endpoint. This involves creating an instance of SPARQLWrapper with the Wikidata endpoint URL.
from SPARQLWrapper import SPARQLWrapper, JSON
sparql = SPARQLWrapper("https://query.wikidata.org/sparql")
  • Constructing and Executing SPARQL Queries: The CLI tool will construct SPARQL queries based on user input for language and word type. These queries will be executed against the Wikidata Query Service. The results will be returned in JSON format for easy processing.
def get_language_data(language, word_type):
    query = f"""
        SELECT ?item ?itemLabel WHERE {{
            ?item wdt:P31 wd:{word_type};
            wdt:P424 ?lang_code .
            FILTER(STR(?lang_code) = "{language}")
            SERVICE wikibase:label {{ bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }}
        }}
    """
    sparql.setQuery(query)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return results
  • Processing Query Results: After executing the query, the results are processed. The JSON results can be iterated over to extract the relevant information, for example to print the labels of the items returned by the query.
results = get_language_data("en", "Q108712") # Example query
for result in results["results"]["bindings"]:
    print(result["itemLabel"]["value"])
  • Handling Different Query Types: The CLI tool will also handle different types of SPARQL queries, such as DESCRIBE and CONSTRUCT, depending on the user's needs. For instance, a DESCRIBE query can be used to fetch detailed information about a specific item. Note that DESCRIBE returns an RDF graph rather than a table of bindings, so a graph return format is requested instead of JSON:
from SPARQLWrapper import SPARQLWrapper, RDFXML
sparql = SPARQLWrapper("https://query.wikidata.org/sparql")
sparql.setQuery("DESCRIBE <http://www.wikidata.org/entity/Q108712>")
sparql.setReturnFormat(RDFXML)
results = sparql.query().convert()  # an rdflib Graph for DESCRIBE queries
print(results.serialize())
  • Error Handling: It's important to handle potential errors that might occur during the query execution. This can be done using try-except blocks to catch exceptions and provide meaningful feedback to the user.
try:
    results = sparql.query().convert()
except Exception as e:
    print(f"An error occurred: {e}")
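Building on that, transient endpoint failures (timeouts, rate limiting) could be retried with exponential backoff. The helper below is a sketch; query_with_retry and its parameters are invented for illustration:

```python
import time

def query_with_retry(run_query, attempts=3, base_delay=1.0):
    """Retry a zero-argument query callable with exponential backoff.

    `run_query` would typically be something like
    lambda: sparql.query().convert().
    """
    for attempt in range(attempts):
        try:
            return run_query()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # Wait base_delay, then 2x, then 4x, ... before retrying.
            time.sleep(base_delay * 2 ** attempt)
```

The generic `except Exception` mirrors the snippet above; a production version would likely narrow this to SPARQLWrapper's exception types and network errors.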

The CLI tool will interact with Wikidata by constructing and executing SPARQL queries using the SPARQLWrapper library. This allows for dynamic querying of language data based on user input, providing a powerful and flexible interface to Wikidata's vast knowledge graph.
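To back the proposed --output option, the JSON bindings returned by SPARQLWrapper can be flattened into CSV with the standard library alone. bindings_to_csv below is a sketch, and the sample dict mimics the shape of sparql.query().convert():

```python
import csv
import io

def bindings_to_csv(results):
    """Flatten SPARQL JSON results into a CSV string."""
    bindings = results["results"]["bindings"]
    if not bindings:
        return ""
    # The query's projected variables become the CSV header.
    fieldnames = results["head"]["vars"]
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=fieldnames)
    writer.writeheader()
    for row in bindings:
        # Each binding maps a variable to {"value": ..., "type": ...}.
        writer.writerow({var: row.get(var, {}).get("value", "") for var in fieldnames})
    return buffer.getvalue()

# Example response in the shape returned by sparql.query().convert():
sample = {
    "head": {"vars": ["item", "itemLabel"]},
    "results": {"bindings": [
        {"item": {"value": "http://www.wikidata.org/entity/Q42"},
         "itemLabel": {"value": "Douglas Adams"}},
    ]},
}
print(bindings_to_csv(sample))
```

A table formatter for the --output table case could reuse the same flattening step and pad columns instead of writing CSV.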

2. Development

Objective: Implement the core functionality of the CLI tool, including argument parsing and basic SPARQL query formation.

Technical Details:

  • Implement the CLI tool: In the Development phase of the Scribe-Data project, the core functionality of the CLI tool is implemented, focusing on argument parsing and basic SPARQL query formation. This phase is crucial for creating a user-friendly interface that allows users to query language data from Wikidata based on language and word type. The Click library is utilized to build the command-line interface, while SPARQLWrapper is employed for executing SPARQL queries against the Wikidata Query Service.

Example Commands:

The CLI tool is designed to support various commands, enhancing user interaction and data retrieval capabilities. Here are some example commands that users might use:

  1. Query Language Data: This command enables users to query language data from Wikidata for a specific language and word type.
scribe-data query language --language en --word_type
  2. Describe Item: This command fetches detailed information about a specific Wikidata item.
scribe-data describe --item_id
  3. List Languages: This command lists all languages available in Wikidata.
scribe-data list languages
  4. Search for Word: This command allows users to search for a specific word across all languages.
scribe-data search --word "example"

Code Example: The code example shows how to implement the query_wikidata function using the Click library for argument parsing and SPARQLWrapper for executing SPARQL queries. This function is designed to accept user input for language and word type, construct the appropriate SPARQL query, and return the results in a user-friendly format.

import click
from SPARQLWrapper import SPARQLWrapper, JSON

@click.command()
@click.option('--language', help='Language code to query.')
@click.option('--word_type', help='Type of word to query.')
def query_wikidata(language, word_type):
    sparql = SPARQLWrapper("https://query.wikidata.org/sparql")
    query = f"""
        SELECT ?item ?itemLabel WHERE {{
            ?item wdt:P31 wd:{word_type};
            wdt:P424 ?lang_code .
            FILTER(STR(?lang_code) = "{language}")
            SERVICE wikibase:label {{ bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }}
        }}
    """
    sparql.setQuery(query)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    click.echo(results)

if __name__ == '__main__':
    query_wikidata()
  • Utilize SPARQLWrapper: SPARQLWrapper will be used to execute SPARQL queries against the Wikidata Query Service. This involves setting up the SPARQLWrapper instance, constructing the SPARQL query based on user input, and processing the results.
from SPARQLWrapper import SPARQLWrapper, JSON

def get_language_data(language, word_type):
    sparql = SPARQLWrapper("https://query.wikidata.org/sparql")
    query = f"""
        SELECT ?item ?itemLabel WHERE {{
            ?item wdt:P31 wd:{word_type};
            wdt:P424 ?lang_code .
            FILTER(STR(?lang_code) = "{language}")
            SERVICE wikibase:label {{ bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }}
        }}
    """
    sparql.setQuery(query)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return results

This example code defines a function get_language_data that uses the SPARQLWrapper library to execute a SPARQL query against Wikidata, fetching items of a specified word_type that are associated with a given language, and returns the results in JSON format.

3. Testing

Objective: Develop a comprehensive test suite, including unit tests, integration tests, and end-to-end tests.

Technical Details:
Develop a comprehensive test suite: The testing phase will involve writing tests for individual functions, integration tests for the CLI tool's overall functionality, and end-to-end tests to ensure the tool works as expected with real Wikidata data.

Code Example:

import unittest
from click.testing import CliRunner
from module import query_wikidata  # the module defining the Click command

class TestQueryWikidata(unittest.TestCase):
    def test_query_wikidata(self):
        # Click commands are invoked through CliRunner rather than called directly.
        runner = CliRunner()
        result = runner.invoke(
            query_wikidata, ["--language", "en", "--word_type", "Q108712"]
        )
        self.assertEqual(result.exit_code, 0)
        self.assertNotEqual(result.output, "")
        # Add more assertions based on expected results.

if __name__ == '__main__':
    unittest.main()
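Because end-to-end calls to the live Wikidata endpoint are slow and nondeterministic, unit tests could also stub the SPARQL client out with unittest.mock. The sketch below uses a dependency-injected client, a design variation rather than the current code, so no network access is needed:

```python
import unittest
from unittest.mock import MagicMock

def get_language_data(client, language, word_type):
    """Dependency-injected variant of the earlier query function.

    `client` is anything exposing setQuery/setReturnFormat/query, e.g. a
    SPARQLWrapper instance in production or a MagicMock in tests.
    """
    client.setQuery(f"# SPARQL for {language}/{word_type} elided")
    client.setReturnFormat("json")
    return client.query().convert()

class TestGetLanguageData(unittest.TestCase):
    def test_returns_converted_results(self):
        fake = MagicMock()
        # Make client.query().convert() return a canned Wikidata-style payload.
        fake.query.return_value.convert.return_value = {
            "results": {"bindings": [{"itemLabel": {"value": "noun"}}]}
        }
        results = get_language_data(fake, "en", "Q1084")
        self.assertEqual(
            results["results"]["bindings"][0]["itemLabel"]["value"], "noun"
        )
        fake.setQuery.assert_called_once()

# Run with: python -m unittest <this module>
```

This keeps the unit suite fast and deterministic, leaving the live endpoint to a small set of end-to-end tests.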
  • Implement continuous integration (CI): GitHub Actions will be used for CI, automatically running the test suite on code changes to ensure the tool's reliability and robustness.
# .github/workflows/python-app.yml
name: Python application test with GitHub Actions

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2
    - name: Set up Python 3.8
      uses: actions/setup-python@v2
      with:
        python-version: "3.8"
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install -r requirements.txt
    - name: Run tests
      run: |
        python -m unittest discover tests

The GitHub Actions workflow above automatically tests the Python application whenever code is pushed to the repository. It sets up a Python 3.8 environment on an Ubuntu runner, installs dependencies from a requirements.txt file, and runs unit tests using unittest discovery.

4. Deployment

Objective: Package the CLI tool in a Docker container for easy deployment and usage in a production environment. This involves creating a Dockerfile that specifies the tool's dependencies and how to run it, encapsulating the CLI tool and its environment within the container. This approach ensures that the tool can be deployed and run on any system that supports Docker, providing a consistent and reproducible environment.

Technical Details: The Dockerfile for the CLI tool will be based on the Python 3.8-slim image, which is a lightweight version of Python 3.8. This image will be used as the base for the CLI tool's Docker container. The Dockerfile will include instructions to copy the requirements.txt file into the container and install the dependencies listed in this file using pip. Additionally, the Dockerfile will copy the current directory's contents into the /src directory inside the container and set the default command to run script.py with Python.

Code Example:

# Dockerfile
FROM python:3.8-slim

WORKDIR /src

COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt

COPY . .

CMD ["python", "script.py"]

By following best practices, like using official and verified Docker images, the CLI tool can be deployed and maintained in a production environment with security, efficiency, and maintainability in mind. Regular updates to Docker images and dependencies will ensure that the CLI tool benefits from the latest bug fixes, performance improvements, and security patches.

5. Integration

Objective: Integrate the CLI tool into existing Scribe applications, ensuring seamless data access and processing.

Technical Details: This phase involves developing APIs or command-line interfaces that allow the CLI tool to interact with the Scribe applications. It may also involve modifying existing Scribe applications to utilize the CLI tool for data access.
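One hypothetical shape for that integration is a thin Python shim that a Scribe application calls, which shells out to the CLI and parses its JSON output. The command names follow the proposed structure; nothing here is existing Scribe code:

```python
import json
import subprocess

def fetch_language_data(language, word_type):
    """Fetch language data by shelling out to the scribe-data CLI.

    Hypothetical integration shim: assumes a `scribe-data` executable on
    PATH whose `query language` subcommand prints JSON, as in the
    proposed CLI structure.
    """
    completed = subprocess.run(
        [
            "scribe-data", "query", "language",
            "--language", language,
            "--word_type", word_type,
            "--output", "json",
        ],
        capture_output=True,
        text=True,
        check=True,  # raise CalledProcessError on a non-zero exit
    )
    return json.loads(completed.stdout)
```

The alternative mentioned above, importing the tool's functions directly as a library, avoids the subprocess boundary and is preferable when the host application is also Python.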

6. Community Engagement

Objective: Develop and implement a community engagement plan, including establishing communication channels and scheduling initial meetings.

Technical Details: This involves identifying the best channels for engaging with the Wikimedia community, such as forums, mailing lists, or social media groups. It also includes planning and conducting meetings or webinars to gather feedback and solicit contributions.

7. Documentation

Objective: Enhance and maintain the existing comprehensive documentation for the CLI tool, covering installation, usage, and development guidelines.

Technical Details: The Scribe-Data documentation is already written using Sphinx. The existing documentation will be updated to include the following sections:

  • Introduction: An overview of the Scribe-Data CLI tool, its purpose, and features.
  • Installation: Step-by-step instructions for installing the tool via pip and conda-forge.
  • Usage: Detailed usage examples and explanations for each command, subcommand, and option.
  • Configuration: Information on configuring the tool, including setting the Wikidata endpoint and user-agent.
  • Development: Guidelines for contributing to the project, including information on the codebase structure, testing, and code style.
  • Troubleshooting: Common issues and solutions for troubleshooting problems with the tool.

Timeline

Phase 1: Requirement Analysis and Design (May 1 - May 15)

  • May 1-5: Analyze the current Scribe-Data process, identifying limitations and areas for improvement.
  • May 6-10: Define dependencies on Scribe applications and assess how the CLI tool can be integrated without disrupting current functionalities.
  • May 11-15: Design the modular architecture for the CLI tool, ensuring scalability and ease of integration with existing Scribe applications.

Deliverable: Detailed design document outlining the proposed architecture, integration points, and dependencies.

Phase 2: Development (May 16 - July 10)

  • May 16-31: Implement the core functionality of the CLI tool, including argument parsing and basic SPARQL query formation.
  • June 1-15: Integrate SPARQLWrapper for executing SPARQL queries against Wikidata, including error handling, result processing, and management of .sparql file templates for flexible queries.
  • June 16-30: Develop the API and command-line interfaces for integration with Scribe applications.
  • July 1-10: Implement additional features based on initial testing feedback, such as extended query capabilities and user-friendly output formatting.

Deliverable: A functional CLI tool with core features implemented and initial integration capabilities developed.

Phase 3: Testing and Documentation (July 11 - July 31)

  • July 11-20: Develop a comprehensive test suite, including unit tests, integration tests, and end-to-end tests.
  • July 21-25: Set up continuous integration pipelines using GitHub Actions.
  • July 26-31: Write detailed technical documentation covering architecture, SPARQL queries, usage guides, and contribution guidelines.

Deliverable: Complete test suite with CI integration and comprehensive documentation for the CLI tool.

Phase 4: Deployment Preparation and Initial Integration (August 1 - August 10)

  • August 1-5: Package the CLI tool in a Docker container, ensuring it meets all dependencies and can be easily deployed.
  • August 6-10: Provide detailed Docker deployment instructions and begin preliminary integration into existing Scribe applications.

Deliverable: Docker package for the CLI tool and deployment instructions, with initial integration steps outlined.

Phase 5: Community Engagement, Documentation and Final Integration (August 11 - August 20)

  • August 11-15: Develop and start implementing the community engagement plan, including establishing communication channels and scheduling initial meetings. Write up CLI documentation.
  • August 16-20: Finalize the integration of the CLI tool into Scribe applications, ensuring seamless data access and processing. Solicit feedback from early users and stakeholders.

Deliverable: Complete integration of the CLI tool into Scribe applications and initiation of community engagement activities.

Conclusion and Review (August 20)

Review project outcomes against objectives, document lessons learned, and outline next steps for future development based on community feedback.

User Interface and Experience

To ensure a user-friendly experience, the CLI tool will prioritize the following features:

  • Interactive Command-Line Interface: Leveraging the Click library for intuitive navigation through commands and subcommands, with auto-completion and intelligent prompts.
  • Contextual Help and Documentation: Comprehensive help messages and documentation accessible through the --help option.
  • User Input Validation: Robust input validation mechanisms with clear error messages and guidance.
  • Customizable Output Formatting: Support for various output formats (JSON, CSV, tabular) through options like --output.
  • Progress Indicators and Feedback: Visual feedback and progress indicators for long-running operations.
  • Cross-Platform Compatibility: Consistent behavior across operating systems for seamless usage everywhere the tool is installed.

By adding these user-centric features alongside the modular architecture and proposed CLI structure, the tool will provide an intuitive and efficient experience for accessing and working with Wikidata language data.
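As an example of the progress-indicator point above, Click ships a progressbar helper that could wrap long-running result processing. This is a sketch with a placeholder transformation, not the real processing code:

```python
import click

def process_results(items):
    """Process query results while showing a progress bar."""
    processed = []
    # click.progressbar renders a live bar in a terminal and degrades
    # gracefully to a plain label when output is not a TTY.
    with click.progressbar(items, label="Processing results") as bar:
        for item in bar:
            processed.append(item.upper())  # placeholder transformation
    return processed

print(process_results(["noun", "verb", "adjective"]))
```

For multi-page SPARQL result sets, the iterable would be the pages or bindings being post-processed, giving users feedback during long downloads.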

Conclusion

The proposed project will significantly enhance the accessibility and utility of Scribe-Data by transforming it into a multi-purpose CLI tool. This tool will enable a wider audience to easily access language data from Wikidata, thereby expanding the reach of the service to non-Scribe languages and enhancing its overall value. The integration of the CLI tool into existing Scribe applications will further streamline data processing and access, providing a seamless experience for users. The project will also include a comprehensive community engagement plan to ensure that the tool meets the needs of its users and the broader Wikimedia community.

Participation

Communication Medium: Community Matrix room
Communication Frequency: Attending community meetings, 1:1 calls with mentors
Source Code Publication: via GitHub

About Me

  • Education: Completed BSc in Computer Science and Engineering
  • How did you hear about this program - From LinkedIn
  • Will you have any other time commitments, such as school work, another job, planned vacation, etc, during the duration of the program - No
  • We advise all candidates eligible for Google Summer of Code and Outreachy to apply for both programs. Are you planning to apply to both programs and, if so, with what organization(s)? - I am applying only for GSoC, and only for this project.
  • What does making this project happen mean to you?

To answer this question in one line: I really love working with Wikimedia, Wikidata, and Wikipedia! Secondly, the required skill set perfectly matches mine. During the contribution period, while working with the mentors, I realized there are so many things I can learn from them. They are super friendly and helpful, which makes the experience even more enjoyable.

Past Experience

  • Describe any open source projects you have contributed to as a user and contributor (include links).
    • Mentor - Grace Hopper Celebration Open Source Day: Led a session on creating pull requests in the Node.js GitHub organization and served as a mentor during the Grace Hopper Celebration Open Source Day 2023, assisting individuals in making their first pull request to Node.js.
    • Conda-Forge: Created the mwparserfromhell Conda-Forge package.
    • Mozilla Glean Dictionary: Created 10 Pull requests (https://github.com/mozilla/glean-dictionary/pulls?q=is%3Apr+author%3Amhmohona+is%3Aclosed) and 5 Issues, playing a significant role in improving the functionality and reliability of the Glean Dictionary.
    • Microsoft Azure Machine Learning Scholarship Project Showcasing Challenge: Organized and maintained the GitHub repository for this challenge, which involved mentoring participants, managing submissions, and ensuring smooth operation of the challenge.
    • NumPy: Contributed to the translation of the 2020 NumPy User Survey, making the survey content accessible to a wider international audience.

Event Timeline

Mhmohona renamed this task from GSoC Project Proposal - Scribe-Data: Refactor into a Multi-Purpose Wikidata Language Pack CLI Tool to GSoC'24 Proposal - Scribe-Data: Refactor into a Multi-Purpose Wikidata Language Pack CLI Tool. Apr 1 2024, 11:17 AM

Week 1 : Weekly Internship Report (27 May - 2 June)

  1. Overview of Tasks Completed:
  2. Key Accomplishments:
    • Accomplishment 1: overcame the panic of not getting the expected output for Bangla via my amazing mentor's comment. 😁
  3. Learnings and Skills Gained:
  4. Goals for Next Week:

Week 2 : Weekly Internship Report (2 June - 9 June)

Overview of Tasks Completed:

Task 1: Scribe-data CLI tool implementation - list of available languages
Task 2: Scribe-data CLI tool implementation - word types for every available language
PR link - https://github.com/scribe-org/Scribe-Data/pull/140

Learning and Skills Gained:
Learning 1: Learnt in depth about CLI Structure.

Goals for Next Week:
Goal 1: Implement query functionality within CLI
Goal 2: Implement CLI query sqlite --output-type functionality via --convert

Week 3 : Weekly Internship Report (10 June - 16 June)

Overview of Tasks Completed:

Task 1: Finalize file structure of Scribe-data CLI tool in GitHub repository.
Task 2: Scribe-data CLI tool implementation - query based on language and/or word type for every available language
PR link - https://github.com/scribe-org/Scribe-Data/pull/140

Learning and Skills Gained:
Learning 1: Learnt more about Scribe-data structure

Goals for Next Week:
Goal 1: Address all feedback on submitted PR
Goal 2: Implement CLI query sqlite --output-type functionality via --convert

Week 4 : Weekly Internship Report (17 June - 23 June)

Overview of Tasks Completed:

Task 1: Got my PR merged (Yay!).
Task 2: Implement --output-dir and --overwrite functionality for a JSON file output
Task 3: Implement CLI query .csv and .tsv --output-type functionality via convert

Goals for Next Week:
Goal 1: Finish all the functionality for the CLI tool, including total, convert, etc.
Goal 2: Bug fix.
Goal 3: Start documentation, if time allows.

Week 5 : Weekly Internship Report (24 June - 30 June)

Overview of Tasks Completed:
Submitted 3 PRs -

Goals for Next Week:
Goal 1: Bug fix for the submitted PRs.
Goal 2: Pip Implementation

Week 6 : Weekly Internship Report (1 July - 7 July)

Overview of Tasks Completed:
Submitted 3 PRs -

Goals for Next Week:
Goal 1: make progress on pip implementation
Goal 2: make progress on docker deployment

Week 7 : Weekly Internship Report (8 July - 14 July)

Overview of Tasks Completed:

Submitted 3 PRs (work in progress) -

Goals for Next Week:
Goal 1: complete doc development
Goal 2: make progress on docker deployment

Week 8 : Weekly Internship Report (15 July - 23 July)

Couldn't do much because of an internet issue.
Details - https://www.aljazeera.com/news/2024/7/23/bangladesh-curfews-internet-blackout-batter-economy-amid-quota-protests

Week 9 : Weekly Internship Report (24 July - 28 July)

Overview of Tasks Completed:

Week 10 : Weekly Internship Report (29 July - 4 August)

Overview of Tasks In progress:

  • Studying Docker deployment

Week 11 : Weekly Internship Report (5 August - 11 August)

Overview of Tasks In progress:

  • No progress on docker -_-
  • Version command and Upgrade command

Week 12 : Weekly Internship Report (13 August - 19 August)

Overview of Tasks
Completed:

In progress

Resolving this as the program was finished successfully :) Great work, @Mhmohona 🎉