
recover-mariadb should use logging (logger) to indicate actions taken
Open, LowPublic

Description

recover-dump doesn't use logging to log every action taken, like backup-mariadb does.

A logger should probably be set up so that it logs to /var/log/mariadb-backups/recoveries.log (next to the existing /var/log/mariadb-backups/backups.log) when run.

This will not only allow tracking its progress, but also make it easier to debug in case of error.
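For illustration, a minimal sketch of what such a setup could look like with Python's logging module (the logger name, format, and default path here are assumptions mirroring the backups.log convention, not the actual wmfbackups code):

```python
import logging


def init_logging(log_path='/var/log/mariadb-backups/recoveries.log'):
    """Configure a file logger for recovery actions.

    The logger name and default path are assumptions for illustration;
    they mirror the existing backups.log convention mentioned above.
    """
    logger = logging.getLogger('recovery')
    logger.setLevel(logging.INFO)
    handler = logging.FileHandler(log_path)
    handler.setFormatter(logging.Formatter(
        '%(asctime)s - %(levelname)s - %(message)s'))
    logger.addHandler(handler)
    return logger
```

Every action recover-dump takes could then go through `logger.info(...)` (or `logger.error(...)` on failure) and end up in the file for later inspection.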

Event Timeline

Change 671942 had a related patch set uploaded (by Rohitesh20; owner: Rohitesh20):
[operations/software/wmfbackups@master] Add logger to recover-dump::to indicate actions taken

https://gerrit.wikimedia.org/r/671942

Hi @jcrespo,

I would be grateful if you could have a look at my patch. I have just set up a logger, but the logger is not used anywhere yet; I would be glad to make further enhancements here 😄

Additionally, I feel that we should probably add instructions to the README for setting up a virtual environment and installing the requirements needed to run the tests. Also, I believe tox is missing from the test-requirements file; should I add it in the next commit?

Another possible enhancement: the logger currently uses the hour:minute:second format for timestamps. Should we use month/day/year hour:minute:second instead?

@Rohitesh-Kumar-Jain Thank you for your contribution.

A few comments:

  • The commit looks fine, but because it has no impact on actual functionality yet, I would like to see at least some extra commits making use of it before reviewing it. It doesn't have to be a single commit, it can be one or several, but the change is too low-impact right now to merge/review as is. I would also like to see some relevant tests added to demonstrate good development practices (even if the existing codebase is not currently great at that :-))
  • Any addition to the documentation would be great; just send a patch if you think it will make things better
  • Regarding tox, I don't see it as useful to add it to test-requirements, because that file is "run" by tox itself (unless I am wrong and missing something obvious). Or maybe you mean something else, like it missing from the build-deps of the Debian package? The documentation tells you to use tox, but like I said before, if you can improve the documentation with a patch, it is certainly welcome :-)
  • Regarding the log format, I DO believe the current one can be improved, and an ISO format would be preferred. However, the exact format should probably be agreed on between all wmfbackup executables at the same time, and probably also with related packages such as wmfmariadbpy. We would also need input from @Marostegui and other people to decide and sync the changes, and maybe also from the observability team on what format they suggest.
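As an illustration of the ISO direction mentioned in the last point, this is what an ISO 8601-style timestamp could look like with Python's logging module (the field layout here is a hypothetical sketch, not an agreed format):

```python
import logging

# Hypothetical ISO 8601-style format, for discussion only; the final
# format would need to be agreed across wmfbackups and wmfmariadbpy.
formatter = logging.Formatter(
    fmt='%(asctime)s %(levelname)s %(name)s: %(message)s',
    datefmt='%Y-%m-%dT%H:%M:%S',  # ISO 8601 date/time, local time
)
```

With this `datefmt`, a record would render along the lines of `2021-03-20T17:10:58 INFO recovery: ...`, which sorts lexicographically and is unambiguous across locales.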

Because it is very easy to make the scope of the changes larger than the original task, I suggest you do the following:

  • Choose, among the improvements you suggested, the one you prefer to focus on first (adding logging, changing the format in all related packages, improving documentation). Do not work, for now, on more than one task at a time; the tasks we proposed are "easy to code", but all tasks are hard to get merged/coordinated for deployment to production!
  • If it doesn't have a ticket on Phabricator specifically for it (changing the format, improving the build documentation), create one, explaining what you want to do and why; this is vital for coordination with the other people who will be involved in review
  • You would need, if possible, a testing environment where you can build a Debian package to test the changes (there is currently no automated integration testing for this repo). If this is not possible, I could help with that; but assuming this is related to GSoC, a Debian Buster virtual machine is the target environment to aim for (we may be able to provide some of that for the GSoC student).

This (the coordination required to merge a patch) is more important than the patches themselves. Thanks again for your contribution.

Hi @jcrespo,

Thanks for the detailed comments. I would like to focus on the logger for now, and will work on changing the format and on the build documentation later by creating Phabricator tickets for them.

I tried to run the unit tests on the existing code base,

tox -e unit

but I got the error "unit: commands failed".

So, first of all, do I need to install a Debian virtual machine on my Mac, since I will only be able to run the unit tests and build a Debian package on a Debian-based OS like Ubuntu?
I would be glad if you & @Marostegui could guide me in setting up my development environment and creating a Debian package.

Thanks & Regards

So, first of all, do I need to install a Debian virtual machine on my Mac, since I will only be able to run the unit tests and build a Debian package on a Debian-based OS like Ubuntu?

Running the unit tests for this should work on any OS.

However, I may not be able to help you with your environment on Mac, as I don't have one. You will likely have to install a few dependencies through Homebrew. Maybe @Marostegui can help you better. For the final project you will need a Debian 10 environment to build packages, as both the web and command line interfaces are very tied to the needs of our specific environment (it is a systems project that has to run on our server environment).

For questions related to environment setup or general errors (not directly related to this task), please use the T274636 task, as it will be useful for many people. I can help you set up e.g. a VirtualBox or QEMU VM with Debian there, if needed.

I would not encourage setting up your environment on Mac; I'd recommend you use a virtual machine for that, for instance VirtualBox.

Hi @jcrespo & @Marostegui,

Thanks for answering my queries. I will try to set up the environment in a virtual machine, and will ask for help on the T274636 task if needed.

Can I claim this task please? @jcrespo
Within the scope of this ticket, this is what I plan to do

  • Implement a function to initialise logging into the specified directory
  • Convert existing print statements to logger statements (I believe they print to the console as well); I will assign the messages to logging.INFO or logging.ERROR depending on their nature
  • Add unit tests that check within the program whether the logs show up (this is slightly tricky and more difficult than adding the actual logger, but I can give it a try, perhaps in a different patchset from the above)
  • Integrate T277160 (probably merge that one first, and then we can change the print to a log statement)
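As a sketch of the second point above, converting a print statement into leveled logger calls could look like this (the function name, signature, and messages are hypothetical, not recover-dump's actual code):

```python
import logging

logger = logging.getLogger('recovery')


def recover_section(section, do_recover):
    """Hypothetical example of replacing print() with leveled log calls:
    INFO for progress messages, ERROR for failures."""
    # Before: print("Attempting to recover '{}' ...".format(section))
    logger.info("Attempting to recover '%s' ...", section)
    try:
        do_recover(section)
    except Exception as exc:
        # Before: print('ERROR: recovery of {} failed: {}'.format(section, exc))
        logger.error("Recovery of '%s' failed: %s", section, exc)
        return False
    logger.info("Recovery of '%s' completed", section)
    return True
```

With a console StreamHandler attached alongside the FileHandler, these messages would still appear on the terminal as the print statements did, while also being recorded in the log file.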

Please go ahead, I think it makes sense as an exception, as otherwise this could be a blocker for your work on T277160, as you proposed.

However, we will let @Rohitesh-Kumar-Jain continue working on their version of the patch, as this is not a competition (and collaborating with others is also a big part of Wikimedia development): the important point is learning to contribute, not the actual results. Sadly we don't have tons of small tasks on data persistence that can be worked on independently.

Sure, that sounds great. I will make it a separate patch so that @Rohitesh-Kumar-Jain can also collaborate with their own patch. Happy to collaborate :)

Could someone please add a code base project tag so this task could be found? Is this maybe Data-Persistence-Backup? Or Data-Services?

@Aklapper Sadly there is no suitable tag available for this. I would like to tag it as Data-Persistence-Backup but I am unable to (not my decision). :-( We would need a new tag for non-data-persistence-team backups. Data-Services is for cloud resources only.

Change 673714 had a related patch set uploaded (by H.krishna123; owner: H.krishna123):
[operations/software/wmfbackups@master] Add logger functionality to recover-dump, add logger statements

https://gerrit.wikimedia.org/r/673714

Just wanted to say I am still working on this. I've added a commit implementing the logger (https://gerrit.wikimedia.org/r/673714).
I'm not sure how to run jenkins-bot, so could you invoke it for me to see if the tests pass? (Tox passes locally.)
I have been a bit stuck on testing the logger; I will try to write some unit tests tomorrow to see if the logs are logged, and I'll do some research on that. In the meantime, can you review the above for me (to see if the functionality is correct and is what you had in mind when the ticket was made)? @jcrespo Thank you
The logs written to the file look like this:

[2021-03-20 17:10:58.245]: [recover_logical_dump()] - [INFO] - Attempting to recover 'None' ...
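For reference, a formatter along these lines would produce output in that shape; the "%(msecs)03d" trick yields the ".245"-style millisecond suffix. This is a reconstruction for illustration, not necessarily the exact formatter used in the patch:

```python
import logging

# Reconstructed formatter that would yield lines like the one above;
# "%(msecs)03d" provides the ".245"-style millisecond suffix. This is a
# sketch, not necessarily the exact formatter used in the patch.
formatter = logging.Formatter(
    fmt='[%(asctime)s.%(msecs)03d]: [%(funcName)s()] - [%(levelname)s] - %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S',
)
```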

Sorry for replying late, but I am down with dengue fever. Still, I feel it's nice that someone else is working on it.

@Rohitesh-Kumar-Jain Sorry to hear that. Please, when/if you feel ok, as I said above, feel free to send your own version of the patch (I will review both), or ask for another small task, whatever you prefer.

@Rohitesh-Kumar-Jain Oh no, sorry to hear that and hope you're feeling a bit better now / get well soon! :)
Not sure how I can help, but I had a look and the task T277754 doesn't look like it has been claimed yet; if possible, you could give it a try when you are feeling better.

@jcrespo I've just added a new patch https://gerrit.wikimedia.org/r/c/operations/software/wmfbackups/+/673714.
I've added you as a reviewer - please review when possible :)
I think this should be final. I've added one unit test to verify that initialising the logger works correctly. Testing whether each log statement in recover-dump is written to the file is a bit difficult: it would require eliminating the global variables in the recover-dump file and using configuration from a file (like a YAML file) so that we can use a "test" config, and I think some functions would need to be slightly modified to make them unit-testable.
I think that would be out of scope for this ticket, so I've just added one test to verify that logging initialises correctly, which I think is reasonable here. It's easy to spend many days on testing :)
Let me know your thoughts. Once this is merged, I will close this, and then I can rebase T277160 to log the times into the file.
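As an illustration of the kind of initialisation test described above, a self-contained sketch (init_logging here is a stand-in for the patch's actual function, and the logger name and format are assumptions):

```python
import logging
import os
import tempfile
import unittest


def init_logging(log_path):
    """Stand-in for the patch's logger-initialisation function."""
    logger = logging.getLogger('recover-dump-under-test')
    logger.setLevel(logging.INFO)
    handler = logging.FileHandler(log_path)
    handler.setFormatter(logging.Formatter('%(levelname)s - %(message)s'))
    logger.addHandler(handler)
    return logger


class TestInitLogging(unittest.TestCase):
    """Verify that initialisation wires up a working file handler."""

    def test_messages_reach_file(self):
        log_path = os.path.join(tempfile.mkdtemp(), 'recoveries.log')
        logger = init_logging(log_path)
        logger.info('recovery started')
        for handler in logger.handlers:
            handler.flush()
        with open(log_path) as log_file:
            self.assertIn('INFO - recovery started', log_file.read())
```

Writing to a temporary path keeps the test independent of /var/log permissions, which matters when the suite runs under tox on a developer machine.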

I will be checking this contribution soon as part of my job maintaining backups, and hopefully getting it merged.