Mar 23 2021
Sep 1 2020
Aug 4 2020
@Andrew Thank you for your suggestions. Just now I tried and it is working as expected. So, as you said, I must have tried before the VMs were ready. Thanks again!
Hi @Andrew, I have recreated them once again and now the error has changed to this:
Hi @Andrew, I am facing an issue with the VMs. I have recreated them but am not able to ssh from my machine. It shows the following error:
Jul 31 2020
I think we can close this! What do you think?
Jul 28 2020
Thank you for the update. Please don't worry about the VM recreation. Please let us know (ping here, as you said) once it is ready. Thank you!
Jul 23 2020
Yeah, Okay, Thank you!
I agree with this. As you suggested, we will split it based on class.
Jul 22 2020
Sounds good to me. And yeah, I am currently concentrating on the Gerrit comments and documentation. Thank you!
Yeah, I think it is a good idea to have transfer time. Thank you!
Jul 17 2020
I have just updated the commit message so that it is visible here!
Yeah, I agree with you; I am working on this issue currently. Thank you.
Jul 16 2020
Sure, I will do the rebasing! Thank you for mentioning it.
`transfer.py --parallel-checksum source target`
Jul 13 2020
The following scenarios come under this ticket:
Jul 12 2020
Jul 9 2020
Jul 8 2020
I have written code for parallel data transfer using multiprocessing. I have benchmarked it on our test machines and the results are given below:
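The idea can be sketched roughly as follows. This is not the actual transferpy code: the function names are hypothetical, and a local `shutil.copy` stands in for the real tar/netcat transfer between hosts; only the fan-out over a `multiprocessing.Pool` reflects the approach described above.

```python
import multiprocessing
import os
import shutil


def transfer_one(args):
    # Hypothetical stand-in for one per-file transfer; the real
    # transferpy pipes tar through netcat between machines instead
    # of copying locally.
    src, dst_dir = args
    dst = os.path.join(dst_dir, os.path.basename(src))
    shutil.copy(src, dst)
    return dst


def parallel_transfer(sources, dst_dir, workers=4):
    # Fan the file list out over a pool of worker processes so
    # several files are transferred at the same time.
    with multiprocessing.Pool(workers) as pool:
        return pool.map(transfer_one, [(s, dst_dir) for s in sources])
```

Per-file parallelism like this mainly helps with many small files; a single huge file still goes through one worker.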
Jul 5 2020
I have run the source multiprocess checksum and the results are given below:
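A minimal sketch of what a multiprocess source checksum could look like (the function names here are illustrative, not the actual transferpy code): walk the tree once, then spread the per-file md5 work over a process pool.

```python
import hashlib
import multiprocessing
import os


def file_md5(path):
    # Checksum one file in 1 MiB chunks to bound memory use.
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(1 << 20), b''):
            h.update(chunk)
    return path, h.hexdigest()


def parallel_md5(root, workers=4):
    # Collect every file under root, then checksum them in parallel.
    files = [os.path.join(d, n)
             for d, _, names in os.walk(root) for n in names]
    with multiprocessing.Pool(workers) as pool:
        return dict(pool.map(file_md5, files))
```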
Jul 2 2020
Change of opinion :D Our code has specific tests for all the cases, so I will make one for host validity too; that way it will be aligned with the existing code.
Parsing the cumin output seems to be a better idea; let me check the output of cumin in these kinds of cases.
Jul 1 2020
@jcrespo Can you please tell me a way to corrupt the source socket in xtrabackup? By corruption, I mean some changes,
Jun 30 2020
Sorry, I forgot to give the sysbench outputs.
Yeah, I will look into this. Let's keep this ticket open so that we can keep an eye on it!
I have run benchmarks with the new cloud test machines.
manySmallFiles300: 293GB (150 000 files)
Actually, I never got that error. I will look into the possibility of that error.
Can we get --verbose output? That would tell us whether the problem is with Cumin.
In our testing environment, I am currently using only the Debian package. Let me see what this issue could be!
Jun 29 2020
Jun 26 2020
This race condition can be solved (or at least reduced) by creating a directory (mkdir) in temp as soon as we see a free port.
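The trick works because mkdir is atomic: exactly one process can create a given directory, so it doubles as a lock on the port just found free. A minimal sketch (the lock-directory name `transferpy.port.<n>` and the function names are assumptions for illustration, not the actual transferpy code):

```python
import os
import tempfile


def reserve_port(port):
    # os.mkdir either succeeds (we hold the "lock") or raises
    # FileExistsError (someone else grabbed this port first).
    lockdir = os.path.join(tempfile.gettempdir(),
                           'transferpy.port.%d' % port)
    try:
        os.mkdir(lockdir)
        return True
    except FileExistsError:
        return False


def release_port(port):
    # Remove the lock directory once the transfer has finished.
    os.rmdir(os.path.join(tempfile.gettempdir(),
                          'transferpy.port.%d' % port))
```

It only reduces the race rather than eliminating it entirely, since the port can still be taken by an unrelated process between the free-port check and the netcat bind.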
Jun 25 2020
Thank you for the machines, I am able to log in and work on it :-D
Okay, as per our IRC discussion, 1GB of memory is sufficient. Thank you!
@jcrespo We need to run MariaDB, xtrabackup, and so on. Will 1GB be sufficient?
Jun 24 2020
Thank you! Yeah, I will do that.
Merged the packaging patch!
Yeah, the transferpy is now available at https://doc.wikimedia.org/ :-)
Jun 22 2020
I don't think we will be able to incorporate data-transfer progress information. RemoteExecution works in such a way that it sends the full data as a whole, without any communication with the main program. Since the netcat command runs on the remote machine, the machine running the framework has no information about it! What do you think?
Jun 19 2020
Jun 18 2020
(Machine spec: i5-2nd Gen with SATA HDD and 6GB DDR3 RAM)
Jun 15 2020
I tried incorporating the parallel md5sum into the code, but it is not working as expected!
Jun 10 2020
Thank you for the information, and yes, it was helpful :-)
Oh okay, how about giving the user a choice?
- Checksum parallel to transfer (document the issues we find during testing)
- Checksum after the transfer (document the delay issues)
I would like to calculate the checksum over the actual tarred stream. We can do this in parallel with the transfer like this:
At sender: tar cf - <directory> | tee >(md5sum > /tmp/transfer_send) | remaining-commands
At receiver: commands | tee >(md5sum > /tmp/transfer_recv) | tar xf -
Then we can compare those two checksum temp files at the end of the transfer. Compared to a separate checksum pass after the transfer, this should reduce the overall time.
What do you think?
Jun 8 2020
Jun 4 2020
May 28 2020
Yeah, I will think about it. Thank you.
I have uploaded a new patch set with a working deb file inside dist folder (https://gerrit.wikimedia.org/r/c/operations/software/wmfmariadbpy/+/598984/2/dist/transferpy_1.0-1_amd64.deb)
May 27 2020
May 26 2020
May 25 2020
Okay, I will use the GSoC column for the tickets. Thank you!
I will try with fuser then! Thank you!
We are happy to help :D
Thank you for the suggestion, I will try with netcat.
What we do now is: we start an nc-listen command on the target machine with start_job, which creates a new process on the machine running the framework (waiting on the netcat listen), and the kill_job function uses the terminate function inside multiprocessing/process.py (given below) to kill that job.
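In essence, the mechanism above boils down to this. A minimal sketch, assuming a sleeping placeholder for the netcat listen and illustrative function names (not the actual transferpy code): `Process.terminate()` sends SIGTERM to the child, so the job dies with a negative exit code.

```python
import multiprocessing
import time


def listen_job():
    # Stand-in for the netcat-listen process that start_job creates;
    # it just sleeps as if waiting for an incoming transfer.
    time.sleep(60)


def kill_job_demo():
    # Start the job, then stop it the way kill_job does: by calling
    # Process.terminate(), which delivers SIGTERM to the child.
    p = multiprocessing.Process(target=listen_job)
    p.start()
    p.terminate()
    p.join()
    return p.exitcode  # negative when killed by a signal
```

Since the kill happens on the machine running the framework, the remote nc process itself is only cleaned up indirectly, which matches the limitation discussed above about progress information.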
Just have a look at our documentation: https://transferpydoc.imfast.io/index.html :-)
(I uploaded it there to just have an easy look)