@Marostegui Yes, I am not working on this now. Thank you for resetting.
Nov 5 2021
Mar 23 2021
Sep 1 2020
Aug 4 2020
@Andrew Thank you for your suggestions. I just tried it and it is working as expected. So, as you said, I must have tried before the VMs were ready. Thanks again!
Hi @Andrew, I have recreated them once again and now the error has changed to this:
Hi @Andrew, I am facing an issue with the VMs. I have recreated them but am not able to ssh from my machine. It shows the following error:
Jul 31 2020
I think we can close this! What do you think?
In T257601#6349705, @jcrespo wrote:
16:28:13 Warning, treated as error:
16:28:13 /src/transferpy/transfer.py:docstring of transferpy.transfer.to_bool:5:Field list ends without a blank line; unexpected unindent.
16:28:13 ERROR: InvocationError for command /src/.tox/sphinx/bin/sphinx-build transferpy/doc transferpy/doc/.build --color -W (exited with code 2)
Jul 28 2020
Thank you for the update. Please don't worry about VM recreation. Please let us know (ping here, as you said) once it is ready. Thank you!
Jul 23 2020
Yeah, Okay, Thank you!
I agree with this. As you suggested, we will split it based on class.
Jul 22 2020
Sounds good to me. And yeah, I am currently concentrating on the Gerrit comments and documentation. Thank you!
Yeah, I think it is a good idea to have transfer time. Thank you!
Jul 17 2020
I have just updated the commit message so that it is visible here!
In T254979#6315023, @jcrespo wrote:
So the largest issues are with how the options work, which makes them very confusing:
If I do --no-checksum, I expect not to get any checksum; however, I get a parallel checksum.
If I do --parallel-checksum, I expect to get a parallel checksum; however, I get a normal checksum.
Yeah, I agree with you. I am working on this issue currently. Thank you.
Jul 16 2020
Sure, I will do the rebasing! Thank you for mentioning it.
Make sure
transfer.py --parallel-checksum source target
Jul 13 2020
The following scenarios come under this ticket:
Jul 12 2020
Jul 9 2020
In T254979#6292417, @jcrespo wrote:
I don't think this setup is adequate for testing parallelism, given we only have 1 host to transfer to (in parallel). I believe this could be way more interesting when using a 10Gb host with multiple 1Gb targets, plus it would help a lot with target checksum parallelism (which is the use case I mentioned to you in our meeting). Did you create a prototype for this or did you run a command manually? If you did some code (even if not good enough), I would like to see it so I can test it on my own.
Jul 8 2020
I have written code for parallel data transfer using multiprocessing. I have benchmarked it on our test machines and the results are given below:
Jul 5 2020
I have run the source multiprocess checksum and the results are given below:
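A minimal sketch of the multiprocess source checksum idea, checksumming several files concurrently with a worker pool. The helper names (`md5_of_file`, `parallel_checksums`) are hypothetical; the real transferpy implementation may differ:

```python
import hashlib
import os
import tempfile
from multiprocessing import Pool

def md5_of_file(path):
    # Hash one file in 1 MiB chunks; each pool worker handles one file.
    md5 = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(1 << 20), b''):
            md5.update(chunk)
    return path, md5.hexdigest()

def parallel_checksums(paths, workers=4):
    # Map files onto worker processes so checksums are computed in parallel.
    with Pool(processes=workers) as pool:
        return dict(pool.map(md5_of_file, paths))

# Demo on a couple of temporary files.
paths = []
for payload in (b'alpha', b'beta'):
    fd, path = tempfile.mkstemp()
    os.write(fd, payload)
    os.close(fd)
    paths.append(path)

checks = parallel_checksums(paths)
```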
Jul 2 2020
Change of opinion :D Our code has specific tests for all the cases, so I will make one for host validity as well; that way it will be aligned with the existing code.
Parsing the cumin output seems to be a better idea; let me check the output of cumin in these kinds of cases.
Jul 1 2020
@jcrespo Can you please tell me a way to corrupt the source socket in xtrabackup? By corruption, I mean some changes,
Jun 30 2020
Sorry, I forgot to give the sysbench outputs.
In T254979#6267654, @jcrespo wrote:
A preliminary result from this suggests that parallel checksum should be able to be disabled, but be enabled by default (unless cpu usage increased a lot).
Yeah, I will look into this. Let's keep this ticket open so that we can keep an eye on it!
I have run benchmarks with the new cloud test machines.
bigfile: 1.4TB
manySmallFiles300: 293GB (150 000 files)
Actually, I never got that error. I will look into the possibility of that error.
Can we get a --verbose output? That would tell us whether the problem is with Cumin.
In our testing environment, I am currently using only the Debian package. Let me see what this issue could be!
Jun 29 2020
Thank you @jcrespo @Majavah. I was using 1.26; I have just updated and everything is working fine. Thank you for your help!
Jun 26 2020
This race condition can be solved/reduced by making a directory (mkdir) in temp as soon as we see a free port.
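The mkdir trick works because directory creation is atomic: if two transfers race for the same free port, only one `mkdir` succeeds. A minimal sketch, with a hypothetical `reserve_port` helper and lock-directory naming not taken from the real code:

```python
import os
import tempfile

def reserve_port(port):
    # os.mkdir is atomic: if two processes race for the same free port,
    # only the first mkdir succeeds; the loser should try another port.
    lock_dir = os.path.join(tempfile.gettempdir(), 'transferpy-port-%d' % port)
    try:
        os.mkdir(lock_dir)
        return True
    except FileExistsError:
        return False
```

The lock directory would be removed again once the transfer finishes, freeing the port for later runs.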
Jun 25 2020
Thank you for the machines, I am able to log in and work on it :-D
Okay, as per our IRC discussion, 1GB of memory is sufficient. Thank you!
@jcrespo We need to run MariaDB, xtrabackup, and so on. Will 1GB be sufficient?
Jun 24 2020
Thank you! Yeah, I will do that.
Merged the packaging patch!
Yeah, the transferpy is now available at https://doc.wikimedia.org/ :-)
Jun 22 2020
I don't think we will be able to incorporate data transfer progress information. RemoteExecution works in such a way that it sends the full data as a whole, without any communication back to the main program. Since the netcat command runs on the remote machine, the machine running the framework has no information about it! What do you think?
Jun 19 2020
In T253219#6237417, @jcrespo wrote:
We can close this, but let's remember to keep the help up-to-date with the new features implemented, as well as everything that is currently missing as it has not yet been fully decided.
Jun 18 2020
Okay!
(Machine spec: i5-2nd Gen with SATA HDD and 6GB DDR3 RAM)
Jun 15 2020
I tried incorporating the parallel md5sum into the code, but it is not working as expected!
Jun 10 2020
Thank you for the information, and yes, it was helpful :-)
Oh okay, how about giving the user a choice?
- Checksum parallel to transfer (document the issues we find at testing)
- Checksum after the transfer (document the delay issues)
In T254979#6210033, @jcrespo wrote:
I think it is a good starting point- I suggest you do some benchmarking (doesn't need to be implemented in code yet) of how much more expensive this strategy would be compared to the current method and compared to no checksum, to understand the impact/improvement.
I would like to calculate the checksum of the actual tarred file. We can do this in parallel with the transfer, like this:
At sender: tar cf - <directory> | tee >(echo $(md5sum) > /tmp/transfer_send) | remaining-commands
At receiver: commands | tee >(echo $(md5sum) > /tmp/transfer_recv) | tar xf - <directory>
Then we can compare those two checksum temp files at the end of the transfer. It will surely reduce the overall time.
What do you think?
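The same idea in a small Python sketch: hash each chunk as it is forwarded, so the checksum is computed in parallel with the transfer rather than in a second pass over the data. This only illustrates the principle; the real pipeline uses tar, tee, and md5sum in the shell, and `copy_with_md5` is a hypothetical name:

```python
import hashlib
import io

def copy_with_md5(src, dst, bufsize=1 << 20):
    # Forward the stream chunk by chunk, updating the digest as we go,
    # like tee'ing the tar stream into md5sum during the transfer.
    md5 = hashlib.md5()
    while True:
        chunk = src.read(bufsize)
        if not chunk:
            break
        md5.update(chunk)
        dst.write(chunk)
    return md5.hexdigest()

data = b'some payload' * 1000
sender, receiver = io.BytesIO(data), io.BytesIO()
sent = copy_with_md5(sender, receiver)
# The receiving side hashes the same stream; matching digests mean
# the transfer arrived intact.
received = hashlib.md5(receiver.getvalue()).hexdigest()
```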
Jun 8 2020
Jun 4 2020
May 28 2020
Yeah, I will think about it, Thank you.
I have uploaded a new patch set with a working deb file inside the dist folder (https://gerrit.wikimedia.org/r/c/operations/software/wmfmariadbpy/+/598984/2/dist/transferpy_1.0-1_amd64.deb).
May 27 2020
May 26 2020
May 25 2020
Okay, I will use the GSoC column for the tickets. Thank you!
I will try with fuser then! Thank you!
We are happy to help :D
Thank you for the suggestion, I will try with netcat.
What we do now is: we start an nc listen command on the target machine with start_job, which creates a new process on the machine running the framework (waiting on the netcat listen), and the kill_job function uses the terminate function inside multiprocessing/process.py (given below) to kill that job.
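A minimal sketch of that terminate mechanism, assuming a long-running child that stands in for the remote listen command (the function names here are illustrative, not the real transferpy API):

```python
import multiprocessing
import time

def listen_job():
    # Stand-in for the long-running remote command (e.g. the nc listen).
    time.sleep(60)

def start_and_kill():
    p = multiprocessing.Process(target=listen_job)
    p.start()
    time.sleep(0.2)    # let the child get going
    p.terminate()      # what kill_job relies on: sends SIGTERM to the child
    p.join()
    return p.exitcode  # negative signal number on POSIX (-15 for SIGTERM)
```

Note that terminate only signals the local child process; it does not by itself reach a command running on the remote machine, which is why cleanup there needs separate handling.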