Feb 11 2018
Aug 26 2017
Jun 22 2017
Mar 21 2017
I can help with this project. I have experience in Python for software development, and I maintain a few tools on Tool Labs. I've also used Wikimedia's OAuth in one of my tools.
Mar 15 2017
Feb 23 2017
It only happens sometimes. For example, this message from a few hours before the one in the task description links to the correct task:
<wikibugs> Labs, Tool-Labs, Tool-Labs-tools-Database-Queries: nlwiki Labs replica with problems - https://phabricator.wikimedia.org/T138927#3043967 (Sjoerddebruin) Resolved>Open It shouldn't be that this page and two **privacy violations** keep appearing on pages as https://tools.wmflabs.org/wikidata-to...
Feb 14 2017
PyICU requires the underlying C++ library to work, which is in the libicu52 apt package on Debian Jessie. Once that is installed, PyICU can be installed in a virtualenv. Alternatively, the python-pyicu apt package can be installed, which installs PyICU and the underlying C++ library system-wide. One of the above packages would need to be installed in the Python 2 Docker image for PyICU to be usable in Kubernetes containers.
Jan 12 2017
The current version on Tool Labs, v0.10.25, reached end of life over 3 months ago, which means it's a security risk.
Jan 4 2017
Dec 17 2016
Dec 4 2016
Nov 24 2016
Nov 11 2016
The interim solution of using a compiled wheel of pyenchant no longer works because the wheel at http://tools-docker-builder-01.eqiad.wmflabs/pyenchant-1.6.7-py2.py3.cp27.cp32.cp33.cp34.cp35.pp27.pp33-none-any.whl has been taken down. Having the enchant C library (libenchant1c2a) installed in the image would remove the need for the wheel to be compiled and hosted.
Nov 2 2016
Being able to use custom Docker images would be extremely helpful, because it removes the need to compile from source, hunt for binaries, or ask for packages to be installed.
Oct 23 2016
Oct 1 2016
Sep 17 2016
Why not just redirect the .wiki domains to the .org domains?
Sep 9 2016
Sep 5 2016
Aug 26 2016
- Take load off NFS - logs are stored on an Elasticsearch cluster
- Make it far faster to see the actual logs from processes - doesn't depend on NFS, so should be fast
- Be able to search through logs easier - searching is easy: http://docs.graylog.org/en/2.0/pages/queries.html, "The search syntax is very close to the Lucene syntax. By default all message fields are included in the search if you don’t specify a message field to search in."
- Automatically drop older logs - index rotation can be configured based on message count, index size, or index time: http://docs.graylog.org/en/2.0/pages/index_model.html#eviction-of-indices-and-messages
- Provide a Filesystem based interface for log ingress - Graylog supports this: http://docs.graylog.org/en/2.0/pages/sending_data.html#reading-from-files, "we provide the Collector Sidecar which acts as a supervisor process for other programs, such as nxlog and Filebeats, which have specifically been built to collect log messages from local files and ship them to remote systems like Graylog."
- Provide more standard and modern interfaces (gelf? etc) for log ingress - Graylog supports GELF and syslog: http://docs.graylog.org/en/2.0/pages/sending_data.html
- Provide a filesystem based interface for log reading - I don't think this is supported, but you can export search results to CSV: http://docs.graylog.org/en/2.0/pages/queries.html#export-results-as-csv
- Provide a more modern interface for log reading as well - Graylog's interface looks fairly modern and easy to use to me:
- Be secure in allowing only authenticated members to read a particular tool's logs. - Graylog has access control out-of-the-box, and can integrate with LDAP users and groups: http://docs.graylog.org/en/2.0/pages/users_and_roles/external_auth.html#ldap-active-directory
Graylog has Streams (basically categories for log messages): http://docs.graylog.org/en/2.0/pages/streams.html, alerting based on those streams: http://docs.graylog.org/en/2.0/pages/getting_started/stream_alerts.html, and dashboards: http://docs.graylog.org/en/2.0/pages/dashboards.html.
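To make the GELF ingress option above concrete, here is a minimal stdlib-only sketch of what shipping a log line to a Graylog UDP input could look like. The GELF 1.1 payload shape (gzipped JSON with version/host/short_message/level) is standard; the hostname and server name used here are placeholders, not real infrastructure.

```python
import gzip
import json
import socket

def make_gelf_payload(host, message, level=6):
    """Build a gzip-compressed GELF 1.1 payload, the format Graylog's UDP input accepts."""
    record = {
        "version": "1.1",
        "host": host,
        "short_message": message,
        "level": level,  # syslog severity: 6 = informational
    }
    return gzip.compress(json.dumps(record).encode("utf-8"))

def send_gelf_udp(payload, server, port=12201):
    """Fire-and-forget send over UDP; 12201 is Graylog's conventional GELF port."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (server, port))

# Build a payload; actually sending it would be
# send_gelf_udp(payload, "graylog.example.org") with a real server.
payload = make_gelf_payload("tools-bastion-03", "webservice restarted")
print(json.loads(gzip.decompress(payload))["short_message"])  # → webservice restarted
```

Because GELF over UDP is just a datagram, a tool can log without blocking on NFS at all.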
Aug 25 2016
OpenVPN server can be configured to only provide routes to certain IP ranges/hosts:
- Using LDAP to authenticate would be easy: https://openvpn.net/index.php/access-server/docs/admin-guides/190-how-to-authenticate-users-with-active-directory.html
- Would only carry traffic to Labs hosts (e.g. *.wmflabs/*.labsdb hosts), and not to the internet.
- OpenVPN has clients for many different platforms, including Windows.
- OpenVPN is fast and secure.
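A server-side config sketch of the split-tunnel setup described above. The directives are standard OpenVPN, but the subnet and DNS values here are placeholders, not the real Labs ranges:

```
# Hand out VPN client addresses from a private pool (placeholder range).
server 10.8.0.0 255.255.255.0

# Push only a route for the Labs network (placeholder subnet) --
# with no redirect-gateway directive, internet traffic stays outside the tunnel.
push "route 10.68.16.0 255.255.240.0"

# Push a Labs resolver so *.wmflabs / *.labsdb names resolve (placeholder IP).
push "dhcp-option DNS 10.68.16.1"
```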
Aug 24 2016
Aug 19 2016
Aug 12 2016
Aug 8 2016
But Horizon tells you how much RAM/CPU/storage a flavour gives the instance when you pick the flavour there.
Aug 5 2016
Aug 4 2016
I have a replica.my.cnf in my home dir that was created in January 2016. There have been far more than 6 new users on Tool Labs so far this year.
Aug 1 2016
Jul 26 2016
Perhaps it would make sense to have a separate notice, given that the username isn't as confidential as, say, an IP address or similar.
I don't think that's the case. The user using OAuth via a tool will have their username disclosed to the tool though (subject to the usual private info restrictions at present).
Jul 25 2016
Jul 22 2016
@Magnus: T140110: Packages to be installed in Toolforge Kubernetes Images (Tracking) is the tracking task for packages to be installed in Kubernetes containers. You'll need to create a subtask of that.
Jul 20 2016
Java 1.8 is supported on Kubernetes in Tool Labs. If the SGE scripts could be converted or replaced with Kubernetes equivalents, then MerlBot could probably run on that. I recall that not being able to use Java 1.8 was a blocker to getting MerlBot onto HTTPS, because it couldn't use an updated version of a library or the additional features provided in Java 1.8 (though I might be remembering this wrong).
Jul 19 2016
Works for me.
Quarry would be non-compliant because the ToU classifies usernames as private information. The ToU states that you *must* show this disclaimer before collecting the private information (in this case the username):
By using this project, you agree that any private information you give to this project may be made publicly available and not be treated as confidential.
Jul 15 2016
Jul 12 2016
tools.piagetbot@tools-bastion-03:~$ virtualenv ~/venv -p /usr/bin/python3
Running virtualenv with interpreter /usr/bin/python3
Using base prefix '/usr'
New python executable in /data/project/piagetbot/venv/bin/python3
Also creating executable in /data/project/piagetbot/venv/bin/python
Installing setuptools, pip...done.
(of course, replace "~/venv" with the directory you want the virtualenv to go in)
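For the same use case, the stdlib venv module (Python 3.3+) can create an environment programmatically without the virtualenv tool; a minimal sketch, using a temp directory as a stand-in for a real target path:

```python
import os
import tempfile
import venv

# Create an isolated environment with the stdlib venv module, which covers
# the same use case as the virtualenv command shown above.
target = os.path.join(tempfile.mkdtemp(), "venv")
venv.create(target, with_pip=False)  # with_pip=False skips the pip bootstrap, so it's fast

interpreter = os.path.join(target, "bin", "python")
print(os.path.exists(interpreter))  # → True on Linux/macOS
```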
Jul 11 2016
Jul 5 2016
tom29739@tools-bastion-03:~$ update-alternatives --config editor
There are 12 choices for the alternative editor (providing /usr/bin/editor).
Just typing editor at a shell prompt brings up joe; on further investigation:
tom29739@tools-bastion-03:~$ type editor
editor is /usr/bin/editor
tom29739@tools-bastion-03:~$ file /usr/bin/editor
/usr/bin/editor: symbolic link to `/etc/alternatives/editor'
tom29739@tools-bastion-03:~$ file /etc/alternatives/editor
/etc/alternatives/editor: symbolic link to `/usr/bin/joe'
Jul 4 2016
I think it's that.
That's weird. It should be back on then...
It doesn't appear to be working. It's not on IRC, so I'd assume it's not working. @yuvipanda did you delete the pid files when you tried to get it working? It crashed last time, so those files need to be removed.
sudo rm -iv /mnt/share/wm-bot/*.pid
Should do that according to the docs.
Or alternatively wait for one of the bot's roots to come along, but that might take a while.
Jul 3 2016
Jul 1 2016
Jun 28 2016
Perhaps ulimits are causing it?
NFS was much faster about 5-6 months ago, when I joined. It's having an impact on the speed of other things too (e.g. the speed of webservices, especially with PHP).
Jun 27 2016
(newsopel)tools.piagetbot@tools-bastion-03:~$ time pip -V
pip 1.5.4 from /data/project/piagetbot/.virtualenvs/newsopel/local/lib/python2.7/site-packages (python 2.7)
@valhallasw looks like it's down again.
It looks like it's spewing out incorrect gzipped data, which the browser can't decode so it downloads it as a file instead.
Maybe the default DocumentRoot is in the wrong place so that the webserver is selecting the wrong file for the user's browser to interpret? That's the only plausible explanation that I can think of where setting the DocumentRoot would fix it.
@yuvipanda Chrome Version 51.0.2704.103 m (32 bit) on Windows 7. download.gz is 707 bytes from the https:// version. WinRAR can open it, there's a file named 'download' inside which is 677 bytes.
@yuvipanda neither 1. or 2. work for me, chrome tries to download a file called 'download.gz'. Both work with curl.
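One plausible way to end up with a download.gz that an archiver can open to reveal the real file (as observed above) is double compression, e.g. the app gzipping the body and a proxy gzipping it again. A stdlib sketch of that hypothesis, with a placeholder body:

```python
import gzip

body = b"<html><body>hello</body></html>"

# Hypothesis: the response body is gzipped twice. The browser undoes one
# layer (for Content-Encoding: gzip) and is left with bytes it can't
# render, so it saves them to disk as download.gz instead.
double = gzip.compress(gzip.compress(body))

once = gzip.decompress(double)        # what the browser ends up with
print(once == body)                   # → False: still gzip data
print(gzip.decompress(once) == body)  # → True: a second pass recovers the page
```

This would be consistent with curl working, since curl without --compressed just writes the raw bytes out.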
@chasemp it just happens randomly, not every time but still often enough to get annoying. I don't think it's anything that I'm doing because Niharika was having the exact same issue on their project.
A potential problem I foresee is different languages wanting to use the same page name. If wikis A, B and C all want to use page name X, then which wiki's content is used?
Jun 22 2016
I got really confused by this yesterday, nothing worked, because I didn't know about this.
My tools often need different packages to be installed, and sometimes these can be installed easily (binaries are available to download), but often the only options are a custom apt repo or compiling from source, which is very time-consuming and doesn't always work. These packages would make it possible to create a chroot as a non-root user, so packages could be installed inside the chroot with apt and used from there.
Jun 21 2016
No longer needed.
That project is no longer needed.
Jun 18 2016
Jun 17 2016
I have requested a new labs project for testing the impact that xdebug has: T138097: Create new labs project tools-xdebug-testing
I discovered that NFS was hugely slowing down all webservices. To try running a webservice from /tmp, I copied the files to the bastion's /tmp and made a symlink, and it worked (strangely, lighttpd could apparently access the bastion's /tmp, because there wasn't anything in the webserver host's /tmp when I checked).
Although NFS did slow down the webservice hugely, other factors (like xdebug) are still affecting it. I'm going to assign myself this task and do some more testing to find out whether xdebug (and maybe other factors) make a difference to the webservice's speed.