Tue, Sep 18
floating IPs should work now too.
The above patch fixes PTRs for internal and floating eqiad1 addresses. Floating IPs still need work:
Mon, Sep 17
Thank you for keeping in touch with HP about this. This is dumb :(
@arturo thanks! I'm pondering whether or not we should just do that cloud-wide... maybe just the bastions and the proxies.
Fri, Sep 14
This is working, but I note that the originating IP is getting natted when I go between regions:
Update: VMs created in eqiad1 appear there, but migrated VMs do not.
Thu, Sep 13
The 'figure out' parts of this are done now; all that's left is to do it.
Wed, Sep 12
Tue, Sep 11
Approved, but let's try to hold off for a week so that we can start this in the new region and avoid a future migration
I renamed these servers but they're still complaining about missing batteries.
I reimaged these and made all the puppet/dns changes needed. All that remains is the datacenter bits.
Mon, Sep 10
This is all done except for the physical label changes in eqiad.
Approved during today's SRE meeting
Thu, Sep 6
I'm trying to keep a handle on VM growth right now because that will limit the things I have to migrate in a few weeks... ping me if you run out of quota in the meantime, otherwise let's revisit this post-neutron.
Wed, Sep 5
I just now looked at this project and it looks to me like there's already enough headroom for 2 more m1.large VMs (and then some). So... we're all set for now, correct?
The context here is that a while ago I moved the local labtestwikitech database off of labtestweb2002 because Jaime asked me to 'productionize' it and I misunderstood his request. I could certainly just move it back there (although that would leave me with the puzzle of what 'productionize' means) or we could move labtestwikitech to db1073 (which would require some kind of tunnel encryption) or... I'm open to suggestions.
Hi all! I'm a bit lost because I think this task no longer has anything to do with its original post (which is about moving the databases off of the local wikitech server, long since done but to m5 rather than s5.) If I understand it, there are two different issues under discussion:
Tue, Sep 4
The two new m1.larges are approved -- I'll handle this shortly.
I'm reluctant to create big disks during the eqiad->eqiad1 transition; if y'all can wait a few weeks until we have some hardware capacity in eqiad1, we can set you up there. Please ping me later in the month :)
This bot was just now gobbling up CPU throughout the cluster.
Mon, Sep 3
This is all working in the region-migrate script.
If we have cross-region routing then we don't need to do this. The migration script already moves each proxy to the migrated VM; later when we actually migrate the proxy project everything should switch over just fine.
I apologize -- I'm not ready to create your VMs in the new region, so you should go ahead and create things in eqiad as usual.
Done. Thanks all!
After https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/457460/ and fixing a silly typo, I now think that the PTR records should be updating correctly and promptly.
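As a quick sanity check of what those PTR records should look like, the reverse-DNS name for an address (the name that `dig -x` queries) can be derived mechanically with Python's standard library. The address below is a documentation IP used purely for illustration, not one of ours:

```python
import ipaddress

# The PTR name is the address octets reversed, under in-addr.arpa.
# 192.0.2.1 is an RFC 5737 documentation address, used only as an example.
addr = ipaddress.ip_address("192.0.2.1")
print(addr.reverse_pointer)  # -> 1.2.0.192.in-addr.arpa
```

Comparing this computed name against what designate actually created is a fast way to spot off-by-one or wrong-zone mistakes in the records.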
Sat, Sep 1
Fri, Aug 31
I merged https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/445310/ and the entries created look right to me. I can't actually dig -x yet though, presumably because we still need the delegation patch.
I created a special flavor, 'parsingtest', and a new big VM, 'parsing-qa-01'. That should work as a promethium replacement. It might be a bit hard to migrate things over from the old promethium until T202636 is resolved.
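For anyone repeating this, a custom flavor plus a VM using it can be created with the standard OpenStack CLI roughly like this. The resource sizes below are placeholders, not the actual parsingtest specs:

```shell
# Sketch only: RAM/disk/vcpu values and the image/network names are
# illustrative placeholders, not the real parsingtest dimensions.
openstack flavor create --ram 16384 --disk 160 --vcpus 8 parsingtest
openstack server create --flavor parsingtest \
    --image debian-example --network example-net parsing-qa-01
```

A dedicated flavor keeps the oversized instance from being accidentally reproduced via the normal self-serve flavors.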
@dbarratt: I apologize; one of the blockers for moving forward with the new region didn't get done this week and will now be delayed until mid-September. You should go ahead and use the self-serve Horizon interface to get set up in the 'eqiad' region in the meantime; we'll have to migrate you over like everyone else when things are ready.
Wed, Aug 29
puppet is running now. Thank you!
(I altered it to 127 in the meantime so that @Krenair can get on with his work)
Tue, Aug 28
With the name 'antiharassment' this is approved. You can set up a web proxy with whatever http name you want :)
Seems reasonable -- approved
Mon, Aug 27
Sun, Aug 26
I don't see anything in the syslog to warn about the coming crash... it just stops dead at 00:50:01
Fri, Aug 24
*bump* -- I'm happy to do the OS install &c. if that helps move this along. Thanks!
Thu, Aug 23
Wed, Aug 22
Does this mean merging nova in both deployments?
@chasemp two questions: 1) was there a reason we requested these with 10G? (Or, did we?) 2) Is it important that these be in a particular rack for neutron purposes?
Tue, Aug 21
I tested this myself with a few different configurations (including with an account with limited rights) but I can't reproduce the issue. It looks to me like you're not a project admin in any of those projects, so I'd expect you to be able to see a list of proxies but not modify them.
Ah, I'm wrong, nova still uses a local database. So, to restate: we should move everything on that host still using the local mysql to m5, then we can uninstall mysql and this issue will go away :)
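The per-database move itself is routine; a minimal sketch, assuming a dump-and-restore approach (the host and database names here are placeholders, not the real ones):

```shell
# Placeholder names throughout: 'exampledb' and 'm5-master.example.net'
# stand in for the actual database and the m5 primary.
mysqldump --single-transaction exampledb > exampledb.sql
mysql -h m5-master.example.net exampledb < exampledb.sql
```

After each service is repointed at m5 and verified, the local mysqld can be stopped and the package removed.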
As far as I know, nothing running on that host still uses mysql. If that's right, then the solution is obvious :)
I think our preference would be to import these packages into our repo rather than point at the external repo. Arturo will follow up.
I think this is fine. It's not 100% obvious to me what we need to do to implement this; is it just editing a line on a wiki?