
Request increased quota for analytics Cloud VPS project
Closed, DeclinedPublic


Project Name: analytics
Type of quota increase requested: <cpu/ram>

Hi! The analytics team runs several clusters (Kafka, Hadoop, Druid, etc.) in its analytics project to test changes and settings before they hit production. The Hadoop cluster is the most important one, and needs big(ish) instances to properly test things like HDFS and multi-node upgrades. We are about to test important new Hadoop features (improved security, replacement of the master nodes, etc.) and we'd need to spin up new hosts (likely 4-5), possibly all m1.large. We are going to clean up the instances that we don't use soon, but from a quick count we still wouldn't have all the resources needed.

The project's quota is currently not completely used, but it will not cover the next round of m1.large instances for my next task (testing the replacement of the Hadoop master nodes).
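The "quick count" above is simple capacity arithmetic: remaining vCPU and RAM quota versus the footprint of the requested instances. A minimal sketch of that check (all quota numbers are hypothetical, and the m1.large flavor size of 4 vCPUs / 8 GB RAM is an assumption; real values come from the project's quota page):

```python
# Assumed m1.large flavor size; the actual flavor definition may differ.
M1_LARGE = {"vcpus": 4, "ram_gb": 8}

def fits(quota, used, new_instances, flavor=M1_LARGE):
    """Return True if `new_instances` of `flavor` fit in the remaining quota."""
    return (used["vcpus"] + new_instances * flavor["vcpus"] <= quota["vcpus"]
            and used["ram_gb"] + new_instances * flavor["ram_gb"] <= quota["ram_gb"])

quota = {"vcpus": 40, "ram_gb": 80}   # hypothetical project ceiling
used  = {"vcpus": 30, "ram_gb": 60}   # hypothetical current usage

print(fits(quota, used, 2))  # 38 vCPUs / 76 GB needed -> True, fits
print(fits(quota, used, 5))  # 50 vCPUs needed -> False, exceeds the 40 vCPU quota
```

With these illustrative numbers, two more m1.large instances fit but five do not, which mirrors the situation described in the request.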

Thanks in advance! Let me know if you have further questions :)

Event Timeline

elukey created this task.Sep 3 2018, 8:09 AM
Restricted Application added a subscriber: Aklapper.Sep 3 2018, 8:09 AM
elukey added a subscriber: chasemp.Sep 3 2018, 8:10 AM
bd808 added a subscriber: bd808.Sep 3 2018, 11:36 PM

Type of quota increase requested: <cpu/ram>

@elukey can you quantify the short term (during migration) and long term increase you are asking for here? I see a mention of adding 4-5 new m1.large instances and some mention of cleanup.

It would also be useful for us to know whether you need to build the new instances in the next week or two, or whether you have the flexibility to wait until October to add a lot of new things. My reason for asking is that we are in the preparation phase for a complete migration of instances from the current OpenStack region to a new region that has a different software-defined network layer. The WMCS team will handle moving instances from one region to the other, but any instance that can wait to be created is one less instance that we will actually have to move. :)

elukey added a comment.Sep 4 2018, 5:52 AM

Thanks for the answer! For the immediate future (next couple of weeks), I think two m1.large instances would cover what I am doing. There are a couple of instances that I'll probably be able to delete (after checking with my team), but we constantly need multiple clusters active in labs for testing, so I'm afraid I won't be able to remove more than that (I'll check with @Ottomata today).

The rest of the capacity can wait until October without any issue!

I'd also add that we will eventually need some room in the analytics project to test the new data lake service (likely running Apache Presto). This will be roughly three medium-large instances. We'd like to keep these instances online for testing before making changes in production.

Waiting until after October will be fine.

Andrew added a subscriber: Andrew.Sep 4 2018, 3:37 PM

The two new m1.larges are approved -- I'll handle this shortly.

bd808 assigned this task to Andrew.Sep 4 2018, 3:37 PM
Andrew added a comment.Sep 5 2018, 8:44 PM

I just now looked at this project and it looks to me like there's already enough headroom for 2 more m1.large VMs (and then some). So... we're all set for now, correct?

elukey added a comment.Sep 6 2018, 6:48 AM

I just now looked at this project and it looks to me like there's already enough headroom for 2 more m1.large VMs (and then some). So... we're all set for now, correct?

We cleared some VMs to make some space, but having the possibility to spin up a couple more m1.larges wouldn't be bad :)

Andrew added a comment.Sep 6 2018, 1:38 PM

I'm trying to keep a handle on VM growth right now because that limits how much I have to migrate in a few weeks. Ping me if you run out of quota in the meantime; otherwise let's revisit this after the Neutron migration.

Andrew removed Andrew as the assignee of this task.Oct 2 2018, 7:55 PM

Is this still needed?

GTirloni removed a subscriber: GTirloni.Mar 21 2019, 9:06 PM
bd808 closed this task as Declined.May 21 2019, 11:54 PM

Closed due to inactivity.