
Point wikipedia.in to 180.179.52.130 instead of URL forward
Open, Normal, Public

Description

The wikipedia.in domain name is owned by WMF.

Right now, visiting wikipedia.in takes you to http://wikimedia.in/wikipedia.html.

We need to update the domain settings to point instead to the server IP 180.179.52.130.

Event Timeline

Naveenpf created this task. Sep 1 2016, 5:07 PM
Restricted Application added a subscriber: Aklapper. · View Herald Transcript · Sep 1 2016, 5:07 PM
Ijon added a subscriber: Ijon. Sep 1 2016, 7:21 PM
Dzahn added a project: DNS. Sep 2 2016, 12:28 AM
Restricted Application added a project: Traffic. · View Herald Transcript · Sep 2 2016, 12:28 AM
Dzahn renamed this task from "Instead of url forward for wikipedia.in ... add server ip." to "Instead of url forward for wikipedia.in ... add server ip. (point wikipedia.in to 180.179.52.130)". Sep 2 2016, 12:29 AM
Dzahn added a subscriber: Dzahn. Sep 2 2016, 12:35 AM

Some context on this request, for other ops folks:

Currently, in the DNS templates, wikipedia.in is a symlink to wikipedia.org, so it gets all the same records, pointing to our load balancers. There, in the Apache config, we have this redirect to http://wikimedia.in/wikipedia.html.

The request here means that we would have to unlink it from wikipedia.org and write a new template, with an A record of 180.179.52.130, the IP under the control of Wikimedia India.
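For illustration only, a standalone template of the requested kind might look roughly like this (a sketch; the real templates live in the operations/dns repo and also carry SOA/NS records, and the record layout and TTL below are assumptions):

    ; Hypothetical wikipedia.in zone snippet, no longer symlinked to wikipedia.org.
    ; SOA and NS records omitted; the 600-second TTL is an assumed value.
    @    600  IN  A  180.179.52.130
    www  600  IN  A  180.179.52.130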

wikimedia.in is owned and hosted by Wikimedia India Chapter.

wikipedia.in is owned by Wikimedia Foundation.

One question is whether WMF is OK with setting it up this way for a wikipedia domain, since usually only wikimedia domains are used by chapters.

Another question is what should happen with the MX records for email to this domain.

Change 307959 had a related patch set uploaded (by Dzahn):
point wikipedia.in to 180.179.52.130

https://gerrit.wikimedia.org/r/307959

Peachey88 added a subscriber: Peachey88.

Probably want WMF-Legal to weigh in on it.

But from my personal [non-official in any way] point of view, any project domain that is owned by the Foundation should either point to the [relevant] project or be parked [depending on request rate].

I wonder if Security-Team has some relevant thought here?

Restricted Application added a subscriber: JEumerus. · View Herald Transcript · Sep 2 2016, 12:54 AM

We (WM-ES) requested the same thing for wikipedia.es a long time ago. It should be somewhere in Phabricator (can't find it atm). We even have a basic portal page for that vhost.

Security-wise, these domains are not a problem, since they are completely unrelated from the browser's POV. Of course, they could be used for e.g. a phishing attack impersonating Wikipedia, but we are talking about delegating control over its content to the national chapter, which is a trusted party. And since the delegation is at the DNS level (it's not a domain transfer), control can be regained in minutes.

If you want to be more explicit about who is serving the domain content, the www could be made a CNAME instead of an A. But really, I don't think anyone would have a problem with this. I can't think of a case where someone relied on the registration data and having the remote server owned by a different team would be a problem. Especially since those domains could perfectly well have been (and were) registered by evildoers instead of chapters / wikimedians.

@Peachey88: For es, there's no single relevant project; it should be a portal page. Looking at http://wikimedia.in/wikipedia.html, it seems that the same applies for India.

Ops is not interested in creating portal pages for every case, and it would require buying many certs (cf. T101060#1697044). However, this could be handled by the local chapter, which in addition is the one with the local knowledge about the portal organization best suited for its country.

BBlack added a subscriber: BBlack. Sep 2 2016, 1:43 AM

Security-wise, these domains are not a problem

I disagree. In general, we can't expect volunteer/chapter portals hosted elsewhere to enforce HTTPS like we do, or any of the other many security factors involved. If users are commonly visiting these portal/redirect domains because they're showing up in search engines, social media links, and/or bookmark collections, they're more-vulnerable to state-level adversaries (national firewalls, censorship, etc) than they would be if they first visited one of our canonical and well-protected domains. And if they're not commonly reaching us through these names, then they serve little purpose in the first place.

Aklapper renamed this task from "Instead of url forward for wikipedia.in ... add server ip. (point wikipedia.in to 180.179.52.130)" to "Point wikipedia.in to 180.179.52.130 instead of URL forward". Sep 2 2016, 8:56 AM
grin added a subscriber: grin. Sep 2 2016, 10:22 AM

I disagree. In general, we can't expect volunteer/chapter portals hosted elsewhere to enforce HTTPS like we do, or any of the other many security factors involved. If users are commonly visiting these portal/redirect domains because they're showing up in search engines, social media links, and/or bookmark collections, they're more-vulnerable to state-level adversaries (national firewalls, censorship, etc) than they would be if they first visited one of our canonical and well-protected domains. And if they're not commonly reaching us through these names, then they serve little purpose in the first place.

Let me chime in from the perspective of a chapter or a local community.

WMF has a long-standing history of badly supporting local domains, websites, redirectors, statistics, and some other related issues. Getting things done at WMF is often harder than getting something from the state; if I know the exact people personally and can ask them p2p, then things work; for other people they usually don't, as "we don't have the infrastructure for that", "we don't have the security process for that", "we don't support it because nobody wanted it so far", and the like.
I can see why anyone would not want WMF to handle a domain: they expect it to work.
It is neither nice nor fair to say "can't expect volunteers to enforce whatever", since often volunteers implemented things years before WMF, and there are professionals outside WMF, incidentally.

For example, there is the absolutely common requirement for wikipedia.xy to be able to handle https://wikipedia.xy/wiki/Foobar and https://wikipedia.xy/Foobar, and either serve the article and its related content or redirect to the proper place. Basically all national domains should do that, and not DNS-redirect to the language portal, and not redirect to some Main_Page or Portal_page. They should be fully indexable by spiders, have a proper TLS cert, etc.
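To illustrate (a sketch only; wikipedia.xy and xx.wikipedia.org are placeholders, and a real config also needs the TLS setup discussed later), such a path-preserving redirect is tiny in Apache terms:

    # Hypothetical vhost for a ccTLD domain (sketch, not anyone's actual config).
    # "Redirect" appends the requested path, so /wiki/Foobar lands on the
    # corresponding article at the canonical language subdomain.
    <VirtualHost *:443>
        ServerName wikipedia.xy
        Redirect permanent / https://xx.wikipedia.org/
    </VirtualHost>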

Can WMF do it, without fuss and lengthy discussions? If yes, probably chapters don't want to run just a web server to do it properly. If not, however, they don't quite want to hand over the domain, because the content gets lost when someone says "oh, it's too small, too unused, too whatever", the domain gets a CNAME to wikipedia.org, and the chapter is kindly asked not to try again any time soon.

That's not a technical question and probably not quite a legal or a security one; I'd say it's politics... forcing a process on people without properly supporting their needs.

If you want to help it [and get assurances on the security and legal aspects in due course], gather the needs and get them implemented. I have offered one point of view; maybe there are others, for example from the volunteers in India.

BBlack added a comment (edited). Sep 2 2016, 12:00 PM

I disagree. In general, we can't expect volunteer/chapter portals hosted elsewhere to enforce HTTPS like we do, or any of the other many security factors involved. If users are commonly visiting these portal/redirect domains because they're showing up in search engines, social media links, and/or bookmark collections, they're more-vulnerable to state-level adversaries (national firewalls, censorship, etc) than they would be if they first visited one of our canonical and well-protected domains. And if they're not commonly reaching us through these names, then they serve little purpose in the first place.

Let me chime in from the perspective of a chapter or a local community.
WMF has a long-standing history of badly supporting local domains, websites, redirectors, statistics, and some other related issues.

I'm not here to argue about history, I'm here to discuss the realities of the present situation.

Getting things done at WMF is often harder than getting something from the state; if I know the exact people personally and can ask them p2p, then things work; for other people they usually don't, as "we don't have the infrastructure for that", "we don't have the security process for that", "we don't support it because nobody wanted it so far", and the like.

Exactly. It's not that we lack infrastructure, or lack security processes, or lack the ability to support projects. We clearly do well on all of these fronts for projects and mechanisms we explicitly support. But we try not to take on arbitrary projects for which we know we can't provide that level of support. Our resources and policies only work together in a certain well-defined and constrained scope.

I can see why anyone would not want WMF to handle a domain: they expect it to work.

I don't think this is a fair statement. The services we offer do work. That our limited menu of available services doesn't meet your needs at this time is an entirely different sort of problem.

It is neither nice nor fair to say "can't expect volunteers to enforce whatever",

I sincerely apologize if you perceive it as meanness, but it's entirely a fair judgement of the situation. Developing, evolving, and continuously enforcing the policies that we do in the name of privacy, security, and reliability is difficult work at this scale. If we hand off part of that responsibility to N other organizations (and N is quite large for all things in scope of this discussion), how would we scale that process out to them?

Even if we could reduce what we're doing to a relatively-static and short list of important bullet points about security and reliability (which we can't), it would be nearly impossible to monitor or enforce compliance and maintain it over years-long periods through possible organizational upheavals (* N), etc. This isn't about whether any one of the third parties has good intentions and/or good skills in the present, it's about the long-term broad view of all of these third parties collectively. They're not all going to be staying current on best practices, and they're especially not going to stay on top of it and evolve with it 24/7.

since often volunteers implemented things years before WMF,

I'm sure they do in some cases. It's far easier to experiment in an environment unconstrained and unencumbered by our policies, and it's far easier to handle a singular use-case than to think about all use-cases bundled together. There are many things I could do in a heartbeat in a one-off hosted server instance at Linode that I would never dare attempt to do properly in our infrastructure, within our policies. Again, those policies are there for a reason.

and there are professionals outside WMF, incidentally.

Some volunteers are undoubtedly amazing professionals in various fields, and a few of them probably possess better skills than the WMF's own employees do in various areas relevant to this discussion. This isn't about skill at all, it's about practicing policy.

For example, there is the absolutely common requirement for wikipedia.xy to be able to handle https://wikipedia.xy/wiki/Foobar and https://wikipedia.xy/Foobar, and either serve the article and its related content or redirect to the proper place. Basically all national domains should do that, and not DNS-redirect to the language portal, and not redirect to some Main_Page or Portal_page. They should be fully indexable by spiders, have a proper TLS cert, etc.
Can WMF do it, without fuss and lengthy discussions? If yes, probably chapters don't want to run just a web server to do it properly.

No, we can't do that today, and we won't even try to do that today (modulo any existing legacy domains set up that way in the past). I really wish nobody else was doing it with their own 3rd party servers, either, because it's a real problem.

We have in our current possession something on the order of 700 such non-canonical DNS domain names, and who knows how many more currently fall into this category but are owned by 3rd parties and not currently visible to our system, though they would become so if we offered a standard solution. It is not currently possible for us to set up general article-level redirect services for these 700 domains in a way that keeps our privacy, security, and reliability in check policy-wise.

The core issue is TLS certificates, many of which would be for wildcard language-subdomains to implement this properly. It's not just their cost, but the management overhead of maintaining so many of them. The emergence of LetsEncrypt has offered us a way forward on this, but even with the power of LE on our side, there's significant engineering work that remains on our end before we can offer such redirect domains securely as a standard simple option. LE doesn't support wildcards, and scripting reliable services and software around it to handle a very large count of arbitrary domains (including all of the language subdomains explicitly) requires real work on our end that isn't yet complete. There's a task about solving this particular sub-problem here: T133548 , and it's primarily blocked on TechOps freeing up enough human resources in our quarterly planning to get the remaining engineering work done.

In the third party scenario, acquiring a single TLS cert is easy enough, but then you're facing various other challenges we're solving here centrally. What team will be supporting each of these N sites in third-party hosting? Will their practices prevent security breaches and privacy leaks in general? Will the site be reliable in the face of component/server/datacenter/software failure? Will the organization responsible for that support persist the right practices indefinitely? Will they enforce them in the face of opposing pressure from local government?

In general, very few chapters or other movement-related organizations and individuals have the basic organizational size and momentum necessary to make any kind of realistic claim in these areas. Most such 3rd party movement sites that we know of have fairly egregious issues that are obvious even at the surface level when we look at them in practice. That's to be expected because of all the challenges in this area. Even in the rare cases that we try to outsource some of our own services to paid 3rd party professional services, we often find those commercial services unable to meet the demands of our policies. Most of the industry doesn't care about privacy and security in the way that we do.

And again: these things matter. If you're going to offer nationalized (or other) alternate, non-canonical domain entry-points to Wikipedia or other movement projects:

  • What purpose does it serve?
  • If we make it some kind of "alternate canonical", how will that split search results and rankings? If we don't and keep the present URLs canonical, then search engines will basically never index results through this alternate domain for users. Will you then rely on social popularization of the links to achieve a sort of canonicalization within the community?
  • Assuming it somehow *does* become semi-canonical and sees widespread adoption/linking/use within a national community, we then face more problems:
    • Is it reliable? When the small, singular virtual webhost machine that commonly holds the portal page crashes or becomes a DDoS (or other attack) victim, do users perceive this as "Wikipedia is down"? How does this affect public perception and popularity of the movements and organizations involved?
    • Is it performant? The canonical domains are hosted at 4 datacenters around the world (hopefully with more to come, budgets willing!), and part of the drive for that is getting our edge termination closer to users for latency. Sending them round-tripping through a small server in just one place in the world first before reaching us could have a pretty severe performance impact.
    • Is the traffic private? Who's managing the access logs here, with information correlating user IP addresses to articles being read and edited? What's the policy on protecting and purging this information? Who's in control of that policy and who do they answer to under what scenarios? (Is it a virtual server btw? If so, all of this may apply to the hosting company as well, as they can see through virtualization)
    • Is it secure at the server level? Is there a large team on top of managing the server(s) 24/7? Are they constantly staying up to date on security practices and patches? ( Is it a virtual server by the way? Because a lot of those everywhere in the world became potentially-compromised by sibling VMs from unrelated customers the other day: http://arstechnica.com/security/2016/08/new-attack-steals-private-crypto-keys-by-corrupting-data-in-computer-memory/ ).
    • Is the traffic secure? This isn't just purchasing a TLS cert. It's choosing the right software, configuring it well, making smart ciphersuite choices, HSTS, STS-preloading, key management, etc. As with everything else above, the situation on the Internet constantly evolves, and thus policies and practices here must constantly evolve as well.

Those last two points about server and traffic security are critical. In many countries, state-level adversaries would absolutely love the ability to selectively filter the flow of Wikipedia's information across their national borders. Providing a popular-yet-insecure alternate redirect domain provides them with the perfect weakest-link-in-the-chain to attack the integrity of our content for all the users of that alternate domain easily.

That's not a technical question and probably not quite a legal or a security one; I'd say it's politics... forcing a process on people without properly supporting their needs.

We're not forcing a process on people without properly supporting their needs. We're refusing to offer new services where we cannot securely, privately, and reliably meet your needs currently. In the future when we can, we will offer that service (but I think we would still consider all alternate domains non-canonical and discourage their use), but I don't think there's any "force" involved here today.

grin added a comment. Sep 2 2016, 3:02 PM

@BBlack thanks for the detailed reply. I try not to talk this task apart, so I try hard to be brief.

Exactly. It's not that we lack infrastructure,

By "infrastructure" I meat the broadest sense which includes human resources as well, what WMF clearly lacks (on the side of R&D), as you correctly stated in a different paragraph. That's not a sin per se, it's a fact we have to handle.

We clearly do well on all of these fronts for projects and mechanisms we explicitly support. But we try not to take on arbitrary projects for which we know we can't provide that level of support.

That translates to "we provide ops but not development", which is okay. (Yes, exaggerated generalisation, I know as well.)

Developing, evolving, and continuously enforcing the policies that we do in the name of privacy, security, and reliability is difficult work at this scale. [...]

And specifically this is one reason why WMF cannot take on, as you phrased it, "arbitrary" project development, since that seems more and more impossible due to the growing number of policy constraints. More restrictions mean less flexibility, and less room for suiting specialised needs. The masses win, and the smaller projects are forced to follow. I know this is one possible way to run a huge centralised system, and it is one reason why those "arbitrary" solutions may be implemented in a non-centrally-governed way.

The elitist view is hard to fight, too (some people happen to run larger infrastructures); there are cases when the demand should be examined regardless of the mood of the "elite participants", and the stance should be that the users are knowledgeable.

since often volunteers implemented things years before WMF,

I'm sure they do in some cases. It's far easier to experiment in an environment unconstrained and unencumbered by our policies, and it's far easier to handle a singular use-case than to think about all use-cases bundled together. There are many things I could do in a heartbeat in a one-off hosted server instance at Linode that I would never dare attempt to do properly in our infrastructure, within our policies.

Exactly. "Encumbered" was the word you used, and you are suggesting not to act because of that. The redirections we're talking about are a fairly closed and well-definable service, not on par with running a geographically redundant cache farm. Being encumbered means no will to change or adapt, since it feels almost impossible. It becomes a mindset after a while, and you start to fend off anything not originating from inside.

Again, those policies are there for a reason.

Some of them are, yes. Some of them are not quite so. But that's a story for another session. :-)

and there are professionals outside WMF, incidentally.

Some volunteers are undoubtedly amazing professionals in various fields, and a few of them probably possess better skills than the WMF's own employees do in various areas relevant to this discussion.

"some" rotfl ;-)

This isn't about skill at all, it's about practicing policy.

However, you try to talk about both sides at once. First, it is good if a service gets moved from a lax security and policy viewpoint to a stricter one; second, it is usually unacceptable when this happens by rejecting/dropping the service and replacing it with something else for the sake of policy. I am debating not the usefulness of the policies but the result of the inability to change due to the policies. And policies (in the general sense) can be followed by volunteers too, not necessarily worse than WMF, but with definitely different priorities about specific problems. Or in other words: volunteers get things done which are important to them, while WMF gets things done which are important to the WMF. In an ideal world these would meet...

No, we can't do that today, and we won't even try to do that today (modulo any existing legacy domains set up that way in the past). I really wish nobody else was doing it with their own 3rd party servers, either, because it's a real problem.

This is a hard attitude to handle: it is "a real problem" because we can't and won't do it and someone else does, without considering it a challenge to solve based on the demand of the userbase and without finding ways to resolve such "real problems".

It is not currently possible for us to set up general article-level redirect services for these 700 domains in a way that keeps our privacy, security, and reliability in check policy-wise.

I absolutely disagree, but that's not a problem if there is a way to see your privacy, security, and reliability problems and try to address them (not here and now, probably). I believe you could not show me any really impossible problem in this area policy-wise, apart from the fact that "we don't do it right now™".

The core issue is TLS certificates, many of which would be for wildcard language-subdomains to implement this properly. It's not just their cost, but the management overhead of maintaining so many of them.

I happen to have seen such things around, so I would kindly call this statement out of line with general practice. Wildcarding or not depends on taste (I almost never use wildcards where the service doesn't force me to, but it's a matter of taste), but nowadays it's extremely cheap nevertheless (I create EV wildcards in unlimited amounts for $200 with almost full browser coverage). And managing them through APIs is absolutely painless and automated.

The emergence of LetsEncrypt has offered us a way forward on this, but even with the power of LE on our side, there's significant engineering work that remains on our end before we can offer such redirect domains securely as a standard simple option.

You see, that's a problem which would be resolved by a volunteer in a few hours, with testing and reliability assurance taking a few more days, so they obviously do it. For you it means significant engineering work, so you consider it a policy-wise real/unsolvable problem. This is really hard to handle from the "other side".

And yes, LetsEncrypt would be the way to go, and given the size and might of WMF there may even be collaboration between LE and WMF and the process could be put on a fast track -- provided people don't get encumbered by strict policies and reject the idea to begin with. But it would even work with their present infrastructure.

There's a task about solving this particular sub-problem here: T133548 , and it's primarily blocked on TechOps freeing up enough human resources in our quarterly planning to get the remaining engineering work done.

But it wouldn't get us closer to the resolution of our current topic, right? :-) It's a tech issue, not a policy one, and as such, it's possible to resolve.

In the third party scenario,

This would not be preferred by WMF for really obvious reasons: lack of control (of any kind).

Will their practices prevent security breaches and privacy leaks in general? Will the site be reliable in the face of component/server/datacenter/software failure? Will the organization responsible for that support persist the right practices indefinitely? Will they enforce them in the face of opposing pressure from local government?

Apart from our topic, it is an interesting thought experiment to ask these questions about WMF ops, especially the last one, where the datacenter is in the US and you are handed a gag order. God on the $100 bill knows what's going on inside the WMF network, and for how long, related to Uncle Sam. [No, you wouldn't know. You're not in the position. And remember the surprised faces of those Google engineers a few years back.] Many of us live in the EU, where privacy protection laws are very different from those in the US (to put it euphemistically; they're "incomparably better"). Not perfect, but much less scary.

In general, very few chapters or other movement-related organizations and individuals have the basic organizational size and momentum necessary to make any kind of realistic claim in these areas.

Here I agree.

Most of the industry doesn't care about privacy and security in the way that we do.

Here I don't.

And again: these things matter. If you're going to offer nationalized (or other) alternate, non-canonical domain entry-points to Wikipedia or other movement projects:

  • What purpose does it serve?

This is the first point which actually tries to address the problem instead of talking it away. Yes, it is almost the most important question to examine, since it determines whether we need the service or not.

Avoiding verbosity, just a few bullets, based on experience:

  • people like national ccTLDs better; they want them, ask for them, use them [that's the main reason]
  • search services are tailored to prefer ccTLDs in the results
  • search filtering by region and language often works better when paired with ccTLDs
  • some [commercial] filtering may restrict browsing to ccTLDs
  • If we make it some kind of "alternate canonical", how will that split search results and rankings? If we don't and keep the present URLs canonical, then search engines will basically never index results through this alternate domain for users. Will you then rely on social popularization of the links to achieve a sort of canonicalization within the community?

I prefer usage to rankings, if you ask me. :-) Still, internal stats remain consistent (the redirect doesn't cache; the request goes through) and rankings would reflect real usage. (Would it make you uncomfortable to realise that de.wikipedia.org is absolutely dispreferred to wikipedia.de?)

The original idea behind using subdomains was that originally Wikipedia wasn't big enough to handle 800+ separate domain registrations, not that subdomains were somehow "better" (suited to readers and editors). It doesn't mean it should stay that way forever, though. (I would say it should, for other reasons, but I can be convinced.)

  • Assuming it somehow *does* become semi-canonical and sees widespread adoption/linking/use within a national community, we then face more problems:

We should not keep mixing up WMF-hosted and community-hosted solutions.
If WMF can do the hosting, it probably should.
If not, then we have to resort to community hosting anyway.

  • Is it reliable?

It usually depends on the demand, I'd say. But putting up a quite reliable site is getting easier nowadays. (Don't forget that this is a rather simple service.)

When the small, singular virtual webhost machine that commonly holds the portal page crashes or becomes a DDoS (or other attack) victim, do users perceive this as "Wikipedia is down"? How does this affect public perception and popularity of the movements and organizations involved?

I could come up with a dozen issues which negatively affect the public perception of Wikimedia, and not one of them is technical. People get used to problems, and "it's better than not having it at all".

  • Is it performant?

Oh c'mon. We're talking about capacities of many thousands per second and demands of many thousands per hour. Extremely low traffic. Uses almost no resources.

((HTTPS would, and even that is easy to resolve by using the required tools.))

Sending them round-tripping through a small server in just one place in the world first before reaching us could have a pretty severe performance impact.

Funny that you should mention it. A test showed 17 ms for the redirect and 167+148 ms (= 315 ms) for xx.wikipedia.org to answer.

  • Is the traffic private?

Yeah, this is the reason we haven't had reliable and detailed traffic data for years now. It may be up for debate whether what we gained is worth what we lost.
It is a good question with lots of bad answers, since from the point of view of a huge organisation nobody should be trusted who is not controlled. I'd say yes, privacy is possible to assure, but I wouldn't even start a discussion about it with WMF anymore; it's easier to convince the Borg not to assimilate me and to give up the fight. :-P

  • Is it secure at the server level?

Possibly. It is a question (or requirement) which could be asked and rather easily followed, apart from the more extreme requirements.

(Is it secure at server level at WMF? Can I get a login on a semipublic server and start rowhammering it? Have you replaced your RAM, tested your servers? No, probably not, so it's not "secure".)

Are they constantly staying up to date on security practices and patches?

Oh god, we're running a bloody PHP-based service! It is designed to be insecure and crappy, and security errors come up almost every week... what could possibly compare to that in a static redirector config? :-(

( Is it a virtual server by the way? Because a lot of those everywhere in the world became potentially-compromised by sibling VMs from unrelated customers the other day: http://arstechnica.com/security/2016/08/new-attack-steals-private-crypto-keys-by-corrupting-data-in-computer-memory/ ).

Would you like me to sketch out an attack on WMF infrastructure based on servers accessed by external people? :-/ Your rowhammer reference is a good start.
Absolute security is nonexistent (ask the NSA about the recent shit that happened), and security is almost always related to the worth of the data protected and the effort required to compromise it. I don't believe Wikipedia in general is in danger in most places on Earth apart from subdemocratic countries, and there it's already broken by other methods. And pretty good security is achievable.

(By the way, there is another rowhammer-based Usenix paper about hypervisors, with no KSM involved. :-) Or rather: :-( )

  • Is the traffic secure? This isn't just purchasing a TLS cert. It's choosing the right software, configuring it well, making smart ciphersuite choices, HSTS, STS-preloading, key management, etc. As with everything else above, the situation on the Internet constantly evolves, and thus policies and practices here must constantly evolve as well.

This is the easiest point to resolve: I give you a prefabricated file to insert into your webserver, and it's done.
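Such a prefabricated file is typically just a handful of directives; a minimal sketch in Apache mod_ssl/mod_headers terms (abbreviated, assumed values; not a vetted, complete config):

    # Sketch of a TLS hardening snippet (values are illustrative assumptions).
    SSLProtocol         all -SSLv2 -SSLv3
    SSLHonorCipherOrder on
    SSLCipherSuite      ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:!aNULL:!MD5:!3DES
    # HSTS: tell browsers to insist on HTTPS for a year.
    Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"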

(Btw, wikipedia.org is pretty good; the only points my test complained about were 3DES support [probably kept for compatibility] and possible BREACH and BEAST vulnerabilities, and neither of those is a critically exact test.)

Those last two points about server and traffic security are critical.

They can be resolved for medium-risk cases, and in high-risk cases they're as vulnerable as the WMF infrastructure (obviously most people are not preparing for their servers being physically taken over by hostile governments, but even that could be done if requested).

In many countries, state-level adversaries would absolutely love the ability to selectively filter the flow of Wikipedia's information across their national borders.

They have been doing that for years. Some of them simply fake certs and MITM the whole pipe; I've seen it in the wild.

Providing a popular-yet-insecure alternate redirect domain provides them with the perfect weakest-link-in-the-chain to attack the integrity of our content for all the users of that alternate domain easily.

That's just a question of guidelines about external servers. For a given assurance level it's possible to define requirements.
Still, it would not resolve the conflict with encumbering policies. ;-)

That's not a technical question and probably not quite a legal or a security one; I'd say it's politics... forcing a process on people without properly supporting their needs.

We're not forcing a process on people without properly supporting their needs. We're refusing to offer new services where we cannot securely, privately, and reliably meet your needs currently. In the future when we can, we will offer that service (but I think we would still consider all alternate domains non-canonical and discourage their use), but I don't think there's any "force" involved here today.

Neither is the expected move from volunteer-supported hosting to WMF in the foreseeable future. :-) And we do expect WMF to help volunteers run their services on Wikipedia-related ccTLDs, and it works, really. I just commented here to have the "other side's" reasoning represented. Right now I have no problem at all.

And I'm happy to hear that it could be possible to move stuff over to WMF and keep it working in the future as I believe, despite my opposition, that it would be the right general direction.

Change 307959 abandoned by Dzahn:
point wikipedia.in to 180.179.52.130

https://gerrit.wikimedia.org/r/307959

This Phabricator task is specifically about pointing wikipedia.in to 180.179.52.130.

For general high-level discussion of what the WMF can or should do (or not), there are better-suited venues such as wikimedia-l@. Thanks for your understanding.

BBlack added a comment. Sep 2 2016, 4:33 PM

This Phabricator task is specifically about pointing wikipedia.in to 180.179.52.130.
For general high-level discussion of what the WMF can or should do (or not), there are better-suited venues such as wikimedia-l@. Thanks for your understanding.

While I generally agree, I think this is an important discussion on the meta-topic at hand here, and a chance to clear up any confusion.

@BBlack thanks for the detailed reply. I try not to talk this task apart, so I try hard to be brief.

Exactly. It's not that we lack infrastructure,

By "infrastructure" I meat the broadest sense which includes human resources as well, what WMF clearly lacks (on the side of R&D), as you correctly stated in a different paragraph. That's not a sin per se, it's a fact we have to handle.

We clearly do well on all of these fronts for projects and mechanisms we explicitly support. But we try not to take on arbitrary projects for which we know we can't provide that level of support.

That translates to "we provide ops but not development", which is okay. (Yes, exaggerated generalisation, I know as well.)

Yes, I'm speaking from the perspective of WMF's TechOps groups in general. And yes, that's a generalization, as our "operations" ends up entailing quite a large volume of both research and development.

Developing, evolving, and continuously enforcing the policies that we do in the name of privacy, security, and reliability is difficult work at this scale. [...]

And specifically this is one reason why WMF cannot take on, as you phrased it, "arbitrary" project development, since that seems more and more impossible due to the growing number of policy constraints. More restrictions mean less flexibility, and less room for suiting specialised needs. The masses win, and the smaller projects are forced to follow. I know this is one possible way to run a huge centralised system, and it is one reason why those "arbitrary" solutions may be implemented in a non-centrally-governed way.
The elitist view is hard to fight, too (some people happen to run larger infrastructures); there are cases when the demand should be examined regardless of the mood of the "elite participants", and the stance should be that the users are knowledgeable.

I'm not being elitist, I'm being pragmatic. The threats we protect against are very real, the performance and reliability benefits are very real, and users / smaller orgs are on average easily demonstrated to be less-capable in these areas, and it doesn't much matter whether that's due to lack of knowledge or resources. I think this is one key area of real disagreement.

Circling back to the topic of this ticket, go take a deep comparison of these two ssllabs results, which is really just scratching the surface, as the security points we're discussing go far beyond basic TLS configuration:

Our terminators: https://www.ssllabs.com/ssltest/analyze.html?d=en.wikipedia.org&s=208.80.153.224
The IP address this ticket is asking us to point a domain at: https://www.ssllabs.com/ssltest/analyze.html?d=server384.spikecloud.net.in

Even within the limited scope of publicly-visible TLS issues, it would take me pages just to explain in depth the huge range of problems with that server. I can only imagine how many problems would be apparent on deeper inspection from within. We just don't know. It's outside of any policy control for us.

I'll try to be more selective from here out because this conversational quote-thread is getting pretty huge, and as Andre said this isn't the right forum for such a broad debate.

This isn't about skill at all, it's about practicing policy.

However, you try to talk about both sides at once. First, it is good if a service gets moved from a lax security and policy viewpoint to a stricter one; second, it is usually unacceptable when this happens by rejecting/dropping the service and replacing it with something else for the sake of policy. I am debating not the usefulness of the policies but the result of the inability to change due to the policies. And policies (in the general sense) can be followed by volunteers too, not necessarily worse than WMF, but with definitely different priorities about specific problems. Or in other words: volunteers get things done which are important to them, while WMF gets things done which are important to the WMF. In an ideal world these would meet...

I think there's topical confusion bundled up in here about the task at hand, and some confusion on this whole skill/policy divide. Let's go after the task first:

We're not seeking to reject or drop any existing service, in general. We have a number of insecure redirect domains hosted today at the WMF, which are legacy constructs we're aiming to improve as quickly as we can, by implementing a secure redirect service for them. For the ones that show almost no real traffic, we've parked (disabled) some of them, but many of them remain viable for now, pending a future upgrade to a more-secure implementation.

Because we know these insecure redirects are a problem, we are at present trying not to add any more to the pile before we get the real solution in place. However, anyone else in the world, both bad actors and good, is easily capable of setting up their own novel domain name and redirect service into the projects. We cannot stop you from doing that, and we're not taking any action to do so. I dislike it when those are popularized because I think it's a security threat to the global userbase, but that's neither here nor there when it comes to actions and policies.

On the skill vs practice angle: To be clear, I'm quite sure the community has many non-WMF members who are capable of setting up some kind of secure server and doing a good job of it. The question is whether they're really willing to invest the time and energy over the long term that it will require to support the necessary levels of privacy, security, and reliability. It's difficult to have faith in such an investment in the long term if it's not backed by an organization with the proper size, goals, and motivation. We work hard on this stuff at the WMF every day and it's a constant challenge even with the resources and skills we have available here.

No, we can't do that today, and we won't even try to do that today (modulo any existing legacy domains set up that way in the past). I really wish nobody else was doing it with their own 3rd party servers, either, because it's a real problem.

This is a hard attitude to handle: it is "a real problem" because we can't and won't do it and someone else does, without considering it a challenge to solve based on the demand of the userbase and without finding ways to resolve such "real problems".

You misunderstand me. I'm not saying it's a "real problem" because anything outside of our control is by definition a problem (it's not). I'm saying it's a real problem because by popularizing an insecure redirect service into the primary projects, you are potentially actively harming real-world users in a variety of ways.

We do try to solve all the challenges in front of us based on the needs of the userbase, but as a donation-driven non-profit we also have limited resources and cannot pursue every possible goal at maximum speed in parallel. We have to allocate our human resources to priorities in some sane way. Right now the "secure redirect service" project is backlogged behind other ongoing tasks, but it will not be dropped off the radar.

The core issue is TLS certificates, many of which would be for wildcard language-subdomains to implement this properly. It's not just their cost, but the management overhead of maintaining so many of them.

I happen to have seen such things around, so I would kindly call this statement out of line with general practice. Wildcarding or not depends on taste (I almost never use wildcards where the service doesn't force me to, but it's a matter of taste),

We don't have a choice here for legacy reasons: since time immemorial our public URL structure has used language-code subdomains. There are ~300 language codes we support. For the few 2LDs we currently redirect, we usually redirect all of the language subdomains, using a wildcard. Converting to another structure at this point has a very long time horizon before we could ever forget/drop all of the ancient Wikipedia links embedded all over the Internet.

but nowadays it's extremely cheap nevertheless (I create EV wildcards in unlimited amounts for $200 with almost full browser coverage).

EV Wildcards don't exist. EV explicitly disallows wildcard SANs. Also, "almost" isn't good enough when you're trying to be a universal service for hundreds of millions of users around the globe.

And managing them through APIs is absolutely painless and automated.

APIs help, but it's absolutely not a trivial problem at this scale. To be clear, we're already using automated deployment of LE certs via APIs for a limited number of WMF technical services, as well as our deployment labs infrastructure with a large-SAN-count cert there as well. We're very familiar with how these things work. However, scaling it to the potential ~700 x ~300 (210,000?!) non-wildcard SAN elements we'd like to support in a fully-general secure redirect service is not easy in practice. It's doable, especially if we're careful about limiting those multipliers where we can, and it's on our radar to do it, but it's not a simple project in development or deployment terms.

The emergence of LetsEncrypt has offered us a way forward on this, but even with the power of LE on our side, there's significant engineering work that remains on our end before we can offer such redirect domains securely as a standard simple option.

You see, that's a problem which would be resolved by a volunteer in a few hours, with testing and reliability assurance taking a few more days, so they obviously do it. For you it means significant engineering work, so you consider it a policy-wise real/unsolvable problem. This is really hard to handle from the "other side".

I don't think this makes much sense. Policy is the reason it needs to be TLS-secured properly in the first place. The significant work is in development/engineering of the LE-based solution to do so. And no, I don't think a volunteer could solve this problem for us in a matter of hours. But as our infrastructure code is completely open, you're welcome to submit patches that would do so!

And yes, LetsEncrypt would be the way to go, and given the size and might of WMF there may even be collaboration between LE and WMF and the process could be put on a fast track -- provided people don't get encumbered by strict policies and reject the idea to begin with. But it would even work with their present infrastructure.

It does work with the present infrastructure. As I stated above, we're already using automated LE certs in places. Check the cert on https://gerrit.wikimedia.org/ or https://en.wikipedia.beta.wmflabs.org/ (and many other examples, all with automated issue and renewal). I wrote the underlying code we use to obtain and renew those certs automatically through LE's API, which builds on the public acme-tiny work and integrates with the rest of our infrastructure and handles things reliably. We have been involved with them on many levels since before they started public services. Other than the policy goal of requiring TLS at all in the first place, there is no policy encumbrance here that's preventing us from using LE.
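(For reference: the public acme-tiny workflow this builds on is essentially a single command; a sketch with assumed file paths, not the WMF wrapper itself.)

    # Stock acme-tiny usage (sketch; paths are assumptions). Assumes an existing
    # LE account key, a CSR for the domain(s), and a webserver exposing
    # /var/www/challenges/ at /.well-known/acme-challenge/.
    python acme_tiny.py --account-key ./account.key --csr ./domain.csr \
        --acme-dir /var/www/challenges/ > ./signed.crt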

There's a task about solving this particular sub-problem here: T133548 , and it's primarily blocked on TechOps freeing up enough human resources in our quarterly planning to get the remaining engineering work done.

But it wouldn't get us closer to the resolution of our current topic, right? :-) It's a tech issue, not a policy one, and as such, it's possible to resolve.

Yes, it's possible to resolve, correctly, when we get to the point in our task priorities where we devote time to solving it. That's why that ticket is open. It's waiting on the work to be completed.

Will their practices prevent security breaches and privacy leaks in general? Will the site be reliable in the face of component/server/datacenter/software failure? Will the organization responsible for that support persist the right practices indefinitely? Will they enforce them in the face of opposing pressure from local government?

Apart from our topic, it is an interesting thought experiment to ask these questions about WMF ops, especially the last one, where the datacenter is in the US and you are handed a gag order. God on the $100 bill knows what's going on inside the WMF network, and for how long, related to Uncle Sam. [No, you wouldn't know. You're not in the position. And remember the surprised faces of those Google engineers a few years back.] Many of us live in the EU, where privacy protection laws are very different from those in the US (to put it euphemistically; they're "incomparably better"). Not perfect, but much less scary.

I think this entire paragraph is way off-base. Our TechOps team is actually primarily based in the EU and physically located there as they work (as are some of our servers, in our Amsterdam datacenter, but that's beside the point). Our core DCs are in the US, and I live in the US, but we have a pretty good idea that we're pretty secure, as much as is reasonable anywhere in the world.

We're very aware of state-level threats on both a policy and technical front, and IMHO we do a fantastic job in the face of that adversity, probably a lot better than you seem to be expecting. That is the motivation behind much of what we're debating here. I have full faith that our team is committed to our values. I am in the position to know, and I would know about governmental coercion. It would be nearly impossible for something of that sort to happen without my awareness or involvement. I can offer you a personal guarantee and canary as well: I certainly would never comply with such an order, have never received or complied with such an order, and would quit my job if there was no other way to proceed in the face of one. If you're worried about these things from a values, policy, transparency, and/or legal standpoint, maybe you should look at: https://transparency.wikimedia.org/ and/or https://policy.wikimedia.org/ .

I disagree with the bulk of the remainder of your comments, but honestly I'm out of time and steam for pursuing the conversation here. If what I've said above doesn't convince you, nothing will anyway.

grin added a comment. Sep 9 2016, 7:06 AM

I respectfully disagree with most of the points, but as has been said before: I have noted that the topic should be considered complex in case a decision should be reached.

Some minor comments follow; no reply is strictly required here. You may reply to me here or directly, at your discretion.

Circling back to the topic of this ticket, go take a deep comparison of these two ssllabs results,

I have a strong belief in education as opposed to rejection, so I try to contact the operators and send them the standard security-aware config for whatever server they use (unless it's a Microsoft™ product, in which case I join the rejection camp :-))*. I have fixed many "F" and "D" sites up to "B" with the standard config, and "A+" is usually only a matter of certs and a little restructuring. (As a side note, I use testssl.sh, which is an excellent bash-based test, similar to ssllabs but running locally and able to run multi-protocol tests.)

* (It's Apache in this case, but I can offer standard configs for nginx, lighttpd, and more.)
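(For reference, a basic testssl.sh run needs nothing more than a target; a minimal sketch, with the domain as a placeholder:)

    # Run testssl.sh's local test battery against a host (domain is a placeholder).
    ./testssl.sh wikipedia.in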

it would take me pages just to explain in depth the huge range of problems with that server.

Please consider the mindset of telling them to use "this secure config" instead of educating them about the specific reasons behind the config (unless they ask for it). People don't have to be familiar with every vulnerability to use the protections against them.

I don't have the resources to educate them one by one but I can offer secure configs for them to implement. Would you like me to create a meta/tech page for that?

We don't have a choice here for legacy reasons: [...]
Converting to another structure at this point has a very long time horizon before [...]

Parallel running is possible; it doesn't have to happen all at once. But I don't advocate a parallel scheme, I just mentioned that it's not impossible in theory (as an addition to the topic of being able to consider a different approach to a given problem).

but nowadays it's extremely cheap nevertheless (I create EV wildcards in unlimited amounts for $200 with almost full browser coverage).

EV Wildcards don't exist.

You are correct; I meant OV (organization-validated) wildcards. We don't use wildcards at all, which may have caused my slip.

Also, "almost" isn't good enough when you're trying to be a universal service for hundreds of millions of users around the globe.

"100% working" is a fiction, and nobody ever offers you that assurance, not even the largest players. As I notice Verisign have kindly removed the page which tried to summarize non-supported units and browsers but other big players usually write "99.9%", which is "almost" good. (StartCom, Letsencrypt and other players in the field are either using a well-propagated root or an intermediate signed by one of those, but newer certs simply won't work on ancient or inherently broken implementations.) So "almost" was a phrase much like "pretty good" in the name of PGP. :-)

However, scaling it to the potential ~700 x ~300 (210,000?!) non-wildcard SAN elements we'd like to support in a fully-general secure redirect service is not easy in practice.

This is one point where I have to acknowledge that I haven't actually crunched these numbers. Indeed, it's doable but not an easy task. Thanks; acknowledged that wildcards make sense here.

And no, I don't think a volunteer could solve this problem for us in a matter of hours.

Not for you, but for them. A standalone server doesn't need a multi-datacenter, shared-cache, autodeploying, multicluster (and otherwise buzzword-ready™) infrastructure. It can be done fast.
Your task is not easy, not fast, and definitely not simple, and it takes years, due to the large infrastructure. That's the significant contrast I wanted to emphasize.
Consider the former a temporary, fast, working solution until you reach the point of being able to support it.

But as our infrastructure code is completely open, you're welcome to submit patches that would do so!

*grin* Yeah, right. Even writing issue comments takes significant time; you are probably aware what resources would be required to actually disassemble the WMF ops structure just to be able to offer viable solutions for implementation. I guess most inputs are simply rejected by the team because their authors lack the overall insight into the structure. It'd require significantly more pro-bono time than I have on hand.

I can offer you a personal guarantee and canary as well [...]

For me this is the best-ever example of why we need an open community movement in the world at all. I accept you as a canary, I will monitor your job status, and I strongly hope that you can stay where you want to.
Thank you.

it would take me pages just to explain in depth the huge range of problems with that server.

Please consider the mindset of telling them to use "this secure config" instead of educating them about the specific reasons behind the config (unless they ask for it). People don't have to be familiar with every vulnerability to use the protections against them.
I don't have the resources to educate them one by one but I can offer secure configs for them to implement. Would you like me to create a meta/tech page for that?

SSL grades only scratch the surface. Failing to make A or A+ on ssllabs just means you're doing one thing wrong. Making the grade means you're doing one thing right. There are already existing public resources teaching people how to do basically-correct SSL configs, e.g. https://wiki.mozilla.org/Security/Server_Side_TLS . However, there are many other things to get right that we can't even see from the outside easily.

On the whole, for lack of time or ability to audit all of these servers and operators or educate them personally, I tend to think that if they're not going to get the basic SSL part right on their own, there are probably a hundred other failures on the inside we can't see or manage. And again, there's the longer-term view on this: even if someone knows what they're doing and sets it up "perfectly" the first time around, who's actually maintaining it for the next N years? Security is never automatic.

But as our infrastructure code is completely open, you're welcome to submit patches that would do so!

*grin* Yeah, right. Even writing issue comments takes significant time; you are probably aware what resources would be required to actually disassemble the WMF ops structure just to be able to offer viable solutions for implementation. I guess most inputs are simply rejected by the team because their authors lack the overall insight into the structure. It'd require significantly more pro-bono time than I have on hand.

I spent years working on a project to make this possible. Before I left, we had an entire full-time equivalent of changes coming into the puppet repo from people not on the ops team, and a large percentage of those were coming in from volunteers.

It's not that hard and your time would probably be better spent helping out the mission than arguing on phabricator ;)

grin added a comment.Sep 14 2016, 1:50 PM

I spent years working on a project to make this possible. Before I left we had an entire full time equivalent of changes coming into the puppet repo from people not on the ops team, and a large percentage of those were coming in from volunteers.
It's not that hard and your time would probably be better spent helping out the mission than arguing on phabricator ;)

I have stopped arguing since I have already said what I wanted, and circling around the details won't help.

I haven't checked the repo for already-existing methods of redirection and forwarding; as far as I understand, there aren't any. Still, I don't reject the idea of designing one, though this would probably need an in-depth confession from BBlack about the wishes and fears of the WMF team. Since the preferred way to do such a thing is to play along with the wishes of the ops team, I probably should strongly convince @BBlack to specify those.

From where I stand (without yet having given the technical details much consideration), this service should (a minimal sketch follows the list):

  • run one (or maybe many, though that's hardly required) really barebones webserver (I'd go for nginx, but it's a matter of taste), doing temporary or permanent redirects (depending on what you want to log) from http://wikipedia.xx/<uri> and https://wikipedia.xx/<uri> to https://xx.wikipedia.org/<uri>
  • decide what, how and where to log
  • decide on TLS certs (I'd say Let's Encrypt; in that case the server probably needs a local .well-known/ to serve from)
  • decide whether you want to cache (I don't see why, but I'm not you)
I don't yet see where the complexity and the possible security considerations lie, but I'm humbly open to suggestions.

I think we should discuss the country portals here:

  1. Who should own a country portal?
  2. What are the server requirements if it is owned by a country chapter?
  3. Dos and don'ts for Wikimedia sites (e.g. wikisource.in, wikipedia.de)

I'm not sure I understand this task's specific request. Is it that:

  1. wikipedia.in be changed to redirect to an IP address rather than a URL,
  2. wikipedia.in be changed to redirect to wikimedia.in/index.html instead of wikimedia.in/wikipedia.html, or
  3. both?

If it's (1), then I haven't been able to figure out from the discussion so far what problem currently exists that the request would solve.
If it's (2) or (3), then I think it would be inappropriate to change the redirect. Any wikipedia.[ccTLD] domain should redirect to a Wikipedia-specific page.

As a side note, I think wikimedia.in/wikipedia.html is great as a portal to Wikipedia for visitors from India. I wonder if it's less valuable to have such country-specific portals, though, now that wikipedia.org will automatically show users their browser's default/preferred language.

@Naveenpf: Not here in T144508. This task is only about pointing wikipedia.in to 180.179.52.130. This task is not about country portals in general. Thanks!

Naveenpf added a comment.EditedSep 23 2016, 6:01 AM

@Aklapper I know this Phabricator ticket was opened for a simple change from a URL forward to a proper IP address for the website.
But the discussion has moved toward country portals in general.
Who is the decision maker for this ticket? Or do you want me to close it?

The tags above mention Operations and WMF-Legal.

ZhouZ moved this task from Backlog to Assigned on the WMF-Legal board.Sep 23 2016, 6:05 PM
ema moved this task from Triage to TLS on the Traffic board.Oct 4 2016, 11:32 AM
ema moved this task from TLS to DNS Names on the Traffic board.
elukey triaged this task as Normal priority.Oct 19 2016, 12:43 PM

If @Naveenpf or Operations is still interested in pursuing this task's specific request, I'd appreciate an answer to my clarifying question above so I can better understand the request's brand/trademark implications. Otherwise, I suggest we resolve the task.

Hi @CRoslof,

Please find my answer inline.

Thank you
naveenpf

I'm not sure I understand this task's specific request. Is it that:

  1. wikipedia.in be changed to redirect to an IP address rather than a URL,

It is only (1).

  2. wikipedia.in be changed to redirect to wikimedia.in/index.html instead of wikimedia.in/wikipedia.html, or

We have not requested this.

  3. both?

If it's (1), then I haven't been able to figure out from the discussion so far what problem currently exists that the request would solve.
If it's (2) or (3), then I think it would be inappropriate to change the redirect. Any wikipedia.[ccTLD] domain should redirect to a Wikipedia-specific page.
As a side note, I think wikimedia.in/wikipedia.html is great as a portal to Wikipedia for visitors from India. I wonder if it's less valuable to have such country-specific portals, though, now that wikipedia.org will automatically show users their browser's default/preferred language.

In India, not many people use a browser in an Indic language; usually the browser is in English, and the system language will be English as well.

@Aklapper Can you please change the title to the new IP address? We have moved to a new server for better performance.

Our new server IP is 205.147.101.160.

Dzahn renamed this task from Point wikipedia.in to 180.179.52.130 instead of URL forward to Point wikipedia.in to 205.147.101.160 instead of URL forward.Oct 25 2016, 1:37 AM

@Aklapper Can you please change the title to the new IP address?

Done

If @Naveenpf or Operations is still interested in pursuing this task's specific request, I'd appreciate an answer to my clarifying question above so I can better understand the request's brand/trademark implications. Otherwise, I suggest we resolve the task.

Any update? I have answered the questions posted above.

@Aklapper Can you please change the title to the new IP address? We have moved to a new server for better performance.

How often is the server IP address going to change?

I haven't been able to figure out from the discussion so far what problem currently exists that the request would solve.

This is still the case. What issues arise from having a URL redirect rather than an IP address redirect?

Also, what sort of traffic does http://wikimedia.in/wikipedia.html get? Assuming the URL redirect does cause problems, it would be good to know how many people are affected.
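
For anyone skimming the thread, the distinction being debated can be checked from any shell; the commands below are illustrative, and the A record shown is a sketch of the requested outcome, not an existing record:

    # Today: wikipedia.in resolves like wikipedia.org (to WMF's load
    # balancers), and those servers answer with an HTTP redirect
    # ("URL forward"):
    dig +short wikipedia.in A
    curl -sI http://wikipedia.in/ | grep -i '^Location:'

    # Requested: a plain A record pointing at the chapter's server, e.g.
    #   wikipedia.in.  IN  A  205.147.101.160
    # after which the chapter's own webserver answers directly.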

@Aklapper Can you please change the title to the new IP address? We have moved to a new server for better performance.

How often is the server IP address going to change?

It happens very rarely. We had the same IP address for the past 4 years.
Now we have upgraded the server for better performance.

I haven't been able to figure out from the discussion so far what problem currently exists that the request would solve.

This is still the case. What issues arise from having a URL redirect rather than an IP address redirect?

AFAIK there are no specific issues with either an IP address redirect or a URL redirect.

Also, what sort of traffic does http://wikimedia.in/wikipedia.html get? Assuming the URL redirect does cause problems, it would be good to know how many people are affected.

It is a cosmetic change. Users are not impacted by it.

So, as I understand it, there is no problem with the current state of affairs that the requested change would fix. Also, I have not seen any argument that the requested change would actually benefit anyone. We shouldn't change our DNS templates if there is no clear reason to do so.

@CRoslof This is an enhancement request. If someone visits wikipedia.in now, it redirects to a different URL; there is no point in doing so. No other country portal uses a URL forward.

Should we only make changes when there is a problem? I don't understand that logic.

Outsider comment:
The task summary currently says "Point wikipedia.in to 205.147.101.160 instead of URL forward".
If I currently go to http://wikimedia.in/ I see information about the "Wikimedia India Chapter".
If I currently go to http://205.147.101.160/ I see "Permission error - You do not have permission to read this page, for the following reason: The action you have requested is limited to users in one of the groups: Board, Member.".
So to me it looks like the proposed change would make things worse.

As CRoslof already wrote, "there is no problem with the current state of affairs that the requested change would fix".
If you think for some reason that the change that you propose makes sense, please elaborate why.
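
The permission error above is most likely just name-based virtual hosting at work: a request for the bare IP hits whichever vhost the server treats as its default, not wikipedia.in. A quick illustrative check of what the server would actually serve for the domain, without touching DNS:

    # Ask 205.147.101.160 for the wikipedia.in vhost explicitly
    # by supplying the Host header:
    curl -sI -H 'Host: wikipedia.in' http://205.147.101.160/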

Hi Aklapper,

We are hosting multiple websites on the same server.
We are doing the same for all the other Indic websites.

[root@e2e-14-160 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 205.147.101.160  netmask 255.255.252.0  broadcast 205.147.103.255
        inet6 fe80::cdff:fe93:65a0  prefixlen 64  scopeid 0x20<link>
        inet6 2001:df0:411:4011:0:cdff:fe93:65a0  prefixlen 64  scopeid 0x0<global>
        ether 02:00:cd:93:65:a0  txqueuelen 1000  (Ethernet)
        RX packets 90972616  bytes 30890877945 (28.7 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 27867854  bytes 22858327837 (21.2 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

port 80 namevhost blog.wikimedia.in (/etc/httpd/conf/httpd.conf:772)
        alias www.blog.wikimedia.in
        alias webmail.blog.wikimedia.in
        alias admin.blog.wikimedia.in
port 80 namevhost wikimedia.in (/etc/httpd/conf/httpd.conf:838)
        alias www.wikimedia.in
        alias webmail.wikimedia.in
        alias admin.wikimedia.in
port 80 namevhost wikinews.in (/etc/httpd/conf/httpd.conf:895)
        alias www.wikinews.in
port 80 namevhost wikibooks.in (/etc/httpd/conf/httpd.conf:901)
        alias www.wikibooks.in
port 80 namevhost wiktionary.org.in (/etc/httpd/conf/httpd.conf:907)
        alias www.wiktionary.org.in
port 80 namevhost wiktionary.in (/etc/httpd/conf/httpd.conf:913)
        alias www.wiktionary.in
port 80 namevhost wikiquote.in (/etc/httpd/conf/httpd.conf:919)
        alias www.wikiquote.in
port 80 namevhost wikisource.in (/etc/httpd/conf/httpd.conf:925)
        alias www.wikisource.in
port 80 namevhost wikipedia.in (/etc/httpd/conf/httpd.conf:931)
        alias www.wikipedia.in

Thanks,
Naveen Francis
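
The listing above has the shape of Apache's virtual-host summary; assuming a standard httpd installation, it can be regenerated on the server with:

    apachectl -S    # parses httpd.conf and prints the vhost/alias settings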

Dzahn added a comment.Jan 3 2019, 8:55 PM

It seems this ticket is permanently stalled. It hasn't had any updates in over 2 years now. Does anyone have new input? Did anything change here? Should we close it as declined?

Aklapper closed this task as Declined.Jul 3 2019, 10:55 AM
Aklapper removed a project: Patch-For-Review.

Unfortunately closing this report as no further information has been provided.

@Naveenpf: After you have provided the information asked for and if this still wanted, please set the status of this report back to "Open" via the Add Action...Change Status dropdown. Thanks.

It is still required.

What further information is required?

Aklapper reopened this task as Open.Jul 3 2019, 11:15 AM