It's not possible to make HTTP requests to Wikimedia websites from MW maintenance scripts running on k8s, because the change in network topology between bare metal and k8s is not fully hidden by pointing $wgLocalHTTPProxy at a proxy that can reach those servers.
$wgLocalHTTPProxy was traditionally a performance feature that allowed requests for certain domains to be made via localhost:80, bypassing the CDN. This is reflected in the documentation for $wgLocalVirtualHosts ("This lists domains that are configured as virtual hosts on the same machine") and for MWHttpRequest::isLocalURL() ("Check if the URL can be served by localhost").
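For context, the traditional setup looked roughly like this (a sketch only; the hostnames and proxy address are illustrative examples, not actual Wikimedia values):

```php
// LocalSettings.php (illustrative sketch)

// Requests to these domains are considered "local" and can skip the CDN:
$wgLocalVirtualHosts = [
	'en.wikipedia.example',
	'commons.wikimedia.example',
];

// Proxy to use for those local requests instead of going out through
// the CDN (hypothetical address):
$wgLocalHTTPProxy = 'http://localhost:80';
```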
In the old days, CLI requests were typically run on servers without Apache, so MWHttpRequest::isLocalURL() was made to return false in CLI mode regardless of $wgLocalVirtualHosts. That premise hasn't been true for a few years, since the service mesh went into operation. Moreover, this choice assumed that everyone running MediaWiki would run CLI scripts on a different host from their webserver, which more or less treats the Wikimedia setup as universal, when it's not.
Perhaps, in hindsight, this could have been achieved by setting $wgLocalVirtualHosts = [] in configuration.
One simple solution to the problem is to remove the conditional that disables local URL handling in CLI mode, and let the user decide when to switch it on or off.
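The change could look something like the following (a simplified sketch of MWHttpRequest::isLocalURL(); the actual method and the exact form of the CLI guard in the real code may differ):

```php
// Simplified sketch, not the actual MediaWiki source.
private static function isLocalURL( $url ) {
	global $wgCommandLineMode, $wgLocalVirtualHosts;

	// Proposed removal: this early return is what currently disables
	// local URL handling for all CLI scripts, regardless of config.
	// if ( $wgCommandLineMode ) {
	//     return false;
	// }

	// With the guard gone, CLI scripts honour $wgLocalVirtualHosts the
	// same way web requests do; sites that want the old behaviour can
	// simply set $wgLocalVirtualHosts = [].
	$host = parse_url( $url, PHP_URL_HOST );
	return $host !== null && in_array( $host, $wgLocalVirtualHosts, true );
}
```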
As a longer-term solution, we could deprecate $wgLocalHTTPProxy and $wgLocalVirtualHosts. Instead, callers would specify a URL zone (internal or external), and proxies would be configured per zone.
Within each zone, we could allow proxies to be configured per domain name, providing a migration path from $wgLocalHTTPProxy in case anyone is still using it for its intended purpose.
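A possible shape for such zone-based configuration (everything here is hypothetical: the setting name, option key, and addresses are invented for illustration, not existing MediaWiki settings):

```php
// Hypothetical zone-based proxy configuration sketch.
$wgHTTPProxyZones = [
	'internal' => [
		// Default proxy for all internal requests:
		'default' => 'http://localhost:6501',
		// Per-domain override, as a migration path from $wgLocalHTTPProxy:
		'api.internal.example' => 'http://localhost:6500',
	],
	'external' => [
		'default' => 'http://url-downloader.example:8080',
	],
];
```

Callers would then name the zone explicitly instead of relying on URL matching, e.g. passing a hypothetical 'proxyZone' => 'internal' option when creating the request, which makes the intent explicit and works identically in web and CLI contexts.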