Paste P4959

k8s worker first 3 puppet runs

Authored by chasemp on Feb 21 2017, 4:23 PM.
root@k8s-worker:~# puppet agent --test
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for k8s-worker.chasetestproject.eqiad.wmflabs
Info: Applying configuration version '1487693315'
Notice: /Stage[main]/Packages::Flannel/Package[flannel]/ensure: ensure changed 'purged' to 'present'
Notice: /Stage[main]/Packages::Kubernetes_node/Package[kubernetes-node]/ensure: ensure changed 'purged' to 'present'
Error: /Stage[main]/Toollabs::Infrastructure/Motd::Script[infrastructure-banner]/File[/etc/update-motd.d/50-infrastructure-banner]: Could not evaluate: Could not retrieve information from environment production source(s) puppet:///modules/toollabs/40-chasetestproject-infrastructure-banner.sh
Notice: /Stage[main]/K8s::Proxy/File[/etc/default/kube-proxy]/content:
--- /etc/default/kube-proxy 2016-11-24 11:17:39.000000000 +0000
+++ /tmp/puppet-file20170221-8807-2gtsw2 2017-02-21 16:21:29.717192753 +0000
@@ -3,4 +3,4 @@
# default config should be adequate
-#DAEMON_ARGS=""
+DAEMON_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --proxy-mode='iptables' --masquerade-all=true"
Info: Computing checksum on file /etc/default/kube-proxy
Info: /Stage[main]/K8s::Proxy/File[/etc/default/kube-proxy]: Filebucketed /etc/default/kube-proxy to puppet with sum 124b77726995a48596738ecdc267912f
Notice: /Stage[main]/K8s::Proxy/File[/etc/default/kube-proxy]/content: content changed '{md5}124b77726995a48596738ecdc267912f' to '{md5}152915d4fb61c58b4042357fd2792bf5'
Notice: /Stage[main]/Docker/Package[docker-engine]/ensure: ensure changed 'purged' to '1.12.6-0~debian-jessie'
Notice: /Stage[main]/Docker::Configuration/File[/etc/docker/daemon.json]/ensure: created
Error: Execution of '/sbin/vgcreate docker /dev/vda4' returned 5: Physical volume '/dev/vda4' is already in volume group 'vd'
Unable to add physical volume '/dev/vda4' to volume group 'docker'.
Error: /Stage[main]/Lvm/Lvm::Volume_group[docker]/Volume_group[docker]/ensure: change from absent to present failed: Execution of '/sbin/vgcreate docker /dev/vda4' returned 5: Physical volume '/dev/vda4' is already in volume group 'vd'
Unable to add physical volume '/dev/vda4' to volume group 'docker'.
Notice: /Stage[main]/Lvm/Lvm::Volume_group[docker]/Lvm::Logical_volume[data]/Logical_volume[data]: Dependency Volume_group[docker] has failures: true
Warning: /Stage[main]/Lvm/Lvm::Volume_group[docker]/Lvm::Logical_volume[data]/Logical_volume[data]: Skipping because of failed dependencies
Notice: /Stage[main]/Profile::Docker::Storage/Volume_group[vd]/ensure: removed
Notice: /Stage[main]/Profile::Docker::Flannel/Base::Service_unit[docker]/File[/etc/systemd/system/docker.service.d]/ensure: created
Notice: /Stage[main]/Profile::Docker::Flannel/Base::Service_unit[docker]/File[/etc/systemd/system/docker.service.d/puppet-override.conf]/ensure: created
Info: /Stage[main]/Profile::Docker::Flannel/Base::Service_unit[docker]/File[/etc/systemd/system/docker.service.d/puppet-override.conf]: Scheduling refresh of Exec[systemd reload for docker]
Notice: /Stage[main]/Profile::Docker::Flannel/Base::Service_unit[docker]/Exec[systemd reload for docker]: Triggered 'refresh' from 1 events
Notice: /Stage[main]/K8s::Flannel/Base::Service_unit[flannel]/File[/lib/systemd/system/flannel.service]/ensure: created
Info: /Stage[main]/K8s::Flannel/Base::Service_unit[flannel]/File[/lib/systemd/system/flannel.service]: Scheduling refresh of Service[flannel]
Info: /Stage[main]/K8s::Flannel/Base::Service_unit[flannel]/File[/lib/systemd/system/flannel.service]: Scheduling refresh of Exec[systemd reload for flannel]
Notice: /Stage[main]/K8s::Flannel/Base::Service_unit[flannel]/Exec[systemd reload for flannel]: Triggered 'refresh' from 1 events
Notice: /Stage[main]/K8s::Flannel/Base::Service_unit[flannel]/Service[flannel]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/K8s::Flannel/Base::Service_unit[flannel]/Service[flannel]: Unscheduling refresh on Service[flannel]
Notice: /Stage[main]/K8s::Infrastructure_config/File[/etc/kubernetes]/ensure: created
Notice: /Stage[main]/K8s::Infrastructure_config/File[/etc/kubernetes/kubeconfig]/ensure: created
Info: /Stage[main]/K8s::Infrastructure_config/File[/etc/kubernetes/kubeconfig]: Scheduling refresh of Base::Service_unit[kubelet]
Info: /Stage[main]/K8s::Infrastructure_config/File[/etc/kubernetes/kubeconfig]: Scheduling refresh of Base::Service_unit[kube-proxy]
Notice: /Stage[main]/Lvm/Lvm::Volume_group[docker]/Lvm::Logical_volume[metadata]/Logical_volume[metadata]: Dependency Volume_group[docker] has failures: true
Warning: /Stage[main]/Lvm/Lvm::Volume_group[docker]/Lvm::Logical_volume[metadata]/Logical_volume[metadata]: Skipping because of failed dependencies
Info: Base::Service_unit[kube-proxy]: Scheduling refresh of Exec[systemd reload for kube-proxy]
Info: Base::Service_unit[kube-proxy]: Scheduling refresh of Service[kube-proxy]
Notice: /Stage[main]/K8s::Proxy/Base::Service_unit[kube-proxy]/File[/lib/systemd/system/kube-proxy.service]/content:
--- /lib/systemd/system/kube-proxy.service 2016-11-24 11:17:39.000000000 +0000
+++ /tmp/puppet-file20170221-8807-1izwg9q 2017-02-21 16:21:44.353119805 +0000
@@ -5,15 +5,16 @@
After=network.target
[Service]
-Environment=KUBE_MASTER=--master=127.0.0.1:8080
+# The common shared configuration file
EnvironmentFile=-/etc/kubernetes/config
+# The per service configuration file
EnvironmentFile=-/etc/default/%p
ExecStart=/usr/bin/kube-proxy \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$DAEMON_ARGS
-Restart=on-failure
+Restart=always
LimitNOFILE=65536
[Install]
Info: Computing checksum on file /lib/systemd/system/kube-proxy.service
Info: /Stage[main]/K8s::Proxy/Base::Service_unit[kube-proxy]/File[/lib/systemd/system/kube-proxy.service]: Filebucketed /lib/systemd/system/kube-proxy.service to puppet with sum e9ab304f39fe3525249dc107de6d28e0
Notice: /Stage[main]/K8s::Proxy/Base::Service_unit[kube-proxy]/File[/lib/systemd/system/kube-proxy.service]/content: content changed '{md5}e9ab304f39fe3525249dc107de6d28e0' to '{md5}cf3e74d0717f112114e79b92c9c3d44e'
Notice: /Stage[main]/K8s::Proxy/Base::Service_unit[kube-proxy]/File[/lib/systemd/system/kube-proxy.service]/mode: mode changed '0644' to '0444'
Info: /Stage[main]/K8s::Proxy/Base::Service_unit[kube-proxy]/File[/lib/systemd/system/kube-proxy.service]: Scheduling refresh of Service[kube-proxy]
Info: /Stage[main]/K8s::Proxy/Base::Service_unit[kube-proxy]/File[/lib/systemd/system/kube-proxy.service]: Scheduling refresh of Exec[systemd reload for kube-proxy]
Info: /Stage[main]/K8s::Proxy/Base::Service_unit[kube-proxy]/File[/lib/systemd/system/kube-proxy.service]: Scheduling refresh of Service[kube-proxy]
Info: /Stage[main]/K8s::Proxy/Base::Service_unit[kube-proxy]/File[/lib/systemd/system/kube-proxy.service]: Scheduling refresh of Exec[systemd reload for kube-proxy]
Notice: /Stage[main]/K8s::Proxy/Base::Service_unit[kube-proxy]/Exec[systemd reload for kube-proxy]: Triggered 'refresh' from 3 events
Notice: /Stage[main]/K8s::Proxy/Base::Service_unit[kube-proxy]/Service[kube-proxy]/enable: enable changed 'false' to 'true'
Notice: /Stage[main]/K8s::Proxy/Base::Service_unit[kube-proxy]/Service[kube-proxy]: Triggered 'refresh' from 3 events
Notice: /Stage[main]/Toollabs::Infrastructure/Security::Access::Config[labs-admin-only]/File[/etc/security/access.conf.d/50-labs-admin-only]/ensure: created
Info: /Stage[main]/Toollabs::Infrastructure/Security::Access::Config[labs-admin-only]/File[/etc/security/access.conf.d/50-labs-admin-only]: Scheduling refresh of Exec[merge-access-conf]
Notice: /Stage[main]/Security::Access/Exec[merge-access-conf]: Triggered 'refresh' from 1 events
Notice: /Stage[main]/K8s::Ssl/File[/var/lib/kubernetes]/ensure: created
Notice: /Stage[main]/K8s::Ssl/File[/var/lib/kubernetes/ssl]/ensure: created
Notice: /Stage[main]/K8s::Ssl/File[/var/lib/kubernetes/ssl/certs]/ensure: created
Notice: /Stage[main]/K8s::Ssl/File[/var/lib/kubernetes/ssl/private_keys]/ensure: created
Notice: /Stage[main]/K8s::Ssl/File[/var/lib/kubernetes/ssl/private_keys/server.key]/ensure: defined content as '{md5}37b5fb9b445719de339ad39ab8ad5f92'
Notice: /Stage[main]/K8s::Ssl/File[/var/lib/kubernetes/ssl/certs/ca.pem]/ensure: defined content as '{md5}9f3978d4816ae16ad737cf46ca10af19'
Notice: /Stage[main]/K8s::Ssl/File[/var/lib/kubernetes/ssl/certs/cert.pem]/ensure: defined content as '{md5}0c8c37146664378586e436b004f0246c'
Info: Class[K8s::Ssl]: Scheduling refresh of Class[K8s::Kubelet]
Info: Class[K8s::Kubelet]: Scheduling refresh of Base::Service_unit[kubelet]
Notice: /Stage[main]/K8s::Kubelet/File[/var/run/kubernetes]/owner: owner changed 'kube' to 'root'
Notice: /Stage[main]/K8s::Kubelet/File[/var/run/kubernetes]/group: group changed 'kube' to 'root'
Notice: /Stage[main]/K8s::Kubelet/File[/var/run/kubernetes]/mode: mode changed '0755' to '0700'
Info: Base::Service_unit[kubelet]: Scheduling refresh of Exec[systemd reload for kubelet]
Info: Base::Service_unit[kubelet]: Scheduling refresh of Service[kubelet]
Notice: /Stage[main]/K8s::Kubelet/File[/etc/default/kubelet]/content:
--- /etc/default/kubelet 2016-11-24 11:17:39.000000000 +0000
+++ /tmp/puppet-file20170221-8807-f7uovj 2017-02-21 16:21:49.601093647 +0000
@@ -2,16 +2,15 @@
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
-KUBELET_ADDRESS="--address=127.0.0.1"
+KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
-# KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
-KUBELET_HOSTNAME="--hostname-override=127.0.0.1"
+KUBELET_HOSTNAME="--hostname-override=k8s-worker.chasetestproject.eqiad.wmflabs"
# location of the api-server
-KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"
+KUBELET_API_SERVER="--api-servers=https://k8s-master-01.chasetestproject.eqiad.wmflabs:6443"
# Docker endpoint to connect to
# Default: unix:///var/run/docker.sock
@@ -23,4 +22,5 @@
# Other options:
# --container_runtime=rkt
# --configure-cbr0={true|false}
-#DAEMON_ARGS=""
+
+DAEMON_ARGS="--configure-cbr0=false --kubeconfig=/etc/kubernetes/kubeconfig --pod-infra-container-image=gcr.io/google_containers/pause:2.0 --tls-private-key-file=/var/lib/kubernetes/ssl/private_keys/server.key --tls-cert-file=/var/lib/kubernetes/ssl/certs/cert.pem --cluster-domain=kube"
Info: Computing checksum on file /etc/default/kubelet
Info: /Stage[main]/K8s::Kubelet/File[/etc/default/kubelet]: Filebucketed /etc/default/kubelet to puppet with sum 3f5907816e08d0489ff595fe2dc95b75
Notice: /Stage[main]/K8s::Kubelet/File[/etc/default/kubelet]/content: content changed '{md5}3f5907816e08d0489ff595fe2dc95b75' to '{md5}152a6ddd8d3da738b9c69274293d95b3'
Notice: /Stage[main]/K8s::Kubelet/File[/var/lib/kubelet]/mode: mode changed '0755' to '0700'
Notice: /Stage[main]/K8s::Kubelet/Base::Service_unit[kubelet]/File[/lib/systemd/system/kubelet.service]/content:
--- /lib/systemd/system/kubelet.service 2016-11-24 11:17:39.000000000 +0000
+++ /tmp/puppet-file20170221-8807-6tpgf0 2017-02-21 16:21:49.661093348 +0000
@@ -9,7 +9,9 @@
[Service]
WorkingDirectory=/var/lib/kubelet
+# The shared kubernetes configurations file
EnvironmentFile=-/etc/kubernetes/config
+# kubelet specific configuration
EnvironmentFile=-/etc/default/%p
ExecStart=/usr/bin/kubelet \
$KUBE_LOGTOSTDERR \
@@ -22,7 +24,7 @@
$DOCKER_ENDPOINT \
$CADVISOR_PORT \
$DAEMON_ARGS
-Restart=on-failure
+Restart=always
[Install]
WantedBy=multi-user.target
Info: Computing checksum on file /lib/systemd/system/kubelet.service
Info: /Stage[main]/K8s::Kubelet/Base::Service_unit[kubelet]/File[/lib/systemd/system/kubelet.service]: Filebucketed /lib/systemd/system/kubelet.service to puppet with sum 2380b465e6f4e5c3980a8f7e5b55a2a6
Notice: /Stage[main]/K8s::Kubelet/Base::Service_unit[kubelet]/File[/lib/systemd/system/kubelet.service]/content: content changed '{md5}2380b465e6f4e5c3980a8f7e5b55a2a6' to '{md5}9cce833799a16acf10d9f876ff4a08ef'
Notice: /Stage[main]/K8s::Kubelet/Base::Service_unit[kubelet]/File[/lib/systemd/system/kubelet.service]/mode: mode changed '0644' to '0444'
Info: /Stage[main]/K8s::Kubelet/Base::Service_unit[kubelet]/File[/lib/systemd/system/kubelet.service]: Scheduling refresh of Service[kubelet]
Info: /Stage[main]/K8s::Kubelet/Base::Service_unit[kubelet]/File[/lib/systemd/system/kubelet.service]: Scheduling refresh of Exec[systemd reload for kubelet]
Info: /Stage[main]/K8s::Kubelet/Base::Service_unit[kubelet]/File[/lib/systemd/system/kubelet.service]: Scheduling refresh of Service[kubelet]
Info: /Stage[main]/K8s::Kubelet/Base::Service_unit[kubelet]/File[/lib/systemd/system/kubelet.service]: Scheduling refresh of Exec[systemd reload for kubelet]
Notice: /Stage[main]/K8s::Kubelet/Base::Service_unit[kubelet]/Exec[systemd reload for kubelet]: Triggered 'refresh' from 3 events
Notice: /Stage[main]/K8s::Kubelet/Base::Service_unit[kubelet]/Service[kubelet]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/K8s::Kubelet/Base::Service_unit[kubelet]/Service[kubelet]: Unscheduling refresh on Service[kubelet]
Notice: Finished catalog run in 36.61 seconds
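Note: the first run above ends with failures — the missing toollabs banner source and the `vgcreate` conflict with the pre-existing 'vd' volume group (removed later in the same run). A quick, hypothetical way to compare successive runs is to capture each run's output to a file and tally lines by severity; the log file name below is a placeholder, not part of the paste:

```shell
# Count puppet agent output lines by severity (Info/Notice/Warning/Error).
tally() {
  # grep -c still prints "0" when nothing matches, but exits non-zero;
  # mask the exit status so the function is safe under `set -e`.
  grep -c "^$1:" "$2" || true
}

# Example (hypothetical capture of a run):
#   puppet agent --test > puppet-run.log 2>&1
#   tally Error puppet-run.log
```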
root@k8s-worker:~#
root@k8s-worker:~# puppet agent --test
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for k8s-worker.chasetestproject.eqiad.wmflabs
Info: Applying configuration version '1487693158'
Error: /Stage[main]/Toollabs::Infrastructure/Motd::Script[infrastructure-banner]/File[/etc/update-motd.d/50-infrastructure-banner]: Could not evaluate: Could not retrieve information from environment production source(s) puppet:///modules/toollabs/40-chasetestproject-infrastructure-banner.sh
Notice: /Stage[main]/Lvm/Lvm::Volume_group[docker]/Volume_group[docker]/ensure: created
Notice: /Stage[main]/Lvm/Lvm::Volume_group[docker]/Lvm::Logical_volume[data]/Logical_volume[data]/ensure: created
Notice: /Stage[main]/Lvm/Lvm::Volume_group[docker]/Lvm::Logical_volume[metadata]/Logical_volume[metadata]/ensure: created
Notice: /Stage[main]/Base::Monitoring::Host/File[/usr/local/lib/nagios/plugins/check_eth]/content:
--- /usr/local/lib/nagios/plugins/check_eth 2017-02-17 23:00:52.269083000 +0000
+++ /tmp/puppet-file20170221-11864-7tbta6 2017-02-21 16:22:22.720928560 +0000
@@ -1,6 +1,6 @@
#!/bin/sh
EXIT_CODE=0
-for INTERFACE in eth0 ifb0 lo ; do
+for INTERFACE in eth0 ifb0 lo ; do
REQ_SPEED=1000 # The default for now
STATUS=`ip link show ${INTERFACE}`
if [ "$?" != "0" ]; then
Info: Computing checksum on file /usr/local/lib/nagios/plugins/check_eth
Info: /Stage[main]/Base::Monitoring::Host/File[/usr/local/lib/nagios/plugins/check_eth]: Filebucketed /usr/local/lib/nagios/plugins/check_eth to puppet with sum 9f497eda68bf30e2c68ee788af2211fc
Notice: /Stage[main]/Base::Monitoring::Host/File[/usr/local/lib/nagios/plugins/check_eth]/content: content changed '{md5}9f497eda68bf30e2c68ee788af2211fc' to '{md5}96d6aca97028e5b28007ae10ae509c23'
Notice: /Stage[main]/K8s::Kubelet/Base::Service_unit[kubelet]/Service[kubelet]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/K8s::Kubelet/Base::Service_unit[kubelet]/Service[kubelet]: Unscheduling refresh on Service[kubelet]
Notice: Finished catalog run in 11.64 seconds
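Note: with the stale 'vd' group gone, the second run creates the 'docker' volume group and its logical volumes, and the third run is clean apart from the persistent missing banner file. For this converge-over-several-runs pattern, one could loop the agent with `--detailed-exitcodes` (exit 0 = no changes, 2 = changes applied, 4/6 = failures) until it settles; a minimal sketch, with the command passed in as a parameter so the loop stays generic:

```shell
# Retry a command up to 3 times until it exits 0 ("converged").
converge() {
  cmd="$1"
  rc=1
  for attempt in 1 2 3; do
    "$cmd"
    rc=$?
    [ "$rc" -eq 0 ] && return 0
  done
  return "$rc"
}

# Usage sketch: wrap the agent in a function, then loop it:
#   run_agent() { puppet agent --test --detailed-exitcodes; }
#   converge run_agent
```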
root@k8s-worker:~#
root@k8s-worker:~# puppet agent --test
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for k8s-worker.chasetestproject.eqiad.wmflabs
Info: Applying configuration version '1487693315'
Error: /Stage[main]/Toollabs::Infrastructure/Motd::Script[infrastructure-banner]/File[/etc/update-motd.d/50-infrastructure-banner]: Could not evaluate: Could not retrieve information from environment production source(s) puppet:///modules/toollabs/40-chasetestproject-infrastructure-banner.sh
Notice: Finished catalog run in 9.75 seconds
root@k8s-worker:~# hostname -f