# workshop-infrastructure-platform-engineering
a
My deployment fails with the following error:
2024-06-11T15:29:55Z CEST [ERROR]: Error 1/1: environment/ CLD-003: Error generating kubeconfig
Unable to determine cluster version: Get "https://5min-idp-control-plane:6443/version": Service Unavailable
There is an application deployed
j
Might be a flaky network connection; does a redeploy work?
a
I've removed everything and will try from scratch
j
Can be done in the UI here
a
Same error again.
2024-06-11 13:43:01 [ERROR] Error 1/1: workload/hello-world MST-003: Deployment set error
error running dry-run install: Unable to continue with install: could not get information about the resource Service "hello-world" in namespace "5min-idp-ixse-development": Get "https://5min-idp-control-plane:6443/api/v1/namespaces/5min-idp-ixse-development/services/hello-world": Service Unavailable
  2024-06-11 13:43:01 [INFO] Step failed after 1m23s with message: failed to deploy an application
  2024-06-11 13:43:01 [DEBUG] Outputs: 'valuesetversion', 'deployment', 'deployment_id', 'deploymentset', 'set_id', 'status', 'value_set_version_id'
>> Pipeline: default >> deploy (Job 1) >> Deploy Set To Environment (Step 2) >> Status: failed

Pipeline run d211db18-9345-4e7a-aa88-96b115fe154b  failed

Unexpected error occurred.
deploy failed
5min-idp:/app# k get svc -n 5min-idp-ixse-development
bash: k: command not found
5min-idp:/app# kubectl get svc -n 5min-idp-ixse-development
No resources found in 5min-idp-ixse-development namespace.
5min-idp:/app# kubectl get all -A
NAMESPACE            NAME                                                 READY   STATUS    RESTARTS   AGE
humanitec-agent      pod/humanitec-agent-7ddb6fb59f-qvg6l                 1/1     Running   0          3m44s
ingress-nginx        pod/ingress-nginx-controller-6647957864-62x5w        1/1     Running   0          3m38s
kube-system          pod/coredns-76f75df574-4g6j7                         1/1     Running   0          3m44s
kube-system          pod/coredns-76f75df574-8s4bq                         1/1     Running   0          3m44s
kube-system          pod/etcd-5min-idp-control-plane                      1/1     Running   0          3m59s
kube-system          pod/kindnet-brbmz                                    1/1     Running   0          3m45s
kube-system          pod/kube-apiserver-5min-idp-control-plane            1/1     Running   0          3m59s
kube-system          pod/kube-controller-manager-5min-idp-control-plane   1/1     Running   0          3m59s
kube-system          pod/kube-proxy-s7775                                 1/1     Running   0          3m45s
kube-system          pod/kube-scheduler-5min-idp-control-plane            1/1     Running   0          3m59s
local-path-storage   pod/local-path-provisioner-7577fdbbfb-sfs5j          1/1     Running   0          3m44s

NAMESPACE       NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
default         service/kubernetes                           ClusterIP   10.96.0.1       <none>        443/TCP                      4m
ingress-nginx   service/ingress-nginx-controller             NodePort    10.96.193.218   <none>        80:30080/TCP,443:30266/TCP   3m38s
ingress-nginx   service/ingress-nginx-controller-admission   ClusterIP   10.96.186.231   <none>        443/TCP                      3m38s
kube-system     service/kube-dns                             ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP       3m59s

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/kindnet      1         1         1       1            1           kubernetes.io/os=linux   3m58s
kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   3m59s

NAMESPACE            NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
humanitec-agent      deployment.apps/humanitec-agent            1/1     1            1           3m46s
ingress-nginx        deployment.apps/ingress-nginx-controller   1/1     1            1           3m38s
kube-system          deployment.apps/coredns                    2/2     2            2           3m59s
local-path-storage   deployment.apps/local-path-provisioner     1/1     1            1           3m58s

NAMESPACE            NAME                                                  DESIRED   CURRENT   READY   AGE
humanitec-agent      replicaset.apps/humanitec-agent-7ddb6fb59f            1         1         1       3m44s
ingress-nginx        replicaset.apps/ingress-nginx-controller-6647957864   1         1         1       3m38s
kube-system          replicaset.apps/coredns-76f75df574                    2         2         2       3m44s
local-path-storage   replicaset.apps/local-path-provisioner-7577fdbbfb     1         1         1       3m44s
5min-idp:/app#
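(Since both failures are "Service Unavailable" on the control-plane endpoint, one way to narrow it down is to hit that endpoint directly from the same tooling container. This is a diagnostic sketch, not from the chat; it requires the running kind cluster, and assumes `curl` is available in the container.)
```shell
# Check whether the API server behind the failing URL is reachable at all,
# from inside the 5min-idp tooling container (same network as the kind node).
kubectl cluster-info              # should print the control-plane endpoint
kubectl get --raw /version        # same /version call that failed, via the kubeconfig
curl -sk https://5min-idp-control-plane:6443/version   # direct hit on the URL from the error
```
If the direct `curl` works but the deploy still fails, the problem is more likely on the agent/Orchestrator side than in the cluster itself.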
j
Could you post the logs of the humanitec-agent running in the humanitec-agent namespace? This is what connects your local cluster to the Humanitec Orchestrator.
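(For reference, the agent logs requested here can be pulled with standard kubectl commands; the deployment and namespace names match the `kubectl get all -A` output above. Requires the running cluster.)
```shell
# Tail the Humanitec agent logs; names taken from the cluster listing earlier in the thread.
kubectl logs -n humanitec-agent deployment/humanitec-agent --tail=100
```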
a
humanitec-agent.log
Should the manifest be empty?
j
No, but it might only be available after the initial cluster connect.
I don’t see any obvious red flags in the logs 😕. Did you run
./2_cleanup.sh
between your tries? There is some state left behind; maybe it confused things.
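(A full clean retry could look like the sketch below. `2_cleanup.sh` is the script named above; the setup script name is an assumption, so substitute whatever you ran initially.)
```shell
# Tear down leftover state, then re-run the setup from scratch.
./2_cleanup.sh      # cleanup script mentioned in the thread
./0_install.sh      # hypothetical name for the initial setup script -- use yours
```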
a
Will try again.. 3rd time's the charm.. 🙂
No luck..
5min-idp:/app# kubectl get all -n 5min-idp-nyek-development
No resources found in 5min-idp-nyek-development namespace.
Not really sure what to troubleshoot next
No errors in the agent logs
j
Same error in the deploy?
a
Basically it always complains about one or more resources missing in the namespace.
But why are no resources created in the namespace 🤷 The cluster is obviously working, and so is the agent.
j
I’m not sure, it seems that the connection between agent and orchestrator is somehow flaky.
a
I think it's somehow related to colima (I'm not running Docker Desktop).
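(To rule out the container runtime, a few standard colima/Docker CLI checks are worth running; these require colima and the Docker CLI to be installed.)
```shell
colima status        # confirm the VM is running and which runtime it uses
docker context ls    # verify the active Docker context points at colima
docker info --format '{{.OperatingSystem}} / {{.ServerVersion}}'   # confirm the CLI talks to the right daemon
```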
j
Might be; I've never tried that 😕, but I'll reach out internally to ask how to troubleshoot agent connections.
a
Will try using a different container engine.
j
Feel free to ping me here once you've had a chance to try this out; I'd be interested to debug this further.
a
Absolutely.. Will dig deeper into it tomorrow.
I gave up and am running the demo on a Linux server instead.