Possible optimisation of Azure Unit tests [autoscaler]

jayantjain93 · Asked 5 months ago

The goal is to explore whether it is feasible to optimise the total time taken to execute the unit tests.

Which component are you using?: Cluster Autoscaler

What version of the component are you using?: Master

Component version: Latest (1.22)

What environment is this in?: All

What did you expect to happen?:

What happened instead?:

k8s.io/autoscaler/cluster-autoscaler/cloudprovider/azure accounts for >73% of the total wall-clock time when running unit tests (158.4s of roughly 214s cumulative across all packages). This seems excessive for a single cloud provider.

ok      k8s.io/autoscaler/cluster-autoscaler    0.440s
ok      k8s.io/autoscaler/cluster-autoscaler/cloudprovider      0.121s
ok      k8s.io/autoscaler/cluster-autoscaler/cloudprovider/alicloud     0.124s
ok      k8s.io/autoscaler/cluster-autoscaler/cloudprovider/aws  0.483s
ok      k8s.io/autoscaler/cluster-autoscaler/cloudprovider/azure        158.385s   <--------
ok      k8s.io/autoscaler/cluster-autoscaler/cloudprovider/baiducloud   0.178s
ok      k8s.io/autoscaler/cluster-autoscaler/cloudprovider/bizflycloud  0.215s
ok      k8s.io/autoscaler/cluster-autoscaler/cloudprovider/brightbox    3.372s
ok      k8s.io/autoscaler/cluster-autoscaler/cloudprovider/cloudstack   1.107s
ok      k8s.io/autoscaler/cluster-autoscaler/cloudprovider/cloudstack/service   6.088s
ok      k8s.io/autoscaler/cluster-autoscaler/cloudprovider/clusterapi   9.854s
ok      k8s.io/autoscaler/cluster-autoscaler/cloudprovider/digitalocean 0.173s
ok      k8s.io/autoscaler/cluster-autoscaler/cloudprovider/exoscale     0.293s
ok      k8s.io/autoscaler/cluster-autoscaler/cloudprovider/gce  0.919s
ok      k8s.io/autoscaler/cluster-autoscaler/cloudprovider/ionoscloud   0.800s
ok      k8s.io/autoscaler/cluster-autoscaler/cloudprovider/linode       1.164s
ok      k8s.io/autoscaler/cluster-autoscaler/cloudprovider/linode/linodego      0.101s
ok      k8s.io/autoscaler/cluster-autoscaler/cloudprovider/magnum       2.669s
ok      k8s.io/autoscaler/cluster-autoscaler/cloudprovider/ovhcloud     0.175s
ok      k8s.io/autoscaler/cluster-autoscaler/cloudprovider/packet       4.216s
ok      k8s.io/autoscaler/cluster-autoscaler/clusterstate       0.230s
ok      k8s.io/autoscaler/cluster-autoscaler/clusterstate/api   0.112s
ok      k8s.io/autoscaler/cluster-autoscaler/clusterstate/utils 0.163s
ok      k8s.io/autoscaler/cluster-autoscaler/core       9.605s
ok      k8s.io/autoscaler/cluster-autoscaler/core/utils 0.282s
ok      k8s.io/autoscaler/cluster-autoscaler/debuggingsnapshot  0.134s
ok      k8s.io/autoscaler/cluster-autoscaler/estimator  0.336s
ok      k8s.io/autoscaler/cluster-autoscaler/expander/factory   0.236s
ok      k8s.io/autoscaler/cluster-autoscaler/expander/mostpods  0.182s
ok      k8s.io/autoscaler/cluster-autoscaler/expander/price     0.210s
ok      k8s.io/autoscaler/cluster-autoscaler/expander/priority  0.179s
ok      k8s.io/autoscaler/cluster-autoscaler/expander/random    0.125s
ok      k8s.io/autoscaler/cluster-autoscaler/expander/waste     0.115s
ok      k8s.io/autoscaler/cluster-autoscaler/metrics    0.271s
ok      k8s.io/autoscaler/cluster-autoscaler/processors/customresources 0.302s
ok      k8s.io/autoscaler/cluster-autoscaler/processors/nodegroupconfig 0.219s
ok      k8s.io/autoscaler/cluster-autoscaler/processors/nodegroupset    0.248s
ok      k8s.io/autoscaler/cluster-autoscaler/processors/nodeinfosprovider       0.528s
ok      k8s.io/autoscaler/cluster-autoscaler/processors/nodes   0.337s
ok      k8s.io/autoscaler/cluster-autoscaler/processors/pods    0.210s
ok      k8s.io/autoscaler/cluster-autoscaler/processors/status  0.207s
ok      k8s.io/autoscaler/cluster-autoscaler/simulator  4.202s
ok      k8s.io/autoscaler/cluster-autoscaler/utils/backoff      3.204s
ok      k8s.io/autoscaler/cluster-autoscaler/utils/daemonset    0.261s
ok      k8s.io/autoscaler/cluster-autoscaler/utils/deletetaint  0.189s
ok      k8s.io/autoscaler/cluster-autoscaler/utils/drain        0.132s
ok      k8s.io/autoscaler/cluster-autoscaler/utils/gpu  0.149s
ok      k8s.io/autoscaler/cluster-autoscaler/utils/klogx        0.049s
ok      k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes   0.113s
ok      k8s.io/autoscaler/cluster-autoscaler/utils/labels       0.271s
ok      k8s.io/autoscaler/cluster-autoscaler/utils/pod  0.085s
ok      k8s.io/autoscaler/cluster-autoscaler/utils/scheduler    0.101s
ok      k8s.io/autoscaler/cluster-autoscaler/utils/taints       0.121s
ok      k8s.io/autoscaler/cluster-autoscaler/utils/tpu  0.068s

How to reproduce it (as minimally and precisely as possible):

make test-in-docker
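To narrow this down without Docker, the standard Go tooling prints a per-test duration for each case in the slow package (stock go test flags; paths follow the repo layout shown above):

    cd cluster-autoscaler
    go test -v ./cloudprovider/azure/

With -v, each test is reported as "--- PASS: TestName (N.NNs)", which makes the dominant case easy to spot.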

jayantjain93 · Answered 5 months ago

/label cluster-autoscaler azure

marwanad · Answered 4 months ago

TestDeleteVirtualMachine (155.25s) seems to be the culprit: https://github.com/kubernetes/autoscaler/blob/cluster-autoscaler-release-1.21/cluster-autoscaler/cloudprovider/azure/azure_util_test.go#L328

@nilo19 could you take a look at that since you added it?
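For context: a 155s unit test with no real network I/O almost always points at real time.Sleep calls inside retry/backoff paths of the client under test. A minimal sketch of the pattern (the names and values here are illustrative, not the actual azure client code):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retry calls fn up to attempts times, sleeping backoff between failures.
    // With production-style settings (say, a 30s backoff and 5 attempts), a
    // test that exercises the failure path burns minutes of wall-clock time
    // doing nothing useful.
    func retry(fn func() error, attempts int, backoff time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            time.Sleep(backoff) // real sleep: this is what dominates test time
        }
        return err
    }

    func main() {
        // A test can keep the same logic but inject a near-zero backoff,
        // or accept a clock/sleep function that a fake can replace.
        err := retry(func() error { return errors.New("simulated ARM failure") },
            3, time.Millisecond)
        fmt.Println(err)
    }

If the backoff in the azure client is injectable, shrinking it in the test setup would keep the coverage while cutting the runtime.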

nilo19 · Answered 4 months ago

Let's disable it since the client is not easy to mock: https://github.com/kubernetes/autoscaler/pull/4565
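Where a slow case is still worth keeping for occasional full runs, the standard Go idiom is to gate it behind -short rather than delete it outright (a sketch of that idiom, not what the PR actually does):

    package azure

    import "testing"

    func TestDeleteVirtualMachine(t *testing.T) {
        if testing.Short() {
            // `go test -short ./...` (e.g. in CI) skips this case entirely;
            // a plain `go test` still runs it.
            t.Skip("skipping slow DeleteVirtualMachine test in -short mode")
        }
        // ... original test body ...
    }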

jayantjain93 · Answered 4 months ago

Thanks for having a look at this!
