
Kubernetes Resource

The Kubernetes resource check creates Kubernetes resources from the provided manifests and performs checks on them. A common use case is verifying that a service is accessible via an ingress, as shown in the example below.

ingress_test.yaml
```yaml
---
apiVersion: canaries.flanksource.com/v1
kind: Canary
metadata:
  name: ingress-test
  namespace: default
  labels:
    "Expected-Fail": "false"
spec:
  schedule: "@every 5m"
  kubernetesResource:
    - name: ingress-accessibility-check
      namespace: default
      description: "deploy httpbin & check that it's accessible via ingress"
      waitFor:
        expr: 'dyn(resources).all(r, k8s.isReady(r))'
        interval: 2s
        timeout: 5m
      staticResources:
        - apiVersion: networking.k8s.io/v1
          kind: Ingress
          metadata:
            name: httpbin
            namespace: default
          spec:
            rules:
              - host: "httpbin.127.0.0.1.nip.io"
                http:
                  paths:
                    - pathType: Prefix
                      path: /
                      backend:
                        service:
                          name: httpbin
                          port:
                            number: 80
      resources:
        - apiVersion: v1
          kind: Pod
          metadata:
            name: httpbin
            namespace: default
            labels:
              app: httpbin
          spec:
            containers:
              - name: httpbin
                image: "kennethreitz/httpbin:latest"
                ports:
                  - containerPort: 80
        - apiVersion: v1
          kind: Service
          metadata:
            name: httpbin
            namespace: default
          spec:
            selector:
              app: httpbin
            ports:
              - port: 80
                targetPort: 80
      checks:
        - http:
            - name: Call httpbin via ingress
              url: "http://ingress-nginx.ingress-nginx.svc"
              headers:
                - name: Host
                  value: "{{(index ((index .staticResources 0).Object.spec.rules) 0).host}}"
      checkRetries:
        delay: 3s
        interval: 2s
        timeout: 5m
```
| Field | Description | Scheme |
| ----- | ----------- | ------ |
| name* | Name of the check; must be unique within the canary | string |
| resources* | Manifests that should be applied | []KubernetesManifest |
| checkRetries | Retry configuration for the checks | CheckRetries |
| checks | Canary spec for the checks to be performed after the resources are created | CanarySpec |
| clearResources | When set to true, resources from previous checks are deleted before every run. Although resources are deleted at the end of a check, setting this to true guarantees that there are no leftover resources from a previous failed run. | boolean |
| staticResources | Like resources, but preserved between checks; they are only deleted when the canary is deleted. | []KubernetesManifest |
| waitFor | Specify the desired state of the static and non-static resources before running the checks | WaitFor |
| description | Description for the check | string |
| display | Expression to change the formatting of the display | Expression |
| icon | Icon for overwriting the default icon on the dashboard | Icon |
| labels | Labels for the check | map[string]string |
| metrics | Metrics to export | []Metrics |
| test | Evaluate whether a check is healthy | Expression |
| transform | Transform data from a check into multiple individual checks | Expression |
| kubeconfig | Path to a kubeconfig on disk, or a reference to an existing secret | EnvVar |
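
Most of these fields appear in the examples on this page; clearResources does not, so here is a minimal sketch of how it might be used. The canary name, schedule, and pod manifest are illustrative placeholders, not taken from the examples.

```yaml
apiVersion: canaries.flanksource.com/v1
kind: Canary
metadata:
  name: clear-resources-example # hypothetical name
spec:
  schedule: "@every 10m"
  kubernetesResource:
    - name: recreate-pod
      namespace: default
      # Delete any resources left over from a previous (possibly failed)
      # run before applying the manifests again.
      clearResources: true
      resources:
        - apiVersion: v1
          kind: Pod
          metadata:
            name: clear-resources-pod
            namespace: default
          spec:
            restartPolicy: Never
            containers:
              - name: busybox
                image: busybox
                command: ["sleep", "30"]
```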

Check Retries

| Field | Description | Scheme |
| ----- | ----------- | ------ |
| delay | Initial delay before the checks are run | Duration |
| interval | Retry the checks, on failure, at this interval | Duration |
| timeout | Timeout for the check | Duration |
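
For example, the values below (illustrative, not taken from any example on this page) wait 10 seconds before the first check run, retry failing checks every 5 seconds, and give up after 2 minutes:

```yaml
checkRetries:
  delay: 10s    # initial delay before the checks first run
  interval: 5s  # on failure, retry at this interval
  timeout: 2m   # stop retrying after this long
```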

Wait For

| Field | Description | Scheme |
| ----- | ----------- | ------ |
| delete | When set to true, the check waits for the resources to be deleted | boolean |
| disable | Disable the default behavior of waiting for resources to be healthy | boolean |
| expr | CEL expression that determines whether all the resources are in their desired state before running checks on them. It receives a resources array of the static and non-static resources. The default behavior is to wait until all the resources are ready: dyn(resources).all(r, k8s.isReady(r)) | CEL |
| interval | Interval at which to check whether all static & non-static resources are ready (Default: 5s) | Duration |
| timeout | Timeout to wait for all static & non-static resources to satisfy the expression (Default: 10m) | Duration |
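
As a sketch of a custom condition (assuming the check creates a single Deployment, which none of the examples on this page do), a waitFor block could poll until the observed available replicas match the desired count:

```yaml
waitFor:
  # resources holds both the static and non-static resources created by the check
  expr: >
    dyn(resources).all(r,
      has(r.Object.status.availableReplicas) &&
      r.Object.status.availableReplicas == r.Object.spec.replicas)
  interval: 10s # poll every 10 seconds
  timeout: 3m   # give up after 3 minutes
```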

Remote clusters

A single canary-checker instance can connect to any number of remote clusters via a custom kubeconfig. Either the kubeconfig itself or the path to a kubeconfig file can be provided.

Kubeconfig from a Kubernetes secret

remote-cluster.yaml
```yaml
---
apiVersion: canaries.flanksource.com/v1
kind: Canary
metadata:
  name: pod-creation-test
spec:
  schedule: "@every 5m"
  kubernetesResource:
    - name: pod creation on aws cluster
      namespace: default
      description: "deploy httpbin"
      kubeconfig:
        valueFrom:
          secretKeyRef:
            name: aws-kubeconfig
            key: kubeconfig
      resources:
        - apiVersion: v1
          kind: Pod
          metadata:
            name: httpbin
            namespace: default
            labels:
              app: httpbin
          spec:
            containers:
              - name: httpbin
                image: "kennethreitz/httpbin:latest"
                ports:
                  - containerPort: 80
```

Kubeconfig inline

remote-cluster.yaml
```yaml
---
apiVersion: canaries.flanksource.com/v1
kind: Canary
metadata:
  name: pod-creation-test
spec:
  schedule: "@every 5m"
  kubernetesResource:
    - name: pod creation on aws cluster
      namespace: default
      description: "deploy httpbin"
      kubeconfig:
        value: |
          apiVersion: v1
          clusters:
            - cluster:
                certificate-authority-data: xxxxx
                server: https://xxxxx.sk1.eu-west-1.eks.amazonaws.com
              name: arn:aws:eks:eu-west-1:765618022540:cluster/aws-cluster
          contexts:
            - context:
                cluster: arn:aws:eks:eu-west-1:765618022540:cluster/aws-cluster
                namespace: mission-control
                user: arn:aws:eks:eu-west-1:765618022540:cluster/aws-cluster
              name: arn:aws:eks:eu-west-1:765618022540:cluster/aws-cluster
          current-context: arn:aws:eks:eu-west-1:765618022540:cluster/aws-cluster
          kind: Config
          preferences: {}
          users:
            - name: arn:aws:eks:eu-west-1:765618022540:cluster/aws-cluster
              user:
                exec:
                  ....
      resources:
        - apiVersion: v1
          kind: Pod
          metadata:
            name: httpbin
            namespace: default
            labels:
              app: httpbin
          spec:
            containers:
              - name: httpbin
                image: "kennethreitz/httpbin:latest"
                ports:
                  - containerPort: 80
```

Kubeconfig from local filesystem

remote-cluster.yaml
```yaml
---
apiVersion: canaries.flanksource.com/v1
kind: Canary
metadata:
  name: pod-creation-test
spec:
  schedule: "@every 5m"
  kubernetesResource:
    - name: pod creation on aws cluster
      namespace: default
      description: "deploy httpbin"
      kubeconfig:
        value: /root/.kube/aws-kubeconfig
      resources:
        - apiVersion: v1
          kind: Pod
          metadata:
            name: httpbin
            namespace: default
            labels:
              app: httpbin
          spec:
            containers:
              - name: httpbin
                image: "kennethreitz/httpbin:latest"
                ports:
                  - containerPort: 80
```

Templating

The resources and staticResources fields can be templated using Go templates. This is helpful for creating resources with random names. For example, you can configure the check to create a pod with a random name on each run, so you don't have to wait for the previous pod to be deleted before every check.

info

Templating the Group, Version, Kind, and Namespace, however, is not allowed.

Creating a pod with a unique name on every run
pod_exit_code_check.yaml
```yaml
---
apiVersion: canaries.flanksource.com/v1
kind: Canary
metadata:
  name: pod-exit-code-check
  namespace: default
  labels:
    "Expected-Fail": "false"
spec:
  schedule: "@every 5m"
  kubernetesResource:
    - name: "pod exit code"
      description: "Create pod & check its exit code"
      namespace: default
      display:
        expr: |
          "Result of check 'exit-code-check': " + display["exit-code-check"]
      resources:
        - apiVersion: v1
          kind: Pod
          metadata:
            name: "hello-world-{{strings.ToLower (random.Alpha 10)}}"
            namespace: default
          spec:
            restartPolicy: Never
            containers:
              - name: hello-world
                image: hello-world
      waitFor:
        expr: "dyn(resources).all(r, k8s.isHealthy(r))"
        interval: "1s"
        timeout: "20s"
      checkRetries:
        delay: 2s
        timeout: 5m
      checks:
        - kubernetes:
            - name: exit-code-check
              kind: Pod
              namespaceSelector:
                name: default
              resource:
                name: "{{(index .resources 0).Object.metadata.name}}"
              test:
                expr: >
                  size(results) == 1 &&
                  results[0].Object.status.containerStatuses[0].state.terminated.exitCode == 0
```

Examples

Creating a namespace
namespace_creation.yaml
```yaml
---
apiVersion: canaries.flanksource.com/v1
kind: Canary
metadata:
  name: namespace-creation
  namespace: default
  labels:
    "Expected-Fail": "false"
spec:
  schedule: "@every 5m"
  kubernetesResource:
    - name: "namespace creation"
      namespace: "default"
      description: "create a namespace and pod in it"
      waitFor:
        timeout: 3m
        delete: true
      staticResources:
        - apiVersion: v1
          kind: Namespace
          metadata:
            name: test
      resources:
        - apiVersion: v1
          kind: Pod
          metadata:
            name: httpbin
            namespace: test
            labels:
              app: httpbin
          spec:
            containers:
              - name: httpbin
                image: "kennethreitz/httpbin:latest"
                ports:
                  - containerPort: 80
```
warning

Since static resources are deleted when the canary is deleted, take extra care when providing their manifests. When this canary is deleted, the test namespace is deleted and, with it, all the other resources inside it, even those not created by this check.

Crossplane Example
crossplane.yaml
```yaml
apiVersion: canaries.flanksource.com/v1
kind: Canary
metadata:
  name: crossplane-kubernetes-resource
spec:
  schedule: "@every 10m"
  kubernetesResource:
    - name: crossplane-kubernetes-resource
      namespace: canaries
      description: "Create an S3 bucket via crossplane and run s3 check on it"
      waitFor:
        expr: 'dyn(resources).all(r, has(r.Object.status.atProvider) && has(r.Object.status.atProvider.arn))'
        interval: 30s
        timeout: 5m
      resources:
        - apiVersion: s3.aws.crossplane.io/v1beta1
          kind: Bucket
          metadata:
            name: check-bucket
          spec:
            forProvider:
              acl: private
              locationConstraint: us-east-1
            providerConfigRef:
              name: localstack
      checks:
        - s3:
            - name: s3-check
              bucketName: "{{ (index .resources 0).Object.metadata.name }}"
              objectPath: dummy
              region: "{{ (index .resources 0).Object.spec.forProvider.locationConstraint }}"
              url: http://localstack-localstack.localstack.svc.cluster.local:4566
              usePathStyle: true
              accessKey:
                value: test
              secretKey:
                value: test
      checkRetries:
        delay: 60s
        interval: 10s
        timeout: 5m
```