[Q22-Q44] Exam CKS Realistic Dumps Verified Questions Free [Mar 01, 2025]

Valid CKS Dumps for Helping You Pass the Linux Foundation Exam!

The CKS certification is vendor-neutral, which means that it is not tied to any specific technology or vendor. This enables IT professionals to demonstrate their competence in Kubernetes security, regardless of the tools or platforms they use. The CKS exam covers a broad range of topics, including Kubernetes architecture and components, security best practices, network security, cluster hardening, and monitoring and logging. Successful candidates will be able to identify and mitigate security risks and vulnerabilities in Kubernetes environments. The Linux Foundation launched the Certified Kubernetes Security Specialist (CKS) exam to assess and validate the skills and knowledge of IT professionals who specialize in securing Kubernetes clusters.

NEW QUESTION 22
Enable audit logs in the cluster. To do so, enable the log backend, and ensure that:
1. logs are stored at /var/log/kubernetes-logs.txt
2. log files are retained for 12 days
3. at maximum, 8 old audit log files are retained
4. the maximum size before rotation is 200 MB

Edit and extend the basic policy to log:
1. namespaces changes at the RequestResponse level
2. the request body of secrets changes in the namespace kube-system
3. all other resources in core and extensions at the Request level
4. "pods/portforward" and "services/proxy" at the Metadata level
5. omit the RequestReceived stage
All other requests at the Metadata level.

Kubernetes auditing provides a security-relevant, chronological set of records about a cluster. The kube-apiserver performs auditing: each request, at each stage of its execution, generates an event, which is then pre-processed according to a certain policy and written to a backend. The policy determines what is recorded, and the backends persist the records. You might want to configure the audit log as part of compliance with the CIS (Center for Internet Security) Kubernetes Benchmark controls.

The audit log can be enabled by default using the following configuration in cluster.yml:

services:
  kube-api:
    audit_log:
      enabled: true

When the audit log is enabled, you should be able to see the default values at /etc/kubernetes/audit-policy.yaml. The log backend writes audit events to a file in JSON lines format. You can configure the log audit backend using the following kube-apiserver flags:

--audit-log-path specifies the log file path that the log backend uses to write audit events. Not specifying this flag disables the log backend; - means standard out.
--audit-log-maxage defines the maximum number of days to retain old audit log files.
--audit-log-maxbackup defines the maximum number of audit log files to retain.
--audit-log-maxsize defines the maximum size in megabytes of the audit log file before it gets rotated.

If your cluster's control plane runs the kube-apiserver as a Pod, remember to mount the hostPath to the location of the policy file and log file, so that audit records are persisted. For example:

--audit-policy-file=/etc/kubernetes/audit-policy.yaml
--audit-log-path=/var/log/audit.log
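Mapped onto this task's specific requirements, a minimal sketch follows (one reasonable interpretation, not necessarily the exam's only accepted answer). The flag values correspond directly to requirements 1-4, and rule order matters in an audit policy because the first matching rule wins, so the narrow Metadata rule for "pods/portforward" and "services/proxy" must precede the broader Request rule for the core and extensions groups:

kube-apiserver flags:

--audit-policy-file=/etc/kubernetes/audit-policy.yaml
--audit-log-path=/var/log/kubernetes-logs.txt
--audit-log-maxage=12
--audit-log-maxbackup=8
--audit-log-maxsize=200

audit-policy.yaml:

apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
  - "RequestReceived"            # requirement 5: omit the RequestReceived stage
rules:
  - level: RequestResponse       # namespaces changes at RequestResponse
    resources:
      - group: ""
        resources: ["namespaces"]
  - level: Request               # request body of secrets changes in kube-system
    resources:
      - group: ""
        resources: ["secrets"]
    namespaces: ["kube-system"]
  - level: Metadata              # must precede the core/extensions Request rule below
    resources:
      - group: ""
        resources: ["pods/portforward", "services/proxy"]
  - level: Request               # all other resources in core and extensions
    resources:
      - group: ""
      - group: "extensions"
  - level: Metadata              # catch-all: everything else at Metadata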
NEW QUESTION 23
Context: Cluster: prod; Master node: master1; Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context prod
Task: Analyse and edit the given Dockerfile (based on the ubuntu:18.04 image) at /home/cert_masters/Dockerfile, fixing two instructions present in the file that are prominent security/best-practice issues. Analyse and edit the given manifest file /home/cert_masters/mydeployment.yaml, fixing two fields present in the file that are prominent security/best-practice issues.
Note: Don't add or remove configuration settings; only modify the existing configuration settings, so that two configuration settings each are no longer security/best-practice concerns. Should you need an unprivileged user for any of the tasks, use the user nobody with user id 65535.

1. For the Dockerfile: fix the image version and the user name.
2. For mydeployment.yaml: fix the security context.

Explanation
[desk@cli] $ vim /home/cert_masters/Dockerfile
FROM ubuntu:latest    # Remove this
FROM ubuntu:18.04     # Add this
USER root             # Remove this
USER nobody           # Add this
RUN apt-get install -y lsof=4.72 wget=1.17.1 nginx=4.2
ENV ENVIRONMENT=testing
USER root             # Remove this
USER nobody           # Add this
CMD ["nginx -d"]

[desk@cli] $ vim /home/cert_masters/mydeployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: kafka
  name: kafka
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: kafka
    spec:
      containers:
      - image: bitnami/kafka
        name: kafka
        volumeMounts:
        - name: kafka-vol
          mountPath: /var/lib/kafka
        securityContext:
          {"capabilities":{"add":["NET_ADMIN"],"drop":["all"]},"privileged": true,"readOnlyRootFilesystem": false,"runAsUser": 65535}    # Delete this
          {"capabilities":{"add":["NET_ADMIN"],"drop":["all"]},"privileged": false,"readOnlyRootFilesystem": true,"runAsUser": 65535}    # Add this
        resources: {}
      volumes:
      - name: kafka-vol
        emptyDir: {}
status: {}
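For readability, the corrected securityContext from the flow-style JSON line above is equivalent to this conventional block-style YAML:

securityContext:
  capabilities:
    add: ["NET_ADMIN"]
    drop: ["all"]
  privileged: false              # do not run the container privileged
  readOnlyRootFilesystem: true   # immutable root filesystem
  runAsUser: 65535               # unprivileged user, per the task note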
NEW QUESTION 24
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context dev
Context: A CIS Benchmark tool was run against the kubeadm-created cluster and found multiple issues that must be addressed.
Task: Fix all issues via configuration and restart the affected components to ensure the new settings take effect.
Fix all of the following violations that were found against the API server:
1.2.7 Ensure that the --authorization-mode argument is not set to AlwaysAllow FAIL
1.2.8 Ensure that the --authorization-mode argument includes Node FAIL
1.2.9 Ensure that the --authorization-mode argument includes RBAC FAIL
Fix all of the following violations that were found against the kubelet:
4.2.1 Ensure that the --anonymous-auth argument is set to false FAIL
4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow FAIL (use Webhook authn/authz where possible)
Fix all of the following violations that were found against etcd:
2.2 Ensure that the --client-cert-auth argument is set to true

worker1 $ vim /var/lib/kubelet/config.yaml
anonymous:
  enabled: true     # Delete this
  enabled: false    # Replace by this
authorization:
  mode: AlwaysAllow    # Delete this
  mode: Webhook        # Replace by this
worker1 $ systemctl restart kubelet    # To reload the kubelet config

ssh to master1
master1 $ vim /etc/kubernetes/manifests/kube-apiserver.yaml
  - --authorization-mode=Node,RBAC
master1 $ vim /etc/kubernetes/manifests/etcd.yaml
  - --client-cert-auth=true

Explanation
ssh to worker1
worker1 $ vim /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: true     # Delete this
    enabled: false    # Replace by this
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: AlwaysAllow    # Delete this
  mode: Webhook        # Replace by this
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
worker1 $ systemctl restart kubelet    # To reload the kubelet config
ssh to master1
master1 $ vim /etc/kubernetes/manifests/kube-apiserver.yaml
master1 $ vim /etc/kubernetes/manifests/etcd.yaml

NEW QUESTION 25
SIMULATION
Fix all issues via configuration and restart the affected components to ensure the new setting takes effect.
Fix all of the following violations that were found against the API server:
a. Ensure that the --authorization-mode argument includes RBAC
b. Ensure that the --authorization-mode argument includes Node
c. Ensure that the --profiling argument is set to false
Fix all of the following violations that were found against the kubelet:
a. Ensure that the --anonymous-auth argument is set to false
b. Ensure that the --authorization-mode argument is set to Webhook
Fix all of the following violations that were found against etcd:
a. Ensure that the --auto-tls argument is not set to true
Hint: Make use of the kube-bench tool.
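As a starting point, kube-bench can reproduce these findings. A sketch of typical invocations follows (flag names as in recent kube-bench releases; verify against the installed version):

# Run the control-plane (master) checks
kube-bench run --targets master
# Run the node checks on a worker
kube-bench run --targets node
# Re-check only specific CIS controls after remediation
kube-bench run --targets master --check 1.2.7,1.2.8,1.2.9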
API server:
Ensure that the --authorization-mode argument includes RBAC.
Turn on Role Based Access Control. Role Based Access Control (RBAC) allows fine-grained control over the operations that different entities can perform on different objects in the cluster. It is recommended to use the RBAC authorization mode.

Fix - Buildtime
Kubernetes
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    + - kube-apiserver
    + - --authorization-mode=RBAC,Node
    image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver-should-pass
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/kubernetes/
      name: k8s
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: certs
    - mountPath: /etc/pki
      name: pki
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes
    name: k8s
  - hostPath:
      path: /etc/ssl/certs
    name: certs
  - hostPath:
      path: /etc/pki
    name: pki

Ensure that the --authorization-mode argument includes Node.
Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the --authorization-mode parameter to a value that includes Node.
--authorization-mode=Node,RBAC
Audit:
/bin/ps -ef | grep kube-apiserver | grep -v grep
Expected result: 'Node,RBAC' has 'Node'

Ensure that the --profiling argument is set to false.
Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the parameter below.
--profiling=false
Audit:
/bin/ps -ef | grep kube-apiserver | grep -v grep
Expected result: 'false' is equal to 'false'

Fix all of the following violations that were found against the kubelet:
1) Ensure that the --anonymous-auth argument is set to false.
Remediation: If using a kubelet config file, edit the file to set authentication: anonymous: enabled to false. If using executable arguments, edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and set the parameter below in the KUBELET_SYSTEM_PODS_ARGS variable.
--anonymous-auth=false
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
Audit:
/bin/ps -fC kubelet
Audit Config:
/bin/cat /var/lib/kubelet/config.yaml
Expected result: 'false' is equal to 'false'

2) Ensure that the --authorization-mode argument is set to Webhook.
Audit:
docker inspect kubelet | jq -e '.[0].Args[] | match("--authorization-mode=Webhook").string'
Returned value: --authorization-mode=Webhook

Fix all of the following violations that were found against etcd:
a. Ensure that the --auto-tls argument is not set to true.
Do not use self-signed certificates for TLS. etcd is a highly available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should not be available to unauthenticated clients.
You should enable client authentication via valid certificates to secure access to the etcd service.
Fix - Buildtime
Kubernetes
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    + - etcd
    + - --auto-tls=false    # --auto-tls must not be set to true
    image: k8s.gcr.io/etcd-amd64:3.2.18
    imagePullPolicy: IfNotPresent
    livenessProbe:
      exec:
        command:
        - /bin/sh
        - -ec
        - ETCDCTL_API=3 etcdctl --endpoints=https://[192.168.22.9]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo
      failureThreshold: 8
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
status: {}
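After editing the static pod manifest, a quick runtime check on the master node can confirm both etcd findings from Questions 24 and 25. A sketch, assuming a default kubeadm layout:

# Verify the effective etcd flags
ps -ef | grep etcd | grep -v grep | tr ' ' '\n' | grep -E 'auto-tls|client-cert-auth'
# Or inspect the manifest directly
grep -E 'auto-tls|client-cert-auth' /etc/kubernetes/manifests/etcd.yaml
# Expected: --client-cert-auth=true is present, and --auto-tls is absent or set to false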
NEW QUESTION 26
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context dev
Context: A CIS Benchmark tool was run against the kubeadm-created cluster and found multiple issues that must be addressed.
Task: Fix all issues via configuration and restart the affected components to ensure the new settings take effect.
Fix all of the following violations that were found against the API server:
1.2.7 Ensure that the --authorization-mode argument is not set to AlwaysAllow FAIL
1.2.8 Ensure that the --authorization-mode argument includes Node FAIL
1.2.9 Ensure that the --authorization-mode argument includes RBAC FAIL
Fix all of the following violations that were found against the kubelet:
4.2.1 Ensure that the --anonymous-auth argument is set to false FAIL
4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow FAIL (use Webhook authn/authz where possible)
Fix all of the following violations that were found against etcd:
2.2 Ensure that the --client-cert-auth argument is set to true

This question and its fix are identical to Question 24: on worker1, set anonymous.enabled to false and authorization.mode to Webhook in /var/lib/kubelet/config.yaml and restart the kubelet; then on master1, set --authorization-mode=Node,RBAC in /etc/kubernetes/manifests/kube-apiserver.yaml and --client-cert-auth=true in /etc/kubernetes/manifests/etcd.yaml.

NEW QUESTION 27
Use Trivy to scan the following images:
1. amazonlinux:1
2. k8s.gcr.io/kube-controller-manager:v1.18.6
Look for images with HIGH or CRITICAL severity vulnerabilities and store the output in /opt/trivy-vulnerable.txt.
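The dump provides no worked answer here; a minimal sketch using Trivy's image subcommand, assuming Trivy is pre-installed on the node where the output file must be written:

# Scan both images, keeping only HIGH and CRITICAL findings, and append to the report
trivy image --severity HIGH,CRITICAL amazonlinux:1 >> /opt/trivy-vulnerable.txt
trivy image --severity HIGH,CRITICAL k8s.gcr.io/kube-controller-manager:v1.18.6 >> /opt/trivy-vulnerable.txt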
NEW QUESTION 28
Create a Pod named nginx-pod inside the namespace testing. Create a service for the nginx-pod named nginx-svc, using the ingress of your choice, and run the ingress on a TLS-secured port.

$ kubectl get ing -n <namespace-of-ingress-resource>
NAME           HOSTS      ADDRESS     PORTS   AGE
cafe-ingress   cafe.com   10.0.2.15   80      25s
$ kubectl describe ing <ingress-resource-name> -n <namespace-of-ingress-resource>
Name:             cafe-ingress
Namespace:        default
Address:          10.0.2.15
Default backend:  default-http-backend:80 (172.17.0.5:8080)
Rules:
  Host      Path     Backends
  ----      ----     --------
  cafe.com  /tea     tea-svc:80 (<none>)
            /coffee  coffee-svc:80 (<none>)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{},"name":"cafe-ingress","namespace":"default","selfLink":"/apis/networking/v1/namespaces/default/ingresses/cafe-ingress"},"spec":{"rules":[{"host":"cafe.com","http":{"paths":[{"backend":{"serviceName":"tea-svc","servicePort":80},"path":"/tea"},{"backend":{"serviceName":"coffee-svc","servicePort":80},"path":"/coffee"}]}}]},"status":{"loadBalancer":{"ingress":[{"ip":"169.48.142.110"}]}}}
Events:
  Type    Reason  Age  From                      Message
  ----    ------  ---  ----                      -------
  Normal  CREATE  1m   ingress-nginx-controller  Ingress default/cafe-ingress
  Normal  UPDATE  58s  ingress-nginx-controller  Ingress default/cafe-ingress
$ kubectl get pods -n <namespace-of-ingress-controller>
NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-67956bf89d-fv58j   1/1     Running   0          1m
$ kubectl logs -n <namespace> ingress-nginx-controller-67956bf89d-fv58j
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    0.14.0
  Build:      git-734361d
  Repository: https://github.com/kubernetes/ingress-nginx
-------------------------------------------------------------------------------
....

NEW QUESTION 29
You must complete this task on the following cluster/nodes: Cluster: immutable-cluster; Master node: master1; Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context immutable-cluster
Context: It is best practice to design containers to be stateless and immutable.
Task: Inspect Pods running in namespace prod and delete any Pod that is either not stateless or not immutable.
Use the following strict interpretation of stateless and immutable:
1. Pods being able to store data inside containers must be treated as not stateless. Note: You don't have to worry about whether data is actually stored inside containers already.
2. Pods being configured to be privileged in any way must be treated as potentially not stateless or not immutable.
Reference: https://kubernetes.io/docs/concepts/policy/pod-security-policy/ https://cloud.google.com/architecture/best-practices-for-operating-containers

NEW QUESTION 30
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context stage
Context: A PodSecurityPolicy shall prevent the creation of privileged Pods in a specific namespace.
Task:
1. Create a new PodSecurityPolicy named deny-policy, which prevents the creation of privileged Pods.
2. Create a new ClusterRole named deny-access-role, which uses the newly created PodSecurityPolicy deny-policy.
3. Create a new ServiceAccount named psp-denial-sa in the existing namespace development.
Finally, create a new ClusterRoleBinding named restrict-access-bind, which binds the newly created ClusterRole deny-access-role to the newly created ServiceAccount psp-denial-sa.

Explanation
Create a PSP to disallow privileged containers:
master1 $ vim psp.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: deny-policy
spec:
  privileged: false    # Don't allow privileged pods!
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
master1 $ vim cr1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deny-access-role
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames:
  - "deny-policy"
master1 $ k create sa psp-denial-sa -n development
master1 $ vim cb1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: restrict-access-bind
roleRef:
  kind: ClusterRole
  name: deny-access-role
  apiGroup: rbac.authorization.k8s.io
subjects:
# Authorize specific service accounts:
- kind: ServiceAccount
  name: psp-denial-sa
  namespace: development
master1 $ k apply -f psp.yaml
master1 $ k apply -f cr1.yaml
master1 $ k apply -f cb1.yaml
Reference: https://kubernetes.io/docs/concepts/policy/pod-security-policy/

NEW QUESTION 31
Given an existing Pod named test-web-pod running in the namespace test-system:
Edit the existing Role bound to the Pod's ServiceAccount sa-backend to only allow performing get operations on endpoints.
Create a new Role named test-system-role-2 in the namespace test-system, which can perform patch operations on resources of type statefulsets.
Create a new RoleBinding named test-system-role-2-binding, binding the newly created Role to the Pod's ServiceAccount sa-backend.
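The dump provides no worked answer for this one; a sketch using imperative kubectl commands (the name of the existing Role is not given, so <existing-role> is a placeholder):

# Edit the existing Role: restrict it to get on endpoints
kubectl edit role <existing-role> -n test-system
# ...and set its rules to:
# rules:
# - apiGroups: [""]
#   resources: ["endpoints"]
#   verbs: ["get"]

# Create the new Role and bind it to the Pod's ServiceAccount
kubectl create role test-system-role-2 -n test-system --verb=patch --resource=statefulsets
kubectl create rolebinding test-system-role-2-binding -n test-system --role=test-system-role-2 --serviceaccount=test-system:sa-backend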
NEW QUESTION 32
SIMULATION
Create a network policy named allow-np that allows Pods in the namespace staging to connect to port 80 of other Pods in the same namespace.
Ensure that the network policy:
1. does not allow access to Pods not listening on port 80;
2. does not allow access from Pods not in the namespace staging.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-np
  namespace: staging
spec:
  podSelector: {}    # selects all the Pods in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}    # only Pods in the same namespace (staging)
    ports:               # incoming traffic allowed only on port 80
    - protocol: TCP
      port: 80

NEW QUESTION 33
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context dev
A default-deny NetworkPolicy avoids accidentally exposing a Pod in a namespace that doesn't have any other NetworkPolicy defined.
Task: Create a new default-deny NetworkPolicy named deny-network in the namespace test for all traffic of type Ingress + Egress. The new NetworkPolicy must deny all Ingress + Egress traffic in the namespace test. Apply the newly created default-deny NetworkPolicy to all Pods running in namespace test.
You can find a skeleton manifest file at /home/cert_masters/network-policy.yaml

Explanation
master1 $ k get pods -n test --show-labels
NAME       READY   STATUS    RESTARTS   AGE   LABELS
test-pod   1/1     Running   0          34s   role=test,run=test-pod
testing    1/1     Running   0          17d   run=testing
master1 $ vim netpol.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-network
  namespace: test
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
master1 $ k apply -f netpol.yaml
Reference: https://kubernetes.io/docs/concepts/services-networking/network-policies/
NEW QUESTION 34
Fix all issues via configuration and restart the affected components to ensure the new setting takes effect.
Fix all of the following violations that were found against the API server:
a. Ensure that the --authorization-mode argument includes RBAC
b. Ensure that the --authorization-mode argument includes Node
c. Ensure that the --profiling argument is set to false
Fix all of the following violations that were found against the kubelet:
a. Ensure that the --anonymous-auth argument is set to false
b. Ensure that the --authorization-mode argument is set to Webhook
Fix all of the following violations that were found against etcd:
a. Ensure that the --auto-tls argument is not set to true
Hint: Make use of the kube-bench tool.
This question and its remediation are identical to Question 25: set --authorization-mode=Node,RBAC and --profiling=false in /etc/kubernetes/manifests/kube-apiserver.yaml, set --anonymous-auth=false and the Webhook authorization mode for the kubelet (then restart it), and ensure etcd is started without --auto-tls=true.

NEW QUESTION 35
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context test-account
Task: Enable audit logs in the cluster. To do so, enable the log backend, and ensure that:
1. logs are stored at /var/log/Kubernetes/logs.txt
2. log files are retained for 5 days
3. at maximum, 10 old audit log files are retained
A basic policy is provided at /etc/Kubernetes/logpolicy/audit-policy.yaml. It only specifies what not to log.
Note: The base policy is located on the cluster's master node.
Edit and extend the basic policy to log:
1. nodes changes at the RequestResponse level
2. the request body of persistentvolumes changes in the namespace frontend
3. configmap and secret changes in all namespaces at the Metadata level
Also, add a catch-all rule to log all other requests at the Metadata level.
Note: Don't forget to apply the modified policy.
$ vim /etc/kubernetes/log-policy/audit-policy.yaml
- level: RequestResponse
  resources:
  - group: ""    # core API group
    resources: ["nodes"]
- level: Request
  resources:
  - group: ""    # core API group
    resources: ["persistentvolumes"]
  namespaces: ["frontend"]
- level: Metadata
  resources:
  - group: ""
    resources: ["configmaps", "secrets"]
- level: Metadata

$ vim /etc/kubernetes/manifests/kube-apiserver.yaml
Add these:
- --audit-policy-file=/etc/kubernetes/log-policy/audit-policy.yaml
- --audit-log-path=/var/log/kubernetes/logs.txt
- --audit-log-maxage=5
- --audit-log-maxbackup=10

Explanation
[desk@cli] $ ssh master1
[master1@cli] $ vim /etc/kubernetes/log-policy/audit-policy.yaml
apiVersion: audit.k8s.io/v1    # This is required.
kind: Policy
# Don't generate audit events for all requests in the RequestReceived stage.
omitStages:
- "RequestReceived"
rules:
# Don't log watch requests by "system:kube-proxy" on endpoints or services
- level: None
  users: ["system:kube-proxy"]
  verbs: ["watch"]
  resources:
  - group: ""    # core API group
    resources: ["endpoints", "services"]
# Don't log authenticated requests to certain non-resource URL paths.
- level: None
  userGroups: ["system:authenticated"]
  nonResourceURLs:
  - "/api*"    # Wildcard matching.
  - "/version"
# Add your changes below
- level: RequestResponse
  resources:
  - group: ""    # core API group
    resources: ["nodes"]    # Rule for nodes changes
- level: Request
  resources:
  - group: ""    # core API group
    resources: ["persistentvolumes"]    # Rule for persistentvolumes
  namespaces: ["frontend"]    # persistentvolumes changes in the frontend namespace
- level: Metadata
  resources:
  - group: ""    # core API group
    resources: ["configmaps", "secrets"]    # Rule for configmaps & secrets
- level: Metadata    # Catch-all for everything else
[master1@cli] $ vim /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.0.0.5:6443
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=10.0.0.5
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --audit-policy-file=/etc/kubernetes/log-policy/audit-policy.yaml    # Add this
    - --audit-log-path=/var/log/kubernetes/logs.txt                       # Add this
    - --audit-log-maxage=5                                                # Add this
    - --audit-log-maxbackup=10                                            # Add this
...output truncated
Note: The log volume and policy volume are already mounted in /etc/kubernetes/manifests/kube-apiserver.yaml, so there is no need to mount them.
Reference: https://kubernetes.io/docs/tasks/debug-application-cluster/audit/

NEW QUESTION 36
Two tools are pre-installed on the cluster's worker node:
Using the tool of your choice (including any non-pre-installed tool), analyze the container's behavior for at least 30 seconds, using filters that detect newly spawning and executing processes.
Store an incident file at /opt/KSRS00101/alerts/details, containing the detected incidents, one per line, in the following format:
The following example shows a properly formatted incident file:

NEW QUESTION 37
Create a network policy named restrict-np to restrict access to the Pod nginx-test running in namespace testing.
Only allow the following Pods to connect to Pod nginx-test:
1. Pods in the namespace default
2. Pods with label version: v1 in any namespace
Make sure to apply the network policy.
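No answer is provided in the dump for Question 37; a sketch of one policy that satisfies both rules. It assumes Pod nginx-test carries the label run: nginx-test (a hypothetical label; adjust the podSelector to the Pod's real labels) and relies on the kubernetes.io/metadata.name namespace label available on Kubernetes v1.21+:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-np
  namespace: testing
spec:
  podSelector:
    matchLabels:
      run: nginx-test    # hypothetical; check with: kubectl get pod nginx-test -n testing --show-labels
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:    # rule 1: any Pod in the default namespace
        matchLabels:
          kubernetes.io/metadata.name: default
    - namespaceSelector: {}    # rule 2: Pods labelled version=v1 in any namespace
      podSelector:
        matchLabels:
          version: v1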
NEW QUESTION 38
You must complete this task on the following cluster/nodes: Cluster: trace; Master node: master; Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context trace
Given: You may use Sysdig or Falco documentation.
Task: Use detection tools to detect anomalies, such as processes frequently spawning and executing something unusual, in the single container belonging to Pod tomcat. Two tools are available:
1. falco
2. sysdig
The tools are pre-installed on the worker1 node only. Analyse the container's behaviour for at least 40 seconds, using filters that detect newly spawning and executing processes. Store an incident file at /home/cert_masters/report, in the following format: [timestamp],[uid],[processName]
Note: Make sure to store the incident file on the cluster's worker node; don't move it to the master node.

$ vim /etc/falco/falco_rules.local.yaml
- rule: Container Drift Detected (open+create)
  desc: New executable created in a container due to open+create
  condition: >
    evt.type in (open,openat,creat) and
    evt.is_open_exec=true and
    container and
    not runc_writing_exec_fifo and
    not runc_writing_var_lib_docker and
    not user_known_container_drift_activities and
    evt.rawres>=0
  output: >
    %evt.time,%user.uid,%proc.name    # Add this / refer to the Falco documentation
  priority: ERROR
$ kill -1 <PID of falco>    # SIGHUP makes Falco reload its rules

Explanation
[desk@cli] $ ssh worker1
[worker1@cli] $ vim /etc/falco/falco_rules.yaml
# search for "Container Drift Detected" and paste the rule into falco_rules.local.yaml
[worker1@cli] $ vim /etc/falco/falco_rules.local.yaml
- rule: Container Drift Detected (open+create)
  desc: New executable created in a container due to open+create
  condition: >
    evt.type in (open,openat,creat) and
    evt.is_open_exec=true and
    container and
    not runc_writing_exec_fifo and
    not runc_writing_var_lib_docker and
    not user_known_container_drift_activities and
    evt.rawres>=0
  output: >
    %evt.time,%user.uid,%proc.name    # Add this / refer to the Falco documentation
  priority: ERROR
[worker1@cli] $ vim /etc/falco/falco.yaml

NEW QUESTION 39
Use the kubesec docker image to scan the given YAML manifest, edit and apply the advised changes, and pass with a score of 4 points.
kubesec-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubesec-demo
spec:
  containers:
  - name: kubesec-demo
    image: gcr.io/google-samples/node-hello:1.0
    securityContext:
      readOnlyRootFilesystem: true
Hint: docker run -i kubesec/kubesec:512c5e0 scan /dev/stdin < kubesec-test.yaml

kubesec scan k8s-deployment.yaml
cat <<EOF > kubesec-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubesec-demo
spec:
  containers:
  - name: kubesec-demo
    image: gcr.io/google-samples/node-hello:1.0
    securityContext:
      readOnlyRootFilesystem: true
EOF
kubesec scan kubesec-test.yaml
docker run -i kubesec/kubesec:512c5e0 scan /dev/stdin < kubesec-test.yaml
kubesec http 8080 &
[1] 12345
{"severity":"info","timestamp":"2019-05-12T11:58:34.662+0100","caller":"server/server.go:69","message":"Starting HTTP server on port 8080"}
curl -sSX POST --data-binary @test/asset/score-0-cap-sys-admin.yml http://localhost:8080/scan
[{
  "object": "Pod/security-context-demo.default",
  "valid": true,
  "message": "Failed with a score of -30 points",
  "score": -30,
  "scoring": {
    "critical": [{
      "selector": "containers[] .securityContext .capabilities .add == SYS_ADMIN",
      "reason": "CAP_SYS_ADMIN is the most privileged capability and should always be avoided"
    },
    {
      "selector": "containers[] .securityContext .runAsNonRoot == true",
      "reason": "Force the running image to run as a non-root user to ensure least privilege"
    },
    // ...
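The output above only shows a failing example. As a sketch, an edited manifest along the following lines typically scores well above the required 4 points; kubesec's exact point values vary by version, so re-scan to confirm:

apiVersion: v1
kind: Pod
metadata:
  name: kubesec-demo
spec:
  serviceAccountName: default    # an explicit serviceAccountName is one scored criterion
  containers:
  - name: kubesec-demo
    image: gcr.io/google-samples/node-hello:1.0
    securityContext:
      readOnlyRootFilesystem: true
      runAsNonRoot: true    # run as a non-root user
      runAsUser: 31337      # a high non-root UID scores an additional point
      capabilities:
        drop: ["ALL"]       # drop all capabilities
    resources:              # requests and limits are also scored
      requests:
        cpu: 100m
        memory: 64Mi
      limits:
        cpu: 200m
        memory: 128Mi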
NEW QUESTION 40
Cluster: scanner; Master node: controlplane; Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context scanner
Given: You may use Trivy's documentation.
Task: Use the Trivy open-source container scanner to detect images with severe vulnerabilities used by Pods in the namespace nato. Look for images with High or Critical severity vulnerabilities and delete the Pods that use those images.
Trivy is pre-installed on the cluster's master node. Use the cluster's master node to run Trivy.

NEW QUESTION 41
SIMULATION
A service is running on port 389 inside the system. Find the process ID of the process, store the names of all its open files in /candidate/KH77539/files.txt, and delete the binary.

NEW QUESTION 42
Context: This cluster uses containerd as its CRI runtime. Containerd's default runtime handler is runc. Containerd has been prepared to support an additional runtime handler, runsc (gVisor).
Task: Create a RuntimeClass named sandboxed using the prepared runtime handler named runsc. Update all Pods in the namespace server to run on gVisor. (See the sketch at the end of this page.)

NEW QUESTION 43
A service is running on port 389 inside the system. Find the process ID of the process, store the names of all its open files in /candidate/KH77539/files.txt, and delete the binary.

root# netstat -ltnup
Active Internet connections (only servers)
Proto  Recv-Q  Send-Q  Local Address    Foreign Address  State   PID/Program name
tcp    0       0       127.0.0.1:17600  0.0.0.0:*        LISTEN  1293/dropbox
tcp    0       0       127.0.0.1:17603  0.0.0.0:*        LISTEN  1293/dropbox
tcp    0       0       0.0.0.0:22       0.0.0.0:*        LISTEN  575/sshd
tcp    0       0       127.0.0.1:9393   0.0.0.0:*        LISTEN  900/perl
tcp    0       0       :::80            :::*             LISTEN  9583/docker-proxy
tcp    0       0       :::443           :::*             LISTEN  9571/docker-proxy
udp    0       0       0.0.0.0:68       0.0.0.0:*                8822/dhcpcd
...
root# netstat -ltnup | grep ':22'
tcp    0       0       0.0.0.0:22       0.0.0.0:*        LISTEN  575/sshd
The ss command is the replacement for the netstat command. Here is how to use ss to see which process is listening on port 22:
root# ss -ltnup 'sport = :22'
Netid  State   Recv-Q  Send-Q  Local Address:Port  Peer Address:Port
tcp    LISTEN  0       128     0.0.0.0:22          0.0.0.0:*    users:(("sshd",pid=575,fd=3))

NEW QUESTION 44
Context: A Role bound to a Pod's ServiceAccount grants overly permissive permissions. Complete the following tasks to reduce the set of permissions.
Task: Given an existing Pod named web-pod running in the namespace security:
Edit the existing Role bound to the Pod's ServiceAccount sa-dev-1 to only allow performing watch operations, only on resources of type services.
Create a new Role named role-2 in the namespace security, which only allows performing update operations, only on resources of type namespaces.
Create a new RoleBinding named role-2-binding, binding the newly created Role to the Pod's ServiceAccount.

CKS Exam Dumps For Certification Exam Preparation: https://www.vceprep.com/CKS-latest-vce-prep.html
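As referenced in Question 42 above, a minimal sketch for the RuntimeClass task. It assumes the Pods in namespace server are managed by Deployments; bare Pods would have to be re-created instead, since runtimeClassName cannot be changed on a running Pod:

# runtimeclass.yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: sandboxed
handler: runsc    # the prepared gVisor runtime handler

kubectl apply -f runtimeclass.yaml
# Point every Deployment's Pod template at the new RuntimeClass
kubectl get deploy -n server
kubectl -n server patch deploy <name> --type merge \
  -p '{"spec":{"template":{"spec":{"runtimeClassName":"sandboxed"}}}}'
# Verify the re-created Pods run under gVisor
kubectl get pods -n server -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.spec.runtimeClassName}{"\n"}{end}'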