
Kubectl Top Command: Secrets Behind the Scenes

Published at: 10/20/2024
Categories: kubernetes, metrics, cpu, memory
Author: soumya14041987

Metrics Server is typically installed as an add-on in Kubernetes clusters, including Minikube. It is not installed by default in most Kubernetes clusters, but it can easily be added as an optional add-on that provides real-time resource utilization data for nodes and pods.

It is a lightweight service designed to work with the Kubernetes Metrics API to provide metrics like CPU and memory usage for horizontal pod autoscaling (HPA), the kubectl top command, and more.

If the Metrics Server is not installed, commands like kubectl top or the Horizontal Pod Autoscaler won’t have the necessary data for CPU and memory metrics.
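Once the Metrics Server is installed and serving, you can query the Metrics API directly as a quick sanity check; this should return JSON describing current node usage (it is not something you would use day to day, but it shows what kubectl top reads behind the scenes):

kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"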

In a Minikube Cluster:

Minikube is a local Kubernetes cluster and, like full Kubernetes clusters, it does not enable the Metrics Server by default.

You can enable it in Minikube with the dedicated add-on command shown below.
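The add-on command is a single line:

minikube addons enable metrics-server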
Alternatively, you can install the Metrics Server manifest directly; this approach works on Minikube as well as on other clusters:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

This will install the Metrics Server into your cluster.
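To confirm that the manifest also registered the Metrics API with the API server, you can check the APIService object it creates; the AVAILABLE column should eventually show True:

kubectl get apiservice v1beta1.metrics.k8s.io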

  1. Verify Metrics Server Deployment

After installing, check if the Metrics Server is running properly:

kubectl get pods -n kube-system


You should see a metrics-server pod running. For example:

NAME                                 READY   STATUS    RESTARTS   AGE
metrics-server-86cbb8457f-zkhrn       1/1     Running   0          1m

Now run the commands below to list resource usage for pods and nodes, either in the default namespace, in a specific namespace, or across all namespaces:

kubectl top pods 
kubectl top nodes 
kubectl top pods -n <namespace>
kubectl top pods --all-namespaces
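For example, kubectl top nodes prints per-node CPU and memory usage in a table like the one below (the numbers here are purely illustrative):

NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
minikube   250m         6%     1100Mi          14%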


Now let's dive a little deeper.
View Resource Usage for a Specific Pod:-

kubectl top pod <pod-name> -n <namespace>


View Container Resource Usage within a Pod

kubectl top pod <pod-name> -n <namespace> --containers


Sort by Resource Usage (CPU or Memory):-

kubectl top pods -n <namespace> --sort-by=cpu
kubectl top pods -n <namespace> --sort-by=memory


Now let's walk through a real-world troubleshooting scenario: after installing the Metrics Server, kubectl top commands fail with the error "Metrics API not available".
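The failure typically looks like this (the exact wording may vary slightly between kubectl versions):

$ kubectl top nodes
error: Metrics API not available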

Workaround, Step by Step:-
I assume the Metrics Server add-on is already installed, so I won't repeat the installation steps described above.

Verify Metrics Server Deployment

kubectl get pods -n kube-system


You should see a metrics-server pod running. For example:

metrics-server-86cbb8457f-zkhrn       1/1     Running   0          1m

Check Logs for Errors

kubectl logs -n kube-system <metrics-server-pod-name>
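If TLS verification against the kubelet is the problem, the logs usually contain scrape errors mentioning certificate validation. The line below is only an illustration of the kind of message to look for; the exact format, IP address, and node name will differ in your cluster:

"Failed to scrape node" err="x509: cannot validate certificate for 192.168.49.2 because it doesn't contain any IP SANs" node="minikube"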


Now comes the most important part:-
Ensure Proper Configuration
The Metrics Server requires proper API access and valid TLS certificates to function correctly. Common configuration issues include:

TLS or certificate issues: if the kubelets use self-signed certificates, the Metrics Server cannot verify them unless it is told to trust them or to skip verification.

Metrics Server flags: the Metrics Server must be able to scrape metrics from the kubelets. In a development environment you can set the --kubelet-insecure-tls flag on the Metrics Server itself (the exact deployment edit is shown below).

Wait for Metrics to Populate
After starting the Metrics Server, it may take a few minutes for it to collect and expose metrics from the nodes and pods. Try running kubectl top nodes again after a couple of minutes:

kubectl top nodes


Check Kubelet Configuration

The Kubelet (running on each node) must expose its metrics to the Metrics Server. Ensure that the --authentication-token-webhook and --authorization-mode=Webhook flags are enabled on your Kubelet configuration, allowing it to authenticate API requests.
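On kubeadm-based nodes these kubelet flags usually map to fields in the kubelet's config file (often /var/lib/kubelet/config.yaml, though the path and how the file is managed vary by distribution). A minimal sketch of the relevant fields, assuming that layout:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  webhook:
    enabled: true        # equivalent to --authentication-token-webhook
authorization:
  mode: Webhook          # equivalent to --authorization-mode=Webhook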

Edit the Metrics Server deployment:

kubectl edit deployment metrics-server -n kube-system


Add the --kubelet-insecure-tls flag under the args section of the container definition:

spec:
  containers:
  - name: metrics-server
    image: k8s.gcr.io/metrics-server/metrics-server:v0.6.2
    args:
      - --cert-dir=/tmp
      - --secure-port=4443
      - --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP
      - --kubelet-insecure-tls


Save and exit. Kubernetes will automatically update and restart the Metrics Server pod.

Restart Metrics Server
After applying any of the above changes, you should restart the Metrics Server to ensure it picks up the new configuration:

kubectl rollout restart deployment metrics-server -n kube-system
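If you want to wait for the rollout to finish before re-testing, you can watch its status:

kubectl rollout status deployment metrics-server -n kube-system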


Check Logs and Verify
After restarting, check the Metrics Server logs to ensure the issue is resolved:

kubectl logs -n kube-system <metrics-server-pod-name>


Now run the top commands again:

kubectl top nodes
kubectl top pods 


Conclusion:
The Metrics Server is an essential tool in Kubernetes, enabling real-time resource monitoring and driving features like the Horizontal Pod Autoscaler (HPA). While it focuses on lightweight and short-term metrics, it is a key component for ensuring efficient resource management and auto-scaling within a cluster.
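As a final illustration of why these metrics matter, an HPA can be created directly from the CPU data the Metrics Server provides. The deployment name web below is just a placeholder for one of your own workloads:

kubectl autoscale deployment web --cpu-percent=80 --min=2 --max=5
kubectl get hpa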
