AWS - Elastic Kubernetes Service (EKS)

Updated: Nov 27, 2023

1. What is Kubernetes and what does it do?

Kubernetes is an open-source platform for automating the deployment, scaling, and management of containerized applications. It helps you to orchestrate containers and manage the life cycle of your applications.

2. Which tools are you familiar with?

There are several tools that can be used with Kubernetes, including kubectl, kubeadm, Minikube, Helm, and others.


3. What are rolling updates in EKS?

Amazon Elastic Kubernetes Service (EKS) is a managed service that makes it easier to run Kubernetes on AWS without needing to maintain the Kubernetes control plane. Rolling updates, in the context of Kubernetes and EKS, refer to updating applications, node groups, or Kubernetes components without downtime. Here's a breakdown of the concept:

  1. Rolling updates for Applications: When you have an application running in a Kubernetes Deployment and you want to update it (e.g., deploy a new version of your app), you can use the rolling update strategy. Kubernetes gradually replaces old pods with new ones, so there is no downtime, and if anything goes wrong it can automatically roll back (see the sketch after this list).

  2. Rolling updates for EKS Worker Nodes: If you need to update or modify the EKS worker nodes (for example, to use a new AMI, change the instance type, or update the launch configuration/launch template), a rolling update will ensure that the nodes are updated one at a time or in small batches. As each node is updated, its workloads are evicted and rescheduled onto other nodes. Once the node is updated and rejoins the cluster, workloads can be scheduled back onto it.

  3. Rolling updates for Kubernetes Version Upgrades: Amazon EKS supports upgrading the Kubernetes version of an existing cluster without downtime. This involves updating components in a specific order: the control plane first, followed by the worker nodes. EKS manages the control plane update for you, but you are responsible for updating the worker nodes, again typically in a rolling fashion so workloads remain available throughout the process.
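
As a rough illustration of the first case, here is a minimal Deployment manifest using the RollingUpdate strategy; the app name, image, and replica count are placeholders:

yaml code
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # placeholder name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1              # at most one extra pod during the update
      maxUnavailable: 1        # at most one pod down at any time
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:v2   # placeholder image; changing this tag triggers a rolling update

Applying a manifest with a new image tag is what triggers the gradual pod replacement described above.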

Advantages of Rolling Updates:

  • Zero Downtime: Applications remain available to users, and this can be crucial for production environments.

  • Safety: If something goes wrong during the update, Kubernetes can automatically roll back to the previous version. Also, since not all instances are updated at once, you have a chance to catch issues early before they affect your entire environment.

  • Control: You can control the speed and batch size of the update, deciding how many pods or nodes to update simultaneously.

To implement rolling updates, it's essential to monitor the health of your applications and infrastructure, set proper readiness and liveness probes, and understand the update strategy configurations available in Kubernetes.
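
For example, readiness and liveness probes like the following tell Kubernetes when a new pod may receive traffic and when a running pod should be restarted, which is what keeps a rolling update safe. This is a container snippet from a Deployment pod spec; the endpoints and timings are illustrative:

yaml code
containers:
  - name: my-app
    image: my-registry/my-app:v2       # placeholder image
    readinessProbe:                    # gate traffic until the app reports ready
      httpGet:
        path: /healthz/ready           # hypothetical health endpoint
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:                     # restart the container if it stops responding
      httpGet:
        path: /healthz/live            # hypothetical health endpoint
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 20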


4. What is a CNF and when does it need restarting?

CNF stands for Cloud-Native Network Function. In the context of telecommunications and network infrastructure, CNFs are implementations of network functions that adhere to cloud-native principles: containerization (typically using Docker), dynamic management by orchestration platforms like Kubernetes, a microservices architecture, CI/CD, and scalability.

"CNF restarting" usually refers to restarting a cloud-native network function that's running inside a container, similar to how you'd restart a traditional VM-based network function. The difference is that CNFs are designed to be stateless (or at least to separate their state from processing) and lightweight, which makes them quicker to start, stop, or restart, and more resilient against failures. Here's why you might restart a CNF:

  1. Software Updates: If there's a new version of the CNF, it might need a restart after updating.

  2. Configuration Changes: Some configuration changes might require a CNF to restart to take effect.

  3. Troubleshooting: If a CNF is not behaving as expected, restarting it can sometimes resolve the issue, especially if it's a transient or one-off problem.

  4. Resource Management: If resources need to be reallocated or optimized, CNFs might be restarted to fit new requirements.

In cloud-native environments, especially those managed by orchestrators like Kubernetes, operations like restarting can often be handled automatically. For example, if a CNF crashes, Kubernetes can automatically restart it. Similarly, during updates, Kubernetes can use rolling restarts to ensure that the service remains available.

When working with CNFs, it's essential to monitor them effectively, ensure that they are stateless (or manage their state appropriately), and understand how they interact with other components in your infrastructure. This ensures that operations like restarts are smooth and don't lead to prolonged outages or disruptions.
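
For the configuration-change case in particular, a common idiom (popularized by Helm charts) is to stamp the pod template with a checksum of the configuration, so that a config change automatically triggers a rolling restart of the CNF pods. A sketch, using Helm template syntax with the chart's conventional file layout:

yaml code
# Pod template metadata inside a Deployment template (Helm syntax);
# when the ConfigMap content changes, the checksum changes, and
# Kubernetes rolls the pods to pick up the new configuration.
template:
  metadata:
    annotations:
      checksum/config: '{{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}'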


DomainSets" which is related to Kubernetes and how it handles Network Policies, especially in the context of EKS or any other Kubernetes environment.

"DomainSets" (or something similar in naming) might be a feature or enhancement proposal to handle domain-based network policies, allowing Kubernetes administrators to define policies based on domain names rather than just IP addresses or CIDRs. The idea is that with domain-based policies, one can better control egress traffic from pods to specific domain names.


6. How do you deploy a Helm chart through a pipeline?

Helm charts can be deployed through a pipeline using a CI/CD tool like Jenkins or GitLab CI. The pipeline would typically include steps to build and package the chart, run tests, and deploy the chart to a Kubernetes cluster.
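
A minimal sketch of such a pipeline as a GitLab CI configuration; the chart path, release name, namespace, and the way cluster credentials reach the job are all placeholders that differ per project:

yaml code
# .gitlab-ci.yml (illustrative); assumes a kubeconfig is provided to
# the deploy job, e.g. via a CI/CD variable or GitLab's agent.
stages:
  - lint
  - package
  - deploy

lint-chart:
  stage: lint
  image: alpine/helm:3.14.0            # any recent Helm image works
  script:
    - helm lint ./charts/my-app        # placeholder chart path

package-chart:
  stage: package
  image: alpine/helm:3.14.0
  script:
    - helm package ./charts/my-app --destination ./dist
  artifacts:
    paths:
      - dist/

deploy-chart:
  stage: deploy
  image: alpine/helm:3.14.0
  script:
    - helm upgrade --install my-app ./dist/my-app-*.tgz --namespace my-namespace --create-namespace
  environment: production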

7. How do you autoscale pods?

Autoscaling pods in Kubernetes can be achieved through the use of a horizontal pod autoscaler (HPA). The HPA will monitor resource utilization (such as CPU or memory usage) and automatically increase or decrease the number of replicas of a pod based on the utilization.
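
A minimal HPA sketch using the autoscaling/v2 API; the Deployment name, replica bounds, and CPU threshold are placeholders, and the cluster must have the Kubernetes Metrics Server installed for CPU utilization to be available:

yaml code
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                 # placeholder Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas above ~70% average CPU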

8. How long does it take for a pod to be up and running?

The time it takes for a pod to be up and running varies with factors such as the size of the container images (and whether they are already cached on the node), network latency, and the resources available on the cluster. In practice, it can range from a few seconds to several minutes.

9. How do you autoscale nodes?

Autoscaling nodes in Kubernetes can be achieved through the use of a cluster autoscaler. The cluster autoscaler will monitor the resource utilization of the cluster and automatically increase or decrease the number of nodes in the cluster based on the utilization.
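
On EKS, a common arrangement is an eksctl-managed node group whose min/max sizes bound what the Cluster Autoscaler may do; a sketch (cluster name, region, instance type, and sizes are placeholders):

yaml code
# eksctl ClusterConfig snippet (illustrative)
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster             # placeholder cluster name
  region: us-east-1            # placeholder region
managedNodeGroups:
  - name: workers
    instanceType: m5.large
    minSize: 2                 # the autoscaler will not scale below this
    maxSize: 10                # ...or above this
    desiredCapacity: 3
    iam:
      withAddonPolicies:
        autoScaler: true       # grants the IAM permissions the autoscaler needs

The Cluster Autoscaler itself still has to be deployed into the cluster (for example via its Helm chart) and pointed at these node groups.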

10. Suppose a new customer comes to you with a new application; what would you recommend for it?

It depends on the specific needs of the customer, but one recommendation could be to containerize the application using Docker and then deploy it on a Kubernetes cluster for management and scaling purposes.

11. How do you execute that application using the tools you selected?

Once the application is containerized, it can be deployed to a Kubernetes cluster using kubectl or another tool, such as Helm. The application can then be managed and scaled using the Kubernetes API and the tools provided by the platform.
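
If Helm is the chosen tool, the application's image and scale are typically parameterized through a values file rather than edited into manifests directly; a sketch with chart-specific placeholder keys:

yaml code
# values.yaml for a hypothetical chart; the key names depend on the chart
image:
  repository: my-registry/my-app   # placeholder image repository
  tag: "1.0.0"
replicaCount: 3
service:
  type: ClusterIP
  port: 80

The release is then installed or upgraded with a single command such as helm upgrade --install my-app ./charts/my-app -f values.yaml.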

12. How do you provide a static IP to a Pod in Amazon Elastic Kubernetes Service (EKS)?

You can use an AWS Elastic IP (EIP) together with a Network Load Balancer and a Kubernetes Service (plus, optionally, Network Policies). Here are the general steps to achieve this:

  1. Create an Elastic IP (EIP): You need to create an AWS Elastic IP address that you can associate with your Pod. You can do this using the AWS Management Console, AWS CLI, or an infrastructure-as-code tool like Terraform. Make note of the EIP allocation ID.

  2. Create a Network Load Balancer (NLB): To associate the EIP with your Pod, you use an NLB as an intermediary. On EKS you don't normally create the NLB by hand; a Kubernetes Service of type LoadBalancer provisions it for you, and annotations on the Service attach the EIP from step 1.

  3. Create a Kubernetes Service: Create a Service of type LoadBalancer with NLB and EIP annotations, as shown below. Traffic that reaches the NLB's static address is then routed to your Pod.

Here is an example Kubernetes Service YAML manifest:

yaml code
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # Ask for an NLB instead of the default Classic Load Balancer
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    # Attach the EIP(s) from step 1; one allocation ID per subnet/AZ (placeholder ID)
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-0123456789abcdef0"
spec:
  selector:
    app: my-app  # Match this label to your Pod labels
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80  # Port where your application is running in the Pod
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 0.0.0.0/0  # Allow traffic from all sources


  4. Associate the Service with your Pod: Ensure that the Service selector matches the labels of the Pods you want to expose with the static IP address. Kubernetes will automatically route traffic from the NLB to the appropriate Pod based on the label selector.

  5. Create a Network Policy (Optional): If you want to restrict inbound and outbound traffic to your Pod, you can create Kubernetes Network Policies to define the rules. Make sure the NLB allows traffic from the EIP and that your Network Policy allows traffic to and from the NLB's IP addresses.

  6. Test the Configuration: After creating the Service and ensuring that the NLB is associated with the EIP, you should be able to access your Pod using the static IP address assigned to the EIP.

Keep in mind that using static IPs for Pods in EKS can have limitations and might not be the most common use case. It's typically used when you have specific requirements for IP addresses that need to remain constant. Ensure that you understand the networking implications and costs associated with using Elastic IPs and NLBs in your EKS cluster.
