Kubernetes NodePort vs LoadBalancer vs Ingress

ExternalName

Another Service type is the ExternalName, which will redirect a request to a domain specified in its externalName parameter:

---
apiVersion: v1
kind: Service
metadata:
  name: "google-service"
  namespace: "default"
spec:
  ports:
    - port: 80
  type: ExternalName
  externalName: google.com

Create it:

$ kubectl apply -f nginx-svc.yaml
service/google-service created

Check the Service:

$ kubectl get svc google-service
NAME             TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
google-service   ExternalName   <none>       google.com    80/TCP    33s

And check how it's working: go to the NGINX pod and use the dig utility to check the DNS record:

root@nginx-554b9c67f9-rwbp7:/# dig google-service.default.svc.cluster.local +short
google.com.
172.217.8.206

Here, we are asking for the local DNS name of the google-service, which was resolved to an IP of the google.com domain that was set in the externalName parameter.

Ingress

Actually, the Ingress isn't a dedicated Service: it just describes a set of rules for the Kubernetes Ingress Controller to create a Load Balancer, its Listeners, and routing rules for them.

The documentation is here>>>.

In the case of AWS, it will be the ALB Ingress Controller — see the ALB Ingress Controller on Amazon EKS and AWS Elastic Kubernetes Service: running ALB Ingress controller.

To make it work, the Ingress requires an additional Service to which it will route traffic, a kind of backend.

For the ALB Ingress Controller, a manifest with the Ingress and its Service can look like the following:

---
apiVersion: v1
kind: Service
metadata:
  name: "nginx-service"
  namespace: "default"
spec:
  ports:
    - port: 80
  type: NodePort
  selector:
    app: "nginx"
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "nginx-ingress"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: "nginx"
spec:
  backend:
    serviceName: "nginx-service"
    servicePort: 80

Here we are creating a Service with the NodePort type and an Ingress with the ALB type.

Kubernetes will create an Ingress object, then the alb-ingress-controller will see it, will create an AWS ALB with the routing rules taken from the spec of the Ingress, will create a Service object with the NodePort type, then will open a TCP port on the WorkerNodes and will start routing traffic from clients => to the Load Balancer => to the NodePort on the EC2 => via the Service to the pods.

Let’s check.

The Service:

$ kubectl get svc nginx-service
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx-service   NodePort   172.20.54.138   <none>        80:30968/TCP   21h

And the Ingress:

$ kubectl get ingress nginx-ingress
NAME            HOSTS   ADDRESS                                                                  PORTS   AGE
nginx-ingress   *       e172ad3e-default-nginxingr-29e9-1405936870.us-east-2.elb.amazonaws.com   80      5m22s

And the Load Balancer’s URL:

$ curl e172ad3e-default-nginxingr-29e9-1405936870.us-east-2.elb.amazonaws.com
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
…

“It works!”

kubectl proxy and Service DNS

Because the ClusterIP Service type is accessible from within the cluster only, we can use kubectl proxy to test it: this will open a local TCP port to the API-server, which we can then use to access our NGINX.

Start the proxy:

$ kubectl proxy --port=8080
Starting to serve on 127.0.0.1:8080

Now, knowing our Service name (we set it in the metadata of its manifest), we can open a connection to localhost:8080 and then, via the namespace name, reach the Service itself:

$ curl -L localhost:8080/api/v1/namespaces/default/services/nginx-service/proxy
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
…

Or we can just obtain information about the Service:

$ curl -L localhost:8080/api/v1/namespaces/default/services/nginx-service/
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "nginx-service",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/services/nginx-service",
…

So, the ClusterIP Service type (a minimal manifest sketch follows this list):

  • will provide access to an application within a Kubernetes cluster but without access from the world
  • will use an IP from the cluster’s IP-pool and will be accessible via a DNS-name in the cluster’s scope, see the DNS for Services and Pods
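For reference, a minimal ClusterIP manifest sketch, reusing the nginx-service name and app: nginx selector from the examples above (since ClusterIP is the default, the type field could also be omitted):

---
apiVersion: v1
kind: Service
metadata:
  name: "nginx-service"
  namespace: "default"
spec:
  type: ClusterIP        # the default Service type
  selector:
    app: "nginx"
  ports:
    - port: 80           # reachable only inside the cluster (or via kubectl proxy)
      targetPort: 80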

Path-based routing

In the example above, we send all the traffic from the ALB to the same Service and its pods.

By using the Ingress and its rules we can also send traffic to a specific backend depending on, for example, the URI of a request.

So, let’s spin up two NGINX pods:

$ kubectl create deployment nginx-1 --image=nginx
deployment.apps/nginx-1 created
$ kubectl create deployment nginx-2 --image=nginx
deployment.apps/nginx-2 created

Create a file on each — but with different content:

$ kubectl exec nginx-1-75969c956f-gnzwv -- bash -c "echo svc-1 > /usr/share/nginx/html/svc1.html"
$ kubectl exec nginx-2-db55bc45b-lssc8 -- bash -c "echo svc-2 > /usr/share/nginx/html/svc2.html"

Update the manifest file: add one more Service, and set rules for the Ingress with two backends:

---
apiVersion: v1
kind: Service
metadata:
  name: "nginx-1-service"
  namespace: "default"
spec:
  ports:
    - port: 80
  type: NodePort
  selector:
    app: "nginx-1"
---
apiVersion: v1
kind: Service
metadata:
  name: "nginx-2-service"
  namespace: "default"
spec:
  ports:
    - port: 80
  type: NodePort
  selector:
    app: "nginx-2"
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "nginx-ingress"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /svc1.html
        backend:
          serviceName: "nginx-1-service"
          servicePort: 80
      - path: /svc2.html
        backend:
          serviceName: "nginx-2-service"
          servicePort: 80

Here we set two rules: if the URI is /svc1.html or /svc2.html, then send the traffic to nginx-1 or nginx-2 respectively.

Deploy it:

$ kubectl apply -f nginx-svc.yaml
service/nginx-1-service created
service/nginx-2-service created
ingress.extensions/nginx-ingress configured

Check the rules:

$ kubectl describe ingress nginx-ingress
…
Rules:
  Host  Path         Backends
  ----  ----         --------
  *
        /svc1.html   nginx-1-service:80 (<none>)
        /svc2.html   nginx-2-service:80 (<none>)
…

Check it: make requests to the /svc1.html and /svc2.html URIs:

$ curl e172ad3e-default-nginxingr-29e9-1405936870.us-east-2.elb.amazonaws.com/svc1.html
svc-1
$ curl e172ad3e-default-nginxingr-29e9-1405936870.us-east-2.elb.amazonaws.com/svc2.html
svc-2

Over a NodePort Service

Due to its simplicity, this is the setup a user will deploy by default when following the steps described in the installation guide.

Info

A Service of type NodePort exposes, via the kube-proxy component, the same unprivileged port (default: 30000-32767) on every Kubernetes node, masters included. For more information, see the Services documentation.

In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to any port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports 80 and 443. Instead, the external client must append the NodePort allocated to the Service to HTTP requests.

Example

Given the NodePort allocated to the Service, and a Kubernetes node with the public IP address 203.0.113.2 (the external IP is added as an example; in most bare-metal environments this value is <None>), a client would reach an Ingress with host myapp.example.com at that IP address and NodePort, where the myapp.example.com subdomain resolves to the 203.0.113.2 IP address.
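A sketch of what such a Service could look like (the 30100 nodePort value and the pod label selector are illustrative assumptions, not values from a real cluster); a client would then call http://myapp.example.com:30100:

---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed controller pod label
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30100   # hypothetical port; clients reach http://203.0.113.2:30100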

Impact on the host system

While it may sound tempting to reconfigure the NodePort range using the --service-node-port-range API server flag to include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues including (but not limited to) the use of ports otherwise reserved to system daemons and the necessity to grant kube-proxy privileges it may otherwise not require.

This practice is therefore discouraged. See the other approaches proposed in this page for alternatives.

This approach has a few other limitations one ought to be aware of:

Source IP address

Services of type NodePort perform source address translation by default. This means the source IP of an HTTP request is always the IP address of the Kubernetes node that received the request, from the perspective of NGINX.

The recommended way to preserve the source IP in a NodePort setup is to set the value of the externalTrafficPolicy field of the Service spec to Local.
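A minimal sketch of such a Service, reusing the same illustrative ingress-nginx names as above:

---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  externalTrafficPolicy: Local   # preserve the original client IP
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80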

Warning

This setting effectively drops packets sent to Kubernetes nodes which are not running any instance of the NGINX Ingress controller. Consider assigning NGINX Pods to specific nodes in order to control on what nodes the NGINX Ingress controller should be scheduled or not scheduled.
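One way to do that is a nodeSelector (or node affinity) in the controller's Deployment or DaemonSet pod template. This is only a fragment of such a manifest, and the role=ingress node label is an assumption used for illustration:

# fragment of the NGINX Ingress controller Deployment/DaemonSet spec
spec:
  template:
    spec:
      nodeSelector:
        role: ingress   # schedule controller pods only on nodes labelled role=ingress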

Example

In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments this value is <None>)

with a Deployment composed of 2 replicas

Requests sent to the two nodes running an NGINX replica would be forwarded to NGINX and the original client's IP would be preserved, while requests sent to the remaining node would get dropped because there is no NGINX replica running on it.

Ingress status

Because NodePort Services do not get a LoadBalancerIP assigned by definition, the NGINX Ingress controller does not update the status of Ingress objects it manages.

Despite the fact there is no load balancer providing a public IP address to the NGINX Ingress controller, it is possible to force the status update of all managed Ingress objects by setting the externalIPs field of the Service.

Warning

There is more to setting externalIPs than just enabling the NGINX Ingress controller to update the status of Ingress objects. Please read about this option in the Services page of the official Kubernetes documentation as well as the section about External IPs in this document for more information.

Example

Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is <None>)

one could edit the Service and add the following field to the object spec
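A sketch of that edit (the 203.0.113.x addresses stand in for the nodes' actual IPs):

spec:
  externalIPs:
    - 203.0.113.1
    - 203.0.113.2
    - 203.0.113.3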

which would in turn be reflected in the ADDRESS field of the Ingress objects managed by the controller.

Redirects

As NGINX is not aware of the port translation operated by the NodePort Service, backend applications are responsible for generating redirect URLs that take into account the URL used by external clients, including the NodePort.

Example

Redirects generated by NGINX, for instance HTTP to HTTPS or domain to www, are generated without the NodePort.

kubectl port-forward

To make sure that our pod is up and running and is able to handle incoming connections on port 80, let's use kubectl port-forward. After we check that it is working, we can start playing with the network settings from the Kubernetes cluster side.

Find the pod’s name:

$ kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
nginx-554b9c67f9-rwbp7    1/1     Running   0          40m

Pass it to kubectl port-forward as the first argument, then specify a local port (8080), and the port on the pod (80):

$ kubectl port-forward nginx-554b9c67f9-rwbp7 8080:80
Forwarding from 127.0.0.1:8080 -> 80

From the local machine check connection to the NGINX pod in the Kubernetes cluster:

$ curl localhost:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
…

Cool — “It works!”, we have a working pod and now we can use it for our Services.

Kubernetes Service types — an overview

Let's take a brief overview of each type, and then we will move on to the examples:

  1. ClusterIP: the default type; it will create a Service resource with an IP address from the cluster's pool. Such a Service will be available from within the cluster only (or with kubectl proxy).
  2. NodePort: will open a TCP port on each WorkerNode EC2; "behind it", a ClusterIP Service will automatically be created, and traffic will be routed from this TCP port on the EC2 to that ClusterIP. Such a Service will be accessible from the world (obviously, if the EC2 has a public IP), or within a VPC.
  3. LoadBalancer: will create an external Load Balancer (AWS Classic LB); "behind it", a NodePort and a ClusterIP will automatically be created, and in this way traffic will be routed from the Load Balancer to a pod in the cluster (see the sketch after this list).
  4. ExternalName: something like a DNS proxy; in response, such a Service will return a record taken via CNAME of the record specified in its externalName parameter.
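Since the article doesn't show a dedicated LoadBalancer manifest anywhere, here is a minimal sketch of one, assuming the same app: nginx pods as in the examples above:

---
apiVersion: v1
kind: Service
metadata:
  name: "nginx-lb-service"
  namespace: "default"
spec:
  type: LoadBalancer   # on AWS this provisions a Classic ELB in front of the NodePort/ClusterIP chain
  selector:
    app: "nginx"
  ports:
    - port: 80
      targetPort: 80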

The Ingress resource

A minimal Ingress resource example:
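The manifest itself didn't survive here, so below is a minimal sketch along the lines of the upstream documentation, using the newer networking.k8s.io/v1 API rather than the extensions/v1beta1 used earlier in this article; the minimal-ingress, nginx-example, and test names are placeholders:

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx-example
  rules:
    - http:
        paths:
          - path: /testpath
            pathType: Prefix
            backend:
              service:
                name: test
                port:
                  number: 80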

As with all other Kubernetes resources, an Ingress needs apiVersion, kind, and metadata fields.
The name of an Ingress object must be a valid DNS subdomain name.
For general information about working with config files, see deploying applications, configuring containers, managing resources.
Ingress frequently uses annotations to configure some options depending on the Ingress controller, an example of which
is the rewrite-target annotation.
Different Ingress controllers support different annotations. Review the documentation for
your choice of Ingress controller to learn which annotations are supported.

The Ingress spec
has all the information needed to configure a load balancer or proxy server. Most importantly, it
contains a list of rules matched against all incoming requests. An Ingress resource only supports rules
for directing HTTP(S) traffic.

Ingress rules

Each HTTP rule contains the following information:

  • An optional host. In this example, no host is specified, so the rule applies to all inbound
    HTTP traffic through the IP address specified. If a host is provided (for example,
    foo.bar.com), the rules apply to that host.
  • A list of paths (for example, /testpath), each of which has an associated
    backend defined with a service.name and a service.port.name or
    service.port.number. Both the host and path must match the content of an
    incoming request before the load balancer directs traffic to the referenced
    Service.
  • A backend is a combination of Service and port names as described in the
    Service doc or a custom resource backend by way of a CRD. HTTP (and HTTPS) requests to the
    Ingress that matches the host and path of the rule are sent to the listed backend.

A defaultBackend is often configured in an Ingress controller to service any requests that do not
match a path in the spec.

DefaultBackend

An Ingress with no rules sends all traffic to a single default backend. The defaultBackend is conventionally a configuration option
of the Ingress controller and is not specified in your Ingress resources.

If none of the hosts or paths match the HTTP request in the Ingress objects, the traffic is
routed to your default backend.
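As a sketch (the default-backend-service name is a placeholder), a rule-less Ingress that sends everything to a defaultBackend might look like this:

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: default-backend-ingress
spec:
  defaultBackend:
    service:
      name: default-backend-service   # all unmatched traffic ends up here
      port:
        number: 80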

Resource backends

A Resource backend is an ObjectRef to another Kubernetes resource within the
same namespace as the Ingress object. A Resource is a mutually exclusive
setting with Service, and will fail validation if both are specified. A common
usage for a Resource backend is to ingress data to an object storage backend
with static assets.
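A sketch close to the upstream documentation's example (StorageBucket and its k8s.example.com apiGroup are illustrative custom-resource names):

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource-backend
spec:
  defaultBackend:
    resource:
      apiGroup: k8s.example.com
      kind: StorageBucket
      name: static-assets
  rules:
    - http:
        paths:
          - path: /icons
            pathType: ImplementationSpecific
            backend:
              resource:
                apiGroup: k8s.example.com
                kind: StorageBucket
                name: icon-assets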

After creating the Ingress above, you can view it with kubectl describe ingress.

Path types

Each path in an Ingress is required to have a corresponding path type. Paths
that do not include an explicit pathType will fail validation. There are three
supported path types:

  • ImplementationSpecific: With this path type, matching is up to the
    IngressClass. Implementations can treat this as a separate pathType or treat
    it identically to the Prefix or Exact path types.

  • Exact: Matches the URL path exactly and with case sensitivity.

  • Prefix: Matches based on a URL path prefix split by /. Matching is case
    sensitive and done on a path element by element basis. A path element refers
    to the list of labels in the path split by the / separator. A request is a
    match for path p if every p is an element-wise prefix of p of the
    request path.

    Note: If the last element of the path is a substring of the last
    element in the request path, it is not a match (for example:
    /foo/bar matches /foo/bar/baz, but does not match /foo/barbaz).

    (A sketch illustrating the three path types follows this list.)
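A sketch showing all three path types on one Ingress (the service-exact, service-prefix, and service-impl Service names are placeholders):

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: path-types-example
spec:
  rules:
    - http:
        paths:
          - path: /exact              # matches only /exact, case sensitively
            pathType: Exact
            backend:
              service:
                name: service-exact
                port:
                  number: 80
          - path: /prefix             # matches /prefix and /prefix/anything, but not /prefixes
            pathType: Prefix
            backend:
              service:
                name: service-prefix
                port:
                  number: 80
          - path: /impl               # matching is left to the Ingress controller
            pathType: ImplementationSpecific
            backend:
              service:
                name: service-impl
                port:
                  number: 80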

Examples

Kind     Path(s)                        Request path(s)   Matches?
Prefix   /                              (all paths)       Yes
Exact    /foo                           /foo              Yes
Exact    /foo                           /bar              No
Exact    /foo                           /foo/             No
Exact    /foo/                          /foo              No
Prefix   /foo                           /foo, /foo/       Yes
Prefix   /foo/                          /foo, /foo/       Yes
Prefix   /aaa/bb                        /aaa/bbb          No
Prefix   /aaa/bbb                       /aaa/bbb          Yes
Prefix   /aaa/bbb/                      /aaa/bbb          Yes, ignores trailing slash
Prefix   /aaa/bbb                       /aaa/bbb/         Yes, matches trailing slash
Prefix   /aaa/bbb                       /aaa/bbb/ccc      Yes, matches subpath
Prefix   /aaa/bbb                       /aaa/bbbxyz       No, does not match string prefix
Prefix   /, /aaa                        /aaa/ccc          Yes, matches /aaa prefix
Prefix   /, /aaa, /aaa/bbb              /aaa/bbb          Yes, matches /aaa/bbb prefix
Prefix   /, /aaa, /aaa/bbb              /ccc              Yes, matches / prefix
Prefix   /aaa                           /ccc              No, uses default backend
Mixed    /foo (Prefix), /foo (Exact)    /foo              Yes, prefers Exact

Multiple matches

In some cases, multiple paths within an Ingress will match a request. In those
cases precedence will be given first to the longest matching path. If two paths
are still equally matched, precedence will be given to paths with an exact path
type over prefix path type.

Using a self-provisioned edge

Similarly to cloud environments, this deployment approach requires an edge network component providing a public entrypoint to the Kubernetes cluster. This edge component can be either hardware (e.g. vendor appliance) or software (e.g. HAproxy) and is usually managed outside of the Kubernetes landscape by operations teams.

Such a deployment builds upon the NodePort Service described above in Over a NodePort Service, with one significant difference: external clients do not access cluster nodes directly, only the edge component does. This is particularly suitable for private Kubernetes clusters where none of the nodes has a public IP address.

On the edge side, the only prerequisite is to dedicate a public IP address that forwards all HTTP traffic to Kubernetes nodes and/or masters. Incoming traffic on TCP ports 80 and 443 is forwarded to the corresponding HTTP and HTTPS NodePort on the target nodes.

When should you use a LoadBalancer?

If you need to give direct external access to a service, this is your choice. All traffic on the chosen port will be routed to the service. This means you can send almost any kind of traffic to it: HTTP, TCP, UDP, WebSockets, gRPC, and so on.

The big downside is that every service exposed this way gets its own LoadBalancer with its own external IP, so you end up paying for several LoadBalancers at once.

Ingress

Unlike the examples above, Ingress is not a Service at all. It cannot be set as a type in a Service definition. Instead, Ingress sits between your services and the outside world and acts as a kind of smart router that serves as the entry point into the cluster.

You can do a lot of different things with Ingress, since there are many Ingress Controllers with different capabilities. For example, in GKE the default Ingress Controller will spin up an HTTP(S) Load Balancer. It lets you route traffic to services by URL paths and subdomains. For example, you can send everything that arrives at foo.yourdomain.com to the foo service, and everything that arrives at yourdomain.com/bar/ to the bar service.

An example Ingress YAML for a cluster in GKE:
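The YAML itself didn't make it into this page; below is a sketch that matches the routing described above, using the same extensions/v1beta1 schema as the other manifests in this article (the foo, bar, and other Service names and the port 8080 are placeholders):

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  backend:
    serviceName: other        # default backend for anything not matched below
    servicePort: 8080
  rules:
    - host: foo.yourdomain.com
      http:
        paths:
          - backend:
              serviceName: foo
              servicePort: 8080
    - host: yourdomain.com
      http:
        paths:
          - path: /bar/*
            backend:
              serviceName: bar
              servicePort: 8080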

Pod fault tolerance with a ReplicaSet

Now let's create several pods with nginx. For this we need another Kubernetes abstraction. A ReplicaSet watches the number of pods matching a template that we define. In this template we can specify the label of our application (in my example it is my-nginx) and the number of running replicas.

---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - image: nginx:1.16
        name: nginx
        ports:
        - containerPort: 80

Launch the ReplicaSet.

# kubectl apply -f replicaset-nginx.yaml

Check what we've got.

# kubectl get replicaset
NAME               DESIRED   CURRENT   READY   AGE
replicaset-nginx   2         2         2       18m

We asked for 2 replicas and we got them. With edit we can modify the ReplicaSet on the fly, for example, changing the number of replicas.

# kubectl edit replicaset replicaset-nginx

The ReplicaSet itself keeps track of the number of running pods. If one of them crashes or gets deleted for some reason, it will bring it back up. You can check this yourself by manually deleting one of the pods; after a while it will appear again.

# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
pod-nginx                1/1     Running   0          15m
replicaset-nginx-f87qf   1/1     Running   0          22m
replicaset-nginx-mr4kw   1/1     Running   0          22m

# kubectl delete pod replicaset-nginx-f87qf
pod "replicaset-nginx-f87qf" deleted

# kubectl get replicaset
NAME               DESIRED   CURRENT   READY   AGE
replicaset-nginx   2         2         2       23m

# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
pod-nginx                1/1     Running   0          16m
replicaset-nginx-g4l58   1/1     Running   0          14s
replicaset-nginx-mr4kw   1/1     Running   0          23m

I deleted replicaset-nginx-f87qf, and a new pod, replicaset-nginx-g4l58, was started in its place right away. This is a clear example of one of the fault-tolerance mechanisms in a Kubernetes cluster at the pod level. Kubernetes keeps track of the number of replicas based on their label. If you told the ReplicaSet to run 2 replicas of the application with the my-nginx label, you won't be able to run more or fewer pods with that label. If you manually start a pod with the my-nginx label, it will be terminated immediately if you already have 2 pods with such labels created by the ReplicaSet.

A ReplicaSet starts pods on different nodes. You can check this by looking at the extended information about the pods.

# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
replicaset-nginx-cmfnh   1/1     Running   0          2m26s    10.233.67.6   kub-node-1   <none>           <none>
replicaset-nginx-vgxfl   1/1     Running   0          2m26s    10.233.68.4   kub-node-2   <none>           <none>

So, if one of your nodes goes down, the cluster will automatically restart the dead pods on other nodes. At the same time, if the node comes back with pods still running on it, Kubernetes will automatically kill the extra pods so that their total number matches the ReplicaSet template.

A ReplicaSet is deleted the same way as pods.

# kubectl delete rs replicaset-nginx
replicaset.extensions "replicaset-nginx" deleted

Instead of the long replicaset I used the short form rs. This is convenient for abstractions with long names; Kubernetes supports such shortcuts for all of them.

Pod

As you probably already know, a pod is the smallest unit of work in Kubernetes: a wrapper around one or more containers with its own IP address, name, identifier, and no doubt a rich inner world.

As last time, we will build our first pod around a Docker container with nginx. Only this time, from a configuration file:

pod.yml


apiVersion: v1
kind: Pod
metadata:
  name: single-nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx

YAML files are aesthetically and practically wonderful. They are easy to read and easy to create. Easy to mess up too, but that applies to many things. In our YAML we set the kind of the object being created, its name, and which containers it is built from: "Pod", "single-nginx-pod" and "nginx" respectively.

To create a real object from it, the file needs to be fed to the kubectl apply command, and the gestalt will be complete.

Create pod


kubectl apply -f pod.yml
#pod "single-nginx-pod" created

kubectl get pods
#NAME               READY   STATUS              RESTARTS   AGE
#single-nginx-pod   0/1     ContainerCreating   0          6s

#few seconds later
kubectl get pods
#NAME               READY   STATUS    RESTARTS   AGE
#single-nginx-pod   1/1     Running   0          2m

The pod wasn't created instantly, since the nginx image still had to be downloaded, but after a couple of seconds everything is ready. You can even go inside this pod and look around:

Get inside of the pod


kubectl exec -ti single-nginx-pod bash
#root@single-nginx-pod:/#


There isn't much to look at, so I installed a process viewer to treat myself to some poison-green colors and, at the same time, make sure that the nginx process is in place:

A standalone pod, though, is vulnerable and surprisingly useless. First, it can't be reached over HTTP from the outside. Second, if the container or the host dies, that's the end of our nginx story. And if we need to scale it, we'd have to repeat the same command many more times.

On the other hand, there are various controllers that exist precisely to protect against such misfortunes.

4. Enable ingress controller add-on

Now we need to enable the ingress-controller add-on available with minikube. This is a very important step or else the ingress itself won’t work.

$ minikube addons enable ingress
* ingress was successfully enabled

Depending upon your cluster type, you can choose your controller and the steps of installation.

Once the add-on is enabled, you can verify the status of the Pod:

$ kubectl get pods -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
coredns-6955765f44-gsr7p                    1/1     Running   0          32m
coredns-6955765f44-hswzz                    1/1     Running   0          32m
etcd-minikube                               1/1     Running   0          33m
kube-addon-manager-minikube                 1/1     Running   0          33m
kube-apiserver-minikube                     1/1     Running   0          33m
kube-controller-manager-minikube            1/1     Running   0          33m
kube-proxy-tgh66                            1/1     Running   0          32m
kube-scheduler-minikube                     1/1     Running   0          33m
nginx-ingress-controller-6fc5bcc8c9-wnkfs   1/1     Running   0          111s
storage-provisioner                         1/1     Running   0          32m

So our pod is up and running properly.
