Learning to deploy microservices. Part 2: Kubernetes

History

The Kubernetes project (K8s for short) grew out of the Borg cluster management system. The internal product of the search giant Google was named after the cybernetic Borg race from the legendary Star Trek series.

The Google Borg development team was given an ambitious task: to create open-source software for container orchestration that would become Google's contribution to the wider IT world. The application was written in Go.

During development, K8s was called Project Seven, a direct reference to the Star Trek character Seven of Nine, a Borg drone who managed to regain her humanity. The project was later named Kubernetes, from the Greek word κυβερνήτης, meaning "helmsman", "pilot", or "governor".

The source code was made public in 2014, and the first release, Kubernetes 1.0, appeared a year later. All rights to the product were subsequently transferred to the non-profit Cloud Native Computing Foundation (CNCF), whose members include Google, The Linux Foundation, and a number of major technology corporations.

Supplying Host Headers

Most ingress controllers and service mesh implementations rely on the
Host HTTP request header being
supplied in the request in order to determine how to route the request to the correct pod.

Determining the hostname to IP mapping

For the Host header to be set in the request, the hostname of the service should resolve to the
public IP address of the ingress or service mesh. Depending on whether you are using an ingress
controller or a service mesh, use one of the following techniques to determine the correct hostname
to IP mapping:

Ingresses

For traffic which is reaching the cluster network via a normal Kubernetes Ingress, the hostname
should map to the external IP of the ingress. We can retrieve that IP from the Ingress object
itself using kubectl:
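
For example, with a hypothetical Ingress named rollouts-demo-stable (the external address may appear under .ip or .hostname depending on your provider):

# kubectl get ingress rollouts-demo-stable -o jsonpath='{.status.loadBalancer.ingress[0].ip}'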

In the example above, the hostname should be configured to resolve to the external IP returned by
the command. The next section describes various ways to configure your local system to
resolve the hostname to the desired IP.

Istio

In the case of Istio, traffic enters the mesh through an
Ingress Gateway,
which is simply a load balancer sitting at the edge of the mesh.

The correct hostname to IP mapping largely depends on what was configured in the Gateway and
VirtualService resources. If you are following the Istio getting started guide, the examples use
the default istio ingress gateway, whose external IP we can obtain from kubectl:
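
Assuming the default installation into the istio-system namespace:

# kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'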

In the above example, the hostname should be configured to resolve to the external IP returned by
the command. The next section describes various ways to configure your local system to
resolve the hostname to the desired IP.

Configuring local hostname resolution

Now that you have determined the correct hostname to IP mapping, the next step is to configure
the system so that the hostname resolves properly. There are a few ways to do this:

DNS Entry

In real, production environments, the hostname is typically resolved by adding a DNS entry for it
in the DNS server. For local development, however, this is usually not an easily accessible option.

/etc/hosts Entry

On local workstations, an entry can be added to /etc/hosts to map the hostname to the IP address
of the ingress. For example, the following /etc/hosts snippet maps a hostname to an ingress IP:
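
A minimal example, using an illustrative hostname and a documentation-range IP; replace both with your own values:

203.0.113.10    rollouts-demo.local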

The advantage of using a hosts entry is that it works for all clients (CLIs, browsers). On the
other hand, it is harder to maintain if the IP address changes frequently.

Supply Header in Curl

Clients such as curl can set a header explicitly (the -H flag in curl).
For example:
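
A minimal sketch, reusing the illustrative hostname and IP from the /etc/hosts example above:

# curl -I -H 'Host: rollouts-demo.local' http://203.0.113.10/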

Notice that the same request made without the Host header fails with an error.

Browser Extension

Similar to curl's ability to set a header explicitly, browsers can achieve this via browser
extensions. One example of such an extension is ModHeader.

Check rollout history

So far we have discussed how a Deployment can update an application with no downtime, but you may be wondering how these updates are actually rolled out. The answer is the deployment history: every time you modify the deployment, a revision is stored for that modification. You can then use these revision IDs either to roll forward or to roll back to a previous revision.

Let's take an example. Here I check the rollout history of the newly created deployment:

# kubectl rollout history deployment rolling-nginx
deployment.apps/rolling-nginx
REVISION  CHANGE-CAUSE
1         <none>

Since we have not made any changes to this deployment yet, there is only a single revision in the history.

But why is CHANGE-CAUSE showing <none>? It is because we did not use --record while creating our deployment. That argument records the command used under CHANGE-CAUSE for each revision.

So I will delete this deployment and create it again:

# kubectl delete deployment rolling-nginx
deployment.apps "rolling-nginx" deleted

and this time I will use --record along with kubectl create:

# kubectl create -f rolling-nginx.yml --record
deployment.apps/rolling-nginx created

Now verify the revision history; this time the command used for the first revision is recorded:

# kubectl rollout history deployment rolling-nginx
deployment.apps/rolling-nginx
REVISION  CHANGE-CAUSE
1         kubectl create --filename=rolling-nginx.yml --record=true

Modify the deployment and initiate the update
Let us modify the deployment to verify the rolling update. If you remember, we used a fairly old image for our containers, so we will update the image details to use a newer image:

# kubectl set image deployment rolling-nginx nginx=nginx:1.15 --record
deployment.apps/rolling-nginx image updated

Here I have just updated the image to nginx:1.15 instead of 1.9; alternatively, you can use kubectl edit to edit the deployment's YAML directly.
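
If you want to follow the progress of the rolling update, kubectl can report the rollout status until it completes:

# kubectl rollout status deployment rolling-nginx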

Port forwarding to a pod

Now let's forward port 80 of the master node to a specific pod and check that nginx is really serving according to the configuration we applied. This is done as follows.

# kubectl port-forward deployment-nginx-848cc4c754-w7q9s 80:80
Forwarding from 127.0.0.1:80 -> 80
Forwarding from [::1]:80 -> 80

Switch to another console on the master and check with curl.

# curl localhost:80
deployment-nginx-848cc4c754-w7q9s

If you forward the port to another pod and test the connection, a curl request to port 80 on the master will return the name of that second pod. In practice I am not sure what this capability is good for, but it is just right for testing.

Weighted Experiment Step with Traffic Routing

Important

Available since v1.1

A Rollout using the Canary strategy along with Traffic Routing can
split traffic to an experiment stack in a fine-grained manner. When
Traffic Routing is enabled, the Rollout Experiment step allows
traffic to be
shifted to experiment pods.

Note

This feature is currently available only for the SMI, ALB, and Istio Traffic Routers.
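
A minimal sketch of such a step; the template names, specRef values, weights, and duration below are illustrative:

strategy:
  canary:
    steps:
    - experiment:
        duration: 1h
        templates:
        - name: baseline
          specRef: stable
          weight: 5
        - name: canary
          specRef: canary
          weight: 5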

In the above example, during an update, the first step starts a baseline vs. canary experiment.
When the experiment pods are ready (the Experiment enters the Running phase), the rollout directs
5% of traffic to the canary experiment pods and 5% to the baseline pods, leaving the remaining
90% of traffic on the old stack.

Note

When a weighted experiment step with traffic routing is used, a
service is auto-created for each experiment template. The traffic routers use
this service to send traffic to the experiment pods.

Promoting a Rollout

The rollout is now in a paused state. When a Rollout reaches a step with no duration, it
will remain paused indefinitely until it is resumed/promoted. To manually promote the
rollout to the next step, run the promote command of the plugin:
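
Assuming the Rollout is named rollouts-demo, as in the getting-started guide:

# kubectl argo rollouts promote rollouts-demo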

After promotion, the Rollout will proceed to execute the remaining steps. The remaining rollout
steps in our example are fully automated, so the Rollout will eventually complete all steps until
it has fully transitioned to the new version. Watch the rollout again until it has completed all steps:

Tip

The promote command also supports skipping all remaining steps and analysis with the --full flag.

Once all steps complete successfully, the new ReplicaSet is marked as the "stable" ReplicaSet.
Whenever a rollout is aborted during an update, either automatically via a failed canary analysis
or manually by a user, the Rollout will fall back to the "stable" version.

Deploying a Rollout

First we deploy a Rollout resource and a Kubernetes Service targeting that Rollout. The example
Rollout in this guide utilizes a canary update strategy which sends 20% of traffic to the canary,
followed by a manual promotion, and finally gradual automated traffic increases for the remainder
of the upgrade. This behavior is described in the following portion of the Rollout spec:
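
The canary strategy portion of the spec looks roughly like the following; the weights and pause durations match the description above, but your manifest may differ slightly:

strategy:
  canary:
    steps:
    - setWeight: 20
    - pause: {}
    - setWeight: 40
    - pause: {duration: 10s}
    - setWeight: 60
    - pause: {duration: 10s}
    - setWeight: 80
    - pause: {duration: 10s}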

Run the following command to deploy the initial Rollout and Service:
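
Assuming the manifests from the Argo Rollouts getting-started guide (substitute your own files if you have adapted the example):

# kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-rollouts/master/docs/getting-started/basic/rollout.yaml
# kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-rollouts/master/docs/getting-started/basic/service.yaml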

The initial creation of any Rollout will immediately scale up the replicas to 100% (skipping any
canary upgrade steps, analysis, etc.) since no update has occurred yet.

The Argo Rollouts kubectl plugin allows you to visualize the Rollout and its related resources
(ReplicaSets, Pods, AnalysisRuns), and presents live state changes as they occur.
To watch the rollout as it deploys, run the get rollout --watch command of the plugin:
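
Again assuming the rollouts-demo name from the getting-started guide:

# kubectl argo rollouts get rollout rollouts-demo --watch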

Blue/green Deployment

Blue/green deployment, as described by TechTarget, keeps two identical production environments; only one of them serves live traffic at a time, and a new release is deployed to the idle environment before traffic is switched over.

Container technology offers a stand-alone environment to run the desired service, which makes it very easy to create the identical environments required by blue/green deployment. The loose coupling of Services and ReplicaSets, together with the label/selector-based service routing in Kubernetes, makes it easy to switch between different backend environments. With these techniques, blue/green deployments in Kubernetes can be done as follows:

  • Before the deployment, the infrastructure is prepared like so:

Prepare the public service endpoint, which initially routes to one of the backend environments, say TARGET_ROLE=blue.

Optionally, prepare a test endpoint so that we can visit the backend environments for testing. They are similar to the public service endpoint, but they are intended to be accessed internally by the dev/ops team only.

  • Update the application in the inactive environment, say the green environment: set the new image and TARGET_ROLE=green in the deployment config to update the green environment.
  • Test the deployment via the test endpoint to ensure the green environment is ready to serve client traffic.
  • Switch the frontend Service routing to the green environment by updating the Service config with TARGET_ROLE=green (see the sketch after this list).
  • Run additional tests on the public endpoint to ensure it is working properly.
  • Now the blue environment is idle and we can:
    • leave it with the old application so that we can roll back if there's an issue with the new application
    • update it to make it a hot backup of the active environment
    • reduce its replica count to save the occupied resources
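
As a sketch of the label-based switch itself (the Service name and labels below are illustrative): the public Service selects pods by a role label, and flipping that label in the Service spec redirects all traffic at once.

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    TARGET_ROLE: blue   # change to green to switch traffic to the green environment
  ports:
  - port: 80
    targetPort: 80

The switch can then be performed in place, for example with a patch:

# kubectl patch service my-app -p '{"spec":{"selector":{"TARGET_ROLE":"green"}}}'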

Compared to a rolling update, a blue/green deployment has several advantages:

  • The public service is routed either to the old application or to the new one, but never to both at the same time.

  • The time it takes for the new pods to be ready does not affect the public service quality, as the traffic will only be routed to the new pods when all of them are tested to be ready.
  • We can do comprehensive tests on the new environment before it serves any public traffic. Just keep in mind this is in production, and the tests should not pollute live application data.


Prerequisites

To follow along with this article, we will need some previous experience with Kubernetes. If new to this platform, kindly check out the Step by Step Introduction to Basic Kubernetes Concepts tutorial. There, you can learn everything you need to follow the instructions here. We would also recommend going through the Kubernetes documentation if and when required.

Besides that, we will need kubectl, a Command-Line Interface (CLI) tool that will enable us to control the cluster from a terminal. If you don't have this tool, check the installation instructions in the official Kubernetes documentation. We will also need a basic understanding of Linux and YAML.

Summary K8s Deployments Strategies

To sum up, there are different ways to deploy an application; when releasing to development/staging environments, a recreate or ramped deployment is usually a good choice. When it comes to production, a ramped or blue/green deployment is usually a good fit, but proper testing of the new platform is necessary. If we are not confident with the stability of the platform and what could be the impact of releasing a new software version, then a canary release should be the way to go. By doing so, we let the consumer test the application and its integration into the platform. In this article, we have only scratched the surface of the capabilities of Kubernetes deployments. By combining deployments with all the other Kubernetes features, users can create more robust containerized applications to suit any need.

The rolling update strategy

A rolling update is a gradual process that lets you update your Kubernetes system with only a minor impact on performance and no downtime.

In this strategy, the Deployment selects a pod running the old version, deactivates it, and creates an updated pod to replace it. The Deployment repeats this process until no outdated pods are left.

The advantage of the rolling update strategy is that the update is applied pod by pod, so the larger system can remain active.

There is a slight performance dip during the update, because the system is continually one active pod short of the desired count. This is often far preferable to shutting the whole system down.

Rolling update is the default update strategy, but it does not fit every situation. Some things to consider when deciding whether to use it:

  • How will my system react to pods being briefly duplicated?
  • Is the update significant enough that some pods should not keep running with the old spec?
  • Will a small performance dip noticeably affect the usability of my system? How time-sensitive is my system?

For example, suppose we want to change the spec of our pods. First we change the Pod template to the new spec, which is passed from the Deployment down to the ReplicaSet.

The Deployment then recognizes that the current state of the program (pods with the old spec) differs from the desired state (pods with the new spec).

The Deployment creates a new ReplicaSet and pods with the updated spec and shifts the workload from the old pods to the new ones, one at a time.

By the end, we have a completely new set of pods and a new ReplicaSet, with no downtime for the service.

Performing a rolling update

We will use a YAML manifest named deploy.yaml to create our Deployment; a sketch of the file follows, and then we will walk through its fields.
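
A minimal sketch of deploy.yaml; the names, labels, image, and replica count are illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rolling-demo
  labels:
    app: rolling-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: rolling-demo
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: rolling-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80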

At the very top you specify the Kubernetes API version to use. Assuming you are running a recent version of Kubernetes, Deployment objects live in the apps/v1 API group.

Next, the .kind field tells Kubernetes that you are defining a Deployment object.

In the .metadata section, we give the Deployment a name and labels.

Most of the action happens in the .spec section. Everything directly under .spec applies to the Deployment itself, while everything nested under .spec.template describes the Pod template that the Deployment will manage. In this example, the Pod template defines a single container.

  • .spec.replicas tells Kubernetes how many Pod replicas to deploy.
  • .spec.selector is the list of labels that pods must carry for the Deployment to manage them.
  • .spec.strategy tells Kubernetes how to perform updates of the pods managed by the Deployment, in this case RollingUpdate.

Finally, we apply this Deployment to our Kubernetes cluster with the command:
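
# kubectl apply -f deploy.yaml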

Updating a Rollout

Next it is time to perform an update. Just as with Deployments, any change to the Pod template
field (spec.template) results in a new version (i.e. ReplicaSet) being deployed. Updating a
Rollout involves modifying the rollout spec, typically changing the container image field to
a new version, and then running kubectl apply against the new manifest. As a convenience, the
rollouts plugin provides a set image command, which performs these steps against the live rollout
object in-place. Run the following command to update the Rollout with the "yellow"
version of the container:
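
A sketch using the image and names from the Argo Rollouts getting-started demo; substitute your own rollout and container names:

# kubectl argo rollouts set image rollouts-demo rollouts-demo=argoproj/rollouts-demo:yellow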

During a rollout update, the controller will progress through the steps defined in the Rollout’s
update strategy. The example rollout sets a 20% traffic weight to the canary, and pauses the rollout
indefinitely until user action is taken to unpause/promote the rollout. After updating the image,
watch the rollout again until it reaches the paused state:

When the demo rollout reaches the second step, we can see from the plugin that the Rollout is in
a paused state, and now has 1 of 5 replicas running the new version of the pod template, and 4 of 5
replicas running the old version. This equates to the 20% canary weight as defined by the
setWeight: 20 step.

Weight Verification

When Argo Rollouts adjusts a canary weight, it currently assumes that the adjustment was made and
moves on to the next step. However, for some traffic routing providers this change can take a long
time to take effect (or may never take effect at all), since external factors may delay the change.

This proposal adds verification to the traffic routers so that, after a setWeight step, the
rollout controller can verify that the weight took effect before moving on to the next step. This
is especially important for the ALB ingress controller, which is affected by things like rate
limiting, the ALB ingress controller not running, and so on.

Traffic Management tools in Kubernetes

The core Kubernetes objects do not have fine-grained tools needed to fulfill all the requirements of traffic management. At most, Kubernetes offers native load balancing capabilities through the Service object by offering an endpoint that routes traffic to a grouping of pods based on that Service’s selector. Functionality like traffic mirroring or routing by headers is not possible with the default core Service object, and the only way to control the percentage of traffic to different versions of an application is by manipulating replica counts of those versions.
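
For example, with only core objects, an approximate 90/10 split between two versions can only be achieved through replica counts, assuming both Deployments' pods carry the labels selected by the same Service (the deployment names here are illustrative):

# kubectl scale deployment demo-v1 --replicas=9
# kubectl scale deployment demo-v2 --replicas=1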

Service Meshes fill this missing functionality in Kubernetes. They introduce new concepts and functionality to control the data plane through the use of CRDs and other core Kubernetes resources.

What is Argo Rollouts?

Argo Rollouts is a Kubernetes controller and set of CRDs which provide advanced deployment capabilities such as blue-green, canary, canary analysis, experimentation, and progressive delivery features to Kubernetes.

Argo Rollouts (optionally) integrates with ingress controllers and service meshes, leveraging their traffic shaping abilities to gradually shift traffic to the new version during an update. Additionally, Rollouts can query and interpret metrics from various providers to verify key KPIs and drive automated promotion or rollback during an update.

