

Orchestration, monitoring, security… For IT teams, the transition to containers opens new opportunities. But it also calls for new expertise.
As part of the transformation of information systems, containers are undeniably the driving force of the DevOps approach. And for good reason: containers address three founding principles of DevOps:
- collaboration (everyone works on the same containers),
- automation (the Docker APIs automate the various build, deployment and configuration tasks, as sketched after this list)
- and continuous integration (with its metrics and release management).
This is what gives substance to the promise of “continuous integration / continuous delivery” (a.k.a. “CI/CD”). But three challenges must be met.
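To make the automation principle concrete, here is a minimal sketch built on the Docker SDK for Python (the `docker` package); the image tag and build directory are hypothetical placeholders, and the same few lines could just as well be called from a CI pipeline.

```python
# Minimal sketch: driving the Docker API from Python.
# Assumes the `docker` package is installed and a local Docker daemon is running;
# the "myapp:1.0" tag and the build directory "." are hypothetical placeholders.
import docker

client = docker.from_env()

# Build an image from the Dockerfile in the current directory.
image, build_logs = client.images.build(path=".", tag="myapp:1.0")

# Run the freshly built image as a detached container (e.g. to test it).
container = client.containers.run("myapp:1.0", detach=True, name="myapp-ci")

print(container.status)

# Clean up once the pipeline step is done.
container.stop()
container.remove()
```

Because the exact same image travels from a developer's laptop to the CI runner and on to production, the collaboration and continuous-integration principles meet in practice.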
Challenge #1: From cloud clusters to serverless services, new environments to master
Not surprisingly, moving towards this agility means responding to new challenges. The first concerns organization and skills: for technical teams, containers – like microservices – represent a real change of perspective, a new basic unit. Certainly, the container breaks down silos, but it also implies rethinking everyone's role, just as microservices require new methods to stay in control of architectures whose complexity keeps increasing.
The challenge is significant because it is not only a question of mastering the containers themselves but also the technological environments that host them. And the possibilities are… vast. Containers can be deployed on internal infrastructure, directly on physical machines, but also on virtual machines, for example to be integrated into DRPs (Disaster Recovery Plans). Given their portability, they can also be deployed in the cloud, on virtual servers, on container-oriented cloud clusters or on so-called “serverless container” services such as Azure Container Instances (ACI) and AWS Fargate.
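Whatever the target, the workflow stays the same: the image is published once to a registry and then pulled by the physical host, the VM, the cloud cluster, ACI or Fargate. A hedged sketch of that publication step with the Docker SDK for Python (the registry address and image name are hypothetical):

```python
# Sketch only: publishing one image so that any hosting environment can pull it.
# The registry address "registry.example.com" and the repository name are hypothetical.
import docker

client = docker.from_env()

# Retag the locally built image for a private registry.
image = client.images.get("myapp:1.0")
image.tag("registry.example.com/team/myapp", tag="1.0")

# Push it; on-premises hosts, VMs, Kubernetes clusters, ACI or Fargate
# can then all run the exact same artifact.
for line in client.images.push("registry.example.com/team/myapp",
                               tag="1.0", stream=True, decode=True):
    print(line)
```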
Challenge #2: Master the container technology stacks
One thing is certain: whatever environment is chosen, teams must master the underlying technologies, adapt their working methods and equip themselves with new automation and supervision tools. Otherwise, there is a high risk of being overwhelmed by the management of hundreds or even thousands of containers.
Application mirroring, business continuity, load balancing, container life cycle… To meet the challenges of container management, a whole ecosystem of solutions has emerged in recent years, and it must now be tamed. While Kubernetes has become the de facto standard for orchestrating containers, it has competitors such as OpenShift (Red Hat), Docker Swarm, Rancher, Mesosphere DC/OS or Katello. In any case, container management means ramping up skills on these platforms, which cannot be done overnight.
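As an illustration of what these platforms automate, here is a minimal sketch using the official Kubernetes client for Python (the `kubernetes` package); the namespace and deployment names are hypothetical, and credentials are assumed to come from the local kubeconfig.

```python
# Sketch: inventory and life-cycle operations against a Kubernetes cluster.
# Assumes the `kubernetes` package is installed and ~/.kube/config points to
# a reachable cluster; the "prod" namespace and "webshop" deployment are hypothetical.
from kubernetes import client, config

config.load_kube_config()

core = client.CoreV1Api()
apps = client.AppsV1Api()

# Inventory: list the pods currently running in the "prod" namespace.
for pod in core.list_namespaced_pod(namespace="prod").items:
    print(pod.metadata.name, pod.status.phase)

# Life cycle / load balancing: ask the orchestrator for 5 replicas and let it
# schedule, restart and spread the containers across the cluster's nodes.
apps.patch_namespaced_deployment_scale(
    name="webshop",
    namespace="prod",
    body={"spec": {"replicas": 5}},
)
```

The point is less the dozen lines themselves than what they replace: doing the same by hand, container by container, simply does not scale.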
Challenge #3: Rethink security for containerized infrastructures
Containers, the new mantra of modern IT architectures, are also redrawing the security map of information systems. While they are isolated from one another, containers all rely on the same operating system, the same kernel, which has its own vulnerabilities and configuration flaws.
Therefore, some companies prefer to deploy containers within VMs (in other words, to encapsulate containers in VMs) to benefit from better isolation and thus increased security. That is also why Gartner recommends using an operating system that is as lightweight and hardened as possible. However, it should not be forgotten that containers are generally deployed on a managed infrastructure, typically a Kubernetes cluster, which also has its own vulnerabilities (intrinsic or resulting from poorly controlled settings).
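At the level of a single container, part of that hardening can already be expressed at launch time. Below is a hedged sketch with the Docker SDK for Python (the image name and user ID are hypothetical); it does not replace VM-level isolation or cluster hardening, but it limits what a compromised container can do with the shared kernel.

```python
# Sketch: starting a container with a reduced attack surface.
# The image name "myapp:1.0" and the numeric user are hypothetical placeholders.
import docker

client = docker.from_env()

container = client.containers.run(
    "myapp:1.0",
    detach=True,
    user="10001",                        # do not run as root inside the container
    read_only=True,                      # immutable root filesystem
    cap_drop=["ALL"],                    # drop every Linux capability
    security_opt=["no-new-privileges"],  # forbid privilege escalation
    mem_limit="256m",                    # bound the resources it can consume
)
print(container.short_id)
```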
Ensuring the security of a container-based infrastructure therefore requires real expertise. While containers, alongside microservices and DevOps approaches, are indeed tools of the ongoing digital transformation, they themselves call for a profound transformation of IT organizations. And for dedicated change management.
Serverless: from CaaS to FaaS
The serverless concept is on the rise. Well suited to containers, it reduces deployment to a single click, with the cloud provider’s CaaS (Container as a Service) infrastructure taking care of provisioning the entire environment.
Another serverless paradigm is also emerging today: “Function as a Service” (FaaS). In this model, developers “only” write the code of functions that are triggered by configured events, without having to program the plumbing that normally structures an application. The FaaS approach makes it easy to quickly assemble functionalities linked together by API gateways and event managers, without worrying about infrastructure, resource allocation, scalability or container deployment.
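A minimal sketch of what “only writing the function” means, using the Python handler convention of AWS Lambda (the event fields shown are hypothetical); the provider packages, scales and invokes it whenever the configured event fires.

```python
# Sketch of a FaaS handler following the AWS Lambda Python convention.
# The platform calls this function when the configured event fires; there is no
# server, container or framework code to write around it.
# The event fields used below ("order_id", "amount") are hypothetical.
import json


def handler(event, context):
    # `event` carries the trigger payload (API gateway request, queue message,
    # file-upload notification, ...); `context` carries runtime metadata.
    order_id = event.get("order_id", "unknown")
    amount = event.get("amount", 0)

    # Business logic only: provisioning, scaling and routing are the platform's job.
    return {
        "statusCode": 200,
        "body": json.dumps({"order": order_id, "charged": amount}),
    }
```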

by Matthieu Demoor