7 good practices for high-performance containerised applications
Containers are synonymous with agility, but they must be well managed to deliver high-performance applications. From development to operations, the software factory has to adopt new practices.
How can you take advantage of containers, and above all of their portability, while guaranteeing optimal application performance? This is the key question for every organisation moving in this direction, and it raises many others. Are legacy applications containerisable? What principles should be applied to new applications? How should they be split into microservices?
Here are some first answers, in the form of seven fundamental good practices.
1) Engage in containerisation with a global vision
The phrase may sound like a truism, and yet too many organisations still adopt containers as a mere technical means of running applications. That is a mistake: adopting containers properly means rethinking the entire IT production chain, from design to operations. And that rethinking is only sustainable if the effort is part of a shared transformation strategy with explicit objectives (shorter development cycles, lower operating costs, and so on).
Without a global strategy, containers will not be able to achieve their full potential, including for application performance.
2) Identify the right candidate applications and… the worst ones
Are containers reserved for microservices? No, and, good news, even monolithic applications can benefit from them, under certain conditions.
The eligibility criteria are closely linked to the very nature of containers. Because a container must be able to be stopped, started and moved almost instantly, it is ephemeral and immutable by nature, and therefore stateless. As a result, it should not embed or manage large data files. A monolithic application that makes intensive use of local data is therefore a poor candidate for containerisation: starting and moving it will inevitably be too slow.
Similarly, applications designed for older kernels (and hence unsuitable for containers), or too tightly coupled to components grafted directly onto the kernel, are not recommended, since the container isolates the application from the underlying infrastructure. Conversely, an older application that avoids these pitfalls may well benefit from containerisation, even if it has not been refactored for the purpose.
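To make the stateless requirement concrete, here is an illustrative sketch of a containerised service that keeps its state entirely outside the container, in an external database. All names (the `catalog` service, the image tag, the database URL) are hypothetical assumptions, not a prescribed setup:

```yaml
# Illustrative sketch: the container stays ephemeral and immutable because
# state lives in an external database, not inside the container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog
spec:
  replicas: 2
  selector:
    matchLabels:
      app: catalog
  template:
    metadata:
      labels:
        app: catalog
    spec:
      containers:
        - name: catalog
          image: registry.example.com/catalog:2.1   # immutable, versioned image
          env:
            - name: DATABASE_URL                    # state is externalised,
              value: postgres://db.internal/catalog # so the pod can be killed
                                                    # or moved at any moment
```

Because no pod holds data of its own, any replica can be stopped, restarted or rescheduled instantly, which is exactly the property containers are designed around.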
3) Focus on “observability”
Containers are, of course, better suited to software architectures that separate the different services that make up an application. But developing by microservices imposes a discipline and practices that are often underestimated. Among them, “observability” refers not to a function but to a characteristic.
An “observable” microservice is designed so that IT teams can pinpoint exactly where a problem originates in the event of a malfunction or anomaly. Without it, identifying the weak link that drags down the performance of an entire hyper-distributed chain is complex and costly, sometimes to the point of cancelling out the advantages of containers.
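One common way to make a microservice observable from the outside is to expose health endpoints and metrics that the platform can interrogate. The sketch below assumes the service itself implements a `/healthz` endpoint and serves metrics on port 8080; those endpoints, the service name and the annotation convention are assumptions for illustration:

```yaml
# Hedged sketch: exposing health and metrics so the platform can pinpoint
# a failing microservice. Assumes the app implements /healthz itself.
apiVersion: v1
kind: Pod
metadata:
  name: orders
  annotations:
    prometheus.io/scrape: "true"   # scrape convention used by some Prometheus setups
    prometheus.io/port: "8080"
spec:
  containers:
    - name: orders
      image: registry.example.com/orders:1.0
      ports:
        - containerPort: 8080
      livenessProbe:               # is the process still healthy?
        httpGet:
          path: /healthz
          port: 8080
      readinessProbe:              # is it ready to receive traffic?
        httpGet:
          path: /healthz
          port: 8080
```

With probes like these, the orchestrator can isolate and restart the one weak link instead of leaving teams to search the whole distributed chain.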
4) Split applications with performance in mind
How should an application be divided into microservices to guarantee not only its maintainability but also its performance on a containerised infrastructure? Most often, companies opt for a breakdown by roles, which gradually shapes the contours of the future microservices.
Martin Fowler, a world-renowned software architect, recommends a more pragmatic approach: identify the business parts of an application that can benefit from microservices and containers, based on scalability criteria or on how frequently their features evolve. A way of thinking about application decomposition, from the very start, with performance in mind.
5) Configure the infrastructure for performance
The quest for performance in a container approach begins upstream, with the configuration of the infrastructure. At a minimum, three factors should be anticipated:
The type of load the application will face. Will peaks be occasional or seasonal? The answers directly influence the design of the containerised application.
Geographical coverage. If the application architecture must serve several regions of the world, it is important to qualify this upstream in order to account for the (often complex) synchronisation mechanisms.
Raw performance. Does the application use encryption? Does it access highly distributed data? Again, these parameters should be considered as early as possible to adjust the container design and the architecture configuration.
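The first factor, absorbing occasional or seasonal load peaks, is typically handled by letting the platform scale the number of containers automatically. A minimal sketch, in which the target deployment name and the thresholds are assumptions to be tuned per application:

```yaml
# Illustrative sketch: absorbing load peaks with a HorizontalPodAutoscaler.
# Replica bounds and the CPU threshold are assumptions, not recommendations.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 2          # baseline capacity for normal traffic
  maxReplicas: 10         # ceiling for seasonal peaks
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas above 70% average CPU
```

Sizing `minReplicas` and `maxReplicas` is precisely the kind of decision that depends on the load profile anticipated upstream.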
6) Automate the software factory
While containers promise performance gains by their very nature (compactness, portability), in practice it is end-to-end automation of the software factory that makes the difference.
This is one of the promises of “Infrastructure as Code”, a concept that solutions such as Ansible or Terraform put into practice. With them, the containerised infrastructure gains elasticity, and therefore the ability to optimise performance over time.
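As a hedged illustration with Ansible, one of the tools named above, Infrastructure as Code can be as simple as keeping manifests in source control and applying them from a playbook. The play name, host target and manifest path are hypothetical:

```yaml
# Minimal Infrastructure-as-Code sketch with Ansible: the desired state of
# the containerised service lives in a versioned manifest, and the playbook
# makes the cluster converge to it. Paths and names are illustrative.
- name: Deploy the containerised front end
  hosts: localhost
  tasks:
    - name: Apply the deployment manifest from source control
      kubernetes.core.k8s:
        state: present
        src: manifests/frontend-deployment.yaml
```

Because the same playbook runs identically every time, the infrastructure can be rebuilt, scaled or rolled back without manual steps, which is where the elasticity comes from.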
7) Choose the right tools
However, containerisation, for all its flexibility and agility, rests on a technological millefeuille that inevitably introduces complexity and risk. Operating such an infrastructure calls for new precautions and habits, for example:
Save the configuration files of the execution clusters (Kubernetes, OpenShift, Swarm, etc.)
Monitor container images (and keep outdated images out of production)
Centralize all container and orchestration logs to supervise the proper functioning of the architecture
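The first of these habits, saving cluster configuration, can itself be containerised. The sketch below is a hypothetical scheduled export; the image, the schedule, the resource list and the backup destination are all assumptions, and the volume providing `/backup` is omitted for brevity:

```yaml
# Hypothetical sketch: a nightly CronJob that exports cluster manifests
# so they can be archived outside the cluster.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cluster-config-backup
spec:
  schedule: "0 2 * * *"                      # every night at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: config-backup  # needs read access to the API
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: bitnami/kubectl:latest
              command: ["/bin/sh", "-c"]
              args:
                - kubectl get deployments,services,configmaps -A -o yaml
                  > /backup/cluster-$(date +%F).yaml
          # mount a persistent volume at /backup (omitted here)
```

The same pattern, a small scheduled container with a narrowly scoped service account, also suits the other items on the checklist, such as periodic image audits.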
This is only a very partial view of the container management checklist. Added to it are new practices for anticipating vulnerabilities and for building dashboards suited to a containerised infrastructure.
The objective: to supervise, beyond the containers themselves, the operational benefits, from shorter lead times to production to improved availability and performance.