How we created our new company website at Newesis

A DevOps approach and automation, without microservices

We are a small company: we cannot dedicate a large budget or many resources to building and managing our own company website. But we hold strong beliefs about how things should be done, so we had to apply in-house the same concepts we use when working for our customers.

At Newesis we strongly believe in microservices architectures, but we also believe that the business context and goals should drive the choice of technology. We also believe that "buy" (or rather "use") should always be considered before "build". For this reason, given the need to minimise the effort and time required to create and update our website, we decided to avoid a microservices-based approach and to embrace a traditional WordPress solution with MySQL as the database.

WordPress logo

WordPress has for many years been one of the leading CMS solutions on the market. It is constantly updated and maintained, it is extensible, it is easy to use, it relies on an ecosystem of plugins and themes that lets you do almost anything you may need and, not least, it is free. The same considerations apply to MySQL.

MySQL logo

The combination of WordPress and MySQL was, in the end, a good match for the needs of our corporate website. We would be able to select the template and plugins we needed and start from a solid base to build the website quickly, while preserving the ability to easily customise layout and features using PHP, CSS and JavaScript.

Once this decision was made, we found ourselves facing the following questions:

  • “How can we deal with what basically is a LAMP stack?” 
  • “How can we implement automation on the build and deployment?” 
  • “How can we implement resiliency and scalability?”
  • “How will we deal with performance and security?”

We were used to approaching microservices-based architectures, dealing with containers and PaaS offerings from cloud vendors, and having a clear segregation between data, software and configuration.

Toolset, SDLC and Automation

We defined our toolset and approach. Visual Studio Code was our choice of editor: many extensions, available cross-platform and, once again, free. Most of us were already using it for other activities, so it was the natural choice.

We needed a source code repository and pipelines to automate deployment and testing. We selected Azure DevOps Services: once again, a free tier for small teams that was perfect for hosting our private Git repositories and building our pipelines, managing the activities via Kanban boards for stories, tasks and bugs, and a backlog board for our epics.

We wanted our setup and deployments to be repeatable and composed of idempotent operations, with everything described in code and templates, and with the ability to create new environments or to scale the existing one vertically or horizontally when needed.

We came to the conclusion that our web application would be packaged and deployed as containers, meaning we needed a container registry and a Kubernetes cluster in our infrastructure.

We decided to give our Git repo the following structure:

  • Infrastructure: to host the templates and scripts to create (or upgrade) any infrastructure element, such as virtual machines or other cloud services
  • Deployment: to host the application deployment scripts and templates
  • Website: to host the actual code and structural content of the web site, basically the set of WordPress Themes and Plugins and custom developed extensions and content
  • Database: to host any script to create or update the MySQL databases

No sensitive data is stored in the Git repo; placeholders are used instead, so that the real values can be inserted automatically when needed at runtime, taken from secure encrypted stores.

Infrastructure as Code

In order to adopt an IaC approach, we decided to use a public cloud provider, Microsoft Azure, and we selected Terraform as our orchestrator.

Using Terraform, we define and create the Azure Container Registry and the Azure Kubernetes Service (AKS) cluster.

To complete the Infrastructure area of our Git repo, we created the scripts to configure Helm and Tiller for use with our AKS cluster, and the data backup service.

The Data Backup Service is a simple containerised application built on an Alpine base image: it installs the MySQL client tools and sets as entry point a bash script that opens a connection to a remote MySQL instance, queries the list of databases and executes a dump of each of them, storing everything in compressed files saved on a specific path. The Infrastructure section of the Git repo simply contains the Dockerfile to build the container image and the bash script.
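A minimal sketch of such an entry-point script, assuming hypothetical variable names (`MYSQL_HOST`, `BACKUP_DIR` and so on; the real script may differ):

```shell
#!/bin/sh
# Hypothetical sketch of the backup entry point; variable names are assumptions.
set -eu

BACKUP_DIR="${BACKUP_DIR:-/backup}"

# Build the target path for one database dump, timestamped per run.
dump_path() {
  echo "${BACKUP_DIR}/$1-$2.sql.gz"
}

run_backup() {
  stamp="$(date +%Y%m%d-%H%M%S)"
  # List the user databases, skipping MySQL's internal schemas...
  mysql -h "$MYSQL_HOST" -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" -N -e 'SHOW DATABASES' \
    | grep -Ev '^(information_schema|performance_schema|mysql|sys)$' \
    | while read -r db; do
        # ...and dump each one into its own compressed file.
        mysqldump -h "$MYSQL_HOST" -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" "$db" \
          | gzip > "$(dump_path "$db" "$stamp")"
      done
}

# Run only when a target server is configured.
if [ -n "${MYSQL_HOST:-}" ]; then
  run_backup
fi
```

Because the script is the container's entry point, running it on a schedule is just a matter of wrapping the image in a Kubernetes CronJob, as described later.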

We inserted our IaC as the initial step of our deployment pipeline, to ensure that any change made to the infrastructure outside the automated deployment is always corrected and the configuration moved back to the desired state.


The Deployment area of the Git repo contains the Helm chart and the other YAML files used to define each software module deployed into our AKS cluster.

As with the Terraform templates, we adopted a structure where the variables files are not filled in but contain placeholders that the deployment pipeline replaces with values defined in the pipeline configuration. We could have inserted the placeholders directly in the main template files, avoiding the use of variables files, but that would have made the whole set of files harder to maintain as the number of files and their length grow. With this approach, the definition files contain no placeholders at all: they always refer to variables in standard variables files. Only those files contain placeholders, so you can work on your YAML files without caring about what is in the pipeline, and when working on the pipeline configuration you only need to scan the very small set of variables files to get the full list of placeholders.
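As an illustration of the convention (the token syntax, file names and variable names here are invented for the example, not our actual ones), a variables file and the substitution step the pipeline performs could look like this:

```shell
# Hypothetical variables file: only this file carries pipeline placeholders.
cat > values.yaml <<'EOF'
mysql:
  rootPassword: "#{MYSQL_ROOT_PASSWORD}#"
wordpress:
  siteHost: "#{SITE_HOSTNAME}#"
EOF

# The pipeline replaces each #{NAME}# token with the value it holds for NAME;
# the main template files never contain tokens, only references to these variables.
MYSQL_ROOT_PASSWORD='s3cret'
SITE_HOSTNAME='www.example.com'
sed -e "s|#{MYSQL_ROOT_PASSWORD}#|${MYSQL_ROOT_PASSWORD}|" \
    -e "s|#{SITE_HOSTNAME}#|${SITE_HOSTNAME}|" \
    values.yaml > values.resolved.yaml
```

The resolved file is what Helm actually consumes, so sensitive values exist only in the pipeline's variable store and in the transient resolved file, never in the repo.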

The deployments include the actual website (the WordPress-based container image), the database (MySQL), CertManager, the NGINX Ingress Controller and the Data Backup Service CronJob.


This section of the Git repo contains the actual code and structural content, plus the Dockerfile to build the container image. We decided to use the latest versions of Apache, WordPress and PHP as the starting point, creating a base image to build on, and to add open source, commercial or custom-made templates and plugins, along with custom code and graphical extensions.

The folders WordPress uses for the local cache, for uploaded editorial content and for configuration files are excluded from the Git repo and therefore from the image.

While the configuration files are rewritten at each startup of a container instance, to populate them with the values of the environment variables, the cache and uploads folders live on a PersistentVolume, so their content is preserved in each specific environment independently of the deployment of new container images.
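A sketch of that startup step, materialising the WordPress configuration from environment variables (the file path, variable names and default values below are assumptions for the example, not our actual entry point):

```shell
# Hypothetical startup step: write DB settings from env vars into wp-config.php.
# In the container the target would sit under the web root; "./" keeps this runnable.
WP_CONFIG="${WP_CONFIG:-./wp-config.php}"

# Defaults only make the example self-contained; the pipeline provides real values.
: "${WORDPRESS_DB_HOST:=mysql}"
: "${WORDPRESS_DB_NAME:=wordpress}"
: "${WORDPRESS_DB_USER:=wp}"
: "${WORDPRESS_DB_PASSWORD:=changeme}"

cat > "$WP_CONFIG" <<EOF
<?php
define( 'DB_HOST', '${WORDPRESS_DB_HOST}' );
define( 'DB_NAME', '${WORDPRESS_DB_NAME}' );
define( 'DB_USER', '${WORDPRESS_DB_USER}' );
define( 'DB_PASSWORD', '${WORDPRESS_DB_PASSWORD}' );
EOF
```

Rewriting the file on every startup keeps the image itself free of environment-specific values: the same image runs unchanged in preproduction, demo and production.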


We decided to use the latest version of MySQL and to store in the Git repo every script used to create or alter the database structure or content, so that they can be executed automatically at the startup of each MySQL instance. As with the WordPress image, the MySQL version is packaged inside the container image, the configuration parameters are taken at startup from environment variables and the upgrade scripts are baked into the container to be run at each startup, but the actual data storage is on a PersistentVolume, so that it persists in each environment.
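Since the data volume outlives the container, running the scripts at every startup has to stay idempotent. One way to achieve that is to record applied scripts on the persistent volume; the following is a toy sketch of that idea (the tracking file and directory layout are assumptions, not our actual implementation):

```shell
# Hypothetical runner for database upgrade scripts: each script is recorded
# after it runs, so re-running the container applies it exactly once.
SCRIPT_DIR="${SCRIPT_DIR:-./db-scripts}"
APPLIED_LOG="${APPLIED_LOG:-$SCRIPT_DIR/.applied}"

run_pending_scripts() {
  mkdir -p "$SCRIPT_DIR"
  touch "$APPLIED_LOG"
  for f in "$SCRIPT_DIR"/*.sql; do
    [ -e "$f" ] || continue
    name="$(basename "$f")"
    # Skip scripts already applied on a previous startup of this environment.
    if ! grep -qx "$name" "$APPLIED_LOG"; then
      echo "applying $name"
      # mysql -h "$MYSQL_HOST" ... < "$f"   # real execution needs a live server
      echo "$name" >> "$APPLIED_LOG"
    fi
  done
}

run_pending_scripts
```

Naming the scripts with an ordered prefix (001-, 002-, ...) makes the glob expansion apply them in a deterministic sequence.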

Build and Deployment Pipelines

Newesis Corporate Site Pipeline Stats

The build phase is triggered by a push to the main trunk of the Git repo. This causes a new image to be created for each container (if the changed files impact that container) and the newly created image to be pushed to the private container registry with an incremented version number, plus the "latest" tag.
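For illustration, the versioned-plus-latest tagging can be sketched as below (the registry and image names are invented, and in our case the equivalent steps run as Azure DevOps build tasks rather than a raw script):

```shell
# Hypothetical tagging helper; the build number would come from the pipeline.
REGISTRY="example.azurecr.io"
IMAGE="corporate-site"
BUILD_NUMBER="${BUILD_NUMBER:-42}"

version_tag() {
  echo "${REGISTRY}/${IMAGE}:1.0.${BUILD_NUMBER}"
}

# The same image is pushed twice: once under its immutable version tag,
# once as the moving "latest" tag (pushes need the registry credentials
# held in pipeline variables, so they are left commented here).
# docker build -t "$(version_tag)" .
# docker tag  "$(version_tag)" "${REGISTRY}/${IMAGE}:latest"
# docker push "$(version_tag)" && docker push "${REGISTRY}/${IMAGE}:latest"
```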

Different variables are used to keep sensitive data, such as the registry credentials, out of the build definition and to give us the flexibility to reuse the tasks in different scenarios and software packages without duplication.

Newesis Corporate Site — Extract from the Build Definition

The Infrastructure and Deployment files are instead pushed to an artifact repository, from which the deployment pipeline retrieves them.

The deployment pipeline is automatically triggered on completion of a new build and executes the Infrastructure and Deployment steps for each environment.

Newesis Corporate Site — Extract from Deployment Pipeline

We decided to use one single AKS cluster for all environments, installed with the RBAC configuration, and to define multiple namespaces to segregate the shared services (CertManager, the Ingress Controller) and the different environments of our solution (preproduction, demo, production). Each Kubernetes deployment step is linked to its specific namespace and uses a Service Principal credential whose access is restricted to that namespace. This protects us from mistakes: for example, an explicit namespace inserted in one of the deployment YAML files pointing to the wrong one (say, CertManager with the Ingress Controller's namespace) would simply cause the deployment of that element to fail, because the task cannot work on a namespace different from the one defined in the pipeline.
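The failure mode can be illustrated with a toy check (this is only a simulation for clarity: in the real cluster it is the RBAC restriction on the Service Principal, enforced by the API server, that rejects the request):

```shell
# Toy illustration of a namespace-scoped task rejecting a mis-targeted manifest.
ALLOWED_NS="cert-manager"   # the only namespace this task's credential can touch

manifest_namespace() {
  # Extract an explicit metadata.namespace from a manifest, if any.
  sed -n 's/^ *namespace: *//p' "$1" | head -n 1
}

check_manifest() {
  ns="$(manifest_namespace "$1")"
  if [ -n "$ns" ] && [ "$ns" != "$ALLOWED_NS" ]; then
    echo "rejected: $1 targets '$ns' but this task is scoped to '$ALLOWED_NS'"
    return 1
  fi
  echo "accepted: $1"
}
```

A manifest that accidentally names another environment's namespace fails loudly at deployment time, instead of silently landing in the wrong place.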

Delivery, Performance and Security

Using Terraform, Kubernetes, Docker containers and Azure DevOps Services, we reached an adequate level of automation, ensuring the ability to create, upgrade and maintain, but also to recreate from scratch multiple times, the runtime environment of our corporate website.

The use of AKS provides the ability to scale both vertically and horizontally and to handle failures; the PersistentVolumes and the replicated storage, together with the Data Backup Service we implemented, guarantee the reliability and durability of the data.

We also wanted our website to have the right level of performance and security.

Cloudflare Logo

We therefore decided to move our DNS zone to the Cloudflare DNS service and to enable Cloudflare's CDN in front of our website.

Cloudflare provided us not only with a performant and robust DNS service, but also with DDoS and WAF protection in front of our website, including the ability to define rules for the TTL both at the edge and in the client-side cache, and restriction rules to deny access to the management areas of the site.


Our new corporate site is a small website: not much content, even less traffic. It did not require massive development to build and it involves very light maintenance. At the same time, it has been an interesting exercise for us to see how DevOps concepts and approaches can also be applied to a traditional LAMP stack solution, enabling a full chain of automation, as required by our modern, fast-changing digital landscape.