Migrate a monolith to microservices
Monolithic applications contain all of the components they need to function in a single, indivisible unit. They can be easier to develop and test because all of the application code is in one place, and communication between components within the same application package is easier to secure.
However, issues arise as monolithic applications grow larger and more complex, as more developers join the project, or when certain components need to be scaled individually. Moving to a microservices architecture can help solve these issues.
When you are ready to transition to a microservices architecture, Consul and Nomad provide the functionality to help you deploy, connect, secure, monitor, and scale your application.
In this tutorial, you will clone the code repository to your local workstation and learn about the cloud infrastructure required to complete the scenarios in this collection.
Collection overview
This collection is composed of six different tutorials:
- a general overview, provided by this tutorial, that helps you navigate the code used in the collection;
- Set up the cluster, which guides you through the setup of a Consul and Nomad cluster whose infrastructure is a prerequisite for the remaining tutorials;
- four scenario tutorials, each showing the deployment of HashiCups, a demo application, on the Nomad and Consul cluster at a different level of integration:
  - Deploy HashiCups demonstrates how to convert a Docker Compose configuration, used to deploy a monolithic application locally, into a Nomad job configuration file, or jobspec, that deploys the same application, as a monolith, into the Nomad cluster. This scenario does not integrate Consul into the deployment. A minimal jobspec sketch illustrating this kind of conversion appears after this list.
  - Integrate service discovery demonstrates how to convert the Nomad job configuration for the monolithic application into one that deploys the application with Consul service discovery. The tutorial covers two scenarios: in the first, the application is deployed on a single Nomad node; in the second, it is deployed across multiple Nomad nodes, taking advantage of Nomad's scheduling capabilities.
  - Integrate service mesh and API gateway demonstrates how to set up your Consul and Nomad cluster to use Consul service mesh. The tutorial includes the Consul API gateway and Consul intentions configuration required to secure the application.
  - Scale a service demonstrates how to use the Nomad Autoscaler to automatically scale part of the HashiCups application in response to a spike in traffic.
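As a preview of that conversion, the following is a minimal sketch of a Nomad jobspec that runs a single container with the Docker task driver. The job, group, task, and image names, the port, and the resource values are illustrative assumptions and do not match the HashiCups jobspecs in the repository.

```hcl
# Minimal, illustrative Nomad jobspec that runs one container with the
# Docker task driver. All names and values here are hypothetical.
job "example-monolith" {
  datacenters = ["dc1"]
  type        = "service"

  group "app" {
    count = 1

    network {
      # Expose a static port on the client node for the container.
      port "http" {
        static = 8080
      }
    }

    task "frontend" {
      driver = "docker"

      config {
        image = "example/frontend:latest"
        ports = ["http"]
      }

      resources {
        cpu    = 200
        memory = 256
      }
    }
  }
}
```

The later scenario tutorials build on a jobspec of this shape, for example by adding `service` blocks for Consul service discovery and service mesh integration.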
The architectural diagrams below provide you with a visual representation of the configurations you will learn about and use in the different steps of the collection.
The cluster consists of three server nodes, three private client nodes, and one publicly accessible client node. Each node runs the Consul agent and Nomad agent. The agents run in either server or client mode depending on the role of the node.
Note
With the exception of the prerequisite tutorial that sets up the Nomad and Consul cluster, none of the tutorials are mandatory. They are intended to show the progression of deployment maturity and the different integrations available between Consul and Nomad. You can choose the deployment that best suits your current scenario and learn how to perform it without following the other scenario tutorials.
Review the code repository
The infrastructure creation flow consists of three steps:
- Create the Amazon Machine Image (AMI) with Packer.
- Provision the infrastructure with Terraform.
- Set up access to the CLI and UI for both Consul and Nomad.
Clone the `hashicorp-education/learn-consul-nomad-vm` code repository to your local workstation, then change into the directory of the local repository.
The `aws` directory
View the structure of the `aws` directory. It contains the configuration files for creating the AMI and the cluster infrastructure.
- The `aws-ec2-control_plane.tf` file contains the configuration for creating the servers, while `aws-ec2-data_plane.tf` contains the configuration for creating the clients. Both are structured similarly.
- The `aws_base.tf` file contains the configuration for creating the Virtual Private Cloud (VPC), security groups, and IAM configurations. This file defines the ingress ports for Consul, Nomad, and the HashiCups application.
- The `image.pkr.hcl` file contains the configuration to create an AMI using an Ubuntu 22.04 base image. Packer copies the `shared` directory from the root of the code repository to the machine image and runs the `shared/scripts/setup.sh` script.
- The `secrets.tf` file contains the configuration for creating gossip encryption keys, TLS certificates for the server and client nodes, and ACL policies and tokens for both Consul and Nomad.
- The `variables.hcl.example` file is the configuration file template used by Packer when building the AMI and by Terraform when provisioning the infrastructure. A copy of this file is made during cluster creation and updated with the AWS region and AMI ID after Packer builds the image. It also contains configurable variables for the cluster and their default values. An illustrative, filled-in example appears after this list.
- The `variables.tf` file defines the variables used by Terraform, including resource naming, node types and counts, and the Consul configuration for cluster auto-joining and additional cluster settings.
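The following is a hypothetical example of what a filled-in copy of `variables.hcl.example` could look like. The variable names and values shown here are illustrative only; the template in the repository defines the actual names and defaults.

```hcl
# Hypothetical, filled-in copy of the variables.hcl.example template.
# Variable names and values are illustrative; check the repository's
# template for the authoritative names and defaults.
region = "us-east-1"                # AWS region used by Packer and Terraform
ami    = "ami-0123456789abcdef0"    # AMI ID, filled in after the Packer build

prefix               = "learn-consul-nomad"  # naming prefix for cluster resources
server_count         = 3                     # Consul and Nomad server nodes
private_client_count = 3                     # private client nodes
public_client_count  = 1                     # publicly accessible client node
```

Because both Packer and Terraform read the same file, the region and AMI ID only need to be recorded once after the image build.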
The `shared` directory
Next, view the structure of the `shared` directory. It contains the configuration files for creating the server and client nodes, the Nomad job specification files for HashiCups, and additional scripts.
- The `shared/conf` directory contains the agent configuration files for the Consul and Nomad server and client nodes. It also contains `systemd` configurations for setting up Consul as the DNS. A minimal agent configuration sketch appears after this list.
- The `shared/data-scripts/user-data-server.sh` and `shared/data-scripts/user-data-client.sh` scripts are run by Terraform during the provisioning process for the server and client nodes respectively, once each virtual machine's initial setup is complete. The scripts configure and start the Consul and Nomad agents by retrieving certificates, exporting environment variables, and starting the agent services. The `user-data-server.sh` script additionally bootstraps the Nomad ACL system.
- The `shared/jobs` directory contains all of the HashiCups jobspecs and any associated script files for additional components, such as the API gateway and the Nomad Autoscaler. The other tutorials in this collection explain each of them.
- The `shared/scripts/setup.sh` file is the script run by Packer during the image creation process. It installs the Docker and Java dependencies as well as the Consul and Nomad binaries.
- The `shared/scripts/unset_env_variables.sh` script unsets local environment variables in your CLI before you destroy the infrastructure with Terraform.
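To give a sense of what the agent configuration files under `shared/conf` contain, here is a minimal sketch of a Consul client agent configuration in HCL. It is an assumption-based illustration; the files in the repository also configure TLS, gossip encryption, and ACLs, and the Nomad agents have their own configuration files.

```hcl
# Illustrative Consul client agent configuration. This is a simplified
# sketch; the repository's files also set up TLS, gossip encryption,
# and ACL tokens.
datacenter = "dc1"
data_dir   = "/opt/consul"
server     = false

# Cloud auto-join: discover the servers by AWS tag instead of fixed IP
# addresses. The tag key and value shown here are hypothetical.
retry_join = ["provider=aws tag_key=ConsulAutoJoin tag_value=auto-join"]
```

A server agent uses a similar file with `server = true` and a `bootstrap_expect` value matching the number of server nodes.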
Next steps
In this tutorial, you cloned the code repository and became familiar with the infrastructure setup process for the cluster.
In the next tutorial, you will create the cluster running Consul and Nomad and set up access to the CLI and UI for each.