
Integrating HashiCorp Consul with Amazon Route 53 Resolver


When customers and enterprises begin the long process of re-architecting their monolithic applications into small, manageable chunks (typically called microservices), the supporting infrastructure can grow in complexity, because these services now need a way to discover each other and establish communication. In the traditional monolith, this communication took place within the host itself and didn’t require network services. That changes once the application is broken up and spread across a number of different hosts. Many organizations opt to deploy a load balancer in front of each service, along with a corresponding DNS record, which provides a single point of access to the service regardless of the number of hosts, VMs, or containers behind it. While this traditional approach works well, it can require purchasing a vast number of load balancers (or paying license/hourly costs to run virtual ones) to front these services, and it increases the administrative overhead of managing the complexity it introduces.

The scenario above is what enterprises are looking to avoid or replace as they re-architect the application and its supporting infrastructure for the public cloud. This is where HashiCorp Consul, a service mesh product designed to solve these problems, can help IT organizations simplify operations while still securing this new east/west traffic among applications. Rather than provisioning countless load balancers, each individual host (again, physical, virtual, or a container) can register itself with Consul, specifying the particular service it provides. Combining this dynamic registration with health checks allows Consul to understand which services are available for consumption and which hosts are ‘healthy’ enough to serve requests. Clients looking to connect to a service can then query DNS, and Consul ultimately responds with one of the healthy instances. Let’s take a quick look at some of the main focus areas and benefits of HashiCorp Consul:
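To make the registration step concrete, here is a minimal sketch of a Consul service definition with an HTTP health check. The service name “web”, port 8080, and the /health endpoint are assumptions for illustration only (JSON does not allow comments, so note the hypothetical values here in the lead-in); a file like this dropped into the agent’s configuration directory registers the service when the agent starts:

```json
{
  "service": {
    "name": "web",
    "port": 8080,
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s",
      "timeout": "2s"
    }
  }
}
```

Consul combines the registration with the check status, so only hosts whose checks are passing are returned in answers to DNS queries for the service.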

Service Mesh: Simplifying connectivity and discovery between services is becoming critical as organizations retire the monolith and move to microservices.

Service Configuration: Services pull runtime configuration from multiple solutions that tend to lose performance as they scale. Furthermore, these solutions are typically owned by separate teams, causing confusion and requiring numerous requests for a single application.

Service Segmentation: Requests to firewall teams don’t scale and can sometimes take weeks to fulfill. The firewall solution often doesn’t integrate cleanly with an application’s deployment methodology, so rule changes become a manual task. These firewalls often require complicated rule sets, and an organization generally needs many firewalls to handle east/west traffic.

It’s no secret that many organizations are moving their workloads to the public cloud, and those cloud platforms offer a vast number of services that can reduce the need for the traditional solutions IT teams once had to manage themselves. This is, after all, one of the many benefits of moving to a public cloud such as AWS.

Because most clients rely on DNS to communicate with a service, AWS’s hosted DNS service, Amazon Route 53, is a natural fit. More specifically, a feature called “Amazon Route 53 Resolver,” released a week before AWS re:Invent 2018, can be configured within your VPC(s) to significantly simplify sending the desired DNS queries directly to Consul. You can now use Amazon-provided DNS to forward specific queries to Consul while continuing to serve your traditional domain-based queries from domain controllers. This reduces manual configuration within Microsoft DNS and offers the ability to automate the AWS side using something like CloudFormation.

To get started with this integration, let’s first take a look at our Consul cluster and understand some of its settings. The Consul cluster is a five-node cluster running Consul v1.4 Enterprise within a subnet in AWS. Alongside Consul is a HashiCorp Vault cluster, which has registered itself as a service. The following diagram depicts what the Consul/Vault environment looks like for this demonstration.

Using the ‘consul members’ command will display the list of members in the cluster. The ‘consul catalog services’ command will display the list of services registered with Consul.

The other thing I wanted to show is the configuration file, in which I’ve configured Consul to respond to DNS queries for the “” domain rather than the default “consul.” domain. The configuration is pretty basic, but you can see how everything is set up for this lab. I’m also using AWS tags to bootstrap the cluster, rather than specifying the IP address of each node in the configuration file (this requires a role assigned to the instance that allows it to describe instances in EC2 to look for the specific tag).

{
  "server": true,
  "node_name": "CONSUL-NODE-A",
  "datacenter": "dc-1",
  "data_dir": "/var/consul/data",
  "bind_addr": "",
  "client_addr": "",
  "domain": "",
  "advertise_addr": "",
  "bootstrap_expect": 5,
  "retry_join": ["provider=aws tag_key=consul tag_value=true"],
  "ui": true,
  "log_level": "DEBUG",
  "enable_syslog": true,
  "acl_enforce_version_8": false
}

So now that you understand how the Consul cluster is configured, and knowing that Vault is registered as a service, let’s see if Consul responds to a DNS query requesting the IP addresses of our Vault nodes. For this, we’ll run a local ‘dig’ and review the results. As you can see, Consul responded to the DNS request (over its default port of 8600) with the IP addresses of the three Vault servers. Notice that we’re using the domain “” here and not the default domain of “consul.” as noted above.

Now that we have Consul configured how we want it, we need to configure DNS so that any queries for * are forwarded to Consul. In most environments (probably all environments), the Consul nodes will not serve as the primary DNS servers for clients across the AWS and user environments, so we need a service that forwards the relevant queries to Consul. To accomplish this, I’m going to use Amazon Route 53 Resolver as the primary DNS for the other clients in AWS and configure forwarding both for internet access and for discovering services that have registered with Consul, specifically Vault in this case.

To get started with Route 53 Resolver, we need to create both Inbound and Outbound endpoints to allow DNS queries from and to our VPC. Since we’re just using a single VPC for this post, the inbound and outbound endpoints will reside in the same VPC. I’ll select my VPC (Krausen-VPC) and security group (Krausen-DNS) that the endpoints will use. I’ll also select the Availability Zone (us-east-1a) and specific subnets (Krausen-PrivateSubnet-AZ1) in which I want the endpoints to be created. I’ll also create the Outbound endpoints identically.
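Since the post mentions automating the AWS side with CloudFormation, here is a hedged sketch of the endpoint creation as a CloudFormation snippet. The security group and subnet IDs are placeholders rather than the actual Krausen-VPC resources, and two subnet entries are shown because a Resolver endpoint requires at least two IP addresses:

```json
{
  "Resources": {
    "KrausenInboundEndpoint": {
      "Type": "AWS::Route53Resolver::ResolverEndpoint",
      "Properties": {
        "Name": "krausen-inbound",
        "Direction": "INBOUND",
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
        "IpAddresses": [
          { "SubnetId": "subnet-aaaa1111" },
          { "SubnetId": "subnet-bbbb2222" }
        ]
      }
    },
    "KrausenOutboundEndpoint": {
      "Type": "AWS::Route53Resolver::ResolverEndpoint",
      "Properties": {
        "Name": "krausen-outbound",
        "Direction": "OUTBOUND",
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
        "IpAddresses": [
          { "SubnetId": "subnet-aaaa1111" },
          { "SubnetId": "subnet-bbbb2222" }
        ]
      }
    }
  }
}
```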

Now that the endpoints are created, we need to create the rule that forwards any queries for “” to Consul. This allows us to use Resolver as our main DNS server for things like the internet and forward other traffic where appropriate. In this example, I’ve configured a rule called “bryan” to forward matching queries to two of the Consul servers over port 8600 (remember that Consul listens on 8600 for DNS, not the standard port 53). By the way, Resolver automatically adds a rule called “Internet Resolver” for recursive queries on internet-bound traffic.
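As a sketch, the forwarding rule and its VPC association could also be expressed in CloudFormation. The domain “example.internal”, the target IPs, and the endpoint and VPC IDs below are placeholders; substitute your Consul domain, your Consul servers’ addresses, and the ID of the Outbound endpoint created earlier:

```json
{
  "Resources": {
    "ConsulForwardRule": {
      "Type": "AWS::Route53Resolver::ResolverRule",
      "Properties": {
        "Name": "bryan",
        "RuleType": "FORWARD",
        "DomainName": "example.internal",
        "ResolverEndpointId": "rslvr-out-0123456789example",
        "TargetIps": [
          { "Ip": "10.0.1.10", "Port": "8600" },
          { "Ip": "10.0.1.11", "Port": "8600" }
        ]
      }
    },
    "ConsulRuleAssociation": {
      "Type": "AWS::Route53Resolver::ResolverRuleAssociation",
      "Properties": {
        "ResolverRuleId": { "Fn::GetAtt": ["ConsulForwardRule", "ResolverRuleId"] },
        "VPCId": "vpc-0123456789abcdef0"
      }
    }
  }
}
```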

Last but not least, we need to configure DNS within our VPC to hand out the IP addresses of the Inbound endpoints as the DNS servers so future clients will automatically query the Route 53 Resolver when they come online. To do this, create a new DHCP Options Set (found in the VPC console) and configure it to hand out the IP addresses of the Inbound endpoints of Route 53 Resolver. (These can be found by selecting “Inbound endpoints” in the Route 53 console). Once you create the new DHCP Options set, assign it to your VPC.
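The DHCP Options step can be sketched in CloudFormation as well. The name server IPs shown are placeholders for your Inbound endpoint addresses, and the VPC ID is hypothetical:

```json
{
  "Resources": {
    "ResolverDhcpOptions": {
      "Type": "AWS::EC2::DHCPOptions",
      "Properties": {
        "DomainNameServers": ["10.0.1.5", "10.0.2.5"]
      }
    },
    "ResolverDhcpAssociation": {
      "Type": "AWS::EC2::VPCDHCPOptionsAssociation",
      "Properties": {
        "DhcpOptionsId": { "Ref": "ResolverDhcpOptions" },
        "VpcId": "vpc-0123456789abcdef0"
      }
    }
  }
}
```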

At this point, everything should be configured for clients to query their default DNS servers (now set to Route 53 Resolver) for services registered with Consul. For demonstration purposes, I’ve launched a new instance in a subnet within the same VPC. The first thing we’ll do is ensure that the DNS servers were configured correctly for the new instance. As you can see, we got the IP addresses of the Route 53 Resolver Inbound endpoints as configured:


Next, we can make sure that we can hit services hosted on the internet. From this screenshot, you can see that we can successfully query the current address of

Last but not least, let’s make sure we can resolve our Vault servers by name. Remember, this server doesn’t know anything about the domain “,” only its locally configured DNS servers. As you can see, the client was able to successfully retrieve the IP address of one of the Vault servers (see the ‘dig’ output above for the list of IPs for all three servers).

So there you have it: clients can now query services registered with Consul through the use of DNS forwarding. In this post, we used Route 53 Resolver, but other solutions may work just as well, such as Microsoft DNS forwarding. Keep in mind, though, that Microsoft DNS doesn’t let you select the port to forward queries to, so it will forward them on the default port of 53. If you find yourself in this predicament, you can use this article by HashiCorp to install and configure ‘bind’ locally on each Consul server, which will listen on port 53 and forward all queries to port 8600. This workaround exists because binding to ports below 1024 on Linux requires elevated privileges, which we don’t want to grant the Consul service in a production environment.
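For reference, the BIND workaround mentioned above boils down to a small forward zone on each Consul server. This sketch assumes the default ‘consul’ domain; substitute your own domain if you’ve changed it:

```
zone "consul" IN {
  type forward;
  forward only;
  forwarders { 127.0.0.1 port 8600; };
};
```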

About The Author

Bryan Krausen

Bryan Krausen is currently working as a Sr. Solutions Architect with experience in a vast number of platforms, specializing in AWS and HashiCorp tools.
