Magento Global Reference Architecture Terraform Infrastructure as Code (IaC) on AWS Cloud

Yegor Shytikov
Jan 13, 2022 · 6 min read

This open source repo creates global AWS resources using the Infrastructure as Code (IaC) Terraform approach:

https://github.com/Genaker/TerraformMagentoCloud

What Does This Free Open Source Repository Do?

It sets up a multi-region AWS infrastructure with auto-scaling, built with Terraform and Terragrunt.

A minimal approach creates minimal infrastructure without redundant complexity, such as a Varnish cache server layer. (Varnish is still supported, however.) Varnish is redundant in a modern architecture: you can use CloudFront, Cloudflare, Fastly, or Redis instead. Also, simple KISS platforms have less downtime.

Varnish Cache doesn’t help to fix Magento 2 performance problems.

Modern eCommerce is dynamic, not static blog pages.

Varnish is okay for blogs and news websites, but not for modern eCommerce.

This tool allows you to define your Terraform code once, no matter how many global environments you have.

Consider the following file structure, which defines three environments (prod, QA, stage) with the same infrastructure in each one (an app, a MySQL database, and a VPC):

United States E-commerce Infrastructure

└── US
    ├── prod
    │   ├── app
    │   │   └── main.tf
    │   ├── mysql
    │   │   └── main.tf
    │   └── vpc
    │       └── main.tf
    ├── qa
    │   ├── app
    │   │   └── main.tf
    │   ├── mysql
    │   │   └── main.tf
    │   └── vpc
    │       └── main.tf
    └── stage
        ├── app
        │   └── main.tf
        ├── mysql
        │   └── main.tf
        └── vpc
            └── main.tf

Australian E-commerce Infrastructure

└── AU
    ├── prod
    │   ├── app
    │   │   └── main.tf
    │   ├── mysql
    │   │   └── main.tf
    │   └── vpc
    │       └── main.tf
    ├── qa
    │   ├── app
    │   │   └── main.tf
    │   ├── mysql
    │   │   └── main.tf
    │   └── vpc
    │       └── main.tf
    └── stage
        ├── app
        │   └── main.tf
        ├── mysql
        │   └── main.tf
        └── vpc
            └── main.tf

European E-commerce Infrastructure

└── EU
    ├── prod
    │   ├── app
    │   │   └── main.tf
    │   ├── mysql
    │   │   └── main.tf
    │   └── vpc
    │       └── main.tf
    ├── qa
    │   ├── app
    │   │   └── main.tf
    │   ├── mysql
    │   │   └── main.tf
    │   └── vpc
    │       └── main.tf
    └── stage
        ├── app
        │   └── main.tf
        ├── mysql
        │   └── main.tf
        └── vpc
            └── main.tf

The contents of each environment will be more or less identical, except perhaps for a few settings (e.g., the prod environment may run bigger or more servers, and regional websites are hosted in different regions). As the infrastructure grows, maintaining all of this code across environments becomes error-prone. You can reduce the copy-paste using Terraform modules, but even the code to instantiate a module and set up input variables, output variables, providers, and remote state can still create a lot of maintenance overhead.

You can define your general configuration just once in the root terragrunt.hcl file:

# terragrunt.hcl (root)
remote_state {
  backend = "s3"
  config = {
    bucket         = "my-terraform-state"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "my-lock-table"
  }
}

The terragrunt.hcl files use the same configuration language as Terraform (HCL) and the configuration is more or less the same as the backend configuration you had in each module, except that the key value is now using the path_relative_to_include() built-in function, which will automatically set key to the relative path between the root terragrunt.hcl and the child module (so your Terraform state folder structure will match your Terraform code folder structure, which makes it easy to go from one to the other).
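To make the mapping concrete, here is a sketch of how the key resolves, assuming the root terragrunt.hcl sits at the top of one of the regional trees above (the paths are illustrative):

```hcl
# The relative path from the root terragrunt.hcl to the child module
# becomes the state key automatically:
#
#   US/stage/mysql/terragrunt.hcl  ->  key = "stage/mysql/terraform.tfstate"
#   US/prod/vpc/terragrunt.hcl     ->  key = "prod/vpc/terraform.tfstate"
```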

Then update each environment's terragrunt.hcl files to tell them to include the configuration from the root terragrunt.hcl:

# stage/mysql/terragrunt.hcl
include {
  path = find_in_parent_folders()
}

The find_in_parent_folders() helper will automatically search up the directory tree to find the root terragrunt.hcl and inherit the remote_state configuration from it.

With this approach, copy/paste between environments is minimized. The terragrunt.hcl files contain solely the source URL of the module to deploy and the inputs to set for that module in the current environment. To create a new environment, you copy an old one and update just the environment-specific inputs in the terragrunt.hcl files, which is about as close to the “essential complexity” of the problem as you can get.
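A complete child file might then look like this minimal sketch (the module source path and the input values are assumed placeholders, not taken from the repo):

```hcl
# stage/mysql/terragrunt.hcl
include {
  path = find_in_parent_folders()
}

terraform {
  # Assumed module path for illustration
  source = "github.com/Genaker/TerraformMagentoCloud//modules/mysql"
}

# Only the environment-specific values live here
inputs = {
  environment    = "stage"
  instance_class = "db.t3.medium"  # smaller instance than prod
}
```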

Global Reference Architecture

Adobe Commerce Global Reference Architecture
  • Each region utilizes the same codebase and deployment/build CI/CD pipelines
  • Each region has its own resources defined as Terraform code
  • Each region has horizontal auto-scaling of the web servers
  • Each region has a vertically scaled DB (MySQL), Elasticsearch, and Redis server
  • Redis and MySQL support horizontal scaling using read replicas; this requires configuration changes made proactively in advance, not automatic scaling
  • Each region has its own dedicated, vertically scalable Admin/Cron server, so admin activities and crons don't affect customer-facing performance
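The horizontal auto-scaling of the web servers can be sketched in plain Terraform like this (the resource names, instance type, and variables are illustrative assumptions, not code from the repo):

```hcl
# Launch template for the Magento web servers
resource "aws_launch_template" "web" {
  name_prefix   = "magento-web-"
  image_id      = var.web_ami_id   # assumed variable
  instance_type = "c5.xlarge"
}

# Auto Scaling group spread across the region's private subnets
resource "aws_autoscaling_group" "web" {
  min_size            = 2
  max_size            = 10
  vpc_zone_identifier = var.private_subnet_ids  # assumed variable

  launch_template {
    id      = aws_launch_template.web.id
    version = "$Latest"
  }
}

# Track average CPU so traffic spikes add web servers automatically
resource "aws_autoscaling_policy" "web_cpu" {
  name                   = "magento-web-cpu"
  autoscaling_group_name = aws_autoscaling_group.web.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 60
  }
}
```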

Each region has infrastructure like this and scales independently:

There are also two good ways to manage the Magento codebase across tenants.

The first is a separate composer.json per tenant, with modules switched on/off via config.php files. The second: each environment has its own codebase with customizations, with shared or unique modules orchestrated by Composer or Git submodules. Each tenant gets its own features and extensions without bloating the core codebase, and each tenant deploys separately using code automation tools such as Git hooks or AWS CodeCommit.
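Switching modules on/off per tenant in config.php can look like this sketch (the vendor module names are hypothetical):

```php
<?php
// app/etc/config.php — per-tenant module switches
// 1 = enabled, 0 = disabled
return [
    'modules' => [
        'Magento_Catalog'       => 1,
        'Vendor_TenantAFeature' => 1,  // hypothetical tenant-only module
        'Vendor_TenantBFeature' => 0,  // present in code, off for this tenant
    ],
];
```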

Git submodules

Develop using Git best practices.

Submodules allow you to keep a Git repository as a subdirectory of another Git repository. This lets you clone another repository into your project and keep your commits separate.

Execute the following command:

> cd /var/www/magento

> git submodule add git@github.com:user/magento-2-module.git

As a result, a new magento-2-module directory is created inside the /var/www/magento/ project.

.gitmodules is a configuration file that stores the mapping between the submodule's repository URL and its local subdirectory.
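For the command above, the generated file looks roughly like this:

```ini
# .gitmodules
[submodule "magento-2-module"]
    path = magento-2-module
    url = git@github.com:user/magento-2-module.git
```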

Read full doc here: https://git-scm.com/book/en/v2/Git-Tools-Submodules

Managing composer.json per Tenant

Use Composer to manage dependencies for your custom modules by keeping a composer.json file per tenant site. If your custom project is not hosted on Packagist, you can use a Composer path repository to accomplish this.
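A path repository entry might look like this sketch (the local path and package name are assumptions for illustration):

```json
{
    "repositories": [
        {
            "type": "path",
            "url": "../tenant-modules/my-custom-module"
        }
    ],
    "require": {
        "vendor/my-custom-module": "*"
    }
}
```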

Also, you can use this helpful Composer plugin from Wikimedia to keep a composer file specific to each tenant:

https://github.com/wikimedia/composer-merge-plugin

It merges multiple composer.json files at Composer runtime.

Example:
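A minimal top-level composer.json using the plugin could look like this (the composer.tenant.json include path is an assumed name):

```json
{
    "require": {
        "wikimedia/composer-merge-plugin": "^2.1"
    },
    "extra": {
        "merge-plugin": {
            "include": [
                "composer.tenant.json"
            ]
        }
    }
}
```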

Composer Merge Plugin is intended to allow easier dependency management for applications which ship a composer.json file and expect some deployments to install additional Composer managed libraries. It does this by allowing the application’s top level composer.json file to provide a list of optional additional configuration files. When Composer is run it will parse these files and merge their configuration settings into the base configuration. This combined configuration will then be used when downloading additional libraries and generating the autoloader.

Composer Merge Plugin was created to help with installation of MediaWiki which has core library requirements as well as optional libraries and extensions which may be managed via Composer.

Conclusion: use software development best practices, and don't trust Adobe BS and scams!
