Folks keep asking what's wrong with the “official” Magento Commerce Cloud. In this article, I will try to explain why Magento “Cloud” is not a real cloud and why that is bad.
Let's check what the official Magento Commerce Cloud actually is:
Magento/Adobe Commerce Cloud deploys the database, web server, and caching servers on a single instance. So, it is a single-tier architecture.
Single-tier architecture implies putting all of the components a software application requires (PHP, MySQL/MariaDB (Galera), Redis cache, crons, RabbitMQ, Admin, NFS, Elasticsearch, New Relic for monitoring, Nginx, HAProxy, etc.) on just one server. This is a good way to test your application in development environments, and it is an ideal solution for small sites with low traffic demand. It is handy to manage and maintain, and, of course, a single-tier deployment is cost-effective.
But having all the resources on the same machine creates performance and security risks. If the PHP part produces high CPU usage (and Magento 2 is a slow framework), it becomes a bottleneck and the entire website goes down. PHP will consume the resources that MySQL, Elasticsearch, and Redis need. Even the cron jobs that run every minute can affect eCommerce website performance and create a bottleneck. And if MySQL hits a slow query, it will impact every other process: PHP, Elasticsearch, Redis, etc.
The typical architecture of a Magento single-tier solution is exactly what Magento Cloud uses:
The only difference in the Magento Cloud single-tier architecture is triple redundancy for high availability: the same infrastructure is replicated in different availability zones, which makes Magento Commerce Cloud more costly and, because of network latency, about 3 times slower than a classical single-tier setup.
I'm a fan of single-tier architecture for small merchants. However, the way Magento does it is just terrible. Magento Commerce Cloud combines the worst features of single- and multi-tier architecture. Basically, what you need to know about Magento Cloud: the Magento team didn't build it for Magento; they rented it from a third party, Platform.sh. Platform.sh was built for corporate websites on Drupal and WordPress and is not a good choice for e-commerce.
For example, my single-tier architecture uses an AWS Graviton 2 X5.16xlarge instance at a price of $8,200 per year:
This single-tier Magento architecture can generate 7,200+ uncached pages per minute (~500K requests per hour) and handle 42K+ orders per hour. Cached-page throughput doesn't matter anyway; cached pages are served by the Fastly CDN. The biggest Magento scam is when salespeople try to sell cache performance as Magento Cloud performance.
Magento 2 Graviton 2 single-tier architecture performance:
- Uncached Pages per hour: ~500K or 7200+ per minute
- Orders per hour: 42K+
- All without impacting the average response time of around 500 ms TTFB (time to first byte)
It is really easy to use because it is a single server.
Here is an automated script that installs the required Magento software on single or multiple instances:
If you want more of the advantages of the AWS cloud, you can add a separate RDS AWS Aurora MySQL instance with the good features AWS provides.
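As a rough illustration (all resource names, instance classes, and variables below are hypothetical, not taken from this article; argument names follow the public Terraform AWS provider), a minimal Aurora MySQL cluster for Magento could be sketched like this:

```hcl
# Hypothetical Aurora MySQL cluster for a Magento store
resource "aws_rds_cluster" "magento" {
  cluster_identifier      = "magento-aurora"
  engine                  = "aurora-mysql"
  database_name           = "magento"
  master_username         = "magento"
  master_password         = var.db_password              # assumed variable
  db_subnet_group_name    = aws_db_subnet_group.db.name  # assumed subnet group
  vpc_security_group_ids  = [aws_security_group.db.id]   # assumed security group
  backup_retention_period = 35
  storage_encrypted       = true
}

# Writer instance; extra aws_rds_cluster_instance resources become Aurora Replicas
resource "aws_rds_cluster_instance" "writer" {
  identifier         = "magento-aurora-writer"
  cluster_identifier = aws_rds_cluster.magento.id
  engine             = "aurora-mysql"
  instance_class     = "db.r6g.2xlarge"                  # Graviton2 instance class
}
```

Magento's `app/etc/env.php` database host would then point at the cluster's writer endpoint.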
Magento with RDS Aurora has the following advantages:
Up to 5X Higher Throughput than MySQL
Testing on standard benchmarks such as SysBench has shown up to a 5x increase in throughput performance over stock MySQL on similar hardware. Amazon Aurora uses a variety of software and hardware techniques to ensure the database engine is able to fully leverage available compute, memory and networking. I/O operations use distributed systems techniques such as quorums to improve performance consistency.
Push-Button Compute Scaling
Using the Amazon RDS APIs or with a few clicks in the AWS Management Console, you can scale the compute and memory resources powering your deployment up or down. Compute scaling operations typically complete in a few minutes.
Storage Auto-Scaling
Amazon Aurora will automatically grow the size of your database volume as your database storage needs grow. Your volume will grow in increments of 10 GB up to a maximum of 128 TB. You don’t need to provision excess storage for your database to handle future growth.
Low-Latency Read Replicas
Increase read throughput to support high-volume application requests by creating up to 15 database Aurora replicas. Amazon Aurora Replicas share the same underlying storage as the source instance, lowering costs and avoiding the need to perform writes at the replica nodes. This frees up more processing power to serve read requests and reduces the replica lag time — often down to single-digit milliseconds. Aurora provides a reader endpoint so the application can connect without having to keep track of replicas as they are added and removed. Aurora also supports auto-scaling, where it automatically adds and removes replicas in response to changes in performance metrics that you specify. Aurora also supports cross-region read replicas.
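In Terraform terms (a sketch with hypothetical names, assuming the cluster resource `aws_rds_cluster.magento` already exists), adding replicas and exposing the reader endpoint could look like this:

```hcl
# Two hypothetical Aurora Replicas; they share the writer's storage volume
resource "aws_rds_cluster_instance" "reader" {
  count              = 2
  identifier         = "magento-aurora-reader-${count.index}"
  cluster_identifier = aws_rds_cluster.magento.id  # assumed existing cluster
  engine             = "aurora-mysql"
  instance_class     = "db.r6g.2xlarge"
}

# Single endpoint that load-balances connections across all replicas
output "reader_endpoint" {
  value = aws_rds_cluster.magento.reader_endpoint
}
```

Magento 2 can then be configured with a read-only slave connection in `app/etc/env.php` pointed at this reader endpoint.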
Custom Database Endpoints
Custom endpoints allow you to distribute and load balance workloads across different sets of database instances. For example, you may provision a set of Aurora Replicas to use an instance type with higher memory capacity in order to run an analytics workload. A custom endpoint can then help you route the analytics workload to these appropriately-configured instances, while keeping other instances isolated from this workload.
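A custom endpoint for such an analytics workload could be declared roughly like this (hypothetical names; it assumes a dedicated replica resource `aws_rds_cluster_instance.analytics` exists):

```hcl
# Hypothetical custom endpoint that routes only to a dedicated analytics replica
resource "aws_rds_cluster_endpoint" "analytics" {
  cluster_identifier          = aws_rds_cluster.magento.id               # assumed cluster
  cluster_endpoint_identifier = "magento-analytics"
  custom_endpoint_type        = "READER"
  static_members              = [aws_rds_cluster_instance.analytics.id]  # assumed replica
}
```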
Parallel Query
Amazon Aurora Parallel Query provides faster analytical queries over your current data. It can speed up queries by up to 2 orders of magnitude, while maintaining high throughput for your core transaction workload. By pushing query processing down to the Aurora storage layer, it gains a large amount of computing power while reducing network traffic. Use Parallel Query to run transactional and analytical workloads alongside each other in the same Aurora database.
Instance Monitoring and Repair
Amazon RDS continuously monitors the health of your Amazon Aurora database and underlying EC2 instance. In the event of database failure, Amazon RDS will automatically restart the database and associated processes. Amazon Aurora does not require crash recovery replay of database redo logs, greatly reducing restart times. It also isolates the database buffer cache from database processes, allowing the cache to survive a database restart.
Multi-AZ Deployments with Aurora Replicas
On instance failure, Amazon Aurora uses RDS Multi-AZ technology to automate failover to one of up to 15 Amazon Aurora Replicas you have created in any of three Availability Zones. If no Amazon Aurora Replicas have been provisioned, in the case of a failure, Amazon RDS will attempt to create a new Amazon Aurora DB instance for you automatically.
For globally distributed applications you can use Global Database, where a single Aurora database can span multiple AWS regions to enable fast local reads and quick disaster recovery. Global Database uses storage-based replication to replicate a database across multiple AWS Regions, with typical latency of less than 1 second. You can use a secondary region as a backup option in case you need to recover quickly from a regional degradation or outage. A database in a secondary region can be promoted to full read/write capabilities in less than 1 minute.
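A Global Database can be sketched in Terraform as follows (hypothetical names; a secondary cluster in another region would attach to the same global cluster through a second provider alias, which is omitted here):

```hcl
# Hypothetical Aurora Global Database with its primary cluster
resource "aws_rds_global_cluster" "magento" {
  global_cluster_identifier = "magento-global"
  engine                    = "aurora-mysql"
}

resource "aws_rds_cluster" "primary" {
  cluster_identifier        = "magento-primary"
  engine                    = aws_rds_global_cluster.magento.engine
  global_cluster_identifier = aws_rds_global_cluster.magento.id
  master_username           = "magento"
  master_password           = var.db_password  # assumed variable
}
```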
Fault-Tolerant and Self-Healing Storage
Each 10GB chunk of your database volume is replicated six ways, across three Availability Zones. Amazon Aurora storage is fault-tolerant, transparently handling the loss of up to two copies of data without affecting database write availability and up to three copies without affecting read availability. Amazon Aurora storage is also self-healing; data blocks and disks are continuously scanned for errors and replaced automatically.
Automatic, Continuous, Incremental Backups and Point-in-Time Restore
Amazon Aurora’s backup capability enables point-in-time recovery for your instance. This allows you to restore your database to any second during your retention period, up to the last five minutes. Your automatic backup retention period can be configured up to thirty-five days. Automated backups are stored in Amazon S3, which is designed for 99.999999999% durability. Amazon Aurora backups are automatic, incremental, and continuous and have no impact on database performance.
DB Snapshots are user-initiated backups of your instance stored in Amazon S3 that will be kept until you explicitly delete them. They leverage the automated incremental snapshots to reduce the time and storage required. You can create a new instance from a DB Snapshot whenever you desire.
Backtrack lets you quickly move a database to a prior point in time without needing to restore data from a backup. This lets you quickly recover from user errors, such as dropping the wrong table or deleting the wrong row. When you enable Backtrack, Aurora will retain data records for the specified Backtrack duration. For example, you could set up Backtrack to allow you to move your database up to 72 hours back. Backtrack completes in seconds, even for large databases, because no data records need to be copied. You can go backwards and forwards to find the point just before the error occurred.
Backtrack is also useful for development & test, particularly in situations where your test deletes or otherwise invalidates the data. Simply backtrack to the original database state, and you’re ready for another test run. You can create a script that calls Backtrack via an API and then runs the test, for simple integration into your test framework.
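The 72-hour window from the example above maps to a single cluster argument in Terraform (a hypothetical fragment; the window is specified in seconds):

```hcl
# Hypothetical cluster enabling the 72-hour Backtrack window described above
resource "aws_rds_cluster" "magento" {
  cluster_identifier = "magento-aurora"
  engine             = "aurora-mysql"
  master_username    = "magento"
  master_password    = var.db_password  # assumed variable
  backtrack_window   = 259200           # seconds: 72 h x 3600
}
```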
Amazon Aurora runs in Amazon VPC, which allows you to isolate your database in your own virtual network, and connect to your on-premises IT infrastructure using industry-standard encrypted IPsec VPNs. To learn more about Amazon RDS in VPC, refer to the Amazon RDS User Guide. In addition, using Amazon RDS, you can configure firewall settings and control network access to your DB Instances.
Amazon Aurora allows you to encrypt your databases. On a database instance running with Amazon Aurora encryption, data stored at rest in the underlying storage is encrypted, as are the automated backups, snapshots, and replicas in the same cluster. Amazon Aurora uses SSL (AES-256) to secure data in transit.
Amazon Aurora allows you to log database events with minimal impact on database performance. Logs can later be analyzed for database management, security, governance, regulatory compliance and other purposes. You can also monitor activity by sending audit logs to Amazon CloudWatch.
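The three security points above (network isolation aside) can be sketched as cluster settings (hypothetical names; `aws_kms_key.db` is an assumed customer-managed key):

```hcl
# Hypothetical cluster: encryption at rest plus audit/error/slow-query logs
# exported to Amazon CloudWatch
resource "aws_rds_cluster" "magento" {
  cluster_identifier              = "magento-aurora"
  engine                          = "aurora-mysql"
  master_username                 = "magento"
  master_password                 = var.db_password    # assumed variable
  storage_encrypted               = true               # also covers backups, snapshots, replicas
  kms_key_id                      = aws_kms_key.db.arn # assumed KMS key
  enabled_cloudwatch_logs_exports = ["audit", "error", "slowquery"]
}
```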
Fully Managed — Easy to Use
Getting started with Amazon Aurora is easy. Just launch a new Amazon Aurora DB Instance using the Amazon RDS Management Console or a single API call or CLI. Amazon Aurora DB Instances are pre-configured with parameters and settings appropriate for the DB Instance class you have selected. You can launch a DB Instance and connect your application within minutes without additional configuration. DB Parameter Groups provide granular control and fine-tuning of your database.
Monitoring and Metrics
Amazon Aurora provides Amazon CloudWatch metrics for your DB Instances at no additional charge. You can use the AWS Management Console to view over 20 key operational metrics for your database instances, including compute, memory, storage, query throughput, cache hit ratio, and active connections. In addition, you can use Enhanced Monitoring to gather metrics from the operating system instance that your database runs on. Finally, you can use Amazon RDS Performance Insights, a database monitoring tool that makes it easy to detect database performance problems and take corrective action, with an easy-to-understand dashboard that visualizes database load.
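Enhanced Monitoring and Performance Insights are opt-in per instance; a hypothetical Terraform fragment (assuming an IAM role `aws_iam_role.rds_monitoring` with the Enhanced Monitoring policy already exists):

```hcl
# Hypothetical writer instance with Enhanced Monitoring and Performance Insights
resource "aws_rds_cluster_instance" "writer" {
  identifier                   = "magento-aurora-writer"
  cluster_identifier           = aws_rds_cluster.magento.id      # assumed cluster
  engine                       = "aurora-mysql"
  instance_class               = "db.r6g.2xlarge"
  monitoring_interval          = 60                               # OS metrics every 60 s
  monitoring_role_arn          = aws_iam_role.rds_monitoring.arn  # assumed IAM role
  performance_insights_enabled = true
}
```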
Automatic Software Patching
Amazon Aurora will keep your database up-to-date with the latest patches. You can control if and when your instance is patched via DB Engine Version Management. Aurora uses zero-downtime patching when possible: if a suitable time window appears, the instance is updated in place, application sessions are preserved and the database engine restarts while the patch is in progress, leading to only a transient (5 second or so) drop in throughput.
DB Event Notifications
Amazon Aurora can notify you via email or SMS of important database events such as an automated failover. You can use the AWS Management Console or the Amazon RDS APIs to subscribe to over 40 different DB events associated with your Amazon Aurora databases.
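Such a subscription can be provisioned with Terraform, for example (hypothetical names; `aws_sns_topic.db_events` is an assumed SNS topic whose email/SMS subscribers receive the notifications):

```hcl
# Hypothetical subscription pushing failover/maintenance events to SNS
resource "aws_db_event_subscription" "magento" {
  name             = "magento-db-events"
  sns_topic        = aws_sns_topic.db_events.arn   # assumed topic
  source_type      = "db-cluster"
  source_ids       = [aws_rds_cluster.magento.id]  # assumed cluster
  event_categories = ["failover", "failure", "maintenance"]
}
```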
Fast Database Cloning
Amazon Aurora supports quick, efficient cloning operations, where entire multi-terabyte database clusters can be cloned in minutes. Cloning is useful for a number of purposes including application development, testing, database updates, and running analytical queries. Immediate availability of data can significantly accelerate your software development and upgrade projects and make analytics more accurate.
You can clone an Amazon Aurora database with just a few clicks, and you don’t incur any storage charges, except if you use additional space to store data changes.
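In Terraform, a clone is expressed as a point-in-time restore with copy-on-write semantics (a hypothetical sketch; "magento-aurora" is an assumed source cluster identifier):

```hcl
# Hypothetical copy-on-write clone: shares storage with the source cluster,
# so you pay only for data that diverges afterwards
resource "aws_rds_cluster" "staging" {
  cluster_identifier = "magento-staging"
  engine             = "aurora-mysql"
  restore_to_point_in_time {
    source_cluster_identifier  = "magento-aurora"  # assumed source cluster
    restore_type               = "copy-on-write"
    use_latest_restorable_time = true
  }
}
```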
That is the end of the Aurora DB overview.
Now, let's continue about the Magento Cloud :)
So, just a regular AWS Database has more advantages than the entire Magento Cloud Hosting.
What Is Magento 2 Multi-Tier Architecture
Multi-tier architecture solves many performance and security problems by splitting data and load across more than one server. Having all the resources spread into different servers boosts your performance. In addition to this, having different layers for different resources implies adding an extra security layer by separating data from code.
This architecture also provides high scalability and failover: you can add as many nodes as you need to increase the capacity of your cluster (Horizontal Scaling). This way, the workload is also decentralized, ensuring that when a node is down, the rest of the deployment is working.
A Terraform provisioning script for the Magento AWS multi-tier infrastructure:
This repository contains Magento 2 Cloud Terraform infrastructure as code for AWS Public Cloud. This infrastructure is…
Magento 2 Multi-Tier Architecture Benefits
- Spreading workloads across different servers provides much better performance.
- Ability to add capacity by increasing the number of nodes in the cluster (Auto Scaling).
High Availability / Failover
- Fault-tolerance: if a node is down, the cluster can continue working.
- A replica is optional, but backups are mandatory; and if you are not the Pentagon or Bank of America, you can save money on replication.
- Improved security and access control by separating data from code.
- Configured to work on a Virtual Private Cloud (VPC).
- Authentication is configured for external access in most solutions.
Other Nice Features
- Cloud Watch Log rotation and system monitoring for servers.
One of the most common infrastructure patterns is the 3-tier infrastructure. This pattern divides the infrastructure into 3 separate layers: one public and 2 private layers. The idea is that the public layer acts as a shield to the private layers. Anything in the public layers is publicly accessible, but stuff in the private layers is only accessible from inside the network.
In addition to dividing the network into 3 separate high-availability layers, AWS allows you to achieve high availability by distributing your application across multiple Availability Zones. Each Availability Zone is a physically separate data center location within the same region.
Following AWS best practices, Magento Cloud should split the network across 3 availability zones. This gives us high availability and redundancy. If one of the availability zones becomes unavailable for whatever cloud-provider reason, our application is not affected, as traffic flows to the other 2 availability zones. Basically, Availability Zones are several physically independent, isolated environments in the same region: different buildings, electricity, internet, cooling, etc. In a single availability zone you still have a stable environment with 99.8% uptime, while a high-availability setup has a 99.99% uptime guarantee.
In order to split the Magento 2 Cloud network into 3 tiers and across 3 availability zones, you need the following subnets (see image 1 below):
- Public Layer: This layer consists of 3 public subnets, one in each Availability Zone.
- Application Layer: This layer consists of 3 private subnets, one in each Availability Zone.
- Database Layer: This layer consists of 3 private subnets, one in each Availability Zone.
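The three layers above could be sketched in Terraform like this (hypothetical names and example CIDRs; `aws_vpc.main` is an assumed VPC):

```hcl
# Hypothetical 3-tier subnet layout across three Availability Zones
data "aws_availability_zones" "this" {}

resource "aws_subnet" "public" {
  count                   = 3
  vpc_id                  = aws_vpc.main.id  # assumed VPC
  cidr_block              = "10.0.${count.index}.0/24"
  availability_zone       = data.aws_availability_zones.this.names[count.index]
  map_public_ip_on_launch = true
}

resource "aws_subnet" "app" {
  count             = 3
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.${10 + count.index}.0/24"
  availability_zone = data.aws_availability_zones.this.names[count.index]
}

resource "aws_subnet" "db" {
  count             = 3
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.${20 + count.index}.0/24"
  availability_zone = data.aws_availability_zones.this.names[count.index]
}
```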
What is the difference between a public subnet and a private subnet?
The main feature that makes a subnet “public” or “private” is how instances in that subnet access the internet. A public subnet is a subnet that allows its instances to access the internet via an Internet Gateway. A private subnet, on the other hand, allows its instances to access the internet via either a Network Address Translation server (NAT) or via Amazon’s managed NAT service (NAT Gateway). We have chosen to use Amazon's NAT Gateway as it is managed by AWS, and it scales out as needed.
In order to make a subnet allow access to the internet via an Internet Gateway or a NAT Gateway, you have to make sure that the route tables for the subnet are set up in a way to direct traffic to the correct gateway. In our example, we have set the following route tables:
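As a sketch (hypothetical names; it assumes the VPC and public subnets from earlier in the article exist as `aws_vpc.main` and `aws_subnet.public`), the two route tables could look like this; each subnet then needs an `aws_route_table_association` binding it to the matching table:

```hcl
# Hypothetical routing: public subnets -> Internet Gateway, private -> NAT Gateway
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id  # assumed VPC
}

resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public[0].id  # assumed public subnet
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id
  }
}
```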
What goes in each of the layers?
The public (top) layer will host an internet-facing Elastic Load Balancer (ELB) and a Bastion host.
The ELB is the entry point for your application, and it directs traffic to your Magento Web servers.
Note that the ELB is available in all 3 availability zones by default. This will allow for high availability and redundancy. Behind the scenes, AWS provisions multiple instances of the ELB based on what availability zones have EC2 instances behind that load balancer.
The Bastion host (also known as Jump host) is the server that will allow you to connect to your application servers (or any other servers in the private subnets) via SSH. Also, you can have an SSM connection to your instances instead of the bastion.
AWS System Session Manager is a fully managed AWS Systems Manager capability that lets you manage your Amazon Elastic Compute Cloud (Amazon EC2) instances, on-premises instances, and virtual machines (VMs) through an interactive one-click browser-based shell or through the AWS Command Line Interface (AWS CLI). Session Manager provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys.
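To let instances register with Session Manager, they need an instance profile with the AWS-managed SSM policy attached; a hypothetical Terraform sketch:

```hcl
# Hypothetical IAM role letting EC2 web servers register with SSM Session Manager
resource "aws_iam_role" "ssm" {
  name = "magento-ssm-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "ssm_core" {
  role       = aws_iam_role.ssm.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}

# Attach this profile to the Magento EC2 instances / launch template
resource "aws_iam_instance_profile" "ssm" {
  name = "magento-ssm-profile"
  role = aws_iam_role.ssm.name
}
```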
The second layer is the application layer; this is where your Magento application web servers live. In our case, we have wrapped our Magento servers in an Auto Scaling group. This allows our application to scale up if more servers are needed, or to recover if one of the availability zones is out of service. If an entire availability zone goes down, the Auto Scaling group detects that and launches replacement instances in a different availability zone.
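A minimal sketch of that web tier (hypothetical names; it assumes the subnets from earlier plus an existing `aws_launch_template.magento` with the Magento AMI):

```hcl
# Hypothetical ALB + Auto Scaling group for the Magento web tier
resource "aws_lb" "web" {
  name               = "magento-alb"
  load_balancer_type = "application"
  subnets            = aws_subnet.public[*].id  # assumed public subnets
}

resource "aws_lb_target_group" "web" {
  name     = "magento-web"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id                    # assumed VPC
}

resource "aws_autoscaling_group" "web" {
  min_size            = 3                       # one server per AZ
  max_size            = 9
  vpc_zone_identifier = aws_subnet.app[*].id    # assumed private app subnets
  target_group_arns   = [aws_lb_target_group.web.arn]
  launch_template {
    id      = aws_launch_template.magento.id    # assumed launch template
    version = "$Latest"
  }
}
```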
The third and last layer is the database layer. This is where the Magento databases live. Each database type (MySQL, Elasticsearch, Redis) has its own server, its own scalability, and its own level of redundancy. The only way to access these databases is by connecting from the application layer, or by forwarding a port through the Bastion host.
In our case, we have decided to use Amazon’s Relational Database Service (RDS), which is a managed database service provided by Amazon. One advantage of using RDS is that we can have a failover database instance in a separate availability zone. In addition, we can also have one or more read-only RDS instances to take some of the load off the main database.
So, everybody should understand the truth about Magento Cloud architecture and not trust the misleading marketing advertisements about it.