Improve and Fix Slow Magento 2 Performance: Top Issues

Yegor Shytikov
12 min read · Oct 2, 2020


Adobe released its top 10 Performance Best Practices, which are silly. In this post we will review them and try to understand whether we can improve performance by following this advice. We will also provide community-driven advice.

Adobe lists these top performance issues:


What Adobe wrote:

Adobe’s Customer Engineering team has researched and identified the top 10 common issues that impact Magento Commerce sites. These 10 issues constitute approximately 30% of all issues identified through our Site-Wide Analysis Tool. Through this whitepaper we identify these common issues, explain the potential impact they could have, and recommend best practices to address them. Whether you are starting a new project or managing an existing implementation, Adobe recommends reviewing these issues and incorporating the practices to provide the optimal SITE PERFORMANCE and visitor experience.

Is there a more useless phrase than “Magento 2 best practices” or Magento Certified Developer?

Magento 2 Issues:
Async Order Processing is Not Enabled

The configuration Async Order Processing is disabled

There can be times when intensive sales on a storefront occur at the same time that Magento is performing intensive order processing. Having asyncOrderProcessing disabled can lead to deadlocks and slowness on the order page. To fix that, Magento Commerce can be configured to distinguish between the traffic patterns for order processing and checkout at the database level. Enabling Async Order Processing stores and indexes order data asynchronously: orders are placed in temporary storage and moved in bulk to the Order Management Grid without collisions. This improves performance and avoids read/write conflicts in the corresponding tables. You can activate this option in the admin configuration.
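The same flag can also be flipped from the CLI. A sketch, assuming `dev/grid/async_indexing` is the config path behind the async order-grid indexing feature:

```shell
# Store order data asynchronously and move it to the grid in bulk via cron
bin/magento config:set dev/grid/async_indexing 1
bin/magento cache:flush
```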

However, this optimization doesn't impact the performance of catalog pages at all, and it increases debugging complexity and development time several times over. It can help you if you receive more than 1 order per second (60 orders per minute, 3,600 per hour, 86,400 per day). For 99% of Magento customers, this is not an issue at all.

Async Email Notification is Not Enabled:

When the asyncEmailNotification is disabled, it can degrade the performance of checkout and order processing, negatively impacting visitor experience.

By enabling asyncEmailNotification, the processes that handle email notifications for checkout and order processing are moved to the background. This improves the performance of placing an order. You can activate this feature from

Stores > Settings > Configuration > Sales > Sales Emails > General Settings > Asynchronous Sending

Asynchronous Sending (Global scope) — determines if sales emails are sent asynchronously. It is recommended that you enable Asynchronous Sending.

Disable — (Default) Sales emails are sent when triggered by an event.
Enable — (Recommended) Sales emails are sent at predetermined, regular intervals.
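The same toggle from the CLI (a sketch; `sales_email/general/async_sending` is, to the best of my knowledge, the config path behind that admin field):

```shell
# Move sales-email sending to a background cron job
bin/magento config:set sales_email/general/async_sending 1
bin/magento cache:flush
```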

This issue, like the previous one, doesn't affect the real user's catalog-browsing experience. It only affects order-placement performance and can't be treated as a top issue.

Deferred Stock Updates is not Enabled

In times of a high number of sales transactions, when deferredStockUpdates is disabled, it can cause deadlocks that degrade the performance of the store. Enabling deferredStockUpdates, as the name suggests, defers stock updates related to orders. This reduces the number of operations required and significantly speeds up the order placement process.
This feature can be activated from Stores > Settings > Configuration > Catalog > Inventory > Product Stock Options > Use ‘Deferred Stock Updates’

This option can only be used when Backorders are enabled in the store because this feature can result in negative stock quantities.
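From the CLI it would look roughly like this (hedged: `cataloginventory/item_options/use_deferred_stock_update` is my assumption for the config path behind that admin option, and remember Backorders must be enabled first):

```shell
# Backorders must be on, since deferred updates can drive stock negative
bin/magento config:set cataloginventory/item_options/backorders 1
bin/magento config:set cataloginventory/item_options/use_deferred_stock_update 1
bin/magento cache:flush
```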

This recommendation, the same as the previous two, doesn't improve the customer experience or catalog-page performance. It only improves Magento's order-placement functionality, which is broken by default at high order volumes.

Redis Replica Connection is not Enabled

Redis Replica Connections are not enabled. This issue is only applicable to Magento Commerce running on a non-split architecture.

During times of high traffic, if this feature is disabled, a large number of queries can overwhelm the primary node of the MySQL database. This can lead to performance degradation or even a site outage.

Magento has some issues here: they started with a MySQL replica but then describe why a Redis replica is needed. Neither of them is needed.

By enabling Redis Replica Connections, Magento can spread the load to multiple nodes asynchronously by load balancing SELECT queries across them.

• Redis Replica Connection is NOT compatible with Scaled (Split) Architecture and should not be enabled for these environments. Enabling Redis Replica Reads on Scaled (Split) Architecture will generate errors about Redis connections not being able to connect.
• Redis Replicas are still active but will not be used for Redis reads.
• Adobe recommends using Magento v2.3.5 or later for Scaled (Split) Architecture, implementing the new Redis backend config, and implementing L2 caching for Redis.

This is also not the real issue, because it treats only the symptoms, not the cause of bad Redis performance: the Magento Core and extensions abuse Redis a lot. If you have high traffic, you should use a DEDICATED Redis on the fastest possible CPU with multi-threading enabled, or you can try the multithreaded KeyDB. Also, AWS C6g/R6g/M6g (Graviton 2) instances have 30–60% better Redis performance.

Graviton 2 Redis Performance


If you have good infrastructure, not the broken Magento Cloud, you shouldn't have any issues with Redis until you reach 1,000 requests per second for non-cached pages.

Why 1,000? Because Redis can easily handle 160K requests per second, and a single Magento 2 page generates ~100 Redis calls. If your pages make more than 100 calls each, then you need to fix the code, not Redis.
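The arithmetic behind that claim, as a quick sketch (the 160K ops/sec and ~100 calls/page figures are the article's ballpark numbers, not measurements of your stack):

```python
# Rough ceiling on uncached Magento pages a single Redis node can feed
redis_ops_per_second = 160_000   # what a dedicated Redis easily sustains
redis_calls_per_page = 100       # typical Redis calls for one Magento 2 page

ceiling = redis_ops_per_second // redis_calls_per_page
print(ceiling)  # 1600 pages/sec in theory; ~1000 is a safe practical budget
```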

Why is Magento Cloud Redis garbage?

Because it shares server power with other services (PHP, MySQL, crons, Elasticsearch, etc.), and just adding a replica will not help. You need a dedicated high-performance Redis server.

What about Magento Mysql Replica?

Most merchants don't need MySQL read replicas. What you need is to reduce the number of MySQL queries (the N+1 SQL issue) in the Magento Core and in 3rd-party Magento modules. The Magento 2 ecosystem is amateurish and low-cost; that's why 3rd-party modules are regularly the biggest issue for your website. Also, you need to fix Magento's missing MySQL indexes.

Magento makes approximately 60–100 MySQL requests per page. That means you can handle up to 1,000 uncached pages per second on the smallest AWS EC2 instance, once the Magento 2 Core and 3rd-party modules are fixed. Beyond 1,000 you can use Aurora DB, which supports up to 15 read replicas. The Magento Cloud read replica is silly: it isn't dedicated, because it shares infrastructure, the same as Redis. And if your issues are in the code, a read replica will not help; here is a little bit of math why. A bad Magento implementation makes 3,000 SQL requests per page. 90% of Magento 2 store implementations are bad and slow; you are lucky if you are in the other 10%. However, it is not the developers' fault, it is the broken Magento 2 Core. Magento 2 has been a failure since release 2.0.

60,000 requests per instance / 3,000 requests per page = 20 pages per second per MySQL instance. By adding a single replica you will not improve performance, because you get 1 read instance while the master becomes write-only; you need 2 read replicas. After adding 2 read instances (3 instances in total) you can generate 40 pages per second, and so on. Pretty weird, isn't it? So only by fixing the Magento Core and 3rd-party code, using modern technologies, and offloading features to Magento-less microservices can you be really happy.
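The replica math above, sketched in code (60K queries/sec per instance and 3,000 queries/page are the article's assumed figures):

```python
QUERIES_PER_INSTANCE = 60_000  # what one MySQL node can serve per second
QUERIES_PER_PAGE = 3_000       # a badly implemented Magento 2 store

def pages_per_second(read_instances: int) -> int:
    """Uncached pages/sec when SELECTs are spread over `read_instances`
    nodes; once replicas exist, the master handles writes only."""
    return read_instances * QUERIES_PER_INSTANCE // QUERIES_PER_PAGE

print(pages_per_second(1))  # 20: one node doing everything
print(pages_per_second(2))  # 40: master + 2 replicas serving reads
```

Note that halving the queries per page doubles throughput with zero extra servers, which is why fixing the code beats adding replicas.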

OP Cache Size not set correctly

On Magento Commerce Pro accounts only, there is not enough memory for OPcache

OPcache improves PHP performance by storing precompiled script bytecode in shared memory, thereby removing the need for PHP to load and parse scripts on each request. When OPcache is set incorrectly, instead of improving performance it can increase the cache generation overhead.

It is recommended to set the opcache.memory_consumption PHP setting in php.ini file to at least 2048MB to avoid performance degradation.

What, 2048MB just for opcached code? It shows what garbage Magento 2 is; any other solution works with the 128MB default. Moreover, this recommendation is incomplete: even if you give OPcache an absurd amount of memory, it still may not cache everything, because it also has a max-files limit.

OPcache stores cached scripts in a HashTable, a data structure with very fast (on average) lookup time, so cached scripts can be retrieved quickly. max_accelerated_files represents the maximum number of keys that can be stored in this HashTable; you can think of it as the max number of keys in an associative array. The memory allocated for this is part of the shared memory that OPcache can use, which you configure with the opcache.memory_consumption option. When OPcache tries to cache more scripts than the available number of keys, it triggers a restart and cleans the current cache.

So let's say you configured opcache.max_accelerated_files to 223, and a request to your /home route parses and caches 200 scripts into OPcache. As long as your next requests need only those 200 scripts, OPcache is fine. But if one of the following requests parses 24 new scripts, OPcache triggers a restart to make room for caching those. Since you don't want that to happen, you should monitor OPcache and choose an appropriate number.

Keep in mind that one file can be cached more than once with different keys if required with a relative path like require include.php or require ../../include.php. The cleanest solution to avoid this is to use a proper autoload.

You should also set opcache.max_accelerated_files to its maximum value of 1,000,000 (the default is 4,000). Also, OPcache timestamp checking should be disabled on production servers, and CLI OPcache (opcache.enable_cli) should be enabled.

With the default settings, when a PHP file is executed, OPcache checks the last time it was modified on disk and compares it with the last time it cached the compilation of that script. If the file was modified after being cached, the compiled cache for the script is regenerated.

This validation is not necessary on a production server, where you know the files never change. To disable timestamp validation, add the following configuration:
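A php.ini sketch pulling the production OPcache settings above together (values follow the article's recommendations; restart PHP-FPM after every deploy once timestamp checks are off):

```ini
; production php.ini: skip mtime checks, files never change between deploys
opcache.validate_timestamps=0
; Adobe's recommended memory pool for Magento's huge codebase
opcache.memory_consumption=2048
; raise the key limit so all of Magento's files fit without a cache restart
opcache.max_accelerated_files=1000000
; also cache compiled scripts for bin/magento CLI runs
opcache.enable_cli=1
```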


CSS Unification is Not Enabled

When CSS unification is not enabled, it can result in multiple HTTP requests for each partial required during page load. This can have an adverse effect on performance.

Unification of CSS files results in combining multiple asset requests to a single request which can improve performance of page load.

Magento 2 has the worst frontend page-load performance in the entire universe, and this advice is no longer relevant because the HTTP/2 protocol came out, allowing parallel requests (imagine multiple cashiers instead of one). With multiple cashiers, requests are served more quickly as separate smaller requests rather than one combined request. HTTP/2 gave such massive performance increases that you would expect almost immediate adoption.

People think that because their server sent out fewer requests, their server did less work… but that is false thinking. Your server sends out the same amount of code no matter what; if anything, it may work harder because you merged. Merging CSS also has an annoying side effect: instead of letting your page render immediately, it now has to wait for your entire CSS bundle to load.

For me…most CSS should be loaded as fast as possible and most JS should be as delayed as possible (UNLESS, there is critical JS like being used for slider above the fold).

“Magento 2 single page loads 7+Mb of Javascript and has a huge DOM. Images are not deferred and the main thread being kept busy for almost half a minute(cause all that JS needs evaluating) and you’re in Heaven. Probably the worst, most amateurish platform for delivering applications. Has to be the most costly disaster that blighted the world of Software in the last 30 years.”

Magento Community Software Engineer

CSS assets minification is not enabled

CSS files can be fairly large in size. When they are not minified or gzipped, the time to download them at page load can be high, providing a bad visitor experience.

Magento Commerce can be configured for various file-optimization techniques, including minification. CSS minification can be enabled from the command line by running bin/magento config:set dev/css/minify_files 1 (the whitepaper mistakenly quotes the dev/js path here).

JS Minification is Not Enabled

JS files can be fairly large in size. When they are not minified or gzipped, the time to download at page load time can be high, providing a bad visitor experience.

Magento Commerce can be configured for various file-optimization techniques, including minification. JS minification can be enabled from the command line by running bin/magento config:set dev/js/minify_files 1
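Both switches together from the CLI (a sketch; note that the core command is `config:set`, and the CSS flag lives under `dev/css`, not `dev/js`):

```shell
# Enable server-side minification for both asset types (production mode)
bin/magento config:set dev/css/minify_files 1
bin/magento config:set dev/js/minify_files 1
bin/magento cache:flush
```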

Minification doesn't improve performance much; you can't even measure the improvement. Minification removes maintainability and usually saves only about 4–8kb of site size. You can get more savings by compressing a single JPG or by removing unused Magento Enterprise functionality.

Magento ECE-Tools Version is Outdated

Having an outdated version of Magento ECE-Tools can lead to issues with infrastructure upgrade, servers, application, and integrations.

Always run the latest compatible version of the Magento ECE-Tools

This advice is useless, because the best practice for Magento is not to use Platform.sh (doing business as Magento Commerce Cloud) at all.

A Magento Cloud host has several virtual cores (vCPUs) per server (2 threads per physical core, via Intel's Hyper-Threading technology) with everything running on it: PHP, MySQL, Galera Cluster, Redis, Elasticsearch, Java, HAProxy, Nginx, ZooKeeper, heavy Magento crons, RabbitMQ, Docker, New Relic, Blackfire, a GlusterFS network file server, and other stuff. And all these infrastructure elements run twice, because production and staging share the same instance. All these processes load the few physical cores, producing performance issues, and Redis splitting will not help. You need real horizontal auto-scaling, or serverless with a 3-tier architecture, to handle a huge amount of traffic.


Also, Magento Cloud doesn't have proper infrastructure-monitoring tools. To provide some tooling to merchants, Magento Cloud uses New Relic; however, it produces 50–200% performance overhead, so it is better not to use "Old Relic" in production.

Unused Magento Banner Functionality

When the Magento Banner functionality is enabled but is not being used, it can make an unnecessary AJAX request to the server

Unnecessary AJAX requests to a server can have a negative impact on the performance of the site. This is especially true during high-traffic periods.

If the Magento Banner functionality is not required, it is recommended to follow these steps:
• Disable the Magento Banner module output as described here. The name of the module is Magento_Banner.
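A stronger option than just switching off the module's output is to disable the module entirely. A sketch, assuming no other enabled module depends on Magento_Banner:

```shell
# Remove the module from the active list (writes app/etc/config.php)
bin/magento module:disable Magento_Banner
bin/magento setup:upgrade
bin/magento cache:flush
```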

This is true not only for banners; it applies to everything in Magento 2. The Magento 2 framework has a broken architecture: many AJAX calls, a slow backend, legacy code, harmful AOP plugins, a legacy ORM, Zend Framework 1. The best practice is not to use Magento Enterprise/Commerce, because it has more code (Staging), worse performance, and more bugs. Magento 2/1 (OpenMage) Open Source is the only right choice. Also, you should avoid using MSI (Multi-Source Inventory) and the other modules bundled with the Magento 2 core.

Magento Commerce vs Magento 2 Open Source performance:

To be continued…

You can send me your issues and fixes, and we can create an open-source list.

Send them to



Yegor Shytikov

True Stories about Magento 2. Melting down metal server infrastructure into cloud solutions.