Mount an S3 Bucket as the pub/media folder for Magento 2
How to mount S3FS for Magento 2?
4 min read · Sep 5, 2020
S3FS allows Linux and macOS to mount an S3 bucket via FUSE. s3fs preserves the native object format for files, so the mounted content remains usable by other S3 tools and can be served directly by Nginx, Magento, or any other legacy PHP framework.
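For example, a file written through the mount is stored as a plain S3 object that any other client can read (the mount point and bucket name here are hypothetical):
cp logo.png /mnt/s3/catalog/logo.png
aws s3 ls s3://my-bucket/catalog/logo.png # the same object, visible to any S3 tool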
S3FS Features
- a large subset of POSIX, including reading/writing files, directories, symlinks, mode, uid/gid, and extended attributes
- compatible with Amazon S3, and other S3-based object stores
- allows random writes and appends
- large files via multi-part upload
- renames via server-side copy
- optional server-side encryption
- data integrity via MD5 hashes
- in-memory metadata caching
- local disk data caching
- user-specified regions, including Amazon GovCloud
- authenticate via v2 or v4 signatures
s3fs is a solid open-source project with 4.9K+ GitHub stars.
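Several of these features map directly to mount options. As a sketch, a mount with a user-specified region, a local disk cache, and server-side encryption could look like this (bucket name and mount point are placeholders):
s3fs my-bucket /mnt/s3 -o endpoint=us-west-1 -o use_cache=/tmp/s3fs-cache -o use_sse -o passwd_file=${HOME}/.passwd-s3fs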
Install:
cd {magento root dir}/pub
cp -r media/ ./media2 # back up the existing media folder before mounting over it
yum install gcc libstdc++-devel gcc-c++ curl-devel libxml2-devel openssl-devel mailcap
sudo yum remove fuse fuse-s3fs
sudo yum install fuse-devel
Compile:
cd /usr/src/
git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
./autogen.sh
./configure
make && make install
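If the build succeeded, the freshly compiled binary should now be on the PATH:
which s3fs # typically /usr/local/bin/s3fs
s3fs --version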
Configure/mount on the Magento EC2 instance:
echo AKIARB3SN*****:wmKnZ/pqfA/0dp9PkeuM***** > ~/.passwd-s3fs
sudo chmod 600 ~/.passwd-s3fs
mkdir -p /tmp/cache ./s3mnt
chmod 777 /tmp/cache ./s3mnt
s3fs -o allow_other -o default_acl=public-read -o use_cache=/tmp/cache -o retries=2 -o mp_umask=0022 -o uid=498 -o nonempty -o passwd_file=/home/ec2-user/.passwd-s3fs s3-mount-magento pub/media # where uid is the id of the PHP Linux user
s3fs supports files and directories uploaded by other S3 tools (e.g. s3cmd or the S3 console). Those tools upload objects without the x-amz-meta-(mode, mtime, uid, gid) HTTP headers that s3fs uses to make objects look like a filesystem. Since the metadata does not exist, s3fs displays "d---------" and "----------" for those directory and file permissions. There are several ways to solve this: grant permissions with the chmod command, set the x-amz-meta-* headers with other tools, or rely on the umask, gid, and uid options of s3fs. Alternatively, remove the -o mp_umask=0022 -o uid=498 options from the mount command above.
# unmount
sudo umount -l /var/app/current/pub/media
# allow non-root users to mount with allow_other
echo "user_allow_other" | sudo tee -a /etc/fuse.conf
# install fpart (provides fpsync) and copy the media backup into the mounted bucket
sudo yum install fpart
fpsync -n 10 -v -o '-aru --omit-dir-times --exclude=pub/media* --exclude=var/log/*' /var/www/html/pub/media2/ /var/www/html/pub/media/
# or plain rsync
rsync -aruv --omit-dir-times --exclude=pub/media* --exclude=var/log/* /var/www/html/pub/media2/ /var/www/html/pub/media/
# or the AWS CLI
aws s3 sync pub/media/ s3://s3-mount-magento/
Finally, point Magento's media base URL at the bucket:
mysql> UPDATE core_config_data SET value = 'https://s3-mount-bucket.s3-us-west-1.amazonaws.com/' WHERE path LIKE 'web/secure/base_media_url';
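Magento caches core_config_data, so the new base media URL will most likely not take effect until the configuration cache is flushed (assuming the Magento root is /var/www/html):
cd /var/www/html
php bin/magento cache:flush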
Automated script to install S3FS:
#!/bin/bash
#https://github.com/s3fs-fuse/s3fs-fuse/wiki/Installation-Notes#amazon-linux
sed -i 's/enabled=0/enabled=1/' /etc/yum.repos.d/epel.repo
yum install -y gcc libstdc++-devel gcc-c++ curl-devel libxml2-devel openssl-devel mailcap;
yum remove -y fuse fuse-s3fs;
yum install -y fuse-devel;
yum install -y fpart git make automake;
yum install -y fuse fuse-devel curl-devel openssl-devel; # dkms-fuse gcsfuse
## Configuration
echo AKIARB3SNMC******/pqfA/0dp9Pke******xR > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs
cd /usr/src/
git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
./autogen.sh
./configure
make && make install
echo "Finish compiling\n"
#yum install make
mkdir -p /tmp/cache ~/s3mnt
chmod 777 /tmp/cache ~/s3mnt
echo "Trying to mount"
s3fs -o allow_other -o default_acl=public-read -o use_cache=/tmp/cache -o retries=2 -o mp_umask=000 -o nonempty -o passwd_file=${HOME}/.passwd-s3fs s3-mount-hbh ~/s3mnt
mountpoint -q ~/s3mnt/ && echo "S3-fs mounted" || echo "S3-fs not mounted"
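If everything worked, the bucket shows up as a fuse.s3fs filesystem. A quick sanity check (bucket name and mount point as in the script above; the last step assumes the AWS CLI is configured with the same credentials):
df -hT | grep fuse.s3fs
echo ok > ~/s3mnt/healthcheck.txt
aws s3 ls s3://s3-mount-hbh/healthcheck.txt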
Comment from Kirill Morozov:
Add an fstab entry (run as root, or via sudo):
echo "{{BuketName}} /var/www/html/magento/pub/media/ fuse.s3fs _netdev,profile=s3fs,endpoint=us-east-2,default_acl=public-read,use_cache=/tmp/cach,retries=2,allow_other,mp_umask=000,nonempty, 0 0" > /etc/fstub
Limitations (written with a small font)
Generally, S3 cannot offer the same performance or semantics as a local file system. More specifically:
- random writes or appends to files require rewriting the entire object, optimized with multi-part upload copy
- metadata operations such as listing directories have poor performance due to network latency
- non-AWS providers may have eventual consistency, so reads can temporarily yield stale data (AWS offers read-after-write consistency since Dec 2020)
- no atomic renames of files or directories
- no coordination between multiple clients mounting the same bucket
- no hard links
- inotify detects only local modifications, not external ones by other clients or tools