Thoughts on Workflow

Small team of developers, currently using SpinUpWP + Vultr, $40 USD a month.

I have an older M1 MacBook Air, 16GB of RAM. Way more power than the current VPS.

We’re in Australia. Power & internet are quite reliable. The modem and router are on a UPS, tested working during a recent storm :grinning: The MB Air obviously doesn’t care about power.

We get up to 40 dev sites, though of course they’re only minimally accessed. In fact, an issue with SUP is that it runs a cron every minute which can’t be changed, so the CPU gets smashed with 40-50 requests every minute depending on how many sites are stalled waiting to go live.

Looking at Activity Monitor, each site gets its own SQL instance. Not ideal, but at ~100MB of RAM per instance that’s 4GB at idle. PHP & NGINX use bugger all, so that seems fine.

Local makes the NGINX config easy to edit, which is great. I’ve tested htauth, which will reduce bots immensely.

Cloudflare tunnel for public access. It looks like I need local routing mode enabled to do this, which means I need to set up a tunnel each time, given the port keeps changing.
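
For what it’s worth, the repeat tunnel setup can at least be scripted. This is only a sketch, not how Local or cloudflared must be wired: it assumes a named tunnel `dev-sites` was already created with `cloudflared tunnel create dev-sites` and routed in DNS, and the hostname and default port are placeholders.

```shell
# Hypothetical sketch: regenerate the tunnel ingress whenever Local hands out
# a new port, then restart the connector. All names/ports are placeholders.
LOCAL_PORT="${1:-10004}"            # whatever port Local's router is on today

mkdir -p "$HOME/.cloudflared"
cat > "$HOME/.cloudflared/config.yml" <<EOF
tunnel: dev-sites
credentials-file: $HOME/.cloudflared/dev-sites.json
ingress:
  - hostname: "*.dev.example.com"
    service: http://localhost:${LOCAL_PORT}
  - service: http_status:404
EOF

# cloudflared tunnel run dev-sites   # then start (or restart) the connector
```

Run it with the current port as the first argument whenever Local changes it; the ingress catch-all returns 404 for anything that isn’t a known hostname.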

Cloudflared is over SSL so all good there: https://community.cloudflare.com/t/how-does-ssl-work-when-using-cloudflare-tunnel/393152/3
And I’ve already setup tunnels so that side is tested and a-ok.

Blueprint can be used to setup htauth, wp plugins.
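
As a concrete example of the htauth piece: the htpasswd entry that NGINX’s `auth_basic` expects can be generated without Apache’s tools. The username, password, and file path below are placeholders only.

```shell
# Hypothetical example: generate an htpasswd-style line for NGINX basic auth.
# openssl's -apr1 scheme matches what Apache's htpasswd tool produces.
USER="dev"
PASS="example-password"              # placeholder only
HASH="$(openssl passwd -apr1 "$PASS")"
printf '%s:%s\n' "$USER" "$HASH" > /tmp/htpasswd-demo
cat /tmp/htpasswd-demo
```

Point `auth_basic_user_file` at a file like that in the site’s NGINX config; a Blueprint could ship it pre-created.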

I’ll install Synology Drive to copy /Local Sites/ regularly over to a local NAS. Unsure of the interval at which the SQL dumps are made. Snapshots would be enabled to ensure we have 14 days’ worth of “versions”. Cloud Sync would push them to B2 for an offsite copy.

WP File Manager and such all look isolated which is good.

SMB can be used to share /Local Sites/ with the developers; it works on both Windows & Mac.

So the only “issues” I can see:

  • I need to create the sites. Unsure yet how to 100% reliably screen share remotely; testing so far has only been in person
  • I need to create a CF tunnel each time. Annoying, but given the rest of the setup is 1-click, it’s not a big deal
  • I’ll just add a local mail capture plugin to Blueprint so developers & clients can see them without access
  • They can slap adminer.php into files if they need DB access for something

This seems pretty much as secure as it can be. The laptop is only worth ~$300, so it’ll be financially ahead within a year, with way more CPU & RAM available than required.

The Mac itself would be reset from scratch, with iCloud and such disabled. The only applications running would be Local, Synology Drive, and remote screen sharing. Maybe a VPN if that route is taken. I’ll likely set 1.1.1.1 for DNS to bypass the local pihole.

In theory, it should have a number of years of OS updates left. Come the time, we get a new Mac Mini, use Time Machine to clone it over verbatim, and off we go (seems too easy!)

Now what is REALLY COOL is that if Fast User Switching is enabled (the only part I haven’t tested), we actually get isolated instances! So we can have one user account with our own WP sites and other tools, and another for the developers and the dev sites. We obviously trust them, but like anything, keep the surface as minimal as possible just in case.

Is there anything I’ve overlooked? Thanks in advance!


Food for thought:

  • Most Australian ISPs have a ‘no server’ clause for residential connections/accounts. The solution is a business connection/account, which is slightly more expensive but allegedly comes with better up-time and support promises.

  • Is the public IP allocation for your internet connection dynamic (stable for the ISP session), static (permanently assigned to you) or nomadic (can change multiple times during a single ISP session)? A better question is “how often does the public IP for your connection change?”. Some ISPs let you pay an extra few dollars for a static IP, which would make it easier for Cloudflare to find you, because every time the IP changes, Cloudflare will lose all the connections and not know where to find any of the 40+ sites (irrespective of the port in use). You might need to look into Dynamic DNS configurations and services, which is another learning curve and probably off topic for this forum.

  • Firewalls: Using a Cloudflare tunnel to a computer is part of the solution for routing, but it is not protection from having a computer connected to the other end of the public internet (as far as I know there is some DDoS protection, but probably not all the other protections the Vultr hosting network offers). At a minimum, it is recommended to run your own firewall computer as the only point of contact from outside your network. Then have the developer computer located in the firewall’s Orange zone: it is protected from the Red zone (public internet) and can be set up to do port-forwarding aligned with the Cloudflare configuration, but cannot access any computers in your Green zone (the rest of your private network). Which is another learning curve and probably off topic for this forum.

  • The onsite NAS would be vulnerable to fire/water/shock damage. Occasionally do a backup of the NAS and store it offsite.

  • The screen-sharing tests would benefit from understanding the memory and CPU demands on the MacBook itself. How many concurrent sessions can run before the MacBook starts to stall and become less responsive?

  • It might be easier to break the journey up into milestones. Set up the next project on a different host (like Cloudways) where you can relieve the immediate pain points (control over cron scheduling, etc) while continuing to learn and set up an ‘office firewall and network’.
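
On the dynamic-IP bullet above: a DDNS update can be a small script against Cloudflare’s DNS records API. This is only a sketch; the token, zone/record IDs, hostname, and IP are all placeholders, and the actual API call is shown commented so the sketch stays self-contained.

```shell
# Hypothetical DDNS sketch: push the current public IP to a Cloudflare A record.
# CF_TOKEN, ZONE_ID and RECORD_ID are placeholders you would fill in.
IP="203.0.113.7"   # in practice: IP="$(curl -fsS https://api.ipify.org)"
PAYLOAD="{\"type\":\"A\",\"name\":\"home.example.com\",\"content\":\"${IP}\",\"ttl\":120}"
printf '%s\n' "$PAYLOAD" | tee /tmp/ddns-payload.json
# curl -fsS -X PUT \
#   "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/${RECORD_ID}" \
#   -H "Authorization: Bearer ${CF_TOKEN}" \
#   -H "Content-Type: application/json" \
#   --data "$PAYLOAD"
```

Worth noting that a cloudflared tunnel itself dials out from the machine, so the tunnel reconnects on its own after an IP change; DDNS mainly matters if anything is exposed directly.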

A working Cloudways example: a single-core 2GB $30 USD p/m server with a staggered cron schedule (see below) across 12 low-traffic WordPress websites sits at ~90-95% idle CPU and rarely drops below 70%. It leverages Redis + Object Cache Pro (free with 2GB systems) for persistent database read caching and a faster user experience, while also carrying regular external backups for each website (InfiniteWP) offloaded to Amazon S3.

                    |<------->|<--------->|<--------->|
www.website01.com    1,6,11,16,__,__,31,__,__,46,__,__
www.website02.com    _,6,__,__,21,26,31,36,__,__,51,__
www.website03.com    _,_,11,__,__,26,__,__,41,46,51,56
www.website04.com    2,7,12,17,__,__,32,__,__,47,__,__
www.website05.com    _,7,__,__,22,27,32,37,__,__,52,__
www.website06.com    _,_,12,__,__,27,__,__,42,47,52,57
www.website07.com    3,8,13,18,__,__,33,__,__,48,__,__
www.website08.com    _,8,__,__,23,28,33,38,__,__,53,__
www.website09.com    _,_,13,__,__,28,__,__,43,48,53,58
www.website10.com    4,9,14,19,__,__,34,__,__,49,__,__
www.website11.com    _,9,__,__,24,29,34,39,__,__,54,__
www.website12.com    _,_,14,__,__,29,__,__,44,49,54,59
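
A stagger like the table above is easier to generate than to maintain by hand. A small illustrative sketch; the site names, paths, and the wp-cli command are assumptions, not the actual Cloudways setup:

```shell
# Hypothetical generator: give each site a 15-minute wp-cron cadence, offset so
# the sites don't all fire on the same minute. Emits one crontab line per site.
i=0
for site in website01 website02 website03 website04; do
  offset=$(( i % 15 ))     # starting minute inside each 15-minute block
  printf '%d-59/15 * * * * wp cron event run --due-now --path=/var/www/%s\n' \
    "$offset" "$site"
  i=$(( i + 1 ))
done > /tmp/stagger-demo.cron
cat /tmp/stagger-demo.cron
```

The `0-59/15` range/step form is standard crontab syntax, so each site fires four times an hour on its own minute offset.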

Plus another 21 small PHP application websites (custom, not WordPress) all on the same server, leveraging the benefit of that server’s network and application layer protections.

Sounds like you are enjoying the architecture journey; it can be fun. :vulcan_salute:


Hey John,

Thanks for your reply!

  1. No issue, I’ve already spoken with the ISP. That clause seems to be legacy pre-NBN, or from larger legacy providers who want enterprise $

  2. Static IP, no CGNAT.

  3. Multiple routers / networks is something on the todo list to look into further. While the tunnel avoids opening ports, the main issue is network lookups and other things that could possibly be leveraged, so isolating it somehow will add another layer of security

  4. Odds are anything that happens to the computer would happen to the NAS too, hence offsite backups. The good thing is, being dev only, just a few sites are “active”, so it’s pretty easy to buy a new computer or even grab a VPS, slap the main ones back up within 24 hours, and restore the others. Just manual & tedious to transfer so many WP sites.

  5. Unless there’s something niche to Mac or Local, it’s currently hosted on 2 CPU / 4 GB of RAM and mostly idle (nightly external backups are the most intensive operation), so 8 CPU cores & 16GB of RAM will be heaps

  6. Changing the SUP cron involves SSH’ing into every site and editing the crontab timing. Just annoying paying for a “premium service” which still involves manual steps. I keep a note like yours to track the schedules so I don’t put too many at the same time.

Being development sites, there isn’t a need for any caching; it just gets in the way. The sites are still snappy, i.e. a WP site with a custom theme, or maybe Astra/Divi etc, and a handful of plugins. We don’t really work on Elementor-with-50-plugins type sites, and when we do, they are established sites by someone else, so we use their host’s staging environments.

All sites are behind basic auth, which is disabled only for specific API tests (Stripe callbacks etc). That surely adds a lot of protection, outside of DDoS.
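
The “basic auth everywhere except webhooks” pattern can be done per-location rather than toggling the whole site. A minimal NGINX sketch, assuming hypothetical paths; the webhook route and htpasswd location are placeholders:

```nginx
# Hypothetical: protect the whole site, but leave the webhook path reachable.
server {
    auth_basic           "Dev site";
    auth_basic_user_file /etc/nginx/htpasswd;

    location /wp-json/my-plugin/stripe-webhook {
        auth_basic off;   # Stripe can reach this path without credentials
        # ...normal PHP/WordPress handling continues as in the rest of the site...
    }
}
```

That way auth never needs to be switched off site-wide just to run a callback test.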

No sites send email, which is one way the IP could be leaked.

Cloudflare can be enabled to block traffic from outside AU. Granted, someone might get the IP address of the server somehow and DDoS the router. I don’t know what protection it has, probably not a whole lot? They are Eero 6+. Anyway, no open ports.

But you know, at the end of the day it’s devs & clients; it’s mostly keeping bots out. I’d be very surprised if anyone actually had a crack at malicious activity.

I’ve set up Ubuntu servers from scratch, but always on a VPS, where you just hammer nightly backups etc, who cares about bandwidth :grinning: So having a local NAS and hosting is new territory.


Hi @infinity,

Indeed. You definitely have this well covered, including a journey of improvements and new learning fun ahead. Really appreciate you taking the time to close the loop and share the solution.

In that spirit, continuing to harden each WordPress site, as you already do, is probably the last important piece: closing down features not in use. I recommend Unbloater – WordPress plugin | WordPress.org, as it is quite useful, quick to configure, and removes a few unwanted CPU cycles while tidying up the Administrator experience.

Enjoy the adventure. :vulcan_salute:
