Mount AWS S3 as a Windows Drive with rclone

What

Goal: Mount an AWS S3 bucket as a Windows drive letter (e.g., S:) using rclone.
Scope: One S3 remote via rclone config, one batch file to mount, optional auto‑start.
Prerequisites:
- AWS account + S3 bucket + IAM user with access to that bucket
- rclone for Windows
- WinFSP (required for rclone mount on Windows)

Why

- Faster access: Browse S3 in File Explorer without separate apps or consoles.
- Simplicity: Open, copy, rename like a normal drive letter.
- Integrations: Any Windows app that works with drive letters can work with S3.

How

Step 1 — Install rclone + WinFSP ...
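As a sketch of what the mount step boils down to, the command a batch file would run can be built like this. The remote name (s3remote), bucket name, and drive letter are hypothetical placeholders, and rclone plus WinFSP are assumed to be installed:

```python
def build_mount_cmd(remote: str, bucket: str, drive: str) -> list[str]:
    """Build the argument list for mounting an S3 bucket as a Windows drive.

    The remote name must match one created earlier via `rclone config`.
    """
    return [
        "rclone", "mount",
        f"{remote}:{bucket}",           # source: <remote>:<bucket>
        f"{drive}:",                    # target: Windows drive letter (needs WinFSP)
        "--vfs-cache-mode", "writes",   # cache writes locally so apps can save files normally
    ]

cmd = build_mount_cmd("s3remote", "my-bucket", "S")
print(" ".join(cmd))
# the batch file would contain this one line; from Python you could
# launch it with subprocess.Popen(cmd)
```

`--vfs-cache-mode writes` is generally needed so that ordinary Windows applications can open and save files on the mounted drive.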

October 15, 2025 · 3 min · 614 words · Me

FolderSync + AWS S3: Private Android Photo Backup

What

Goal: Back up Android photos/media privately to AWS S3 using FolderSync (Android).
Scope: One S3 bucket, one IAM user, FolderSync account (S3-compatible), one folder pair (2‑way sync).
Prerequisite: AWS account ready.

Why

- Own your data: No Big Tech gallery lock-in.
- Reliability: S3 durability + Intelligent‑Tiering for cost control.
- Simplicity: No self‑hosted NAS, static IP, RAID, or server maintenance.

How

Step 1 — Create S3 bucket (Region: us‑east‑1 recommended)
- Name: your choice (e.g., mobile-device-bkp).
- Region: us-east-1 (often lowest cost; change if you need locality).
- Block Public Access: ON (keep the bucket private).
- Versioning: optional but recommended for safety.
- Storage class: add a lifecycle rule to transition objects to Intelligent‑Tiering immediately.

Step 2 — Create IAM policy (access only this bucket) ...
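A single-bucket IAM policy like the one Step 2 describes can be sketched as JSON. The bucket name reuses the mobile-device-bkp example from Step 1; the exact action list is an assumption for what a sync client such as FolderSync needs (list, read, write, delete):

```python
import json

def single_bucket_policy(bucket: str) -> str:
    """Least-privilege IAM policy JSON granting access to one bucket only.

    The action list is an assumption for a two-way sync client;
    trim it further if you only need upload-and-read.
    """
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {   # bucket-level permissions: list contents, find the region
                "Effect": "Allow",
                "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
                "Resource": f"arn:aws:s3:::{bucket}",
            },
            {   # object-level permissions: read/write/delete inside the bucket
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
                "Resource": f"arn:aws:s3:::{bucket}/*",
            },
        ],
    }
    return json.dumps(policy, indent=2)

print(single_bucket_policy("mobile-device-bkp"))
```

Note the two separate statements: S3 bucket-level actions (ListBucket) attach to the bucket ARN, while object-level actions attach to the `/*` object ARN.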

October 12, 2025 · 2 min · 339 words · Me

Expense Tracker (Part 3/5): AWS Setup

Series links:
- Part 1/5 – Introduction
- Part 2/5 – Database Planning
- Part 3/5 – AWS Setup (you are here)
- Part 4/5 – Backend APIs
- Part 5/5 – Frontend

What

A light AWS setup to hold two things: your files (bills) and your data (transactions).

Why

- S3 is reliable and affordable for documents/photos.
- RDS PostgreSQL gives you a managed database without running servers.

How

S3 bucket for web hosting (public read)
- Create a bucket for the static site (e.g., my-expenses-web).
- Make it public-read so the HTML/CSS/JS can be fetched by browsers.
- Optionally use CloudFront in front; with OAC you can keep the bucket private and still serve publicly via the CDN.
- Alternative hosts: Netlify, Vercel, or any simple web server.

S3 bucket for documents (private) ...
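The public-read setup for the static-site bucket can be sketched as a bucket policy. The bucket name reuses the my-expenses-web example; this is a minimal sketch of the standard anonymous-GetObject policy, and it should only ever go on the site bucket, never the private documents bucket:

```python
import json

def public_read_policy(bucket: str) -> str:
    """Bucket policy allowing anonymous reads of objects in one bucket.

    Intended only for a static-site bucket; a private documents bucket
    should keep Block Public Access ON and no such policy.
    """
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",               # anyone, i.e. browsers
            "Action": "s3:GetObject",       # read objects only
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }
    return json.dumps(policy, indent=2)

print(public_read_policy("my-expenses-web"))
```

With the CloudFront + OAC alternative mentioned above, this policy is replaced by one that grants read access to the CloudFront distribution instead of `*`.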

June 6, 2025 · 2 min · 332 words · Me

Geo-Distributed EC2 Server Setup with Client Locking and Token Management

What

We are designing a system where clients are routed to regional servers to reduce latency, while maintaining session consistency during multi-step API calls.

Key components:
- Regional servers in different geographies
- Central discovery server tracking server heartbeat and load
- JWT tokens for authentication (primary token expiry: 1–5 minutes; refresh token expiry: 7–30 days)
- Client-server locking: ensures multi-step requests stay on the same server
- Load balancing and failover via Route 53 geo-proximity with health checks

Why

- Simple geo-proximity DNS (Route 53) is insufficient for multi-step API workflows.
- Multi-step POST requests can fail if a client jumps servers due to geo routing: even with active-active database replication, there is latency in the replication process.
- When clients hit different servers too frequently, there is a high chance that a server does not yet have the latest data, and a server working with stale data leads to request failures.
- We need to lock clients to a server during critical operations, but keep the flexibility to load balance or move clients across regions safely when no critical task is running.
- Health checks in Route 53 ensure traffic isn't routed to servers that are down.

How

1. Central Discovery Server
- Each regional server sends a heartbeat with its unique ID, region code, and public URL.
- Optionally collect telemetry/load data from each server (either directly from regional servers or via a central telemetry system).
- The discovery server maintains the active server list, their public URLs, and load information.

2. DNS Setup
- Each regional server gets its own URL.
- The central URL uses Route 53 geo-proximity with health checks to route clients to the nearest healthy server.

3. Client Login & Locking
- The client hits the central geo-proximity URL; the login request is routed to the nearest server.
- That server returns a JWT token and its own server URL → this marks the client lock.
- All further requests from this client use the locked server URL.

4. Server Discovery Sync
- Regional servers periodically pull the active server list (with public URLs) + load information from the discovery server (load data can originate from a central telemetry system or directly from servers).
- This enables load balancing within regions and global awareness.

5. Refresh Token API & Closest Server
- Before sending a refresh token request, the client calls /closestToMe on the central geo URL, which returns the closest server identifier.
- The refresh payload includes: closestServerId, criticalTaskInProgress (boolean).

6. Refresh Token Handling
- If criticalTaskInProgress = true: do not switch servers; refresh the token and maintain the lock with the current server.
- If criticalTaskInProgress = false, check the closest server and region:
  - Same region → pick the server with the lowest load and update the token with the new server URL.
  - Different region → switch the client to that server and update the token.
- This ensures safe cross-region movement while maintaining active tasks.

7. Load Balancing
- Regional servers use the closest server identifier + server load to redistribute clients.
- This maintains even load distribution while keeping active sessions safe.

8. Frontend Considerations
- Detect whether the primary server URL changed in the token response.
- Show a user-friendly message: "Your primary server has changed. Any missing data will be synced within 5 minutes."
- This ensures users are aware but do not panic over temporary replication delays.

9. Handling Server Failures
- If a server goes down, the client will receive a 500-series error.
- The client should wait 30 seconds with a "Reconnecting…" timer; during this window, the discovery server confirms the server stopped sending heartbeats and updates its registry of available servers and their public URLs.
- After the wait, the client retries the refresh token request, which now hits the closest healthy server; the refresh token response includes the new server URL.

Thoughts / Caveats

- The client lock is critical for multi-step operations.
- The discovery server is the single source of truth for server status, public URLs, and load (whether collected directly or via a central telemetry system).
- The token expiry strategy (short-lived JWT, long-lived refresh token) balances security vs. availability.
- Cross-region movement and load balancing happen only when safe (no critical tasks).
- Frontend intelligence improves the user experience during server switches.
- Route 53 health checks ensure no traffic is sent to unhealthy servers.
- Automatic refresh/reconnect handles server failures without breaking client workflows.
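The refresh-token decision in steps 5–6 can be sketched as a pure function. The field names (server_id, region, load) and the idea of "load" as a single comparable number are assumptions for illustration; the design only specifies "lowest load":

```python
from dataclasses import dataclass

@dataclass
class Server:
    server_id: str
    region: str
    url: str
    load: float  # assumed single load metric; lower is better

def pick_refresh_server(current: Server, closest: Server,
                        region_servers: list[Server],
                        critical_task_in_progress: bool) -> Server:
    """Decide which server the refreshed token should lock the client to.

    Mirrors step 6: never move a client mid-task; otherwise rebalance
    within the region or follow the client across regions.
    """
    if critical_task_in_progress:
        return current                      # keep the lock: never break a multi-step task
    if closest.region == current.region:
        # same region → pick the least-loaded server in that region
        return min(region_servers, key=lambda s: s.load)
    # different region → the client moved; switch to its closest server
    return closest

# usage sketch with hypothetical servers
eu1 = Server("eu-1", "eu", "https://eu1.example", load=0.8)
eu2 = Server("eu-2", "eu", "https://eu2.example", load=0.2)
us1 = Server("us-1", "us", "https://us1.example", load=0.5)
print(pick_refresh_server(eu1, us1, [eu1, eu2], critical_task_in_progress=False).server_id)
```

The key property is that `critical_task_in_progress` short-circuits everything else, which is exactly what keeps multi-step POST workflows on one server despite geo routing.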

March 14, 2025 · 4 min · 668 words · Me

Active-Active PostgreSQL with AWS DMS: Full Load + CDC

What

Active-Active means two or more PostgreSQL databases can accept writes and stay in sync in near real-time. Unlike standard streaming replication (primary → replicas), this setup allows bi-directional writes.

We'll use AWS DMS to achieve this:
- Full Load: copy existing schema + data
- Change Data Capture (CDC): replicate ongoing changes from WAL logs

Databases can be RDS, EC2 PostgreSQL, or on-premises. Latency is usually seconds.

Why

- Multi-region, hybrid infrastructure, disaster recovery
- True bi-directional sync is complex; DMS simplifies it
- Avoid downtime and manual syncing
- Most failures happen during setup, not concept

How

1. Decide DMS Deployment
- Provisioned: recommended for CDC. Runs 24/7, predictable performance.
- Serverless: flexible scaling, but dynamic cost and unsuitable for constant CDC.
- Rule: Always use Provisioned for bi-directional replication. ...
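A DMS replication task needs a table-mappings document telling it which tables to replicate. As a minimal sketch (the schema name public is an assumption), a mapping that includes every table in one schema looks like this:

```python
import json

def table_mappings(schema: str = "public") -> str:
    """Minimal DMS table-mapping document: include all tables in one schema."""
    mapping = {
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {
                "schema-name": schema,   # which PostgreSQL schema to replicate
                "table-name": "%",       # wildcard: every table in that schema
            },
            "rule-action": "include",
        }]
    }
    return json.dumps(mapping, indent=2)

print(table_mappings())
```

For the Full Load + CDC combination described above, the task itself would be created with migration type `full-load-and-cdc`; the reverse-direction task (for the bi-directional half) gets its own mapping and task.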

February 27, 2025 · 3 min · 576 words · Me