Posts

Showing posts from 2020

Azure VM Application Consistent MySQL DB Disk Snapshots

Backup of the database is a pillar of our system; it is necessary and mandatory to give us back our data in case of a crash, new machine provisioning, and many other scenarios listed here. As part of the backup process, a snapshot is taken, and the data is transferred to the Recovery Services vault with no impact on production workloads. The snapshot provides different levels of consistency, as described below:

1. Application-consistent: App-consistent backups capture memory content and pending I/O operations. App-consistent snapshots use a VSS writer (or pre/post scripts for Linux) to ensure the consistency of the app data before a backup occurs. When you're recovering a VM with an app-consistent snapshot, the VM boots up. There's no data corruption or loss. The apps start in a consistent state.

2. File-system consistent: File-system consistent backups provide consistency by taking a snapshot of all files at the same time. When you're recovering a VM with a file-system consistent snapshot …
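To make the app-consistent case concrete for MySQL on Linux, here is a minimal Python sketch of the freeze/thaw work that a pre/post script pair would perform around the snapshot. It is an illustration only: the PyMySQL package, host and credentials are placeholders, and in Azure Backup the lock and unlock steps live in separate pre and post scripts rather than in one process.

# Sketch of the pre/post freeze logic for an application-consistent
# MySQL snapshot (placeholder credentials; not the post's actual scripts).
import pymysql

conn = pymysql.connect(host="127.0.0.1", user="backup", password="secret")
cur = conn.cursor()

# Pre phase: quiesce writes and flush data and binlogs to disk so the
# snapshot captures a consistent on-disk state.
cur.execute("FLUSH TABLES WITH READ LOCK")
cur.execute("FLUSH LOGS")

# Record binlog coordinates; handy if the snapshot is later used to seed a replica.
cur.execute("SHOW MASTER STATUS")
print("binlog position at snapshot time:", cur.fetchone())

# --- the Azure Backup extension would take the disk snapshot at this point ---
input("Snapshot in progress, press Enter once it completes...")

# Post phase: release the global read lock so application writes resume.
cur.execute("UNLOCK TABLES")
conn.close()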

Orchestrator RAFT Leader Check with Proxy pass with Basic Auth Using Nginx

Recently we set up Orchestrator in high-availability mode using RAFT. We are running a 3-node setup in which one node is the leader and the other two are healthy raft members. To access the Orchestrator service we should only talk to the leader node, so we use /api/leader-check as the HTTP health check for our proxy. This URL returns HTTP 200 on the leader and 404 on the other members. Using the code below in open-source nginx, we have set up an HTTP health check with basic auth.

Prerequisite: Lua support should be enabled in nginx.

The code below defines the upstreams with the health check:

upstream orchestrator {
    server 10.xx.xx.35:3000 max_fails=2;
    server 10.xx.xx.37:3000 max_fails=2;
    server 10.xx.xx.40:3000 max_fails=2;
}

lua_shared_dict myhealthcheck 1m;
lua_socket_log_errors off;
include /etc/nginx/lua/active_health_checks.lua;

Lua script for the health check: before creating the script we will need a basic-auth hash with base64 encoding; below is the command to create it …
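The Lua script itself is cut off in this excerpt, but the probe logic it has to implement is straightforward. Here is a hedged Python sketch of the same semantics (node IPs, port and credentials are placeholders): send the base64-encoded user:password pair as a Basic Authorization header to /api/leader-check on every node and treat HTTP 200 as "this node is the leader".

# Sketch of the leader-check probe that the Lua health check performs
# (placeholder IPs and credentials; not the post's actual script).
import base64
import urllib.request
import urllib.error

NODES = ["10.0.0.35", "10.0.0.37", "10.0.0.40"]                # orchestrator raft nodes
AUTH = base64.b64encode(b"orch_user:orch_password").decode()   # basic-auth hash

def is_leader(node: str) -> bool:
    req = urllib.request.Request(
        f"http://{node}:3000/api/leader-check",
        headers={"Authorization": f"Basic {AUTH}"},
    )
    try:
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.status == 200   # 200 -> this node is the raft leader
    except urllib.error.HTTPError:
        return False                    # 404 -> healthy raft member, not the leader
    except OSError:
        return False                    # node unreachable

for node in NODES:
    print(node, "leader" if is_leader(node) else "not leader")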

MySQL BLACKHOLE Engine as Replication Filter

Today, I am going to describe a very interesting use case where we used the BLACKHOLE engine as a replication filter. We have an Aurora cluster (let's call it C1) where multiple databases are hosted and multiple applications write data into it. In another project, one application wanted to read data from one of the databases hosted on Aurora cluster C1, and this new project is hosted in another account. The challenge: we did not want a self-hosted DB, which would support replication filters (replicate-do-db) and let us bring over only the one database; per company standards we wanted to use only Aurora in the new project. But Aurora doesn't support native replication filters, so we were not able to set up filtered replication. To solve this problem we tried multiple approaches.

Approach 1: Introduce an intermediate slave with replication filters. But with this approach we were introducing more infra and node management, and that too a self-hosted DB.
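The excerpt ends before the final design, so the following is only a sketch of the general BLACKHOLE-as-filter pattern that the title refers to, not necessarily the post's exact setup. The idea: a lightweight intermediate replica pulls from C1 with replicate-do-db limited to the one schema and log_slave_updates enabled, and its tables use the BLACKHOLE engine, so it relays the filtered changes through its own binary log without storing any data. The Python sketch below (placeholder host, credentials and schema name, assuming the PyMySQL package) converts that schema's tables to BLACKHOLE on such an intermediate node.

# Hedged sketch of preparing an intermediate "blackhole filter" replica.
# Placeholder host/credentials/schema; not the post's actual setup.
import pymysql

SCHEMA = "app_db"   # hypothetical name of the one database we want to pass through

conn = pymysql.connect(host="intermediate-replica", user="admin", password="secret")
cur = conn.cursor()

# Find every base table in the schema (the schema is usually seeded from a
# structure-only dump of the source before replication is started).
cur.execute(
    "SELECT table_name FROM information_schema.tables "
    "WHERE table_schema = %s AND table_type = 'BASE TABLE'",
    (SCHEMA,),
)
for (table,) in cur.fetchall():
    # BLACKHOLE discards the row data, but the applied change events still
    # reach this server's binary log (with log_slave_updates enabled), so a
    # downstream replica receives only the filtered schema. Caveat: with
    # row-based binlog format, UPDATE/DELETE events against BLACKHOLE tables
    # are skipped, so this pattern suits statement-based replication.
    cur.execute(f"ALTER TABLE `{SCHEMA}`.`{table}` ENGINE=BLACKHOLE")

conn.close()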