Posts

Azure VM Application Consistent MySQL DB Disk Snapshots

Database backups are a pillar of our systems: they are essential for getting our data back after a crash, when provisioning a new machine, and in many other scenarios listed here. As part of the backup process a snapshot is taken, and the data is transferred to the Recovery Services vault with no impact on production workloads. The snapshot provides different levels of consistency, as described below:
1. Application-consistent: App-consistent backups capture memory content and pending I/O operations. App-consistent snapshots use a VSS writer (or pre/post scripts for Linux) to ensure the consistency of the app data before a backup occurs. When you're recovering a VM with an app-consistent snapshot, the VM boots up with no data corruption or loss, and the apps start in a consistent state.
2. File-system consistent: File-system consistent backups provide consistency by taking a snapshot of all files at the same time. When you're recovering a VM with a file-system cons
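For a MySQL VM on Linux, app consistency comes from those pre/post scripts quiescing the database around the snapshot. Below is a minimal sketch of that idea, not the exact scripts from this setup; the script names, the mountpoint, and the use of fsfreeze are assumptions, and MySQL credentials are assumed to come from ~/.my.cnf.

#!/bin/bash
# pre_mysql_snapshot.sh -- illustrative pre-script (assumed names/paths).
# Flush MySQL to disk, then freeze the data filesystem so the disk
# snapshot captures a consistent on-disk state.
set -euo pipefail
DATA_FS=/var/lib/mysql            # assumption: MySQL datadir mountpoint
mysql -e "FLUSH LOGS;"            # rotate/flush binary logs before the freeze
fsfreeze --freeze "$DATA_FS"      # block writes until the post-script thaws it

#!/bin/bash
# post_mysql_snapshot.sh -- illustrative post-script: thaw I/O once the
# snapshot has been taken so MySQL can resume writing.
set -euo pipefail
fsfreeze --unfreeze /var/lib/mysql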

Orchestrator RAFT Leader Check with Proxy pass with Basic Auth Using Nginx

Recently we set up Orchestrator in high availability mode using RAFT. We are running a 3-node setup in which one node is the leader and the other 2 are healthy raft members. To access the orchestrator service we should only talk to the leader node, so we use /api/leader-check as the HTTP health check for our proxy. This URL returns HTTP 200 on the leader and 404 on the other members. Using the code below in nginx we have set up the HTTP health check with basic auth. Prerequisite: Lua support must be enabled in nginx. The code below defines the upstreams with a health check:
upstream orchestrator {
  server 10.xx.xx.35:3000 max_fails=2;
  server 10.xx.xx.37:3000 max_fails=2;
  server 10.xx.xx.40:3000 max_fails=2;
}
lua_shared_dict myhealthcheck 1m;
lua_socket_log_errors off;
include /etc/nginx/lua/active_health_checks.lua;
Lua script for the health check: Before creating the script we will need a hash with base64 encoding; below is the command to create
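To see what the Lua check will observe, the leader-check endpoint can first be probed by hand. A quick sketch (the basic-auth credentials here are placeholders):

# Returns HTTP 200 only on the current raft leader, 404 on the other members.
for host in 10.xx.xx.35 10.xx.xx.37 10.xx.xx.40; do
  code=$(curl -s -o /dev/null -w '%{http_code}' -u orch_user:orch_pass "http://$host:3000/api/leader-check")
  echo "$host -> HTTP $code"
done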

MySQL BLACKHOLE Engine as Replication Filter

Today I am going to describe a very interesting use case where we used the BLACKHOLE engine as a replication filter. We have an Aurora cluster (let's call it C1) where multiple DBs are hosted and multiple applications write data into it. In another project, one application wanted to read data from one of the DBs hosted on Aurora cluster C1, and this new project is hosted in a different account. The challenge: we did not want a self-hosted DB, which would support replication filters (replicate-do-db) and let us bring over only that one DB; per company standards we wanted to use Aurora only in the new project. The problem is that Aurora doesn't support native replication filters, so we were not able to set up replication directly. To solve this problem we tried multiple approaches: Approach 1: Introduce an intermediate slave with replication filters. But with this approach we were introducing more infra and node management, and the intermediate DB would be self-hosted.
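For context, the generic BLACKHOLE "filter" pattern works like this: an intermediate instance replicates everything from the source, stores nothing locally, and re-logs only the wanted schema for a downstream replica. A rough sketch follows; the config path and schema/table names are made up, and this is the textbook pattern rather than necessarily the exact final setup the post arrives at.

# Intermediate instance config: apply only the wanted schema and re-log it
# for the downstream replica (illustrative file path and schema name).
cat >> /etc/mysql/conf.d/replication_filter.cnf <<'EOF'
[mysqld]
log_slave_updates      = 1
replicate-do-db        = app_db
default-storage-engine = BLACKHOLE
EOF

# Convert the replicated tables to BLACKHOLE so rows are written to the
# binlog but never stored locally (repeat per table). Note: with row-based
# replication, UPDATE/DELETE handling on BLACKHOLE tables has caveats.
mysql -e "ALTER TABLE app_db.orders ENGINE=BLACKHOLE;"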

MongoDB InPlace Version Upgrade 3.4x to 3.6x

Today I am putting down simple and straightforward steps to in-place upgrade / downgrade MongoDB, for a single node as well as a replica set cluster. Note: the steps are for CentOS; kindly modify them for other operating systems. Warning: IMPORTANT: always back up all of your data before upgrading MongoDB and test the steps in a non-prod environment first.
1. Check the compatibility version: db.system.version.find();
2. Set the compatibility version: db.adminCommand( { setFeatureCompatibilityVersion: "3.4" } )
3. Change dir: cd /tmp/
4. Download pkgs ( link ): wget https://repo.mongodb.org/yum/redhat/7/mongodb-org/3.6/x86_64/RPMS/mongodb-org-shell-3.6.14-1.el7.x86_64.rpm && wget https://repo.mongodb.org/yum/redhat/7/mongodb-org/3.6/x86_64/RPMS/mongodb-org-server-3.6.14-1.el7.x86_64.rpm
5. cd home: cd
6. Stop mongo: systemctl stop mongod
7. Back up the conf: cp /etc/mongod.conf /tmp/
8. Remove the current binary: yum remove mongod*
9. cd to the download dir: cd /tmp/
10. Installing n
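As a single script, the steps above roughly consolidate into the sketch below. The excerpt is cut off at step 10, so the install/restart tail (yum localinstall, restoring the config, starting mongod) is an assumed continuation, not quoted from the post.

#!/bin/bash
# Consolidated sketch of the in-place 3.4 -> 3.6 upgrade steps listed above.
set -euo pipefail

# 1-2. check and pin the feature compatibility version before swapping binaries
mongo admin --eval 'db.system.version.find().forEach(printjson)'
mongo admin --eval 'db.adminCommand({ setFeatureCompatibilityVersion: "3.4" })'

# 3-4. download the 3.6 packages
cd /tmp/
wget https://repo.mongodb.org/yum/redhat/7/mongodb-org/3.6/x86_64/RPMS/mongodb-org-shell-3.6.14-1.el7.x86_64.rpm
wget https://repo.mongodb.org/yum/redhat/7/mongodb-org/3.6/x86_64/RPMS/mongodb-org-server-3.6.14-1.el7.x86_64.rpm

# 6-8. stop mongod, back up the config, remove the old binaries
systemctl stop mongod
cp /etc/mongod.conf /tmp/
yum remove -y mongod*

# 9-10. install the downloaded 3.6 packages (assumed continuation of step 10)
cd /tmp/
yum localinstall -y mongodb-org-server-3.6.14-1.el7.x86_64.rpm mongodb-org-shell-3.6.14-1.el7.x86_64.rpm

# restore the config and start mongod again (assumed)
cp /tmp/mongod.conf /etc/mongod.conf
systemctl start mongod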

Redash | A Powerful OpenSource Query Tool | UP and Running in 10 Mins

As an improvement plan for DB access you can introduce Redash in your company or to your clients. It is an open source tool and can be hosted internally on a private subnet. Redash helps you make sense of your data: you can connect and query your data sources, build dashboards to visualize the data, and share them with your colleagues. Some of Redash's features are:
- Write queries in their natural syntax and explore schemas
- Live auto-complete and keyboard shortcuts
- Create snippets for elements you frequently use
- Use query results as data sources to join different databases
Redash supports multiple integrations and will help us minimize our ops work while smoothing the overall user experience with the DB. Let's start deploying it. You can deploy Redash using any of the below methods: AWS EC2 AMI, Docker. For an AWS environment it provides baked AMIs; you just have to launch instances from the AMIs and Redash will be up and ready within 10 minutes. There are several other methods
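For the Docker route, Redash publishes an official setup project that bootstraps Docker and the Redash containers on a fresh host. A rough sketch, assuming the getredash/setup repository as the entry point (the post's exact commands are not shown in this excerpt):

# Clone the official setup project and run its bootstrap script on a fresh VM.
git clone https://github.com/getredash/setup.git redash-setup
cd redash-setup
sudo ./setup.sh   # installs Docker/docker-compose and brings up the Redash containers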

How to remove/deregister an instance from PMM?

We all need a monitoring system like PMM to monitor our databases. Let me give a brief overview of how PMM works: PMM uses Prometheus to store metrics/graphs and Grafana to display them. As seen in the architecture diagram, Prometheus uses Consul to get the list of hosts to scrape metrics from. More information on the architecture and installation is available in the docs. We were facing issues related to down hosts: if you set up alerts in Grafana for a down host and the host is terminated, it continuously throws alerts. So the problem was how to remove/deregister instances from PMM. We can access the Consul UI using the link below: http://PMM-SERVER/consul/#/dc1/ Previously there was an option on this UI to deregister a host, but in newer releases of Consul it has been removed (see link). We can now use the procedure below from the command line to deregister hosts from Consul. Get the list of nodes: curl -s 'http://USER:PASSWORD@PMM-SERVER-IP/v1/internal/ui/nodes?dc=dc1' | python -mjso
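Once the node name is known from that list, the same Consul HTTP API (proxied through the PMM server with basic auth) can deregister it. A sketch, with the node name as a placeholder:

# Deregister a terminated host from Consul's catalog so Prometheus stops
# scraping it (replace db-host-01 with the Node name from the list above).
curl -s -X PUT \
  -d '{"Datacenter": "dc1", "Node": "db-host-01"}' \
  'http://USER:PASSWORD@PMM-SERVER-IP/v1/catalog/deregister'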

Shell/Bash Commands Execution from MySQL Client or Stored Procedure/function

Today I am going to explain ways to execute shell or bash commands from the MySQL client or from a stored procedure/function. There are basically 2 methods to do so:
Method 1: Use the MySQL client's built-in feature.
To run a single command: \! command or system command, e.g. \! uptime or system uptime
To get a terminal: \! bash or \! sh
Method 2: Deploy an external plugin ( lib_mysqludf_sys )
Step 1: Download lib_mysqludf_sys from GitHub: git clone https://github.com/mysqludf/lib_mysqludf_sys.git
Step 2: Install libmysqlclient15-dev; for Ubuntu you can use: apt-get install libmysqlclient15-dev
Step 3: Note down the output of: mysql -uroot -pxxxx -e "select @@plugin_dir;"
Step 4: Change directory to the git clone dir: cd lib_mysqludf_sys/
Step 5: Compile and put the plugin in the plugin dir: gcc -Wall -I/usr/include/mysql -I. -shared lib_mysqludf_sys.c -o {value from step 3}lib_mysqludf_sys.so -fPIC
Step 6: Create the necessary functions for lib_mysqludf_sys: mysql -uroo
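The excerpt cuts off at step 6; for reference, the functions that ship with lib_mysqludf_sys are registered with CREATE FUNCTION ... SONAME statements. A sketch of that step plus a usage example (credentials are placeholders):

# Step 6 (sketch): register the UDFs shipped with lib_mysqludf_sys
mysql -uroot -pxxxx <<'SQL'
CREATE FUNCTION sys_exec RETURNS INT    SONAME 'lib_mysqludf_sys.so';
CREATE FUNCTION sys_eval RETURNS STRING SONAME 'lib_mysqludf_sys.so';
SQL

# Usage: run a shell command from SQL and capture its output
mysql -uroot -pxxxx -e "SELECT sys_eval('uptime');"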