Posts

Showing posts from 2019

MongoDB In-Place Version Upgrade 3.4.x to 3.6.x

Today I am putting down simple and straightforward steps to in-place upgrade/downgrade MongoDB, for a single node as well as a replica set cluster. Note: the steps are for CentOS; kindly adapt them for other operating systems. Warning: IMPORTANT. Always back up all of your data before upgrading MongoDB, and test the steps in a non-production environment first.
1. Check the compatibility version: db.system.version.find();
2. Set the compatibility version: db.adminCommand( { setFeatureCompatibilityVersion: "3.4" } )
3. Change directory: cd /tmp/
4. Download the packages (link): wget https://repo.mongodb.org/yum/redhat/7/mongodb-org/3.6/x86_64/RPMS/mongodb-org-shell-3.6.14-1.el7.x86_64.rpm && wget https://repo.mongodb.org/yum/redhat/7/mongodb-org/3.6/x86_64/RPMS/mongodb-org-server-3.6.14-1.el7.x86_64.rpm
5. Go to the home directory: cd
6. Stop mongo: systemctl stop mongod
7. Back up the config: cp /etc/mongod.conf /tmp/
8. Remove the current binaries: yum remove mongod*
9. Change to the download directory: cd /tmp/
10. Installing n…

Redash | A Powerful OpenSource Query Tool | UP and Running in 10 Mins

As an improvement to database access, you can introduce Redash in your company or to your clients. It is an open source tool and can be hosted internally on a private subnet. Redash helps you make sense of your data: you can connect to and query your data sources, build dashboards to visualize the data, and share them with your colleagues. Some of Redash's features are:
- Write queries in their natural syntax and explore schemas
- Live auto-complete and keyboard shortcuts
- Snippets for elements you use frequently
- Query results usable as data sources, to join different databases
Redash supports multiple integrations and will help minimize our ops work while smoothing the overall user experience with the database. Let's start deploying it. You can deploy Redash using either of the methods below:
- AWS EC2 AMI
- Docker
For an AWS environment it provides pre-baked AMIs; you just have to launch instances from them and Redash will be up and ready within 10 minutes. There are several other methods…
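For the Docker route, a commonly used path is Redash's own setup repository, which provisions Docker and a docker-compose stack for you. The repository URL and script name below are my understanding of that flow; check Redash's current documentation before relying on them.

```shell
# Docker-based Redash setup on a fresh Ubuntu host (sketch, not official docs):
# the getredash/setup repo's script installs Docker and brings up the
# Redash containers (server, worker, Redis, PostgreSQL) via docker-compose.
git clone https://github.com/getredash/setup.git redash-setup
cd redash-setup
sudo ./setup.sh
# Afterwards Redash should answer on port 80/5000 of the host.
```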

How to remove/deregister an instance from PMM?

We all need a monitoring system like PMM to monitor our databases. Let me give a brief overview of how PMM works: PMM uses Prometheus to store metrics/graphs and Grafana to display them. As seen in the architecture diagram, Prometheus uses Consul to get the list of hosts to scrape metrics from. More information on the architecture and installation is available in the docs. We were facing issues related to down hosts: if you set up alerts in Grafana for a down host and that host is terminated, it continuously throws alerts. So the problem was how to remove/deregister instances from PMM. We can access the Consul UI using the link below: http://PMM-SERVER/consul/#/dc1/ Previously there was an option available on this UI to deregister a host, but it has been removed in newer releases of Consul (see link). Instead, we can use the procedure below to deregister hosts from Consul on the command line. Get the list of nodes: curl -s 'http://USER:PASSWORD@PMM-SERVER-IP/v1/internal/ui/nodes?dc=dc1' | python -mjso…
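The listing command above is cut off; piping through `python -m json.tool` (my reading of the truncated pipe) pretty-prints the node list. Once you have a node name, Consul's standard `/v1/catalog/deregister` endpoint removes it; the node name `db-host-01` below is a placeholder.

```shell
# List the nodes Consul knows about (pretty-printed JSON):
curl -s 'http://USER:PASSWORD@PMM-SERVER-IP/v1/internal/ui/nodes?dc=dc1' \
  | python -m json.tool

# Deregister one node from the catalog; "db-host-01" is a placeholder.
NODE="db-host-01"
payload="$(printf '{"Datacenter":"dc1","Node":"%s"}' "$NODE")"
curl -s -X PUT 'http://USER:PASSWORD@PMM-SERVER-IP/v1/catalog/deregister' \
  -d "$payload"
```

After deregistering, the host should disappear from Prometheus's scrape targets and the stale Grafana alerts should stop.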

Shell/Bash Commands Execution from MySQL Client or Stored Procedure/function

Today I am going to explain ways to execute shell or bash commands from the MySQL client or from a stored procedure/function. There are basically two methods:
Method 1: Use the MySQL client's built-in feature.
- To run a single command: \! command or system command, e.g. \! uptime
- To get a terminal: \! bash or \! sh
Method 2: Deploy the external plugin (lib_mysqludf_sys).
Step 1: Download lib_mysqludf_sys from GitHub: git clone https://github.com/mysqludf/lib_mysqludf_sys.git
Step 2: Install libmysqlclient15-dev; on Ubuntu you can use: apt-get install libmysqlclient15-dev
Step 3: Note down the output of: mysql -uroot -pxxxx -e "select @@plugin_dir;"
Step 4: Change directory to the cloned repo: cd lib_mysqludf_sys/
Step 5: Compile and place the plugin in the plugin dir: gcc -Wall -I/usr/include/mysql -I. -shared lib_mysqludf_sys.c -o {value from step 3}lib_mysqludf_sys.so -fPIC
Step 6: Create the necessary functions for lib_mysqludf_sys: mysql -uroo…
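Step 6 is cut off above; a sketch of what it registers follows. lib_mysqludf_sys ships (among others) sys_eval, which returns a command's output, and sys_exec, which returns its exit status. Credentials are shown as in the post.

```shell
# Register the UDFs shipped by lib_mysqludf_sys (sketch of step 6).
UDF_SQL="
CREATE FUNCTION sys_eval RETURNS STRING SONAME 'lib_mysqludf_sys.so';
CREATE FUNCTION sys_exec RETURNS INT SONAME 'lib_mysqludf_sys.so';
"
echo "$UDF_SQL" | mysql -uroot -pxxxx

# Then, from any client session or stored routine:
#   SELECT sys_eval('uptime');
#   SELECT sys_exec('touch /tmp/from_mysql');
```

Be careful with this plugin: any account allowed to call these functions can run OS commands as the mysqld user.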

MySQL: How to monitor MySQL Replication Lag in MilliSeconds With PMM and pt-heartbeat

There can be various requirements at the application end which need real-time slaves. Let me introduce a solution to a problem we used to face: how can we monitor whether our slaves are real time or lagging, in milliseconds? Unfortunately there is no built-in feature in MySQL to get replication lag in milliseconds. However, there is a tool in Percona Toolkit named pt-heartbeat. It generates heartbeat events on the master, and a monitoring system can measure the time difference on the slave to calculate lag. How to deploy pt-heartbeat in your environment (assuming the OS is Ubuntu xx.xx):
Step 1: Download pt-heartbeat using the command below: wget http://percona.com/get/pt-heartbeat -P /usr/bin/ && chmod +x /usr/bin/pt-heartbeat
Step 2: It requires a database where it can create a table; let's create it on the master: mysql -uUser -pPassword -e "create database if not exists percona;"
Step 3: Create the heartbeat table in the percona db: pt-heartbeat -D percon…
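A sketch of how the remaining steps typically look with standard pt-heartbeat options (--create-table, --update, --check, --monitor); host names and credentials below are placeholders, and the exact flags the post used after the truncation are not shown.

```shell
# On the master: create percona.heartbeat and keep writing timestamps.
pt-heartbeat -D percona --create-table --update \
  --host master-host --user User --password Password --daemonize

# On (or against) the slave: print the current lag once...
pt-heartbeat -D percona --check \
  --host slave-host --user User --password Password

# ...or monitor it continuously (this is what PMM/Grafana can graph
# at sub-second resolution).
pt-heartbeat -D percona --monitor \
  --host slave-host --user User --password Password
```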

MySQL: How To Sync Specific Table(s) from Master to Slave

Most of us have seen errors like "row not found" or "duplicate row" on the slave in master-slave replication, and sometimes it is very difficult to find the unsynced data and fix it even when we know the table name(s). There are a few recommended tools from Percona to check replication integrity and fix unsynced data:
1. pt-table-checksum: performs an online replication consistency check by executing checksum queries on the master, which produce different results on replicas that are inconsistent with the master.
2. pt-table-sync: synchronizes data efficiently between MySQL tables.
The recommended tools work well on limited data in little time, but what if the table is large and we don't have time to analyze each row with them, which is time consuming? We can achieve this with a very simple procedure. Let's assume a simple cluster (MySQL 5.7.XX) with master m1 and slave s1, with GTID-based replication, replicating a db (db_ankit) with 5 tabl…
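The excerpt cuts off before the actual procedure, so the following is only a sketch of the general dump-and-reload approach for one table under GTID replication; `big_table` is a placeholder and the post's exact steps may differ. On a busy master you would also need to account for writes arriving during the dump.

```shell
# 1. Dump only the affected table from the master; --set-gtid-purged=OFF
#    keeps the dump from tampering with the slave's gtid_executed.
mysqldump -h m1 -uUser -pPassword \
  --single-transaction --set-gtid-purged=OFF \
  db_ankit big_table > /tmp/big_table.sql

# 2. Pause replication on the slave, reload the table, resume.
mysql -h s1 -uUser -pPassword -e "STOP SLAVE;"
mysql -h s1 -uUser -pPassword db_ankit < /tmp/big_table.sql
mysql -h s1 -uUser -pPassword -e "START SLAVE;"
```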

A GH-OST can save your time!

Today I am going to introduce you all to an awesome tool: GH-OST. We have been using it for many months to alter tables online. Believe me, it saves a lot of time and effort when altering big tables in MySQL. gh-ost was developed at GitHub to answer a problem they faced with ongoing, continuous production schema changes requiring modifications to MySQL tables. gh-ost changes the existing online table migration paradigm by providing a low-impact, controllable, auditable, operations-friendly solution. gh-ost stands for GitHub's Online Schema Transmogrifier/Transfigurator/Transformer/Thingy. While migrating a table, gh-ost creates a ghost table in the likeness of your original table, migrates that table while empty, and slowly and incrementally copies data from your original table to the ghost table, meanwhile propagating ongoing changes (any INSERT, DELETE, or UPDATE applied to your table) to the ghost table. Finally, at the right time, it replaces your original table with the ghost…
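An illustrative invocation using standard gh-ost flags; host, credentials, and the table/column names are placeholders, not from the post. Dropping --execute performs a dry run first, which is the usual way to start.

```shell
# Online ALTER with gh-ost (sketch; placeholders throughout).
# By default gh-ost prefers reading binlogs from a replica;
# --allow-on-master lets it run against the master directly.
gh-ost \
  --host=master-host \
  --user=User --password=Password \
  --database=db_ankit \
  --table=big_table \
  --alter="ADD COLUMN created_at DATETIME" \
  --allow-on-master \
  --chunk-size=1000 \
  --execute
```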

HA Provisioning For Backend Instances For Maxwell

In my previous post I explained how we run Maxwell as a service on Ubuntu. In this post I will explain how we achieved HA for the backend instances (MySQL hosts) that Maxwell was using to read binlogs. The stack we introduced to achieve high availability consists of HAProxy (http://www.haproxy.org/) and GTID-enabled slaves. HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. We installed HAProxy on the local machine where we run Maxwell and configured Maxwell to connect to localhost, using an HAProxy primary-secondary backup strategy with the config below (/etc/haproxy/haproxy.cfg):

defaults
    mode tcp
    timeout connect 5000ms
    timeout client 86400s
    timeout server 86400s

listen MySQL
    bind 127.0.0.1:3306
    mode tcp
    option mysql-check user haproxy
    # mysql-check enables a database server check, t…
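The config above is truncated, so here is a complete minimal version of the same primary-secondary idea; the backend IPs are placeholders. The `backup` keyword is what keeps mysql2 idle until mysql1's health check fails, and `option mysql-check user haproxy` requires a passwordless `haproxy` user on each backend.

```shell
# Write a full minimal haproxy.cfg for the Maxwell backend (sketch):
CFG=/etc/haproxy/haproxy.cfg
cat > "$CFG" <<'EOF'
defaults
    mode tcp
    timeout connect 5000ms
    timeout client 86400s
    timeout server 86400s

listen MySQL
    bind 127.0.0.1:3306
    mode tcp
    option mysql-check user haproxy
    # MySQL-protocol health check as user 'haproxy' (no password);
    # create that user on both backends first.
    server mysql1 10.0.0.1:3306 check
    server mysql2 10.0.0.2:3306 check backup
EOF
systemctl reload haproxy
```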

Running Maxwell (MySQL Binlog Reader) As Service In Ubuntu

Many DBAs are already running Maxwell (http://maxwells-daemon.io/); if not, let me introduce you to it. Maxwell reads MySQL binlogs and writes row updates as JSON to Kafka, Kinesis, or other streaming platforms. It is a very nice tool for building pipelines from MySQL to NoSQL stores such as Elasticsearch, Aerospike, or Cassandra. A widely used approach is to run Maxwell on a box where it reads binlogs from the MySQL box and writes them to Kafka (a distributed streaming platform); consumers then read the data from Kafka and write it to their own stores (Elasticsearch, Aerospike, etc.). The main issue is how to run Maxwell as a service on an Ubuntu VM, which we solved using an Upstart job on Ubuntu 14.x and a systemd service on Ubuntu 16.x. Maxwell binaries can be downloaded from https://github.com/zendesk/maxwell/releases. Upstart for Ubuntu 14.x (assuming the Maxwell binary is placed in the /opt/maxwell/ directory): Create file: /etc/init/maxwell.c…
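The Upstart section is truncated above; for the Ubuntu 16.x systemd case the post mentions, a minimal unit might look like the sketch below. It follows the post's /opt/maxwell/ layout and assumes connection and producer settings live in a config.properties file, which Maxwell supports via its --config flag; adjust paths and options for your install.

```shell
# Systemd service for Maxwell on Ubuntu 16.x (sketch).
cat > /etc/systemd/system/maxwell.service <<'EOF'
[Unit]
Description=Maxwell MySQL binlog reader
After=network.target

[Service]
Type=simple
WorkingDirectory=/opt/maxwell
ExecStart=/opt/maxwell/bin/maxwell --config /opt/maxwell/config.properties
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now maxwell
```

Restart=on-failure gives the same "respawn" behavior the Upstart job provides on 14.x.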